{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef interactions_iter(self, nbunch=None, t=None):\n seen = {} # helper dict to keep track of multiply stored interactions\n if nbunch is None:\n nodes_nbrs_succ = self._succ.items()\n else:\n nodes_nbrs_succ = [(n, self._succ[n]) for n in self.nbunch_iter(nbunch)]\n\n for n, nbrs in nodes_nbrs_succ:\n for nbr in nbrs:\n if t is not None:\n if nbr not in seen and self.__presence_test(n, nbr, t):\n yield (n, nbr, {\"t\": [t]})\n else:\n if nbr not in seen:\n yield (nbr, n, self._succ[n][nbr])\n seen[n] = 1\n\n del seen", "response": "Return an iterator over the interaction present in a given snapshot."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn an iterator over the in - interactions present in a given snapshot.", "response": "def in_interactions_iter(self, nbunch=None, t=None):\n \"\"\"Return an iterator over the in interactions present in a given snapshot.\n\n Edges are returned as tuples in the order (node, neighbor).\n\n Parameters\n ----------\n nbunch : iterable container, optional (default= all nodes)\n A container of nodes. 
The container will be iterated\n through once.\n t : snapshot id (default=None)\n If None the method returns an iterator over the edges of the flattened graph.\n\n Returns\n -------\n edge_iter : iterator\n An iterator of (u,v) tuples of interaction.\n\n Notes\n -----\n Nodes in nbunch that are not in the graph will be (quietly) ignored.\n For directed graphs this returns the in-interactions.\n\n Examples\n --------\n >>> G = dn.DynDiGraph()\n >>> G.add_interaction(0,1, 0)\n >>> G.add_interaction(1,2, 0)\n >>> G.add_interaction(2,3,1)\n >>> [e for e in G.in_interactions_iter(t=0)]\n [(0, 1), (1, 2)]\n >>> list(G.in_interactions_iter())\n [(0, 1), (1, 2), (2, 3)]\n \"\"\"\n if nbunch is None:\n nodes_nbrs_pred = self._pred.items()\n else:\n nodes_nbrs_pred = [(n, self._pred[n]) for n in self.nbunch_iter(nbunch)]\n\n for n, nbrs in nodes_nbrs_pred:\n\n for nbr in nbrs:\n if t is not None:\n if self.__presence_test(nbr, n, t):\n yield (nbr, n, {\"t\": [t]})\n else:\n if nbr in self._pred[n]:\n yield (nbr, n, self._pred[n][nbr])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn an iterator over the out - interactions present in a given snapshot.", "response": "def out_interactions_iter(self, nbunch=None, t=None):\n \"\"\"Return an iterator over the out interactions present in a given snapshot.\n\n Edges are returned as tuples\n in the order (node, neighbor).\n\n Parameters\n ----------\n nbunch : iterable container, optional (default= all nodes)\n A container of nodes. 
The container will be iterated\n through once.\n t : snapshot id (default=None)\n If None the method returns an iterator over the edges of the flattened graph.\n\n Returns\n -------\n edge_iter : iterator\n An iterator of (u,v) tuples of interaction.\n\n Notes\n -----\n Nodes in nbunch that are not in the graph will be (quietly) ignored.\n For directed graphs this returns the out-interaction.\n\n Examples\n --------\n >>> G = dn.DynDiGraph()\n >>> G.add_interaction(0,1, 0)\n >>> G.add_interaction(1,2, 0)\n >>> G.add_interaction(2,3,1)\n >>> [e for e in G.out_interactions_iter(t=0)]\n [(0, 1), (1, 2)]\n >>> list(G.out_interactions_iter())\n [(0, 1), (1, 2), (2, 3)]\n \"\"\"\n if nbunch is None:\n nodes_nbrs_succ = self._succ.items()\n else:\n nodes_nbrs_succ = [(n, self._succ[n]) for n in self.nbunch_iter(nbunch)]\n\n for n, nbrs in nodes_nbrs_succ:\n for nbr in nbrs:\n if t is not None:\n if self.__presence_test(n, nbr, t):\n yield (n, nbr, {\"t\": [t]})\n else:\n if nbr in self._succ[n]:\n yield (n, nbr, self._succ[n][nbr])"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the number of interaction between two nodes at time t.", "response": "def number_of_interactions(self, u=None, v=None, t=None):\n \"\"\"Return the number of interaction between two nodes at time t.\n\n Parameters\n ----------\n u, v : nodes, optional (default=all interaction)\n If u and v are specified, return the number of interaction between\n u and v. Otherwise return the total number of all interaction.\n t : snapshot id (default=None)\n If None will be returned the number of edges on the flattened graph.\n\n Returns\n -------\n nedges : int\n The number of interaction in the graph. If nodes u and v are specified\n return the number of interaction between those nodes. 
If a single node is specified return None.\n\n See Also\n --------\n size\n\n Examples\n --------\n >>> G = dn.DynDiGraph()\n >>> G.add_path([0,1,2,3], t=0)\n >>> G.number_of_interactions()\n 3\n >>> G.number_of_interactions(0,1, t=0)\n 1\n >>> G.add_edge(3, 4, t=1)\n >>> G.number_of_interactions()\n 4\n \"\"\"\n if t is None:\n if u is None:\n return int(self.size())\n elif u is not None and v is not None:\n if v in self._succ[u]:\n return 1\n else:\n return 0\n else:\n if u is None:\n return int(self.size(t))\n elif u is not None and v is not None:\n if v in self._succ[u]:\n if self.__presence_test(u, v, t):\n return 1\n else:\n return 0"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns True if the interaction u v is in the graph at time t.", "response": "def has_interaction(self, u, v, t=None):\n \"\"\"Return True if the interaction (u,v) is in the graph at time t.\n\n Parameters\n ----------\n u, v : nodes\n Nodes can be, for example, strings or numbers.\n Nodes must be hashable (and not None) Python objects.\n t : snapshot id (default=None)\n If None will be returned the presence of the interaction on the flattened graph.\n\n\n Returns\n -------\n edge_ind : bool\n True if interaction is in the graph, False otherwise.\n\n Examples\n --------\n Can be called either using two nodes u,v or interaction tuple (u,v)\n\n >>> G = dn.DynDiGraph()\n >>> G.add_interaction(0,1, t=0)\n >>> G.add_interaction(1,2, t=0)\n >>> G.add_interaction(2,3, t=0)\n >>> G.has_interaction(0,1, t=0)\n True\n >>> G.has_interaction(0,1, t=1)\n False\n \"\"\"\n try:\n if t is None:\n return v in self._succ[u]\n else:\n return v in self._succ[u] and self.__presence_test(u, v, t)\n except KeyError:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef has_successor(self, u, v, t=None):\n return self.has_interaction(u, v, t)", "response": "Return True if node u has successor v at 
time t."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef has_predecessor(self, u, v, t=None):\n return self.has_interaction(v, u, t)", "response": "Return True if node u has predecessor v at time t."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef successors_iter(self, n, t=None):\n try:\n if t is None:\n return iter(self._succ[n])\n else:\n return iter([i for i in self._succ[n] if self.__presence_test(n, i, t)])\n except KeyError:\n raise nx.NetworkXError(\"The node %s is not in the graph.\" % (n,))", "response": "Return an iterator over the successors of n at time t."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef predecessors_iter(self, n, t=None):\n try:\n if t is None:\n return iter(self._pred[n])\n else:\n return iter([i for i in self._pred[n] if self.__presence_test(i, n, t)])\n except KeyError:\n raise nx.NetworkXError(\"The node %s is not in the graph.\" % (n,))", "response": "Return an iterator over the predecessor nodes of n at time t."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the in degree of a node or nodes at time t.", "response": "def in_degree(self, nbunch=None, t=None):\n \"\"\"Return the in degree of a node or nodes at time t.\n\n The node in degree is the number of incoming interaction to that node in a given time frame.\n\n Parameters\n ----------\n nbunch : iterable container, optional (default=all nodes)\n A container of nodes. 
The container will be iterated\n through once.\n\n t : snapshot id (default=None)\n If None will be returned the degree of nodes on the flattened graph.\n\n\n Returns\n -------\n nd : dictionary, or number\n A dictionary with nodes as keys and degree as values or\n a number if a single node is specified.\n\n Examples\n --------\n >>> G = dn.DynDiGraph()\n >>> G.add_interaction(0,1, t=0)\n >>> G.add_interaction(1,2, t=0)\n >>> G.add_interaction(2,3, t=0)\n >>> G.in_degree(0, t=0)\n 1\n >>> G.in_degree([0,1], t=1)\n {0: 0, 1: 0}\n >>> list(G.in_degree([0,1], t=0).values())\n [1, 2]\n \"\"\"\n if nbunch in self: # return a single node\n return next(self.in_degree_iter(nbunch, t))[1]\n else: # return a dict\n return dict(self.in_degree_iter(nbunch, t))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef in_degree_iter(self, nbunch=None, t=None):\n if nbunch is None:\n nodes_nbrs = self._pred.items()\n else:\n nodes_nbrs = ((n, self._pred[n]) for n in self.nbunch_iter(nbunch))\n\n if t is None:\n for n, nbrs in nodes_nbrs:\n deg = len(self._pred[n])\n yield (n, deg)\n else:\n for n, nbrs in nodes_nbrs:\n edges_t = len([v for v in nbrs.keys() if self.__presence_test(v, n, t)])\n if edges_t > 0:\n yield (n, edges_t)\n else:\n yield (n, 0)", "response": "Return an iterator over the in degree of the node at time t."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef out_degree(self, nbunch=None, t=None):\n if nbunch in self: # return a single node\n return next(self.out_degree_iter(nbunch, t))[1]\n else: # return a dict\n return dict(self.out_degree_iter(nbunch, t))", "response": "Return the out degree of a node or nodes at time t."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning an iterator over the out degree of a node in a given timeframe.", "response": "def out_degree_iter(self, nbunch=None, 
t=None):\n \"\"\"Return an iterator for (node, out_degree) at time t.\n\n The node out degree is the number of interactions outgoing from the node in a given timeframe.\n\n Parameters\n ----------\n nbunch : iterable container, optional (default=all nodes)\n A container of nodes. The container will be iterated\n through once.\n\n t : snapshot id (default=None)\n If None will be returned an iterator over the degree of nodes on the flattened graph.\n\n\n Returns\n -------\n nd_iter : an iterator\n The iterator returns two-tuples of (node, degree).\n\n See Also\n --------\n degree\n\n Examples\n --------\n >>> G = dn.DynDiGraph()\n >>> G.add_interaction(0, 1, t=0)\n >>> list(G.out_degree_iter(0, t=0))\n [(0, 1)]\n >>> list(G.out_degree_iter([0,1], t=0))\n [(0, 1)]\n \"\"\"\n if nbunch is None:\n nodes_nbrs = self._succ.items()\n else:\n nodes_nbrs = ((n, self._succ[n]) for n in self.nbunch_iter(nbunch))\n\n if t is None:\n for n, nbrs in nodes_nbrs:\n deg = len(self._succ[n])\n yield (n, deg)\n else:\n for n, nbrs in nodes_nbrs:\n edges_t = len([v for v in nbrs.keys() if self.__presence_test(n, v, t)])\n if edges_t > 0:\n yield (n, edges_t)\n else:\n yield (n, 0)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef to_undirected(self, reciprocal=False):\n from .dyngraph import DynGraph\n H = DynGraph()\n H.name = self.name\n H.add_nodes_from(self)\n\n if reciprocal is True:\n for u in self._node:\n for v in self._node:\n if u >= v:\n try:\n outc = self._succ[u][v]['t']\n intc = self._pred[u][v]['t']\n for o in outc:\n r = set(range(o[0], o[1]+1))\n for i in intc:\n r2 = set(range(i[0], i[1]+1))\n inter = list(r & r2)\n if len(inter) == 1:\n H.add_interaction(u, v, t=inter[0])\n elif len(inter) > 1:\n H.add_interaction(u, v, t=inter[0], e=inter[-1])\n\n except:\n pass\n\n else:\n for it in self.interactions_iter():\n for t in it[2]['t']:\n H.add_interaction(it[0], it[1], t=t[0], e=t[1])\n\n H.graph = 
deepcopy(self.graph)\n H._node = deepcopy(self._node)\n return H", "response": "Return an undirected representation of the dynamic graph."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write_interactions(G, path, delimiter=' ', encoding='utf-8'):\n for line in generate_interactions(G, delimiter):\n line += '\\n'\n path.write(line.encode(encoding))", "response": "Write a DyNetx graph in interaction list format."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreads a DyNetx file containing a list of interaction lines.", "response": "def read_interactions(path, comments=\"#\", directed=False, delimiter=None,\n nodetype=None, timestamptype=None, encoding='utf-8', keys=False):\n \"\"\"Read a DyNetx graph from interaction list format.\n\n\n Parameters\n ----------\n\n path : basestring\n The desired output filename\n\n delimiter : character\n Column delimiter\n \"\"\"\n ids = None\n lines = (line.decode(encoding) for line in path)\n if keys:\n ids = read_ids(path.name, delimiter=delimiter, timestamptype=timestamptype)\n\n return parse_interactions(lines, comments=comments, directed=directed, delimiter=delimiter, nodetype=nodetype,\n timestamptype=timestamptype, keys=ids)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite a DyNetx graph in snapshot graph list format.", "response": "def write_snapshots(G, path, delimiter=' ', encoding='utf-8'):\n \"\"\"Write a DyNetx graph in snapshot graph list format.\n\n\n Parameters\n ----------\n\n G : graph\n A DyNetx graph.\n\n path : basestring\n The desired output filename\n\n delimiter : character\n Column delimiter\n \"\"\"\n for line in generate_snapshots(G, delimiter):\n line += '\\n'\n path.write(line.encode(encoding))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreads a DyNetx file from snapshot graph list format.", "response": "def 
read_snapshots(path, comments=\"#\", directed=False, delimiter=None,\n nodetype=None, timestamptype=None, encoding='utf-8', keys=False):\n \"\"\"Read a DyNetx graph from snapshot graph list format.\n\n\n Parameters\n ----------\n\n path : basestring\n The desired output filename\n\n delimiter : character\n Column delimiter\n \"\"\"\n ids = None\n lines = (line.decode(encoding) for line in path)\n if keys:\n ids = read_ids(path.name, delimiter=delimiter, timestamptype=timestamptype)\n\n return parse_snapshots(lines, comments=comments, directed=directed, delimiter=delimiter, nodetype=nodetype,\n timestamptype=timestamptype, keys=ids)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfunction to ensure clean opening and closing of files.", "response": "def open_file(path_arg, mode='r'):\n \"\"\"Decorator to ensure clean opening and closing of files.\n\n Parameters\n ----------\n path_arg : int\n Location of the path argument in args. Even if the argument is a\n named positional argument (with a default value), you must specify its\n index as a positional argument.\n mode : str\n String for opening mode.\n\n Returns\n -------\n _open_file : function\n Function which cleanly executes the io.\n\n Examples\n --------\n Decorate functions like this::\n\n @open_file(0,'r')\n def read_function(pathname):\n pass\n\n @open_file(1,'w')\n def write_function(G,pathname):\n pass\n\n @open_file(1,'w')\n def write_function(G, pathname='graph.dot')\n pass\n\n @open_file('path', 'w+')\n def another_function(arg, **kwargs):\n path = kwargs['path']\n pass\n \"\"\"\n # Note that this decorator solves the problem when a path argument is\n # specified as a string, but it does not handle the situation when the\n # function wants to accept a default of None (and then handle it).\n # Here is an example:\n #\n # @open_file('path')\n # def some_function(arg1, arg2, path=None):\n # if path is None:\n # fobj = tempfile.NamedTemporaryFile(delete=False)\n # 
close_fobj = True\n # else:\n # # `path` could have been a string or file object or something\n # # similar. In any event, the decorator has given us a file object\n # # and it will close it for us, if it should.\n # fobj = path\n # close_fobj = False\n #\n # try:\n # fobj.write('blah')\n # finally:\n # if close_fobj:\n # fobj.close()\n #\n # Normally, we'd want to use \"with\" to ensure that fobj gets closed.\n # However, recall that the decorator will make `path` a file object for\n # us, and using \"with\" would undesirably close that file object. Instead,\n # you use a try block, as shown above. When we exit the function, fobj will\n # be closed, if it should be, by the decorator.\n\n @decorator\n def _open_file(func, *args, **kwargs):\n\n # Note that since we have used @decorator, *args, and **kwargs have\n # already been resolved to match the function signature of func. This\n # means default values have been propagated. For example, the function\n # func(x, y, a=1, b=2, **kwargs) if called as func(0,1,b=5,c=10) would\n # have args=(0,1,1,5) and kwargs={'c':10}.\n\n # First we parse the arguments of the decorator. The path_arg could\n # be an positional argument or a keyword argument. Even if it is\n try:\n # path_arg is a required positional argument\n # This works precisely because we are using @decorator\n path = args[path_arg]\n except TypeError:\n # path_arg is a keyword argument. It is \"required\" in the sense\n # that it must exist, according to the decorator specification,\n # It can exist in `kwargs` by a developer specified default value\n # or it could have been explicitly set by the user.\n try:\n path = kwargs[path_arg]\n except KeyError:\n # Could not find the keyword. Thus, no default was specified\n # in the function signature and the user did not provide it.\n msg = 'Missing required keyword argument: {0}'\n raise nx.NetworkXError(msg.format(path_arg))\n else:\n is_kwarg = True\n except IndexError:\n # A \"required\" argument was missing. 
This can only happen if\n # the decorator of the function was incorrectly specified.\n # So this probably is not a user error, but a developer error.\n msg = \"path_arg of open_file decorator is incorrect\"\n raise nx.NetworkXError(msg)\n else:\n is_kwarg = False\n\n # Now we have the path_arg. There are two types of input to consider:\n # 1) string representing a path that should be opened\n # 2) an already opened file object\n if is_string_like(path):\n ext = splitext(path)[1]\n fobj = _dispatch_dict[ext](path, mode=mode)\n close_fobj = True\n elif hasattr(path, 'read'):\n # path is already a file-like object\n fobj = path\n close_fobj = False\n else:\n # could be None, in which case the algorithm will deal with it\n fobj = path\n close_fobj = False\n\n # Insert file object into args or kwargs.\n if is_kwarg:\n new_args = args\n kwargs[path_arg] = fobj\n else:\n # args is a tuple, so we must convert to list before modifying it.\n new_args = list(args)\n new_args[path_arg] = fobj\n\n # Finally, we call the original function, making sure to close the fobj.\n try:\n result = func(*new_args, **kwargs)\n finally:\n if close_fobj:\n fobj.close()\n\n return result\n\n return _open_file"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the number of edges between two nodes at time t.", "response": "def number_of_interactions(G, u=None, v=None, t=None):\n \"\"\"Return the number of edges between two nodes at time t.\n\n Parameters\n ----------\n u, v : nodes, optional (default=all edges)\n If u and v are specified, return the number of edges between\n u and v. Otherwise return the total number of all edges.\n t : snapshot id (default=None)\n If None will be returned the number of edges on the flattened graph.\n\n\n Returns\n -------\n nedges : int\n The number of edges in the graph. 
If nodes u and v are specified\n return the number of edges between those nodes.\n\n Examples\n --------\n >>> G = dn.DynGraph()\n >>> G.add_path([0,1,2,3], t=0)\n >>> dn.number_of_interactions(G, t=0)\n \"\"\"\n return G.number_of_interactions(u, v, t)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef density(G, t=None):\n n = number_of_nodes(G, t)\n m = number_of_interactions(G, t)\n if m == 0 or m is None or n <= 1:\n return 0\n\n d = m / (n * (n - 1))\n if not G.is_directed():\n d *= 2\n return d", "response": "r Return the density of a graph at timestamp t."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef degree_histogram(G, t=None):\n counts = Counter(d for n, d in G.degree(t=t).items())\n return [counts.get(i, 0) for i in range(max(counts) + 1)]", "response": "Return a list of the frequency of each degree value in a DyNetx graph."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmodifying a networkx graph to prevent further change by adding or removing nodes or edges.", "response": "def freeze(G):\n \"\"\"Modify graph to prevent further change by adding or removing nodes or edges.\n\n Node and edge data can still be modified.\n\n Parameters\n ----------\n\n G : graph\n A NetworkX graph\n\n\n Notes\n -----\n To \"unfreeze\" a graph you must make a copy by creating a new graph object.\n\n See Also\n --------\n is_frozen\n \"\"\"\n G.add_node = frozen\n G.add_nodes_from = frozen\n G.remove_node = frozen\n G.remove_nodes_from = frozen\n G.add_edge = frozen\n G.add_edges_from = frozen\n G.remove_edge = frozen\n G.remove_edges_from = frozen\n G.clear = frozen\n G.frozen = True\n return G"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_star(G, nodes, t, **attr):\n nlist = iter(nodes)\n v = next(nlist)\n edges = ((v, n) for n in nlist)\n 
G.add_interactions_from(edges, t, **attr)", "response": "Adds a star at time t to the DyNetx graph G."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_path(G, nodes, t, **attr):\n nlist = list(nodes)\n edges = zip(nlist[:-1], nlist[1:])\n G.add_interactions_from(edges, t, **attr)", "response": "Adds a path at time t to the DyNetx graph G at time t."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_empty_copy(G, with_data=True):\n H = G.__class__()\n H.add_nodes_from(G.nodes(data=with_data))\n if with_data:\n H.graph.update(G.graph)\n return H", "response": "Create an empty copy of the graph G with all of the edges removed."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_node_attributes(G, values, name=None):\n # Set node attributes based on type of `values`\n if name is not None: # `values` must not be a dict of dict\n try: # `values` is a dict\n for n, v in values.items():\n try:\n G.node[n][name] = values[n]\n except KeyError:\n pass\n except AttributeError: # `values` is a constant\n for n in G:\n G.node[n][name] = values\n else: # `values` must be dict of dict\n for n, d in values.items():\n try:\n G.node[n].update(d)\n except KeyError:\n pass", "response": "Set node attributes from dictionary of nodes and values keyed by node."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting node attributes from graph", "response": "def get_node_attributes(G, name):\n \"\"\"Get node attributes from graph\n\n Parameters\n ----------\n G : DyNetx Graph\n\n name : string\n Attribute name\n\n Returns\n -------\n Dictionary of attributes keyed by node.\n \"\"\"\n return {n: d[name] for n, d in G.node.items() if name in d}"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn all of the neighbors of a node in the graph at 
time t.", "response": "def all_neighbors(graph, node, t=None):\n \"\"\" Returns all of the neighbors of a node in the graph at time t.\n\n If the graph is directed returns predecessors as well as successors.\n\n Parameters\n ----------\n\n graph : DyNetx graph\n Graph to find neighbors.\n\n node : node\n The node whose neighbors will be returned.\n\n t : snapshot id (default=None)\n If None the neighbors are identified on the flattened graph.\n\n Returns\n -------\n\n neighbors : iterator\n Iterator of neighbors\n \"\"\"\n if graph.is_directed():\n values = chain(graph.predecessors(node, t=t), graph.successors(node, t=t))\n else:\n values = graph.neighbors(node, t=t)\n return values"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef non_neighbors(graph, node, t=None):\n if graph.is_directed():\n values = chain(graph.predecessors(node, t=t), graph.successors(node, t=t))\n else:\n values = graph.neighbors(node, t=t)\n\n nbors = set(values) | {node}\n return (nnode for nnode in graph if nnode not in nbors)", "response": "Returns the non - neighbors of the node at time t."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef non_interactions(graph, t=None):\n # if graph.is_directed():\n # for u in graph:\n # for v in non_neighbors(graph, u, t):\n # yield (u, v)\n #else:\n nodes = set(graph)\n while nodes:\n u = nodes.pop()\n for v in nodes - set(graph[u]):\n yield (u, v)", "response": "Returns the non - existent edges in the graph at time t."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef getMsg(self):\n '''\n getMsg - Generate a default message based on parameters to FunctionTimedOut exception'\n\n @return - Message\n '''\n return 'Function %s (args=%s) (kwargs=%s) timed out after %f seconds.\\n' %(self.timedOutFunction.__name__, repr(self.timedOutArgs), 
repr(self.timedOutKwargs), self.timedOutAfter)", "response": "Generate a default message based on parameters to FunctionTimedOut exception"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nretrying the timed - out function with same arguments.", "response": "def retry(self, timeout=RETRY_SAME_TIMEOUT):\n '''\n retry - Retry the timed-out function with same arguments.\n\n @param timeout Default RETRY_SAME_TIMEOUT\n \n If RETRY_SAME_TIMEOUT : Will retry the function same args, same timeout\n If a float/int : Will retry the function same args with provided timeout\n If None : Will retry function same args no timeout\n\n @return - Returnval from function\n '''\n if timeout is None:\n return self.timedOutFunction(*(self.timedOutArgs), **self.timedOutKwargs)\n \n from .dafunc import func_timeout\n\n if timeout == RETRY_SAME_TIMEOUT:\n timeout = self.timedOutAfter\n\n return func_timeout(timeout, self.timedOutFunction, args=self.timedOutArgs, kwargs=self.timedOutKwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nruns a function for up to timeout seconds. 
Returns the result of the function.", "response": "def func_timeout(timeout, func, args=(), kwargs=None):\n '''\n func_timeout - Runs the given function for up to #timeout# seconds.\n\n Raises any exceptions #func# would raise, returns what #func# would return (unless timeout is exceeded), in which case it raises FunctionTimedOut\n\n @param timeout - Maximum number of seconds to run #func# before terminating\n\n @param func - The function to call\n\n @param args - Any ordered arguments to pass to the function\n\n @param kwargs - Keyword arguments to pass to the function.\n\n\n @raises - FunctionTimedOut if #timeout# is exceeded, otherwise anything #func# could raise will be raised\n\n If the timeout is exceeded, FunctionTimedOut will be raised within the context of the called function every two seconds until it terminates,\n but will not block the calling thread (a new thread will be created to perform the join). If possible, you should try/except FunctionTimedOut\n to return cleanly, but in most cases it will 'just work'.\n\n @return - The return value that #func# gives\n '''\n\n if not kwargs:\n kwargs = {}\n if not args:\n args = ()\n\n ret = []\n exception = []\n isStopped = False\n\n def funcwrap(args2, kwargs2):\n try:\n ret.append( func(*args2, **kwargs2) )\n except FunctionTimedOut:\n # Don't print traceback to stderr if we time out\n pass\n except Exception as e:\n exc_info = sys.exc_info()\n if isStopped is False:\n # Assemble the alternate traceback, excluding this function\n # from the trace (by going to next frame)\n # Pytohn3 reads native from __traceback__, \n # python2 has a different form for \"raise\"\n e.__traceback__ = exc_info[2].tb_next\n exception.append( e )\n\n thread = StoppableThread(target=funcwrap, args=(args, kwargs))\n thread.daemon = True\n\n thread.start()\n thread.join(timeout)\n\n stopException = None\n if thread.isAlive():\n isStopped = True\n\n class FunctionTimedOutTempType(FunctionTimedOut):\n def __init__(self):\n return 
FunctionTimedOut.__init__(self, '', timeout, func, args, kwargs)\n\n FunctionTimedOutTemp = type('FunctionTimedOut' + str( hash( \"%d_%d_%d_%d\" %(id(timeout), id(func), id(args), id(kwargs))) ), FunctionTimedOutTempType.__bases__, dict(FunctionTimedOutTempType.__dict__))\n\n stopException = FunctionTimedOutTemp\n thread._stopThread(stopException)\n thread.join(min(.1, timeout / 50.0))\n raise FunctionTimedOut('', timeout, func, args, kwargs)\n else:\n # We can still cleanup the thread here..\n # Still give a timeout... just... cuz..\n thread.join(.5)\n\n if exception:\n raise_exception(exception)\n\n if ret:\n return ret[0]"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new path integral and train it and return the ground - state energy E0 and psi ** 2 for potential V.", "response": "def analyze_theory(V, x0list=[], plot=False):\n \"\"\" Extract ground-state energy E0 and psi**2 for potential V. \"\"\"\n # initialize path integral\n T = 4.\n ndT = 8. 
# use larger ndT to reduce discretization error (goes like 1/ndT**2)\n neval = 3e5 # should probably use more evaluations (10x?)\n nitn = 6\n alpha = 0.1 # damp adaptation\n\n # create integrator and train it (no x0list)\n integrand = PathIntegrand(V=V, T=T, ndT=ndT)\n integ = vegas.Integrator(integrand.region, alpha=alpha)\n integ(integrand, neval=neval, nitn=nitn / 2, alpha=2 * alpha)\n\n # evaluate path integral with trained integrator and x0list\n integrand = PathIntegrand(V=V, x0list=x0list, T=T, ndT=ndT)\n results = integ(integrand, neval=neval, nitn=nitn, alpha=alpha)\n print(results.summary())\n E0 = -np.log(results['exp(-E0*T)']) / T\n print('Ground-state energy = %s Q = %.2f\\n' % (E0, results.Q))\n\n if len(x0list) <= 0:\n return E0\n psi2 = results['exp(-E0*T) * psi(x0)**2'] / results['exp(-E0*T)']\n print('%5s %-12s %-10s' % ('x', 'psi**2', 'sho-exact'))\n print(27 * '-')\n for i, (x0i, psi2i) in enumerate(zip(x0list, psi2)):\n exact = np.exp(- x0i ** 2) / np.sqrt(np.pi) #* np.exp(-T / 2.)\n print(\n \"%5.1f %-12s %-10.5f\"\n % (x0i, psi2i, exact)\n )\n if plot:\n plot_results(E0, x0list, psi2, T)\n return E0"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstops the thread and return True if it is still alive.", "response": "def _stopThread(self, exception, raiseEvery=2.0):\n '''\n _stopThread - @see StoppableThread.stop\n '''\n if self.isAlive() is False:\n return True\n\n self._stderr = open(os.devnull, 'w')\n\n # Create \"joining\" thread which will raise the provided exception\n # on a repeat, until the thread stops.\n joinThread = JoinThread(self, exception, repeatEvery=raiseEvery)\n\n # Try to prevent spurrious prints\n joinThread._stderr = self._stderr\n joinThread.start()\n joinThread._stderr = self._stderr"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run(self):\n '''\n run - The thread main. 
Will attempt to stop and join the attached thread.\n '''\n\n # Try to silence default exception printing.\n self.otherThread._Thread__stderr = self._stderr\n if hasattr(self.otherThread, '_Thread__stop'):\n # If py2, call this first to start thread termination cleanly.\n # Python3 does not need such ( nor does it provide.. )\n self.otherThread._Thread__stop()\n while self.otherThread.isAlive():\n # We loop raising exception incase it's caught hopefully this breaks us far out.\n ctypes.pythonapi.PyThreadState_SetAsyncExc(ctypes.c_long(self.otherThread.ident), ctypes.py_object(self.exception))\n self.otherThread.join(self.repeatEvery)\n\n try:\n self._stderr.close()\n except:\n pass", "response": "The main thread. Will attempt to stop the attached thread and join the attached thread."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run(self):\n with open('src/vegas/__init__.py', 'a') as vfile:\n vfile.write(\"\\n__version__ = '%s'\\n\" % VEGAS_VERSION)\n _build_py.run(self)", "response": "Append version number to vegas. py"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _make_map(self, limit):\n ny = 2000\n y = numpy.random.uniform(0., 1., (ny,1))\n limit = numpy.arctan(limit)\n m = AdaptiveMap([[-limit, limit]], ninc=100)\n theta = numpy.empty(y.shape, float)\n jac = numpy.empty(y.shape[0], float)\n for itn in range(10):\n m.map(y, theta, jac)\n tan_theta = numpy.tan(theta[:, 0])\n x = self.scale * tan_theta\n fx = (tan_theta ** 2 + 1) * numpy.exp(-(x ** 2) / 2.)\n m.add_training_data(y, (jac * fx) ** 2)\n m.adapt(alpha=1.5)\n return numpy.array(m.grid[0])", "response": "Make a map that is adapted to the pdf."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning integrand using the tan mapping.", "response": "def _expval(self, f, nopdf):\n \"\"\" Return integrand using the tan mapping. 
\"\"\"\n def ff(theta, nopdf=nopdf):\n tan_theta = numpy.tan(theta)\n x = self.scale * tan_theta\n jac = self.scale * (tan_theta ** 2 + 1.)\n if nopdf:\n pdf = jac * self.pdf.pjac[None, :]\n else:\n pdf = jac * numpy.exp(-(x ** 2) / 2.) / numpy.sqrt(2 * numpy.pi)\n dp = self.pdf.x2dpflat(x)\n parg = None\n ans = None\n fparg_is_dict = False\n # iterate through the batch\n for i, (dpi, pdfi) in enumerate(zip(dp, pdf)):\n p = self.pdf.meanflat + dpi\n if parg is None:\n # first time only\n if self.pdf.shape is None:\n parg = _gvar.BufferDict(self.pdf.g, buf=p)\n else:\n parg = p.reshape(self.pdf.shape)\n else:\n if parg.shape is None:\n parg.buf = p\n else:\n parg.flat[:] = p\n fparg = 1. if f is None else f(parg)\n if ans is None:\n # first time only\n if hasattr(fparg, 'keys'):\n fparg_is_dict = True\n if not isinstance(fparg, _gvar.BufferDict):\n fparg = _gvar.BufferDict(fparg)\n ans = _gvar.BufferDict()\n for k in fparg:\n ans[k] = numpy.empty(\n (len(pdf),) + fparg.slice_shape(k)[1], float\n )\n else:\n if numpy.shape(fparg) == ():\n ans = numpy.empty(len(pdf), float)\n else:\n ans = numpy.empty(\n (len(pdf),) + numpy.shape(fparg), float\n )\n if fparg_is_dict:\n prod_pdfi = numpy.prod(pdfi)\n for k in ans:\n ans[k][i] = fparg[k]\n ans[k][i] *= prod_pdfi\n else:\n if not isinstance(fparg, numpy.ndarray):\n fparg = numpy.asarray(fparg)\n ans[i] = fparg * numpy.prod(pdfi)\n return ans\n return ff"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_starting_chunk(filename, length=1024):\n # Ensure we open the file in binary mode\n try:\n with open(filename, 'rb') as f:\n chunk = f.read(length)\n return chunk\n except IOError as e:\n print(e)", "response": "Reads the first little chunk of a file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef dump(self, function_name):\n\n assert isinstance(function_name, str)\n 
self.stdout.write(function_name)\n self.stdout.write(\"\\n\")\n self.stdout.write(\"-\" * len(function_name))\n self.stdout.write(\"\\n\\n\")\n\n byte_code = self.interpreter.compiled_functions[function_name]\n self.stdout.write(byte_code.dump())\n self.stdout.write(\"\\n\")", "response": "Pretty - dump the bytecode for the given function name."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncalls startapp to generate app with custom user model.", "response": "def handle(self, **options):\n \"\"\"Call \"startapp\" to generate app with custom user model.\"\"\"\n template = os.path.dirname(os.path.abspath(__file__)) + \"/app_template\"\n name = options.pop(\"name\")\n call_command(\"startapp\", name, template=template, **options)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate and save a User with the given email and password.", "response": "def _create_user(self, email, password, **extra_fields):\n \"\"\"Create and save a User with the given email and password.\"\"\"\n if not email:\n raise ValueError(\"The given email must be set\")\n email = self.normalize_email(email)\n user = self.model(email=email, **extra_fields)\n user.set_password(password)\n user.save(using=self._db)\n return user"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef nwise(iterable, n):\n iters = itertools.tee(iterable, n)\n iters = (itertools.islice(it, i, None) for i, it in enumerate(iters))\n return itertools.izip(*iters)", "response": "Iterate through a sequence with a defined length windown"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef coordinates(g, nodes):\n c = [g.node[n]['coordinate'] for n in nodes]\n return c", "response": "Returns a list of ( lon lat ) pairs from nodes in an osmgraph created with osmgraph."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in 
Python 3 to\nstep along a path through a directed graph unless there is an intersection Example graph: Note that edge (1, 2) and (2, 3) are bidirectional, i.e., (2, 1) and (3, 2) are also edges 1 -- 2 -- 3 -->-- 5 -->-- 7 | | ^ v | | 4 6 >>> step(g, 1, 2) 3 >>> step(g, 3, 5) None >>> step(g, 2, 3) 5 >>> step(g, 2, 3, inbound=True) None >>> step(g, 7, 5, 3, backward=True) 3 >>> def f(g, n1, n2, backward): if n2 == 5: return 7 return None >>> step(g, 3, 5, continue_fn=f) 7 Parameters ---------- g : networkx DiGraph n1 : node id in g n2 : node id in g (n1, n2) must be an edge in g inbound : bool (default False) whether incoming edges should be considered backward : bool (default False) whether edges are in reverse order (i.e., point from n2 to n1) continue_fn : callable (optional) if at an intersection, continue_fn is called to indicate how to proceed continue_fn takes the form: f(g, n1, n2, backward) where all arguments are as passed into step. f should return a node id such that f(g, n1, n2, backward) is a successors of n2. f should return None if no way forward. Returns ------- The next node in the path from n1 to n2. 
Returns None if there are no edges from n2 or multiple edges from n2", "response": "def step(g, n1, n2, inbound=False, backward=False, continue_fn=None):\n \"\"\"\n Step along a path through a directed graph unless there is an intersection\n\n Example graph:\n Note that edge (1, 2) and (2, 3) are bidirectional, i.e., (2, 1) and\n (3, 2) are also edges\n\n 1 -- 2 -- 3 -->-- 5 -->-- 7\n | |\n ^ v\n | |\n 4 6\n\n >>> step(g, 1, 2)\n 3\n >>> step(g, 3, 5)\n None\n\n >>> step(g, 2, 3)\n 5\n >>> step(g, 2, 3, inbound=True)\n None\n\n >>> step(g, 7, 5, 3, backward=True)\n 3\n\n >>> def f(g, n1, n2, backward):\n if n2 == 5:\n return 7\n return None\n >>> step(g, 3, 5, continue_fn=f)\n 7\n\n\n Parameters\n ----------\n g : networkx DiGraph\n n1 : node id in g\n n2 : node id in g\n (n1, n2) must be an edge in g\n inbound : bool (default False)\n whether incoming edges should be considered\n backward : bool (default False)\n whether edges are in reverse order (i.e., point from n2 to n1)\n continue_fn : callable (optional)\n if at an intersection, continue_fn is called to indicate how to\n proceed\n\n continue_fn takes the form:\n f(g, n1, n2, backward) where all arguments are as passed into step.\n f should return a node id such that f(g, n1, n2, backward) is a\n successors of n2. f should return None if no way forward.\n\n Returns\n -------\n The next node in the path from n1 to n2. 
Returns None if there\n are no edges from n2 or multiple edges from n2\n\n \"\"\"\n forw = g.successors\n back = g.predecessors\n if backward:\n back, forw = forw, back\n\n nodes = forw(n2)\n if inbound:\n nodes = set(nodes + back(n2))\n candidates = [n for n in nodes if n != n1]\n\n if len(candidates) == 1:\n result = candidates[0]\n elif continue_fn:\n result = continue_fn(g, n1, n2, backward)\n else:\n result = None\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmoving a node from n1 to n2 in a graph.", "response": "def move(g, n1, n2, **kwargs):\n \"\"\"\n Step along a graph until it ends or reach an intersection\n\n Example graph:\n Note that edge (1, 2) and (2, 3) are bidirectional, i.e., (2, 1) and\n (3, 2) are also edges\n\n 1 -- 2 -- 3 -->-- 5 -->-- 7\n | |\n ^ v\n | |\n 4 6\n\n >>> list(move(g, 1, 2))\n [1, 2, 3, 5] # Stops at 5 because you can get to both 6 and 7 from 3\n\n >>> step(g, 1, 2, inbound=True)\n [1, 2, 3]\n\n\n Parameters\n ----------\n Same as step()\n\n Yields\n ------\n Node IDs until either there is no path forward or the path reaches\n an intersection\n\n \"\"\"\n prev = n1\n curr = n2\n _next = step(g, prev, curr, **kwargs)\n\n yield prev\n yield curr\n\n visited_nodes = set([prev, curr])\n while _next:\n yield _next\n if _next in visited_nodes:\n return\n visited_nodes.add(_next)\n prev = curr\n curr = _next\n _next = step(g, prev, curr, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns True if a node is an intersection of the graph g with n.", "response": "def is_intersection(g, n):\n \"\"\"\n Determine if a node is an intersection\n\n graph: 1 -->-- 2 -->-- 3\n\n >>> is_intersection(g, 2)\n False\n\n graph:\n 1 -- 2 -- 3\n |\n 4\n\n >>> is_intersection(g, 2)\n True\n\n Parameters\n ----------\n g : networkx DiGraph\n n : node id\n\n Returns\n -------\n bool\n\n \"\"\"\n return len(set(g.predecessors(n) + g.successors(n))) > 2"} 
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef as_dict(self):\n if not self._is_valid:\n self.validate()\n\n from .converters import to_dict\n return to_dict(self)", "response": "Returns the model as a dict"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef nodes_callback(self, data):\n for node_id, tags, coords in data:\n # Discard the coords because they go into add_coords\n self.nodes[node_id] = tags", "response": "Callback for nodes with tags"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef ways_callback(self, data):\n for way_id, tags, nodes in data:\n # Imposm passes all ways through regardless of whether the tags\n # have been filtered or not. It needs to do this in order to\n # handle relations, but we don't care about relations at the\n # moment.\n if tags:\n self.ways[way_id] = (tags, nodes)", "response": "Callback for all ways"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_graph(self, parse_direction=False):\n g = nx.DiGraph()\n\n for way_id, (tags, nodes) in self.ways.items():\n # If oneway is '-1', reverse the way and treat as a normal oneway\n if tags.get('oneway') == '-1':\n nodes = reversed(nodes)\n tags['oneway'] = 'yes'\n oneway = tags.get('oneway') == 'yes'\n\n for n0, n1 in tools.pairwise(nodes):\n g.add_edge(n0, n1, attr_dict=tags)\n if parse_direction:\n g[n0][n1]['_direction'] = 'forward'\n if not oneway:\n g.add_edge(n1, n0, attr_dict=tags)\n if parse_direction:\n g[n1][n0]['_direction'] = 'backward'\n\n g.node[n0].update(self._node_properties(n0))\n g.node[n1].update(self._node_properties(n1))\n\n return g", "response": "Returns a networkx directed graph of the received data."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing an OSM file into a networkx graph.", "response": "def 
parse_file(filename, parse_direction=False, **kwargs):\n \"\"\"\n Return an OSM networkx graph from the input OSM file\n\n Only works with OSM xml, xml.bz2 and pbf files. This function cannot take\n OSM QA tile files. Use parse_qa_tile() for QA tiles.\n\n >>> graph = parse_file(filename)\n\n \"\"\"\n importer, parser = make_importer_parser(OSMParser, **kwargs)\n parser.parse(filename)\n\n return importer.get_graph(parse_direction=parse_direction)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses OSM networkx data into a networkx graph.", "response": "def parse_data(data, type, **kwargs):\n \"\"\"\n Return an OSM networkx graph from the input OSM data\n\n Parameters\n ----------\n data : string\n type : string ('xml' or 'pbf')\n\n >>> graph = parse_data(data, 'xml')\n\n \"\"\"\n suffixes = {\n 'xml': '.osm',\n 'pbf': '.pbf',\n }\n try:\n suffix = suffixes[type]\n except KeyError:\n raise ValueError('Unknown data type \"%s\"' % type)\n\n fd, filename = tempfile.mkstemp(suffix=suffix)\n try:\n os.write(fd, data)\n os.close(fd)\n return parse_file(filename, **kwargs)\n finally:\n os.remove(filename)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse_qa_tile(x, y, zoom, data, parse_direction=False, **kwargs):\n import osmqa\n importer, parser = make_importer_parser(osmqa.QATileParser, **kwargs)\n parser.parse_data(x, y, zoom, data)\n return importer.get_graph(parse_direction=parse_direction)", "response": "Parse an OSM networkx graph from the input OSM QA tile data"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns file name without path.", "response": "def _basename(fname):\n \"\"\"Return file name without path.\"\"\"\n if not isinstance(fname, Path):\n fname = Path(fname)\n path, name, ext = fname.parent, fname.stem, fname.suffix\n return path, name, ext"} {"SOURCE": "codesearchnet", "instruction": "How would 
you code a function in Python 3 to\nread Seabird bottle - trip file", "response": "def from_bl(fname):\n \"\"\"Read Seabird bottle-trip (bl) file\n\n Example\n -------\n >>> from pathlib import Path\n >>> import ctd\n >>> data_path = Path(__file__).parents[1].joinpath(\"tests\", \"data\")\n >>> df = ctd.from_bl(str(data_path.joinpath('bl', 'bottletest.bl')))\n >>> df._metadata[\"time_of_reset\"]\n datetime.datetime(2018, 6, 25, 20, 8, 55)\n\n \"\"\"\n df = pd.read_csv(\n fname,\n skiprows=2,\n parse_dates=[1],\n index_col=0,\n names=[\"bottle_number\", \"time\", \"startscan\", \"endscan\"],\n )\n df._metadata = {\n \"time_of_reset\": pd.to_datetime(\n linecache.getline(str(fname), 2)[6:-1]\n ).to_pydatetime()\n }\n return df"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef from_btl(fname):\n f = _read_file(fname)\n metadata = _parse_seabird(f.readlines(), ftype=\"btl\")\n\n f.seek(0)\n\n df = pd.read_fwf(\n f,\n header=None,\n index_col=False,\n names=metadata[\"names\"],\n parse_dates=False,\n skiprows=metadata[\"skiprows\"],\n )\n f.close()\n\n # At this point the data frame is not correctly lined up (multiple rows\n # for avg, std, min, max or just avg, std, etc).\n # Also needs date,time,and bottle number to be converted to one per line.\n\n # Get row types, see what you have: avg, std, min, max or just avg, std.\n rowtypes = df[df.columns[-1]].unique()\n # Get times and dates which occur on second line of each bottle.\n dates = df.iloc[:: len(rowtypes), 1].reset_index(drop=True)\n times = df.iloc[1 :: len(rowtypes), 1].reset_index(drop=True)\n datetimes = dates + \" \" + times\n\n # Fill the Date column with datetimes.\n df.loc[:: len(rowtypes), \"Date\"] = datetimes.values\n df.loc[1 :: len(rowtypes), \"Date\"] = datetimes.values\n\n # Fill missing rows.\n df[\"Bottle\"] = df[\"Bottle\"].fillna(method=\"ffill\")\n df[\"Date\"] = df[\"Date\"].fillna(method=\"ffill\")\n\n df[\"Statistic\"] = 
df[\"Statistic\"].str.replace(r\"\\(|\\)\", \"\") # (avg) to avg\n\n name = _basename(fname)[1]\n\n dtypes = {\n \"bpos\": int,\n \"pumps\": bool,\n \"flag\": bool,\n \"Bottle\": int,\n \"Scan\": int,\n \"Statistic\": str,\n \"Date\": str,\n }\n for column in df.columns:\n if column in dtypes:\n df[column] = df[column].astype(dtypes[column])\n else:\n try:\n df[column] = df[column].astype(float)\n except ValueError:\n warnings.warn(\"Could not convert %s to float.\" % column)\n\n df[\"Date\"] = pd.to_datetime(df[\"Date\"])\n metadata[\"name\"] = str(name)\n setattr(df, \"_metadata\", metadata)\n return df", "response": "Reads a Seabird CTD BTL - ASCII format file and returns a DataFrame with the data."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef from_edf(fname):\n f = _read_file(fname)\n header, names = [], []\n for k, line in enumerate(f.readlines()):\n line = line.strip()\n if line.startswith(\"Serial Number\"):\n serial = line.strip().split(\":\")[1].strip()\n elif line.startswith(\"Latitude\"):\n try:\n hemisphere = line[-1]\n lat = line.strip(hemisphere).split(\":\")[1].strip()\n lat = np.float_(lat.split())\n if hemisphere == \"S\":\n lat = -(lat[0] + lat[1] / 60.0)\n elif hemisphere == \"N\":\n lat = lat[0] + lat[1] / 60.0\n except (IndexError, ValueError):\n lat = None\n elif line.startswith(\"Longitude\"):\n try:\n hemisphere = line[-1]\n lon = line.strip(hemisphere).split(\":\")[1].strip()\n lon = np.float_(lon.split())\n if hemisphere == \"W\":\n lon = -(lon[0] + lon[1] / 60.0)\n elif hemisphere == \"E\":\n lon = lon[0] + lon[1] / 60.0\n except (IndexError, ValueError):\n lon = None\n else:\n header.append(line)\n if line.startswith(\"Field\"):\n col, unit = [l.strip().lower() for l in line.split(\":\")]\n names.append(unit.split()[0])\n if line == \"// Data\":\n skiprows = k + 1\n break\n\n f.seek(0)\n df = pd.read_csv(\n f,\n header=None,\n index_col=None,\n names=names,\n skiprows=skiprows,\n 
delim_whitespace=True,\n )\n f.close()\n\n df.set_index(\"depth\", drop=True, inplace=True)\n df.index.name = \"Depth [m]\"\n name = _basename(fname)[1]\n\n metadata = {\n \"lon\": lon,\n \"lat\": lat,\n \"name\": str(name),\n \"header\": \"\\n\".join(header),\n \"serial\": serial,\n }\n setattr(df, \"_metadata\", metadata)\n return df", "response": "This function is used to create a ctd object from an EDF file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef from_cnv(fname):\n f = _read_file(fname)\n metadata = _parse_seabird(f.readlines(), ftype=\"cnv\")\n\n f.seek(0)\n df = pd.read_fwf(\n f,\n header=None,\n index_col=None,\n names=metadata[\"names\"],\n skiprows=metadata[\"skiprows\"],\n delim_whitespace=True,\n widths=[11] * len(metadata[\"names\"]),\n )\n f.close()\n\n key_set = False\n prkeys = [\"prDM\", \"prdM\", \"pr\"]\n for prkey in prkeys:\n try:\n df.set_index(prkey, drop=True, inplace=True)\n key_set = True\n except KeyError:\n continue\n if not key_set:\n raise KeyError(\n f\"Could not find pressure field (supported names are {prkeys}).\"\n )\n df.index.name = \"Pressure [dbar]\"\n\n name = _basename(fname)[1]\n\n dtypes = {\"bpos\": int, \"pumps\": bool, \"flag\": bool}\n for column in df.columns:\n if column in dtypes:\n df[column] = df[column].astype(dtypes[column])\n else:\n try:\n df[column] = df[column].astype(float)\n except ValueError:\n warnings.warn(\"Could not convert %s to float.\" % column)\n\n metadata[\"name\"] = str(name)\n setattr(df, \"_metadata\", metadata)\n return df", "response": "Reads Seabird CTD CNV - ASCII format and returns a DataFrame with the data in the Seabird CTD structure."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef rosette_summary(fname):\n ros = from_cnv(fname)\n ros[\"pressure\"] = ros.index.values.astype(float)\n ros[\"nbf\"] = ros[\"nbf\"].astype(int)\n ros.set_index(\"nbf\", drop=True, 
inplace=True, verify_integrity=False)\n return ros", "response": "This function creates a SBE\n from a CNV file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nextrapolating data to zones where the shallow stations are shadowed by the deep stations.", "response": "def extrap_sec(data, dist, depth, w1=1.0, w2=0):\n \"\"\"\n Extrapolates `data` to zones where the shallow stations are shadowed by\n the deep stations. The shadow region usually cannot be extrapolates via\n linear interpolation.\n\n The extrapolation is applied using the gradients of the `data` at a certain\n level.\n\n Parameters\n ----------\n data : array_like\n Data to be extrapolated\n dist : array_like\n Stations distance\n fd : float\n Decay factor [0-1]\n\n\n Returns\n -------\n Sec_extrap : array_like\n Extrapolated variable\n\n \"\"\"\n from scipy.interpolate import interp1d\n\n new_data1 = []\n for row in data:\n mask = ~np.isnan(row)\n if mask.any():\n y = row[mask]\n if y.size == 1:\n row = np.repeat(y, len(mask))\n else:\n x = dist[mask]\n f_i = interp1d(x, y)\n f_x = _extrap1d(f_i)\n row = f_x(dist)\n new_data1.append(row)\n\n new_data2 = []\n for col in data.T:\n mask = ~np.isnan(col)\n if mask.any():\n y = col[mask]\n if y.size == 1:\n col = np.repeat(y, len(mask))\n else:\n z = depth[mask]\n f_i = interp1d(z, y)\n f_z = _extrap1d(f_i)\n col = f_z(depth)\n new_data2.append(col)\n\n new_data = np.array(new_data1) * w1 + np.array(new_data2).T * w2\n return new_data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef gen_topomask(h, lon, lat, dx=1.0, kind=\"linear\", plot=False):\n\n import gsw\n from scipy.interpolate import interp1d\n\n h, lon, lat = list(map(np.asanyarray, (h, lon, lat)))\n # Distance in km.\n x = np.append(0, np.cumsum(gsw.distance(lon, lat)[0] / 1e3))\n h = -gsw.z_from_p(h, lat.mean())\n Ih = interp1d(x, h, kind=kind, bounds_error=False, fill_value=h[-1])\n xm = np.arange(0, x.max() 
+ dx, dx)\n hm = Ih(xm)\n\n return xm, hm", "response": "Generate a topography mask from an oceanographic transect taking the deepest CTD scan as the depth of each station."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef cell_thermal_mass(temperature, conductivity):\n\n alpha = 0.03 # Thermal anomaly amplitude.\n beta = 1.0 / 7 # Thermal anomaly time constant (1/beta).\n\n sample_interval = 1 / 15.0\n a = 2 * alpha / (sample_interval * beta + 2)\n b = 1 - (2 * a / alpha)\n dCodT = 0.1 * (1 + 0.006 * [temperature - 20])\n dT = np.diff(temperature)\n ctm = -1.0 * b * conductivity + a * (dCodT) * dT # [S/m]\n return ctm", "response": "Calculates the thermal mass of a cell."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncomputes the thickness of water separating the mixed surface layer from the thermocline.", "response": "def barrier_layer_thickness(SA, CT):\n \"\"\"\n Compute the thickness of water separating the mixed surface layer from the\n thermocline. 
A more precise definition would be the difference between\n mixed layer depth (MLD) calculated from temperature minus the mixed layer\n depth calculated using density.\n\n \"\"\"\n import gsw\n\n sigma_theta = gsw.sigma0(SA, CT)\n mask = mixed_layer_depth(CT)\n mld = np.where(mask)[0][-1]\n sig_surface = sigma_theta[0]\n sig_bottom_mld = gsw.sigma0(SA[0], CT[mld])\n d_sig_t = sig_surface - sig_bottom_mld\n d_sig = sigma_theta - sig_bottom_mld\n mask = d_sig < d_sig_t # Barrier layer.\n return Series(mask, index=SA.index, name=\"BLT\")"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef plot_cast(df, secondary_y=False, label=None, *args, **kwargs):\n\n ax = kwargs.pop(\"ax\", None)\n fignums = plt.get_fignums()\n if ax is None and not fignums:\n ax = plt.axes()\n fig = ax.get_figure()\n fig.set_size_inches((5.25, 6.75))\n else:\n ax = plt.gca()\n fig = plt.gcf()\n\n figsize = kwargs.pop(\"figsize\", fig.get_size_inches())\n fig.set_size_inches(figsize)\n\n y_inverted = False\n if not getattr(ax, \"y_inverted\", False):\n setattr(ax, \"y_inverted\", True)\n y_inverted = True\n\n if secondary_y:\n ax = ax.twiny()\n\n xlabel = getattr(df, \"name\", None)\n ylabel = getattr(df.index, \"name\", None)\n\n if isinstance(df, pd.DataFrame):\n labels = label if label else df.columns\n for k, (col, series) in enumerate(df.iteritems()):\n ax.plot(series, series.index, label=labels[k])\n elif isinstance(df, pd.Series):\n label = label if label else str(df.name)\n ax.plot(df.values, df.index, label=label, *args, **kwargs)\n\n ax.set_ylabel(ylabel)\n ax.set_xlabel(xlabel)\n if y_inverted and not secondary_y:\n ax.invert_yaxis()\n return ax", "response": "Plot a CTD variable with the index in the y - axis instead of x - axis."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef split(df):\n idx = df.index.argmax() + 1\n down = df.iloc[:idx]\n # Reverse index to orient it as a CTD 
cast.\n up = df.iloc[idx:][::-1]\n return down, up", "response": "Returns a tuple with down and up - cast."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef lp_filter(df, sample_rate=24.0, time_constant=0.15):\n\n from scipy import signal\n\n # Butter is closer to what SBE is doing with their cosine filter.\n Wn = (1.0 / time_constant) / (sample_rate * 2.0)\n b, a = signal.butter(2, Wn, \"low\")\n new_df = df.copy()\n new_df.index = signal.filtfilt(b, a, df.index.values)\n return new_df", "response": "Filter a series with time constant and for\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef press_check(df):\n new_df = df.copy()\n press = new_df.copy().index.values\n\n ref = press[0]\n inversions = np.diff(np.r_[press, press[-1]]) < 0\n mask = np.zeros_like(inversions)\n for k, p in enumerate(inversions):\n if p:\n ref = press[k]\n cut = press[k + 1 :] < ref\n mask[k + 1 :][cut] = True\n new_df[mask] = np.NaN\n return new_df", "response": "Check that pressure reversals in the index are indeed in the correct order."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef bindata(df, delta=1.0, method=\"average\"):\n start = np.floor(df.index[0])\n stop = np.ceil(df.index[-1])\n new_index = np.arange(start, stop, delta)\n binned = pd.cut(df.index, bins=new_index)\n if method == \"average\":\n new_df = df.groupby(binned).mean()\n new_df.index = new_index[:-1]\n elif method == \"interpolate\":\n raise NotImplementedError(\n \"Bin-average via interpolation method is not Implemented yet.\"\n )\n else:\n raise ValueError(\n f\"Expected method `average` or `interpolate`, but got {method}.\"\n )\n return new_df", "response": "Bin average the index to a given interval."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function 
does\ndef _despike(series, n1, n2, block, keep):\n\n data = series.values.astype(float).copy()\n roll = _rolling_window(data, block)\n roll = ma.masked_invalid(roll)\n std = n1 * roll.std(axis=1)\n mean = roll.mean(axis=1)\n # Use the last value to fill-up.\n std = np.r_[std, np.tile(std[-1], block - 1)]\n mean = np.r_[mean, np.tile(mean[-1], block - 1)]\n mask = np.abs(data - mean.filled(fill_value=np.NaN)) > std.filled(\n fill_value=np.NaN\n )\n data[mask] = np.NaN\n\n # Pass two recompute the mean and std without the flagged values from pass\n # one and removed the flagged data.\n roll = _rolling_window(data, block)\n roll = ma.masked_invalid(roll)\n std = n2 * roll.std(axis=1)\n mean = roll.mean(axis=1)\n # Use the last value to fill-up.\n std = np.r_[std, np.tile(std[-1], block - 1)]\n mean = np.r_[mean, np.tile(mean[-1], block - 1)]\n values = series.values.astype(float)\n mask = np.abs(values - mean.filled(fill_value=np.NaN)) > std.filled(\n fill_value=np.NaN\n )\n\n clean = series.astype(float).copy()\n clean[mask] = np.NaN\n return clean", "response": "Passes with Standard deviation\n n1 and n2 with window size block."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef despike(df, n1=2, n2=20, block=100, keep=0):\n if isinstance(df, pd.Series):\n new_df = _despike(df, n1=n1, n2=n2, block=block, keep=keep)\n else:\n new_df = df.apply(_despike, n1=n1, n2=n2, block=block, keep=keep)\n return new_df", "response": "Wild Edit Seabird - like function. 
Passes with Standard deviation\n n1 and n2 with window size block."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _smooth(series, window_len, window):\n\n windows = {\n \"flat\": np.ones,\n \"hanning\": np.hanning,\n \"hamming\": np.hamming,\n \"bartlett\": np.bartlett,\n \"blackman\": np.blackman,\n }\n data = series.values.copy()\n\n if window_len < 3:\n return pd.Series(data, index=series.index, name=series.name)\n\n if window not in list(windows.keys()):\n raise ValueError(\n \"\"\"window must be one of 'flat', 'hanning',\n 'hamming', 'bartlett', 'blackman'\"\"\"\n )\n\n s = np.r_[\n 2 * data[0] - data[window_len:1:-1],\n data,\n 2 * data[-1] - data[-1:-window_len:-1],\n ]\n\n w = windows[window](window_len)\n\n data = np.convolve(w / w.sum(), s, mode=\"same\")\n data = data[window_len - 1 : -window_len + 1]\n return pd.Series(data, index=series.index, name=series.name)", "response": "Smooth the data using a window with requested size."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef smooth(df, window_len=11, window=\"hanning\"):\n if isinstance(df, pd.Series):\n new_df = _smooth(df, window_len=window_len, window=window)\n else:\n new_df = df.apply(_smooth, window_len=window_len, window=window)\n return new_df", "response": "Smooth the data using a window with requested size."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ninserting a new job into the currently used tube and returns the ID of the new job.", "response": "def put(self,\n body: Body,\n priority: int = DEFAULT_PRIORITY,\n delay: int = DEFAULT_DELAY,\n ttr: int = DEFAULT_TTR) -> int:\n \"\"\"Inserts a job into the currently used tube and returns the job ID.\n\n :param body: The data representing the job.\n :param priority: An integer between 0 and 4,294,967,295 where 0 is the\n most urgent.\n :param delay: The number of seconds to delay the 
job for.\n :param ttr: The maximum number of seconds the job can be reserved for\n before timing out.\n \"\"\"\n if isinstance(body, str):\n if self.encoding is None:\n raise TypeError(\"Unable to encode string with no encoding set\")\n body = body.encode(self.encoding)\n cmd = b'put %d %d %d %d\\r\\n%b' % (priority, delay, ttr, len(body), body)\n return self._int_cmd(cmd, b'INSERTED')"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nchange the currently used tube.", "response": "def use(self, tube: str) -> None:\n \"\"\"Changes the currently used tube.\n\n :param tube: The tube to use.\n \"\"\"\n self._send_cmd(b'use %b' % tube.encode('ascii'), b'USING')"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreserve a job from a tube on the watch list giving this client exclusive access to it. Returns the reserved job.", "response": "def reserve(self, timeout: Optional[int] = None) -> Job:\n \"\"\"Reserves a job from a tube on the watch list, giving this client\n exclusive access to it for the TTR. 
Returns the reserved job.\n\n This blocks until a job is reserved unless a ``timeout`` is given,\n which will raise a :class:`TimedOutError ` if\n a job cannot be reserved within that time.\n\n :param timeout: The maximum number of seconds to wait.\n \"\"\"\n if timeout is None:\n cmd = b'reserve'\n else:\n cmd = b'reserve-with-timeout %d' % timeout\n return self._job_cmd(cmd, b'RESERVED')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete(self, job: JobOrID) -> None:\n self._send_cmd(b'delete %d' % _to_id(job), b'DELETED')", "response": "Deletes a job.\n\n :param job: The job or job ID to delete."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef release(self,\n job: Job,\n priority: int = DEFAULT_PRIORITY,\n delay: int = DEFAULT_DELAY) -> None:\n \"\"\"Releases a reserved job.\n\n :param job: The job to release.\n :param priority: An integer between 0 and 4,294,967,295 where 0 is the\n most urgent.\n :param delay: The number of seconds to delay the job for.\n \"\"\"\n self._send_cmd(b'release %d %d %d' % (job.id, priority, delay), b'RELEASED')", "response": "Releases a reserved job."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nbury a reserved job.", "response": "def bury(self, job: Job, priority: int = DEFAULT_PRIORITY) -> None:\n \"\"\"Buries a reserved job.\n\n :param job: The job to bury.\n :param priority: An integer between 0 and 4,294,967,295 where 0 is the\n most urgent.\n \"\"\"\n self._send_cmd(b'bury %d %d' % (job.id, priority), b'BURIED')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nrefresh the TTR of a reserved job.", "response": "def touch(self, job: Job) -> None:\n \"\"\"Refreshes the TTR of a reserved job.\n\n :param job: The job to touch.\n \"\"\"\n self._send_cmd(b'touch %d' % job.id, b'TOUCHED')"} {"SOURCE": "codesearchnet", "instruction": "How would 
you explain what the following Python 3 function does\ndef watch(self, tube: str) -> int:\n return self._int_cmd(b'watch %b' % tube.encode('ascii'), b'WATCHING')", "response": "Adds a tube to the watch list. Returns the number of tubes this client is watching."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ignore(self, tube: str) -> int:\n return self._int_cmd(b'ignore %b' % tube.encode('ascii'), b'WATCHING')", "response": "Removes a tube from the watch list. Returns the number of tubes this client is watching."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef kick(self, bound: int) -> int:\n return self._int_cmd(b'kick %d' % bound, b'KICKED')", "response": "Moves delayed and buried jobs into the ready queue and returns the number of jobs effected."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmove a delayed or buried job into the ready queue.", "response": "def kick_job(self, job: JobOrID) -> None:\n \"\"\"Moves a delayed or buried job into the ready queue.\n\n :param job: The job or job ID to kick.\n \"\"\"\n self._send_cmd(b'kick-job %d' % _to_id(job), b'KICKED')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef stats_job(self, job: JobOrID) -> Stats:\n return self._stats_cmd(b'stats-job %d' % _to_id(job))", "response": "Returns the job statistics for the given job."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the statistics for the given tube.", "response": "def stats_tube(self, tube: str) -> Stats:\n \"\"\"Returns tube statistics.\n\n :param tube: The tube to return statistics for.\n \"\"\"\n return self._stats_cmd(b'stats-tube %b' % tube.encode('ascii'))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprevent jobs from being reserved from a tube for a period of time.", "response": "def pause_tube(self, 
tube: str, delay: int) -> None:\n \"\"\"Prevents jobs from being reserved from a tube for a period of time.\n\n :param tube: The tube to pause.\n :param delay: The number of seconds to pause the tube for.\n \"\"\"\n self._send_cmd(b'pause-tube %b %d' % (tube.encode('ascii'), delay), b'PAUSED')"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconsumes messages from an AMQP queue using a Python callback.", "response": "def consume(exchange, queue_name, routing_key, callback, app_name):\n \"\"\"Consume messages from an AMQP queue using a Python callback.\"\"\"\n # The configuration validates these are not null and contain all required keys\n # when it is loaded.\n\n bindings = config.conf[\"bindings\"]\n queues = config.conf[\"queues\"]\n\n # The CLI and config.DEFAULTS have different defaults for the queue\n # settings at the moment. We should select a universal default in the\n # future and remove this. Unfortunately that will break backwards compatibility.\n if queues == config.DEFAULTS[\"queues\"]:\n queues[config._default_queue_name][\"durable\"] = True\n queues[config._default_queue_name][\"auto_delete\"] = False\n\n if queue_name:\n queues = {queue_name: config.conf[\"queues\"][config._default_queue_name]}\n for binding in bindings:\n binding[\"queue\"] = queue_name\n\n if exchange:\n for binding in bindings:\n binding[\"exchange\"] = exchange\n\n if routing_key:\n for binding in bindings:\n binding[\"routing_keys\"] = routing_key\n\n callback_path = callback or config.conf[\"callback\"]\n if not callback_path:\n raise click.ClickException(\n \"A Python path to a callable object that accepts the message must be provided\"\n ' with the \"--callback\" command line option or in the configuration file'\n )\n try:\n module, cls = callback_path.strip().split(\":\")\n except ValueError:\n raise click.ClickException(\n \"Unable to parse the callback path ({}); the \"\n 'expected format is \"my_package.module:'\n 
'callable_object\"'.format(callback_path)\n )\n try:\n module = importlib.import_module(module)\n except ImportError as e:\n provider = \"--callback argument\" if callback else \"configuration file\"\n raise click.ClickException(\n \"Failed to import the callback module ({}) provided in the {}\".format(\n str(e), provider\n )\n )\n\n try:\n callback = getattr(module, cls)\n except AttributeError as e:\n raise click.ClickException(\n \"Unable to import {} ({}); is the package installed? The python path should \"\n 'be in the format \"my_package.module:callable_object\"'.format(\n callback_path, str(e)\n )\n )\n\n if app_name:\n config.conf[\"client_properties\"][\"app\"] = app_name\n\n _log.info(\"Starting consumer with %s callback\", callback_path)\n try:\n deferred_consumers = api.twisted_consume(\n callback, bindings=bindings, queues=queues\n )\n deferred_consumers.addCallback(_consume_callback)\n deferred_consumers.addErrback(_consume_errback)\n except ValueError as e:\n click_version = pkg_resources.get_distribution(\"click\").parsed_version\n if click_version < pkg_resources.parse_version(\"7.0\"):\n raise click.exceptions.BadOptionUsage(str(e))\n else:\n raise click.exceptions.BadOptionUsage(\"callback\", str(e))\n\n reactor.run()\n sys.exit(_exit_code)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nhandles any errors that occur during consumer registration.", "response": "def _consume_errback(failure):\n \"\"\"Handle any errors that occur during consumer registration.\"\"\"\n global _exit_code\n if failure.check(exceptions.BadDeclaration):\n _log.error(\n \"Unable to declare the %s object on the AMQP broker. The \"\n \"broker responded with %s. 
Check permissions for your user.\",\n failure.value.obj_type,\n failure.value.reason,\n )\n _exit_code = 10\n elif failure.check(exceptions.PermissionException):\n _exit_code = 15\n _log.error(\n \"The consumer could not proceed because of a permissions problem: %s\",\n str(failure.value),\n )\n elif failure.check(exceptions.ConnectionException):\n _exit_code = 14\n _log.error(failure.value.reason)\n else:\n _exit_code = 11\n _log.exception(\n \"An unexpected error (%r) occurred while registering the \"\n \"consumer, please report this bug.\",\n failure.value,\n )\n try:\n reactor.stop()\n except error.ReactorNotRunning:\n pass"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _consume_callback(consumers):\n for consumer in consumers:\n\n def errback(failure):\n global _exit_code\n if failure.check(exceptions.HaltConsumer):\n _exit_code = failure.value.exit_code\n if _exit_code:\n _log.error(\n \"Consumer halted with non-zero exit code (%d): %s\",\n _exit_code,\n str(failure.value.reason),\n )\n elif failure.check(exceptions.ConsumerCanceled):\n _exit_code = 12\n _log.error(\n \"The consumer was canceled server-side, check with system administrators.\"\n )\n elif failure.check(exceptions.PermissionException):\n _exit_code = 15\n _log.error(\n \"The consumer could not proceed because of a permissions problem: %s\",\n str(failure.value),\n )\n else:\n _exit_code = 13\n _log.error(\n \"Unexpected error occurred in consumer %r: %r\", consumer, failure\n )\n try:\n reactor.stop()\n except error.ReactorNotRunning:\n pass\n\n def callback(consumer):\n _log.info(\"The %r consumer halted.\", consumer)\n if all([c.result.called for c in consumers]):\n _log.info(\"All consumers have stopped; shutting down.\")\n try:\n # Last consumer out shuts off the lights\n reactor.stop()\n except error.ReactorNotRunning:\n pass\n\n consumer.result.addCallbacks(callback, errback)", "response": "Callback when consumers are 
successfully registered."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _add_timeout(deferred, timeout):\n try:\n deferred.addTimeout(timeout, reactor)\n except AttributeError:\n # Twisted 12.2 (in EL7) does not have the addTimeout API, so make do with\n # the slightly more annoying approach of scheduling a call to cancel which\n # is then canceled if the deferred succeeds before the timeout is up.\n delayed_cancel = reactor.callLater(timeout, deferred.cancel)\n\n def cancel_cancel_call(result):\n \"\"\"Halt the delayed call to cancel if the deferred fires before the timeout.\"\"\"\n if not delayed_cancel.called:\n delayed_cancel.cancel()\n return result\n\n deferred.addBoth(cancel_cancel_call)", "response": "Add a timeout to the given deferred."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nallocates a new AMQP channel.", "response": "def _allocate_channel(self):\n \"\"\"\n Allocate a new AMQP channel.\n\n Raises:\n NoFreeChannels: If this connection has reached its maximum number of channels.\n \"\"\"\n try:\n channel = yield self.channel()\n except pika.exceptions.NoFreeChannels:\n raise NoFreeChannels()\n _std_log.debug(\"Created AMQP channel id %d\", channel.channel_number)\n if self._confirms:\n yield channel.confirm_delivery()\n defer.returnValue(channel)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\npublishing a message to an AMQP exchange.", "response": "def publish(self, message, exchange):\n \"\"\"\n Publish a :class:`fedora_messaging.message.Message` to an `exchange`_\n on the message broker.\n\n Args:\n message (message.Message): The message to publish.\n exchange (str): The name of the AMQP exchange to publish to\n\n Raises:\n NoFreeChannels: If there are no available channels on this connection.\n If this occurs, you can either reduce the number of consumers on this\n connection or create an additional connection.\n 
PublishReturned: If the broker rejected the message. This can happen if\n there are resource limits that have been reached (full disk, for example)\n or if the message will be routed to 0 queues and the exchange is set to\n reject such messages.\n\n .. _exchange: https://www.rabbitmq.com/tutorials/amqp-concepts.html#exchanges\n \"\"\"\n message.validate()\n try:\n yield self._channel.basic_publish(\n exchange=exchange,\n routing_key=message._encoded_routing_key,\n body=message._encoded_body,\n properties=message._properties,\n )\n except (pika.exceptions.NackError, pika.exceptions.UnroutableError) as e:\n _std_log.error(\"Message was rejected by the broker (%s)\", str(e))\n raise PublishReturned(reason=e)\n except pika.exceptions.ChannelClosed:\n self._channel = yield self._allocate_channel()\n yield self.publish(message, exchange)\n except pika.exceptions.ConnectionClosed as e:\n raise ConnectionException(reason=e)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nregistering a message consumer that executes the provided callback when a message is received.", "response": "def consume(self, callback, queue, previous_consumer=None):\n \"\"\"\n Register a message consumer that executes the provided callback when\n messages are received.\n\n The queue must exist prior to calling this method. If a consumer\n already exists for the given queue, the callback is simply updated and\n any new messages for that consumer use the new callback.\n\n Args:\n callback (callable): The callback to invoke when a message is received.\n queue (str): The name of the queue to consume from.\n previous_consumer (ConsumerV2): If this is the resumption of a prior\n consumer, you can provide the previous consumer so its result\n deferred can be re-used.\n\n Returns:\n Deferred: A Deferred that fires when the consumer is successfully\n registered with the message broker. 
The callback receives a\n :class:`.ConsumerV2` object that represents the AMQP consumer.\n The Deferred may error back with a :class:`PermissionException`\n if the user cannot read from the queue, a\n :class:`NoFreeChannels` if this connection has hit its channel\n limit, or a :class:`ConnectionException` if the connection dies\n before the consumer is successfully registered.\n\n NoFreeChannels: If there are no available channels on this connection.\n If this occurs, you can either reduce the number of consumers on this\n connection or create an additional connection.\n \"\"\"\n if queue in self._consumers:\n self._consumers[queue].callback = callback\n defer.returnValue(self._consumers[queue])\n\n if previous_consumer is not None:\n consumer = previous_consumer\n else:\n consumer = ConsumerV2(queue=queue, callback=callback)\n consumer._protocol = self\n consumer._channel = yield self._allocate_channel()\n try:\n queue_object, _ = yield consumer._channel.basic_consume(\n queue=consumer.queue, consumer_tag=consumer._tag\n )\n except pika.exceptions.ChannelClosed as exc:\n if exc.args[0] == 403:\n raise PermissionException(\n obj_type=\"queue\", description=queue, reason=exc.args[1]\n )\n else:\n raise ConnectionException(reason=exc)\n\n def on_cancel_callback(frame):\n \"\"\"\n Called when the consumer is canceled server-side.\n\n This can happen, for example, when the queue is deleted.\n To handle this, we do the necessary book-keeping to remove the consumer\n and then fire the errback on the consumer so the caller of\n :func:`fedora_messaging.api.consume` can decide what to do.\n\n Args:\n frame (pika.frame.Method): The cancel method from the server,\n unused here because we already know what consumer is being\n canceled.\n \"\"\"\n _std_log.error(\"%r was canceled by the AMQP broker!\", consumer)\n\n # If client and server are racing to cancel it might already be gone which\n # is why both are marked as no cover.\n try:\n del self._consumers[consumer.queue]\n 
except KeyError: # pragma: no cover\n pass\n try:\n del self.factory._consumers[consumer.queue]\n except KeyError: # pragma: no cover\n pass\n consumer._running = False\n consumer.result.errback(fail=ConsumerCanceled())\n\n try:\n consumer._channel.add_on_cancel_callback(on_cancel_callback)\n except AttributeError:\n pass # pika 1.0.0+\n\n def read_loop_errback(failure):\n \"\"\"\n Handle errors coming out of the read loop.\n\n There are two basic categories of errors: ones where the ``consumer.result``\n Deferred needs to be fired because the error is not recoverable, ones\n where we can recover from by letting the connection restart, and ones\n which are fatal for this consumer only (the queue was deleted by an\n administrator).\n\n Args:\n failure (twisted.python.failure.Failure): The exception raised by\n the read loop encapsulated in a Failure.\n \"\"\"\n exc = failure.value\n if failure.check(pika.exceptions.ConsumerCancelled):\n # Pika 1.0.0+ raises this exception. To support previous versions\n # we register a callback (called below) ourselves with the channel.\n on_cancel_callback(None)\n elif failure.check(pika.exceptions.ChannelClosed):\n if exc.args[0] == 403:\n # This is a mis-configuration, the consumer can register itself,\n # but it doesn't have permissions to read from the queue,\n # so no amount of restarting will help.\n e = PermissionException(\n obj_type=\"queue\",\n description=queue,\n reason=failure.value.args[1],\n )\n consumer.result.errback(Failure(e, PermissionException))\n consumer.cancel()\n else:\n _std_log.exception(\n \"Consumer halted (%r) unexpectedly; \"\n \"the connection should restart.\",\n failure,\n )\n elif failure.check(error.ConnectionDone, error.ConnectionLost):\n _std_log.warning(\n \"The connection to the broker was lost (%r), consumer halted; \"\n \"the connection should restart and consuming will resume.\",\n exc,\n )\n elif failure.check(pika.exceptions.AMQPError):\n _std_log.exception(\n \"An unexpected AMQP error 
occurred; the connection should \"\n \"restart, but please report this as a bug.\"\n )\n else:\n consumer.result.errback(failure)\n consumer.cancel()\n\n consumer._read_loop = self._read(queue_object, consumer)\n consumer._read_loop.addErrback(read_loop_errback)\n\n self._consumers[queue] = consumer\n _std_log.info(\"Successfully registered AMQP consumer %r\", consumer)\n defer.returnValue(consumer)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef declare_exchanges(self, exchanges):\n channel = yield self._allocate_channel()\n try:\n for exchange in exchanges:\n args = exchange.copy()\n args.setdefault(\"passive\", config.conf[\"passive_declares\"])\n try:\n yield channel.exchange_declare(**args)\n except pika.exceptions.ChannelClosed as e:\n raise BadDeclaration(\"exchange\", args, e)\n finally:\n try:\n channel.close()\n except pika.exceptions.AMQPError:\n pass", "response": "A method that declares a number of exchanges at once."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndeclaring a list of queues.", "response": "def declare_queues(self, queues):\n \"\"\"\n Declare a list of queues.\n\n Args:\n queues (list of dict): A list of dictionaries, where each dictionary\n represents a queue. Each dictionary can have the following keys:\n\n * queue (str): The name of the queue\n * passive (bool): If true, this will just assert that the queue exists,\n but won't create it if it doesn't. 
Defaults to the configuration value\n :ref:`conf-passive-declares`\n * durable (bool): Whether or not the queue is durable\n * exclusive (bool): Whether or not the queue is exclusive to this connection.\n * auto_delete (bool): Whether or not the queue should be automatically\n deleted once this connection ends.\n * arguments (dict): Additional arguments for the creation of the queue.\n Raises:\n NoFreeChannels: If there are no available channels on this connection.\n If this occurs, you can either reduce the number of consumers on this\n connection or create an additional connection.\n BadDeclaration: If a queue could not be declared. This can occur\n if the queue already exists, but its type does not match\n (e.g. it is declared as a durable queue, but exists as a non-durable\n queue). It can also occur if it does not exist, but the current\n user does not have permissions to create the object.\n \"\"\"\n channel = yield self._allocate_channel()\n try:\n for queue in queues:\n args = queue.copy()\n args.setdefault(\"passive\", config.conf[\"passive_declares\"])\n try:\n yield channel.queue_declare(**args)\n except pika.exceptions.ChannelClosed as e:\n raise BadDeclaration(\"queue\", args, e)\n finally:\n try:\n channel.close()\n except pika.exceptions.AMQPError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndeclare a set of bindings between queues and exchanges.", "response": "def bind_queues(self, bindings):\n \"\"\"\n Declare a set of bindings between queues and exchanges.\n\n Args:\n bindings (list of dict): A list of binding definitions. Each dictionary\n must contain the \"queue\" key whose value is the name of the queue\n to create the binding on, as well as the \"exchange\" key whose value\n should be the name of the exchange to bind to. 
Additional acceptable\n keys are any keyword arguments accepted by\n :meth:`pika.channel.Channel.queue_bind`.\n\n Raises:\n NoFreeChannels: If there are no available channels on this connection.\n If this occurs, you can either reduce the number of consumers on this\n connection or create an additional connection.\n BadDeclaration: If a binding could not be declared. This can occur if the\n queue or exchange don't exist, or if they do, but the current user does\n not have permissions to create bindings.\n \"\"\"\n channel = yield self._allocate_channel()\n try:\n for binding in bindings:\n try:\n yield channel.queue_bind(**binding)\n except pika.exceptions.ChannelClosed as e:\n raise BadDeclaration(\"binding\", binding, e)\n finally:\n try:\n channel.close()\n except pika.exceptions.AMQPError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsignaling to consumers they should stop after finishing any messages currently being processed then close the connection.", "response": "def halt(self):\n \"\"\"\n Signal to consumers they should stop after finishing any messages\n currently being processed, then close the connection.\n\n Returns:\n defer.Deferred: fired when all consumers have successfully stopped\n and the connection is closed.\n \"\"\"\n if self.is_closed:\n # We were asked to stop because the connection is already gone.\n # There's no graceful way to stop because we can't acknowledge\n # messages in the middle of being processed.\n _std_log.info(\"Disconnect requested, but AMQP connection already gone\")\n self._channel = None\n return\n\n _std_log.info(\n \"Waiting for %d consumer(s) to finish processing before halting\",\n len(self._consumers),\n )\n pending_cancels = []\n for c in list(self._consumers.values()):\n pending_cancels.append(c.cancel())\n yield defer.gatherResults(pending_cancels)\n _std_log.info(\"Finished canceling %d consumers\", len(self._consumers))\n\n try:\n yield self.close()\n except 
pika.exceptions.ConnectionWrongStateError:\n pass # Already closing, not a problem since that's what we want.\n self._consumers = {}\n self._channel = None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _on_message(self, delivery_frame, properties, body, consumer):\n _legacy_twisted_log.msg(\n \"Message arrived with delivery tag {tag} for {consumer}\",\n tag=delivery_frame.delivery_tag,\n consumer=consumer,\n logLevel=logging.DEBUG,\n )\n try:\n message = get_message(delivery_frame.routing_key, properties, body)\n message.queue = consumer.queue\n except ValidationError:\n _legacy_twisted_log.msg(\n \"Message id {msgid} did not pass validation; ignoring message\",\n msgid=properties.message_id,\n logLevel=logging.WARNING,\n )\n yield consumer.channel.basic_nack(\n delivery_tag=delivery_frame.delivery_tag, requeue=False\n )\n return\n\n try:\n _legacy_twisted_log.msg(\n \"Consuming message from topic {topic!r} (id {msgid})\",\n topic=message.topic,\n msgid=properties.message_id,\n )\n yield defer.maybeDeferred(consumer.callback, message)\n except Nack:\n _legacy_twisted_log.msg(\n \"Returning message id {msgid} to the queue\",\n msgid=properties.message_id,\n logLevel=logging.WARNING,\n )\n yield consumer.channel.basic_nack(\n delivery_tag=delivery_frame.delivery_tag, requeue=True\n )\n except Drop:\n _legacy_twisted_log.msg(\n \"Consumer requested message id {msgid} be dropped\",\n msgid=properties.message_id,\n logLevel=logging.WARNING,\n )\n yield consumer.channel.basic_nack(\n delivery_tag=delivery_frame.delivery_tag, requeue=False\n )\n except HaltConsumer:\n _legacy_twisted_log.msg(\n \"Consumer indicated it wishes consumption to halt, shutting down\"\n )\n yield consumer.channel.basic_ack(delivery_tag=delivery_frame.delivery_tag)\n yield self.cancel(consumer.queue)\n except Exception:\n _legacy_twisted_log.msg(\n \"Received unexpected exception from consumer {c}\",\n c=consumer,\n 
logLevel=logging.ERROR,\n )\n yield consumer.channel.basic_nack(\n delivery_tag=0, multiple=True, requeue=True\n )\n yield self.cancel(consumer.queue)\n else:\n yield consumer.channel.basic_ack(delivery_tag=delivery_frame.delivery_tag)", "response": "A message arrived from the server."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nregisters a message consumer that executes the provided callback when a message is received.", "response": "def consume(self, callback, queue):\n \"\"\"\n Register a message consumer that executes the provided callback when\n messages are received.\n\n The queue must exist prior to calling this method. If a consumer\n already exists for the given queue, the callback is simply updated and\n any new messages for that consumer use the new callback.\n\n If :meth:`resumeProducing` has not been called when this method is called,\n it will be called for you.\n\n Args:\n callback (callable): The callback to invoke when a message is received.\n queue (str): The name of the queue to consume from.\n\n Returns:\n fedora_messaging.twisted.protocol.Consumer: A namedtuple that\n identifies this consumer.\n\n NoFreeChannels: If there are no available channels on this connection.\n If this occurs, you can either reduce the number of consumers on this\n connection or create an additional connection.\n \"\"\"\n if queue in self._consumers and self._consumers[queue].channel.is_open:\n consumer = Consumer(\n tag=self._consumers[queue].tag,\n queue=queue,\n callback=callback,\n channel=self._consumers[queue].channel,\n )\n self._consumers[queue] = consumer\n defer.returnValue(consumer)\n\n channel = yield self._allocate_channel()\n consumer = Consumer(\n tag=str(uuid.uuid4()), queue=queue, callback=callback, channel=channel\n )\n\n self._consumers[queue] = consumer\n if not self._running:\n yield self.resumeProducing()\n defer.returnValue(consumer)\n\n queue_object, _ = yield consumer.channel.basic_consume(\n 
queue=consumer.queue, consumer_tag=consumer.tag\n )\n deferred = self._read(queue_object, consumer)\n deferred.addErrback(\n lambda f: _legacy_twisted_log.msg,\n \"_read failed on consumer {c}\",\n c=consumer,\n logLevel=logging.ERROR,\n )\n _legacy_twisted_log.msg(\"Successfully registered AMQP consumer {c}\", c=consumer)\n defer.returnValue(consumer)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef cancel(self, queue):\n try:\n consumer = self._consumers[queue]\n yield consumer.channel.basic_cancel(consumer_tag=consumer.tag)\n except pika.exceptions.AMQPChannelError:\n # Consumers are tied to channels, so if this channel is dead the\n # consumer should already be canceled (and we can't get to it anyway)\n pass\n except KeyError:\n defer.returnValue(None)\n\n try:\n yield consumer.channel.close()\n except pika.exceptions.AMQPChannelError:\n pass\n\n del self._consumers[queue]", "response": "Cancel the consumer for a queue."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nstarts or resumes the retrieval of messages from the server queue.", "response": "def resumeProducing(self):\n \"\"\"\n Starts or resumes the retrieval of messages from the server queue.\n\n This method starts receiving messages from the server, they will be\n passed to the consumer callback.\n\n .. 
note:: This is called automatically when :meth:`.consume` is called,\n so users should not need to call this unless :meth:`.pauseProducing`\n has been called.\n\n Returns:\n defer.Deferred: fired when the production is ready to start\n \"\"\"\n # Start consuming\n self._running = True\n for consumer in self._consumers.values():\n queue_object, _ = yield consumer.channel.basic_consume(\n queue=consumer.queue, consumer_tag=consumer.tag\n )\n deferred = self._read(queue_object, consumer)\n deferred.addErrback(\n lambda f: _legacy_twisted_log.msg,\n \"_read failed on consumer {c}\",\n c=consumer,\n logLevel=logging.ERROR,\n )\n _legacy_twisted_log.msg(\"AMQP connection successfully established\")"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\npause the consumer and cancel all consumers that are currently being processed.", "response": "def pauseProducing(self):\n \"\"\"\n Pause the reception of messages by canceling all existing consumers.\n This does not disconnect from the server.\n\n Message reception can be resumed with :meth:`resumeProducing`.\n\n Returns:\n Deferred: fired when the production is paused.\n \"\"\"\n if not self._running:\n return\n # Exit the read loop and cancel the consumer on the server.\n self._running = False\n for consumer in self._consumers.values():\n yield consumer.channel.basic_cancel(consumer_tag=consumer.tag)\n _legacy_twisted_log.msg(\"Paused retrieval of messages for the server queue\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstops producing messages and disconnect from the server.", "response": "def stopProducing(self):\n \"\"\"\n Stop producing messages and disconnect from the server.\n Returns:\n Deferred: fired when the production is stopped.\n \"\"\"\n _legacy_twisted_log.msg(\"Disconnecting from the AMQP broker\")\n yield self.pauseProducing()\n yield self.close()\n self._consumers = {}\n self._channel = None"} {"SOURCE": "codesearchnet", "instruction": 
"Here you have a function in Python 3, explain what it does\ndef get_class(schema_name):\n global _registry_loaded\n if not _registry_loaded:\n load_message_classes()\n\n try:\n return _schema_name_to_class[schema_name]\n except KeyError:\n _log.warning(\n 'The schema \"%s\" is not in the schema registry! Either install '\n \"the package with its schema definition or define a schema. \"\n \"Falling back to the default schema...\",\n schema_name,\n )\n return Message", "response": "Returns the sub - class of the specified schema name."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nretrieve the schema name associated with a message class.", "response": "def get_name(cls):\n \"\"\"\n Retrieve the schema name associated with a message class.\n\n Returns:\n str: The schema name.\n\n Raises:\n TypeError: If the message class isn't registered. Check your entry point\n for correctness.\n \"\"\"\n global _registry_loaded\n if not _registry_loaded:\n load_message_classes()\n\n try:\n return _class_to_schema_name[cls]\n except KeyError:\n raise TypeError(\n \"The class {} is not in the message registry, which indicates it is\"\n ' not in the current list of entry points for \"fedora_messaging\".'\n \" Please check that the class has been added to your package's\"\n \" entry points.\".format(repr(cls))\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads the Fedora. 
messages entry points and register the message classes.", "response": "def load_message_classes():\n \"\"\"Load the 'fedora.messages' entry points and register the message classes.\"\"\"\n for message in pkg_resources.iter_entry_points(\"fedora.messages\"):\n cls = message.load()\n _log.info(\n \"Registering the '%s' key as the '%r' class in the Message \"\n \"class registry\",\n message.name,\n cls,\n )\n _schema_name_to_class[message.name] = cls\n _class_to_schema_name[cls] = message.name\n global _registry_loaded\n _registry_loaded = True"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconstructs a new Message instance from the properties and body.", "response": "def get_message(routing_key, properties, body):\n \"\"\"\n Construct a Message instance given the routing key, the properties and the\n body received from the AMQP broker.\n\n Args:\n routing_key (str): The AMQP routing key (will become the message topic)\n properties (pika.BasicProperties): the AMQP properties\n body (bytes): The encoded message body\n\n Raises:\n ValidationError: If Message validation failed or message body\n decoding/loading is impossible.\n \"\"\"\n if properties.headers is None:\n _log.error(\n \"Message (body=%r) arrived without headers. \" \"A publisher is misbehaving!\",\n body,\n )\n properties.headers = {}\n\n try:\n MessageClass = get_class(properties.headers[\"fedora_messaging_schema\"])\n except KeyError:\n _log.error(\n \"Message (headers=%r, body=%r) arrived without a schema header.\"\n \" A publisher is misbehaving!\",\n properties.headers,\n body,\n )\n MessageClass = Message\n\n try:\n severity = properties.headers[\"fedora_messaging_severity\"]\n except KeyError:\n _log.error(\n \"Message (headers=%r, body=%r) arrived without a severity.\"\n \" A publisher is misbehaving! 
Defaulting to INFO.\",\n properties.headers,\n body,\n )\n severity = INFO\n\n if properties.content_encoding is None:\n _log.error(\"Message arrived without a content encoding\")\n properties.content_encoding = \"utf-8\"\n try:\n body = body.decode(properties.content_encoding)\n except UnicodeDecodeError as e:\n _log.error(\n \"Unable to decode message body %r with %s content encoding\",\n body,\n properties.content_encoding,\n )\n raise ValidationError(e)\n\n try:\n body = json.loads(body)\n except ValueError as e:\n _log.error(\"Failed to load message body %r, %r\", body, e)\n raise ValidationError(e)\n\n message = MessageClass(\n body=body, topic=routing_key, properties=properties, severity=severity\n )\n try:\n message.validate()\n _log.debug(\"Successfully validated message %r\", message)\n except jsonschema.exceptions.ValidationError as e:\n _log.error(\"Message validation of %r failed: %r\", message, e)\n raise ValidationError(e)\n return message"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nserializing a list of messages to a JSON formatted str.", "response": "def dumps(messages):\n \"\"\"\n Serialize messages to a JSON-formatted str\n\n Args:\n messages (list): The list of messages to serialize.
Each message in\n the list must be an instance of Message or a subclass.\n\n Returns:\n str: Serialized messages.\n\n Raises:\n TypeError: If at least one message is not an instance of the Message class or a subclass.\n \"\"\"\n serialized_messages = []\n try:\n for message in messages:\n message_dict = message._dump()\n serialized_messages.append(message_dict)\n except AttributeError:\n _log.error(\"Improper object for messages serialization.\")\n raise TypeError(\"Messages must be instances of the Message class or a subclass.\")\n\n return json.dumps(serialized_messages, sort_keys=True)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _filter_headers(self):\n headers = {}\n for user in self.usernames:\n headers[\"fedora_messaging_user_{}\".format(user)] = True\n for package in self.packages:\n headers[\"fedora_messaging_rpm_{}\".format(package)] = True\n for container in self.containers:\n headers[\"fedora_messaging_container_{}\".format(container)] = True\n for module in self.modules:\n headers[\"fedora_messaging_module_{}\".format(module)] = True\n for flatpak in self.flatpaks:\n headers[\"fedora_messaging_flatpak_{}\".format(flatpak)] = True\n return headers", "response": "Add headers designed for filtering messages based on objects."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nvalidates the message headers and body with the user-provided schema.", "response": "def validate(self):\n \"\"\"\n Validate the headers and body with the message schema, if any.\n\n In addition to the user-provided schema, all messages are checked against\n the base schema which requires certain message headers and that the body\n be a JSON object.\n\n ..
warning:: This method should not be overridden by sub-classes.\n\n Raises:\n jsonschema.ValidationError: If either the message headers or the message body\n are invalid.\n jsonschema.SchemaError: If either the message header schema or the message body\n schema are invalid.\n \"\"\"\n for schema in (self.headers_schema, Message.headers_schema):\n _log.debug(\n 'Validating message headers \"%r\" with schema \"%r\"',\n self._headers,\n schema,\n )\n jsonschema.validate(self._headers, schema)\n for schema in (self.body_schema, Message.body_schema):\n _log.debug(\n 'Validating message body \"%r\" with schema \"%r\"', self.body, schema\n )\n jsonschema.validate(self.body, schema)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndumping message attributes. Returns: dict: A dictionary of message attributes.", "response": "def _dump(self):\n \"\"\"\n Dump message attributes.\n\n Returns:\n dict: A dictionary of message attributes.\n \"\"\"\n return {\n \"topic\": self.topic,\n \"headers\": self._headers,\n \"id\": self.id,\n \"body\": self.body,\n \"queue\": self.queue,\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _check_callback(callback):\n # If the callback is a class, create an instance of it first\n if inspect.isclass(callback):\n callback_object = callback()\n if not callable(callback_object):\n raise ValueError(\n \"Callback must be a class that implements __call__ or a function.\"\n )\n elif callable(callback):\n callback_object = callback\n else:\n raise ValueError(\n \"Callback must be a class that implements __call__ or a function.\"\n )\n\n return callback_object", "response": "Checks if a callback is potentially a class and creates an instance of it."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef twisted_consume(callback, bindings=None, queues=None):\n if isinstance(bindings, dict):\n bindings = 
[bindings]\n callback = _check_callback(callback)\n\n global _twisted_service\n if _twisted_service is None:\n _twisted_service = service.FedoraMessagingServiceV2(config.conf[\"amqp_url\"])\n reactor.callWhenRunning(_twisted_service.startService)\n # Twisted is killing the underlying connection before stopService gets\n # called, so we need to add it as a pre-shutdown event to gracefully\n # finish up messages in progress.\n reactor.addSystemEventTrigger(\n \"before\", \"shutdown\", _twisted_service.stopService\n )\n\n return _twisted_service._service.factory.consume(callback, bindings, queues)", "response": "Start a consumer using the provided callback and run it using the Twisted-managed thread pool."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstarting a message consumer that executes the provided callback when messages are received.", "response": "def consume(callback, bindings=None, queues=None):\n \"\"\"\n Start a message consumer that executes the provided callback when messages are\n received.\n\n This API is blocking and will not return until the process receives a signal\n from the operating system.\n\n .. warning:: This API runs the callback in the IO loop thread. This means\n if your callback could run for a length of time near the heartbeat interval,\n which is likely on the order of 60 seconds, the broker will kill the TCP\n connection and the message will be re-delivered on start-up.\n\n For now, use the :func:`twisted_consume` API, which runs the\n callback in a thread and continues to handle AMQP events while the\n callback runs, if you have a long-running callback.\n\n The callback receives a single positional argument, the message:\n\n >>> from fedora_messaging import api\n >>> def my_callback(message):\n ... print(message)\n >>> bindings = [{'exchange': 'amq.topic', 'queue': 'demo', 'routing_keys': ['#']}]\n >>> queues = {\n ...
\"demo\": {\"durable\": False, \"auto_delete\": True, \"exclusive\": True, \"arguments\": {}}\n ... }\n >>> api.consume(my_callback, bindings=bindings, queues=queues)\n\n If the bindings and queue arguments are not provided, they will be loaded from\n the configuration.\n\n For complete documentation on writing consumers, see the :ref:`consumers`\n documentation.\n\n Args:\n callback (callable): A callable object that accepts one positional argument,\n a :class:`Message` or a class object that implements the ``__call__``\n method. The class will be instantiated before use.\n bindings (dict or list of dict): Bindings to declare before consuming. This\n should be the same format as the :ref:`conf-bindings` configuration.\n queues (dict): The queue or queues to declare and consume from. This should be\n in the same format as the :ref:`conf-queues` configuration dictionary where\n each key is a queue name and each value is a dictionary of settings for that\n queue.\n\n Raises:\n fedora_messaging.exceptions.HaltConsumer: If the consumer requests that\n it be stopped.\n ValueError: If the provided callback is not a class that\n implements __call__ and is not a function, if the bindings argument\n is not a dict or list of dicts with the proper keys, or if the queues\n argument isn't a dict with the proper keys.\n \"\"\"\n if isinstance(bindings, dict):\n bindings = [bindings]\n\n if bindings is None:\n bindings = config.conf[\"bindings\"]\n else:\n try:\n config.validate_bindings(bindings)\n except exceptions.ConfigurationException as e:\n raise ValueError(e.message)\n\n if queues is None:\n queues = config.conf[\"queues\"]\n else:\n try:\n config.validate_queues(queues)\n except exceptions.ConfigurationException as e:\n raise ValueError(e.message)\n\n session = _session.ConsumerSession()\n session.consume(callback, bindings=bindings, queues=queues)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3
function\ndef publish(message, exchange=None):\n pre_publish_signal.send(publish, message=message)\n\n if exchange is None:\n exchange = config.conf[\"publish_exchange\"]\n\n global _session_cache\n if not hasattr(_session_cache, \"session\"):\n _session_cache.session = _session.PublisherSession()\n\n try:\n _session_cache.session.publish(message, exchange=exchange)\n publish_signal.send(publish, message=message)\n except exceptions.PublishException as e:\n publish_failed_signal.send(publish, message=message, reason=e)\n raise", "response": "Publish a message to an exchange."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate the Protocol instance.", "response": "def buildProtocol(self, addr):\n \"\"\"Create the Protocol instance.\n\n See the documentation of\n `twisted.internet.protocol.ReconnectingClientFactory` for details.\n \"\"\"\n self.resetDelay()\n self.client = self.protocol(self._parameters)\n self.client.factory = self\n self.client.ready.addCallback(lambda _: self._on_client_ready())\n return self.client"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _on_client_ready(self):\n _legacy_twisted_log.msg(\"Successfully connected to the AMQP broker.\")\n yield self.client.resumeProducing()\n\n yield self.client.declare_exchanges(self.exchanges)\n yield self.client.declare_queues(self.queues)\n yield self.client.bind_queues(self.bindings)\n for queue, callback in self.consumers.items():\n yield self.client.consume(callback, queue)\n\n _legacy_twisted_log.msg(\"Successfully declared all AMQP objects.\")\n self._client_ready.callback(None)", "response": "Called when the client is ready to send and receive messages."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef clientConnectionLost(self, connector, reason):\n if not isinstance(reason.value, error.ConnectionDone):\n _legacy_twisted_log.msg(\n 
\"Lost connection to the AMQP broker ({reason})\",\n reason=reason.value,\n logLevel=logging.WARNING,\n )\n if self._client_ready.called:\n # Renew the ready deferred, it will callback when the\n # next connection is ready.\n self._client_ready = defer.Deferred()\n protocol.ReconnectingClientFactory.clientConnectionLost(self, connector, reason)", "response": "Called when the connection to the broker has been lost."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef clientConnectionFailed(self, connector, reason):\n _legacy_twisted_log.msg(\n \"Connection to the AMQP broker failed ({reason})\",\n reason=reason.value,\n logLevel=logging.WARNING,\n )\n protocol.ReconnectingClientFactory.clientConnectionFailed(\n self, connector, reason\n )", "response": "Called when the client has failed to connect to the broker."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef stopTrying(self):\n protocol.ReconnectingClientFactory.stopTrying(self)\n if not self._client_ready.called:\n self._client_ready.errback(\n pika.exceptions.AMQPConnectionError(\n u\"Could not connect, reconnection cancelled.\"\n )\n )", "response": "Stop trying to reconnect to the broker."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef stopFactory(self):\n if self.client:\n yield self.client.stopProducing()\n protocol.ReconnectingClientFactory.stopFactory(self)", "response": "Stop the factory.\n\n See the documentation of\n `twisted.internet.protocol.ReconnectingClientFactory` for details."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef consume(self, callback, queue):\n self.consumers[queue] = callback\n if self._client_ready.called:\n return self.client.consume(callback, queue)", "response": "Register a new consumer."} {"SOURCE": "codesearchnet", "instruction": "How would you 
implement a function in Python 3 that\ncancels the consumer for a queue.", "response": "def cancel(self, queue):\n \"\"\"\n Cancel the consumer for a queue.\n\n This removes the consumer from the list of consumers to be configured for\n every connection.\n\n Args:\n queue (str): The name of the queue the consumer is subscribed to.\n Returns:\n defer.Deferred or None: Either a Deferred that fires when the consumer\n is canceled, or None if the consumer was already canceled. Wrap\n the call in :func:`defer.maybeDeferred` to always receive a Deferred.\n \"\"\"\n try:\n del self.consumers[queue]\n except KeyError:\n pass\n if self.client:\n return self.client.cancel(queue)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef publish(self, message, exchange=None):\n exchange = exchange or config.conf[\"publish_exchange\"]\n while True:\n client = yield self.whenConnected()\n try:\n yield client.publish(message, exchange)\n break\n except ConnectionException:\n continue", "response": "Publish a message to an AMQP exchange."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates the Protocol instance.", "response": "def buildProtocol(self, addr):\n \"\"\"Create the Protocol instance.\n\n See the documentation of\n `twisted.internet.protocol.ReconnectingClientFactory` for details.\n \"\"\"\n self._client = self.protocol(self._parameters, confirms=self.confirms)\n self._client.factory = self\n\n @defer.inlineCallbacks\n def on_ready(unused_param=None):\n \"\"\"Reset the connection delay when the AMQP handshake is complete.\"\"\"\n _std_log.debug(\"AMQP handshake completed; connection ready for use\")\n self.resetDelay()\n self._client_deferred.callback(self._client)\n self._client_deferred = defer.Deferred()\n\n # Restart any consumer from previous connections that wasn't canceled\n # including queues and bindings, as the queue might not have been durable\n for 
consumer, queue, bindings in self._consumers.values():\n _std_log.info(\"Re-registering the %r consumer\", consumer)\n yield self._client.declare_queues([queue])\n yield self._client.bind_queues(bindings)\n yield self._client.consume(consumer.callback, consumer.queue, consumer)\n\n def on_ready_errback(failure):\n \"\"\"If opening the connection fails or is lost, this errback is called.\"\"\"\n if failure.check(pika.exceptions.AMQPConnectionError, error.ConnectionDone):\n # In this case the connection failed to open. This will be called\n # if the TLS handshake goes wrong (likely) and it may be called if\n # the AMQP handshake fails. It's *probably* a problem with the\n # credentials.\n msg = (\n \"The TCP connection appears to have started, but the TLS or AMQP handshake \"\n \"with the broker failed; check your connection and authentication \"\n \"parameters and ensure your user has permission to access the vhost\"\n )\n wrapped_failure = Failure(\n exc_value=ConnectionException(reason=msg, original=failure),\n exc_type=ConnectionException,\n )\n self._client_deferred.errback(wrapped_failure)\n else:\n # Some unexpected exception happened\n _std_log.exception(\n \"The connection failed with an unexpected exception; please report this bug.\"\n )\n\n self._client.ready.addCallback(on_ready)\n self._client.ready.addErrback(on_ready_errback)\n return self._client"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef stopFactory(self):\n if self._client:\n yield self._client.halt()\n protocol.ReconnectingClientFactory.stopFactory(self)", "response": "Stop the factory.\n\n See the documentation of\n `twisted.internet.protocol.ReconnectingClientFactory` for details."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef when_connected(self):\n if self._client and not self._client.is_closed:\n return defer.succeed(self._client)\n else:\n return 
self._client_deferred", "response": "Returns a Deferred that fires with a connected FedoraMessagingProtocolV2 instance."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef consume(self, callback, bindings, queues):\n expanded_bindings = collections.defaultdict(list)\n for binding in bindings:\n for key in binding[\"routing_keys\"]:\n b = binding.copy()\n del b[\"routing_keys\"]\n b[\"routing_key\"] = key\n expanded_bindings[b[\"queue\"]].append(b)\n\n expanded_queues = []\n for name, settings in queues.items():\n q = {\"queue\": name}\n q.update(settings)\n expanded_queues.append(q)\n\n protocol = yield self.when_connected()\n\n consumers = []\n for queue in expanded_queues:\n yield protocol.declare_queues([queue])\n b = expanded_bindings.get(queue[\"queue\"], [])\n yield protocol.bind_queues(b)\n consumer = yield protocol.consume(callback, queue[\"queue\"])\n self._consumers[queue[\"queue\"]] = (consumer, queue, b)\n consumers.append(consumer)\n\n defer.returnValue(consumers)", "response": "Start a consumer that will consume messages from a queue."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncancels consumers that were previously started with consume.", "response": "def cancel(self, consumers):\n \"\"\"\n Cancel consumers that were previously started with consume.\n\n Args:\n consumers (list of fedora_messaging.api.Consumer): The consumers to cancel.\n \"\"\"\n for consumer in consumers:\n del self._consumers[consumer.queue]\n protocol = yield self.when_connected()\n yield protocol.cancel(consumer)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef cancel(self):\n # Remove it from protocol and factory so it doesn't restart later.\n try:\n del self._protocol._consumers[self.queue]\n except (KeyError, AttributeError):\n pass\n try:\n del self._protocol.factory._consumers[self.queue]\n except
(KeyError, AttributeError):\n pass\n # Signal to the _read loop it's time to stop and wait for it to finish\n # with whatever message it might be working on, then wait for the deferred\n # to fire which indicates it is done.\n self._running = False\n yield self._read_loop\n try:\n yield self._channel.basic_cancel(consumer_tag=self._tag)\n except pika.exceptions.AMQPChannelError:\n # Consumers are tied to channels, so if this channel is dead the\n # consumer should already be canceled (and we can't get to it anyway)\n pass\n try:\n yield self._channel.close()\n except pika.exceptions.AMQPChannelError:\n pass\n if not self.result.called:\n self.result.callback(self)", "response": "Cancel the consumer and clean up resources associated with it."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nvalidate the bindings configuration.", "response": "def validate_bindings(bindings):\n \"\"\"\n Validate the bindings configuration.\n\n Raises:\n exceptions.ConfigurationException: If the configuration provided is of an\n invalid format.\n \"\"\"\n if not isinstance(bindings, (list, tuple)):\n raise exceptions.ConfigurationException(\n \"bindings must be a list or tuple of dictionaries, but was a {}\".format(\n type(bindings)\n )\n )\n\n for binding in bindings:\n missing_keys = []\n for key in (\"queue\", \"exchange\", \"routing_keys\"):\n if key not in binding:\n missing_keys.append(key)\n if missing_keys:\n raise exceptions.ConfigurationException(\n \"a binding is missing the following keys from its settings \"\n \"value: {}\".format(missing_keys)\n )\n\n if not isinstance(binding[\"routing_keys\"], (list, tuple)):\n raise exceptions.ConfigurationException(\n \"routing_keys must be a list or tuple, but was a {}\".format(\n type(binding[\"routing_keys\"])\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nvalidate the queues configuration.", "response": "def validate_queues(queues):\n \"\"\"\n Validate the 
queues configuration.\n\n Raises:\n exceptions.ConfigurationException: If the configuration provided is of an\n invalid format.\n \"\"\"\n if not isinstance(queues, dict):\n raise exceptions.ConfigurationException(\n \"'queues' must be a dictionary mapping queue names to settings.\"\n )\n\n for queue, settings in queues.items():\n if not isinstance(settings, dict):\n raise exceptions.ConfigurationException(\n \"the {} queue in the 'queues' setting has a value of type {}, but it \"\n \"should be a dictionary of settings.\".format(queue, type(settings))\n )\n missing_keys = []\n for key in (\"durable\", \"auto_delete\", \"exclusive\", \"arguments\"):\n if key not in settings:\n missing_keys.append(key)\n if missing_keys:\n raise exceptions.ConfigurationException(\n \"the {} queue is missing the following keys from its settings \"\n \"value: {}\".format(queue, missing_keys)\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef validate_client_properties(props):\n for key in (\"version\", \"information\", \"product\"):\n # Nested dictionaries are not merged so key can be missing\n if key not in props:\n props[key] = DEFAULTS[\"client_properties\"][key]\n # Don't let users override these as they identify this library in AMQP\n if props[key] != DEFAULTS[\"client_properties\"][key]:\n raise exceptions.ConfigurationException(\n '\"{}\" is a reserved keyword in client_properties'.format(key)\n )", "response": "Validate the client properties setting."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _validate(self):\n for key in self:\n if key not in DEFAULTS:\n raise exceptions.ConfigurationException(\n 'Unknown configuration key \"{}\"! 
Valid configuration keys are'\n \" {}\".format(key, list(DEFAULTS.keys()))\n )\n\n validate_queues(self[\"queues\"])\n validate_bindings(self[\"bindings\"])\n validate_client_properties(self[\"client_properties\"])", "response": "Perform checks on the configuration to assert its validity."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nload application configuration from a file and merge it with the default configuration.", "response": "def load_config(self, config_path=None):\n \"\"\"\n Load application configuration from a file and merge it with the default\n configuration.\n\n If the ``FEDORA_MESSAGING_CONF`` environment variable is set to a\n filesystem path, the configuration will be loaded from that location.\n Otherwise, the path defaults to ``/etc/fedora-messaging/config.toml``.\n \"\"\"\n self.loaded = True\n config = copy.deepcopy(DEFAULTS)\n\n if config_path is None:\n if \"FEDORA_MESSAGING_CONF\" in os.environ:\n config_path = os.environ[\"FEDORA_MESSAGING_CONF\"]\n else:\n config_path = \"/etc/fedora-messaging/config.toml\"\n\n if os.path.exists(config_path):\n _log.info(\"Loading configuration from {}\".format(config_path))\n with open(config_path) as fd:\n try:\n file_config = toml.load(fd)\n for key in file_config:\n config[key.lower()] = file_config[key]\n except toml.TomlDecodeError as e:\n msg = \"Failed to parse {}: error at line {}, column {}: {}\".format(\n config_path, e.lineno, e.colno, e.msg\n )\n raise exceptions.ConfigurationException(msg)\n else:\n _log.info(\"The configuration file, {}, does not exist.\".format(config_path))\n\n self.update(config)\n self._validate()\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates a Twisted SSL context object from a pika connection parameter object.", "response": "def _ssl_context_factory(parameters):\n \"\"\"\n Produce a Twisted SSL context object from a pika connection parameter object.\n This is 
necessary as Twisted manages the connection, not Pika.\n\n Args:\n parameters (pika.ConnectionParameters): The connection parameters built\n from the fedora_messaging configuration.\n \"\"\"\n client_cert = None\n ca_cert = None\n key = config.conf[\"tls\"][\"keyfile\"]\n cert = config.conf[\"tls\"][\"certfile\"]\n ca_file = config.conf[\"tls\"][\"ca_cert\"]\n if ca_file:\n with open(ca_file, \"rb\") as fd:\n # Open it in binary mode since otherwise Twisted will immediately\n # re-encode it as ASCII, which won't work if the cert bundle has\n # comments that can't be encoded with ASCII.\n ca_cert = ssl.Certificate.loadPEM(fd.read())\n if key and cert:\n # Note that _configure_tls_parameters sets the auth mode to EXTERNAL\n # if both key and cert are defined, so we don't need to do that here.\n with open(key) as fd:\n client_keypair = fd.read()\n with open(cert) as fd:\n client_keypair += fd.read()\n client_cert = ssl.PrivateCertificate.loadPEM(client_keypair)\n\n hostname = parameters.host\n if not isinstance(hostname, six.text_type):\n # Twisted requires the hostname as decoded text, which it isn't in Python 2\n # Decode with the system encoding since this came from the config file. 
Die,\n # Python 2, die.\n hostname = hostname.decode(locale.getdefaultlocale()[1])\n try:\n context_factory = ssl.optionsForClientTLS(\n hostname,\n trustRoot=ca_cert or ssl.platformTrust(),\n clientCertificate=client_cert,\n extraCertificateOptions={\"raiseMinimumTo\": ssl.TLSVersion.TLSv1_2},\n )\n except AttributeError:\n # Twisted 12.2 path for EL7 :(\n context_factory = ssl.CertificateOptions(\n certificate=client_cert.original,\n privateKey=client_cert.privateKey.original,\n caCerts=[ca_cert.original] or ssl.platformTrust(),\n verify=True,\n requireCertificate=True,\n verifyOnce=False,\n enableSessions=False,\n )\n\n return context_factory"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef stopService(self):\n self._service.factory.stopTrying()\n yield self._service.factory.stopFactory()\n yield service.MultiService.stopService(self)", "response": "Gracefully stop the service."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconfigures the TLS connection parameters for the current object.", "response": "def _configure_tls_parameters(parameters):\n \"\"\"\n Configure the pika connection parameters for TLS based on the configuration.\n\n This modifies the object provided to it. 
This accounts for whether or not\n the new API based on the standard library's SSLContext is available for\n pika.\n\n Args:\n parameters (pika.ConnectionParameters): The connection parameters to apply\n TLS connection settings to.\n \"\"\"\n cert = config.conf[\"tls\"][\"certfile\"]\n key = config.conf[\"tls\"][\"keyfile\"]\n if cert and key:\n _log.info(\n \"Authenticating with server using x509 (certfile: %s, keyfile: %s)\",\n cert,\n key,\n )\n parameters.credentials = pika.credentials.ExternalCredentials()\n else:\n cert, key = None, None\n\n if SSLOptions is None:\n parameters.ssl = True\n parameters.ssl_options = {\n \"keyfile\": key,\n \"certfile\": cert,\n \"ca_certs\": config.conf[\"tls\"][\"ca_cert\"],\n \"cert_reqs\": ssl.CERT_REQUIRED,\n \"ssl_version\": ssl.PROTOCOL_TLSv1_2,\n }\n else:\n ssl_context = ssl.create_default_context()\n if config.conf[\"tls\"][\"ca_cert\"]:\n try:\n ssl_context.load_verify_locations(cafile=config.conf[\"tls\"][\"ca_cert\"])\n except ssl.SSLError as e:\n raise ConfigurationException(\n 'The \"ca_cert\" setting in the \"tls\" section is invalid ({})'.format(\n e\n )\n )\n ssl_context.options |= ssl.OP_NO_SSLv2\n ssl_context.options |= ssl.OP_NO_SSLv3\n ssl_context.options |= ssl.OP_NO_TLSv1\n ssl_context.options |= ssl.OP_NO_TLSv1_1\n ssl_context.verify_mode = ssl.CERT_REQUIRED\n ssl_context.check_hostname = True\n if cert and key:\n try:\n ssl_context.load_cert_chain(cert, key)\n except ssl.SSLError as e:\n raise ConfigurationException(\n 'The \"keyfile\" setting in the \"tls\" section is invalid ({})'.format(\n e\n )\n )\n parameters.ssl_options = SSLOptions(\n ssl_context, server_hostname=parameters.host\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef publish(self, message, exchange=None):\n message.validate()\n try:\n self._connect_and_publish(exchange, message)\n except (pika_errs.NackError, pika_errs.UnroutableError) as e:\n _log.warning(\"Message 
was rejected by the broker (%s)\", str(e))\n raise PublishReturned(reason=e)\n except (pika_errs.ConnectionClosed, pika_errs.AMQPChannelError):\n # Because this is a blocking connection (and thus can't heartbeat)\n # we might need to restart the connection.\n _log.info(\"Resetting connection to %s\", self._parameters.host)\n self._connection = self._channel = None\n try:\n self._connect_and_publish(exchange, message)\n except (pika_errs.NackError, pika_errs.UnroutableError) as e:\n _log.warning(\"Message was rejected by the broker (%s)\", str(e))\n raise PublishReturned(reason=e)\n except pika_errs.AMQPError as e:\n _log.error(str(e))\n if self._connection and self._connection.is_open:\n self._connection.close()\n raise ConnectionException(reason=e)\n except pika_errs.AMQPError as e:\n if self._connection and self._connection.is_open:\n self._connection.close()\n raise ConnectionException(reason=e)", "response": "Publish a message to an AMQP exchange."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _shutdown(self):\n if self._channel:\n _log.info(\"Halting %r consumer sessions\", self._channel.consumer_tags)\n self._running = False\n if self._connection and self._connection.is_open:\n self._connection.close()\n # Reset the signal handler\n for signum in (signal.SIGTERM, signal.SIGINT):\n signal.signal(signum, signal.SIG_DFL)", "response": "Gracefully shut down the consumer and exit."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _on_cancelok(self, cancel_frame):\n _log.info(\"Consumer canceled; returning all unprocessed messages to the queue\")\n self._channel.basic_nack(delivery_tag=0, multiple=True, requeue=True)", "response": "Called when the server acknowledges a cancel request."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _on_qosok(self, qosok_frame):\n for name, args in 
self._exchanges.items():\n self._channel.exchange_declare(\n exchange=name,\n exchange_type=args[\"type\"],\n durable=args[\"durable\"],\n auto_delete=args[\"auto_delete\"],\n arguments=args[\"arguments\"],\n passive=config.conf[\"passive_declares\"],\n callback=self._on_exchange_declareok,\n )\n for name, args in self._queues.items():\n self._channel.queue_declare(\n queue=name,\n durable=args[\"durable\"],\n auto_delete=args[\"auto_delete\"],\n exclusive=args[\"exclusive\"],\n arguments=args[\"arguments\"],\n passive=config.conf[\"passive_declares\"],\n callback=self._on_queue_declareok,\n )", "response": "Called when the server acknowledges the QoS settings."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _on_channel_close(self, channel, reply_code_or_reason, reply_text=None):\n if isinstance(reply_code_or_reason, pika_errs.ChannelClosed):\n reply_code = reply_code_or_reason.reply_code\n reply_text = reply_code_or_reason.reply_text\n elif isinstance(reply_code_or_reason, int):\n reply_code = reply_code_or_reason\n else:\n reply_code = 0\n reply_text = str(reply_code_or_reason)\n\n _log.info(\"Channel %r closed (%d): %s\", channel, reply_code, reply_text)\n self._channel = None", "response": "Callback invoked when the channel is closed."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _on_connection_open(self, connection):\n _log.info(\"Successfully opened connection to %s\", connection.params.host)\n self._channel = connection.channel(on_open_callback=self._on_channel_open)", "response": "Callback invoked when the connection is successfully established."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninvoke when a queue is successfully declared.", "response": "def _on_queue_declareok(self, frame):\n \"\"\"\n Callback invoked when a queue is successfully declared.\n\n Args:\n frame 
(pika.frame.Method): The message sent from the server.\n \"\"\"\n _log.info(\"Successfully declared the %s queue\", frame.method.queue)\n for binding in self._bindings:\n if binding[\"queue\"] == frame.method.queue:\n for key in binding[\"routing_keys\"]:\n _log.info(\n \"Asserting %s is bound to %s with the %s key\",\n binding[\"queue\"],\n binding[\"exchange\"],\n key,\n )\n self._channel.queue_bind(\n callback=None,\n queue=binding[\"queue\"],\n exchange=binding[\"exchange\"],\n routing_key=key,\n )\n bc_args = dict(queue=frame.method.queue)\n if _pika_version < pkg_resources.parse_version(\"1.0.0b1\"):\n bc_args[\"consumer_callback\"] = self._on_message\n else:\n bc_args[\"on_message_callback\"] = self._on_message\n tag = self._channel.basic_consume(**bc_args)\n self._consumers[tag] = binding[\"queue\"]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef call_later(self, delay, callback):\n if hasattr(self._connection.ioloop, \"call_later\"):\n self._connection.ioloop.call_later(delay, callback)\n else:\n self._connection.ioloop.add_timeout(delay, callback)", "response": "Schedule a one - shot timeout given delay seconds."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef consume(self, callback, bindings=None, queues=None, exchanges=None):\n self._bindings = bindings or config.conf[\"bindings\"]\n self._queues = queues or config.conf[\"queues\"]\n self._exchanges = exchanges or config.conf[\"exchanges\"]\n\n # If the callback is a class, create an instance of it first\n if inspect.isclass(callback):\n cb_obj = callback()\n if not callable(cb_obj):\n raise ValueError(\n \"Callback must be a class that implements __call__\"\n \" or a function.\"\n )\n self._consumer_callback = cb_obj\n elif callable(callback):\n self._consumer_callback = callback\n else:\n raise ValueError(\n \"Callback must be a class that implements __call__\" \" or a function.\"\n 
)\n self._running = True\n self.connect()\n self._connection.ioloop.start()", "response": "This method is used to consume messages from a message queue."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the avatar URL from the email's From header.", "response": "def get_avatar(from_header, size=64, default=\"retro\"):\n \"\"\"Get the avatar URL from the email's From header.\n\n Args:\n from_header (str): The email's From header. May contain the sender's full name.\n\n Returns:\n str: The URL to that sender's avatar.\n \"\"\"\n params = OrderedDict([(\"s\", size), (\"d\", default)])\n query = parse.urlencode(params)\n address = email.utils.parseaddr(from_header)[1]\n value_hash = sha256(address.encode(\"utf-8\")).hexdigest()\n return \"https://seccdn.libravatar.org/avatar/{}?{}\".format(value_hash, query)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef url(self):\n base_url = \"https://lists.fedoraproject.org/archives\"\n archived_at = self._get_archived_at()\n if archived_at and archived_at.startswith(\"<\"):\n archived_at = archived_at[1:]\n if archived_at and archived_at.endswith(\">\"):\n archived_at = archived_at[:-1]\n if archived_at and archived_at.startswith(\"http\"):\n return archived_at\n elif archived_at:\n return base_url + archived_at\n else:\n return None", "response": "A URL to the email in HyperKitty."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_requirements(requirements_file=\"requirements.txt\"):\n with open(requirements_file) as fd:\n lines = fd.readlines()\n dependencies = []\n for line in lines:\n maybe_dep = line.strip()\n if maybe_dep.startswith(\"#\"):\n # Skip pure comment lines\n continue\n if maybe_dep.startswith(\"git+\"):\n # VCS reference for dev purposes, expect a trailing comment\n # with the normal requirement\n __, __, maybe_dep = maybe_dep.rpartition(\"#\")\n else:\n 
# Ignore any trailing comment\n maybe_dep, __, __ = maybe_dep.partition(\"#\")\n # Remove any whitespace and assume non-empty results are dependencies\n maybe_dep = maybe_dep.strip()\n if maybe_dep:\n dependencies.append(maybe_dep)\n return dependencies", "response": "Parse a requirements file and return the list of dependencies it specifies."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef user_avatar_url(username, size=64, default=\"retro\"):\n openid = \"http://{}.id.fedoraproject.org/\".format(username)\n return libravatar_url(openid=openid, size=size, default=default)", "response": "Get the avatar URL of the given Fedora username."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the URL to an avatar from Libravatar.", "response": "def libravatar_url(email=None, openid=None, size=64, default=\"retro\"):\n \"\"\"Get the URL to an avatar from libravatar.\n\n Either the user's email or openid must be provided.\n\n If you want to use Libravatar federation (through DNS), you should install\n and use the ``libravatar`` library instead. 
Check out the\n ``libravatar.libravatar_url()`` function.\n\n Args:\n email (str): The user's email\n openid (str): The user's OpenID\n size (int): Size of the avatar in pixels (it's a square).\n default (str): Default avatar to return if not found.\n Returns:\n str: The URL to the avatar image.\n Raises:\n ValueError: If neither email nor openid are provided.\n \"\"\"\n # We use an OrderedDict here to make testing easier (URL strings become\n # predictable).\n params = collections.OrderedDict([(\"s\", size), (\"d\", default)])\n query = parse.urlencode(params)\n if email:\n value = email\n elif openid:\n value = openid\n else:\n raise ValueError(\"You must provide either the email or the openid.\")\n idhash = sha256(value.encode(\"utf-8\")).hexdigest()\n return \"https://seccdn.libravatar.org/avatar/%s?%s\" % (idhash, query)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets parameters for a resource.", "response": "def get_parameters(parser, token):\n \"\"\"\n {% get_parameters except_field %}\n \"\"\"\n args = token.split_contents()\n if len(args) < 2:\n raise template.TemplateSyntaxError(\n \"get_parameters tag takes at least 1 argument\"\n )\n return GetParametersNode(args[1].strip())"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a list of all field names on the instance.", "response": "def get_all_fields(obj):\n \"\"\"Returns a list of all field names on the instance.\"\"\"\n fields = []\n for f in obj._meta.fields:\n fname = f.name\n\n get_choice = \"get_\" + fname + \"_display\"\n if hasattr(obj, get_choice):\n value = getattr(obj, get_choice)()\n else:\n try:\n value = getattr(obj, fname)\n except Exception:\n value = None\n\n if isinstance(value, list):\n value = \",\".join(str(v) for v in value)\n\n if f.editable and value and f.name:\n fields.append(\n {\"label\": f.verbose_name, \"name\": f.name, \"value\": value}\n )\n\n return fields"} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate the documentation for the following Python 3 function\ndef query_string(context, **kwargs):\n params = context[\"request\"].GET.copy()\n\n for key, value in list(kwargs.items()):\n params[key] = value\n\n return \"?\" + params.urlencode()", "response": "Add the given parameters to the current query string."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef query_string_ordering(context, value, **kwargs):\n params = context[\"request\"].GET.copy()\n\n # if the given value is '-id', its core value would be 'id'\n core_value = value\n default_order = \"asc\"\n\n if core_value[0] == \"-\":\n core_value = value[1:]\n default_order = \"desc\"\n\n current_value = \"\"\n\n # by preference get the current ordering value from the filter\n # so that even if no explicit ordering is in the URL, we still\n # get the implicit ordering, the page's default\n # See generics.filters.FilterSet\n if \"filter\" in context:\n current_value = context[\"filter\"].ordered_value()\n elif \"ordering\" in params:\n current_value = params[\"ordering\"]\n\n # The first two clauses check if the current ordering is on the\n # same field as the desired one, in which case we reverse the direction.\n # If it's on another field, we use the default direction.\n if current_value == core_value:\n order_prefix = \"-\"\n elif current_value == \"-\" + core_value:\n order_prefix = \"\"\n elif default_order == \"desc\":\n order_prefix = \"-\"\n else:\n order_prefix = \"\"\n\n params[\"ordering\"] = order_prefix + core_value\n return \"?\" + params.urlencode()", "response": "Returns a query string that orders the results on the given field, reversing the direction when that field is already the current ordering."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef arctic_url(context, link, *args, **kwargs):\n\n def reverse_mutable_url_args(url_args):\n mutated_url_args = []\n for arg in url_args:\n\n # listview item, and argument is a 
string\n if \"item\" in context and type(arg) == str:\n # try to get attribute of this object\n try:\n arg = getattr(context[\"v\"], arg.split(\".\")[-1])\n # if not found fallback to row pk, which is always first column\n except Exception:\n arg = context[\"item\"][0]\n mutated_url_args.append(arg)\n return reverse(link, args=mutated_url_args, kwargs=None)\n\n url_args = args\n\n # set arguments defined in urls if provided\n if type(link) in (tuple, list):\n context[\"urls\"][link[0]] = list(link[1:])\n link = link[0]\n if link in context[\"urls\"]:\n\n # for where the params directly given. e.g. ('article-detail',\n # (self.object.pk,))\n url_args = context[\"urls\"][link]\n\n # list given, which means it's mutable!\n if isinstance(url_args, list):\n return reverse_mutable_url_args(url_args)\n\n return reverse(link, args=url_args, kwargs=None)", "response": "Resolves links into urls with optional url arguments set in self. urls."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the role model that is active in this project.", "response": "def get_role_model():\n \"\"\"\n Returns the Role model that is active in this project.\n \"\"\"\n app_model = getattr(settings, \"ARCTIC_ROLE_MODEL\", \"arctic.Role\")\n\n try:\n return django_apps.get_model(app_model)\n except ValueError:\n raise ImproperlyConfigured(\n \"ARCTIC_ROLE_MODEL must be of the \" \"form 'app_label.model_name'\"\n )\n except LookupError:\n raise ImproperlyConfigured(\n \"ARCTIC_ROLE_MODEL refers to model '%s' that has not been \"\n \"installed\" % settings.ARCTIC_ROLE_MODEL\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_user_role_model():\n app_model = getattr(settings, \"ARCTIC_USER_ROLE_MODEL\", \"arctic.UserRole\")\n\n try:\n return django_apps.get_model(app_model)\n except ValueError:\n raise ImproperlyConfigured(\n \"ARCTIC_USER_ROLE_MODEL must be of the \"\n \"form 
'app_label.model_name'\"\n )\n except LookupError:\n raise ImproperlyConfigured(\n \"ARCTIC_USER_ROLE_MODEL refers to model '%s' that has not been \"\n \"installed\" % settings.ARCTIC_USER_ROLE_MODEL\n )", "response": "Returns the UserRole model that is active in this project."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef dispatch(self, request, *args, **kwargs):\n if self.requires_login:\n if settings.LOGIN_URL is None or settings.LOGOUT_URL is None:\n raise ImproperlyConfigured(\n \"LOGIN_URL and LOGOUT_URL \"\n \"has to be defined if requires_login is True\"\n )\n\n if not request.user.is_authenticated:\n return redirect(\n \"%s?next=%s\"\n % (\n resolve_url(settings.LOGIN_URL),\n quote(request.get_full_path()),\n )\n )\n\n return super(View, self).dispatch(request, *args, **kwargs)", "response": "This method is used to redirect to the login page if the user is not authenticated."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a list of all the breadcrumbs that can be used for this object.", "response": "def get_breadcrumbs(self):\n \"\"\"\n Breadcrumb format: (('name', 'url'), ...) 
or None if not used.\n \"\"\"\n if not self.breadcrumbs:\n return None\n else:\n allowed_breadcrumbs = []\n for breadcrumb in self.breadcrumbs:\n\n # check permission based on named_url\n if breadcrumb[1] is not None and not view_from_url(\n breadcrumb[1]\n ).has_permission(self.request.user):\n continue\n\n obj = self if not hasattr(self, \"object\") else self.object\n url = (\n None\n if not breadcrumb[1]\n else reverse_url(breadcrumb[1], obj)\n )\n allowed_breadcrumbs.append({\"name\": breadcrumb[0], \"url\": url})\n return allowed_breadcrumbs"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_tabs(self):\n if not self.tabs:\n return None\n else:\n allowed_tabs = []\n for tab in self.tabs:\n\n # check permission based on named_url\n if not view_from_url(tab[1]).has_permission(self.request.user):\n continue\n\n obj = self if not hasattr(self, \"object\") else self.object\n url = reverse_url(tab[1], obj)\n allowed_tabs.append(\n {\n \"name\": tab[0],\n \"active\": self.request.path == url,\n \"url\": self.in_modal(url),\n }\n )\n return allowed_tabs", "response": "Return a list of tabs that can be used in the current page."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn all media required to render this view including forms.", "response": "def media(self):\n \"\"\"\n Return all media required to render this view, including forms.\n \"\"\"\n media = self._get_common_media()\n media += self._get_view_media()\n media += self.get_media_assets()\n return media"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngathering view - level media assets and css", "response": "def _get_view_media(self):\n \"\"\"\n Gather view-level media assets\n \"\"\"\n try:\n css = self.Media.css\n except AttributeError:\n css = {}\n try:\n js = self.Media.js\n except AttributeError:\n js = []\n return Media(css=css, js=js)"} {"SOURCE": "codesearchnet", 
"instruction": "Can you tell what is the following Python 3 function doing\ndef get_list_header(self):\n model = self.object_list.model\n result = []\n if not self.get_fields():\n result.append(\n {\"name\": \"\", \"verbose\": str(model._meta.verbose_name)}\n )\n else:\n prefix = self.get_prefix()\n ordering_fields = self.get_ordering_fields()\n for field_name in self.get_fields():\n item = {}\n if isinstance(field_name, tuple):\n # custom property that is not a field of the model\n name = field_name[0]\n item[\"label\"] = field_name[1]\n else:\n name = field_name\n try:\n field_meta = find_field_meta(model, field_name)\n if field_meta._verbose_name: # noqa\n # explicitly set on the model, so don't change\n item[\"label\"] = field_meta._verbose_name # noqa\n else:\n # title-case the field name (issue #80)\n item[\"label\"] = field_meta.verbose_name.title()\n except FieldDoesNotExist:\n item[\"label\"] = field_name\n except AttributeError:\n item[\"label\"] = field_name\n item[\"name\"] = prefix + name\n if name in ordering_fields:\n item[\"order_url\"], item[\n \"order_direction\"\n ] = self.ordering_url(name)\n result.append(item)\n\n return result", "response": "Returns a list of dictionaries with the field names labels links field css classes order_url and order_direction"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_ordering(self):\n if self.sorting_field:\n return [self.sorting_field]\n prefix = self.get_prefix()\n fields = self.get_ordering_with_prefix()\n if self.prefix:\n fields = [f.replace(prefix, \"\", 1) for f in fields]\n return [\n f\n for f in fields\n if f.lstrip(\"-\") in self.get_ordering_fields_lookups()\n ]", "response": "Ordering used for queryset filtering."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_list_header(self):\n result = []\n for field_name in self.get_fields():\n item = {}\n if isinstance(field_name, tuple):\n 
# custom property that is not a field of the model\n item[\"name\"] = field_name[0]\n item[\"label\"] = field_name[1]\n else:\n item[\"name\"] = field_name\n item[\"label\"] = field_name.title()\n if item[\"name\"] in self.get_ordering_fields():\n item[\"order_url\"], item[\"order_direction\"] = self.ordering_url(\n item[\"name\"]\n )\n result.append(item)\n\n return result", "response": "Returns a list of dictionaries with the field names labels links field css classes order_url and order_direction"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_paginator(\n self,\n dataset,\n per_page,\n orphans=0,\n allow_empty_first_page=True,\n **kwargs\n ):\n \"\"\"Return an instance of the paginator for this view.\"\"\"\n return IndefinitePaginator(\n dataset,\n per_page,\n orphans=orphans,\n allow_empty_first_page=allow_empty_first_page,\n **kwargs\n )", "response": "Return an instance of the paginator for this view."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get(self, request, *args, **kwargs):\n self.object = self.get_object()\n can_delete = True\n protected_objects = []\n collector_message = None\n collector = Collector(using=\"default\")\n try:\n collector.collect([self.object])\n except ProtectedError as e:\n collector_message = (\n \"Cannot delete %s because it has relations \"\n \"that depends on it.\" % self.object\n )\n protected_objects = e.protected_objects\n can_delete = False\n\n if can_delete and self.redirect:\n messages.success(request, self.get_success_message(self.object))\n return self.delete(request, *args, **kwargs)\n\n context = self.get_context_data(\n object=self.object,\n can_delete=can_delete,\n collector_message=collector_message,\n protected_objects=protected_objects,\n )\n return self.render_to_response(context)", "response": "Catch protected relations and show to user."} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate the documentation for the following Python 3 function\ndef get_modal_link(self, url, obj={}):\n if not (url in self.modal_links.keys()):\n return None\n\n try:\n if type(obj) != dict:\n obj.obj = str(obj)\n obj = vars(obj)\n link = self.modal_links[url]\n if link[\"type\"] == \"confirm\":\n link[\"message\"] = link[\"message\"].format(**obj)\n link[\"title\"] = link[\"title\"].format(**obj)\n link[\"ok\"] # triggers a KeyError exception if not existent\n link[\"cancel\"]\n elif link[\"type\"] == \"iframe\":\n try:\n link[\"size\"]\n except KeyError:\n link[\"size\"] = \"medium\"\n else:\n raise ImproperlyConfigured(\n \"modal_links type: \" + link[\"type\"] + \" is unsupported\"\n )\n return link\n except KeyError as e:\n raise ImproperlyConfigured(\n \"modal_links misses the following attribute: \" + str(e)\n )\n except AttributeError:\n return None", "response": "Returns the metadata for a modal link that needs to be confirmed if it exists."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_success_url(self):\n if not self.success_url:\n if self.request.GET.get(\"inmodal\"):\n return reverse(\"arctic:redirect_to_parent\")\n raise ImproperlyConfigured(\n \"No URL to redirect to. 
Provide a success_url.\"\n )\n return self.in_modal(str(self.success_url))", "response": "Return the URL to redirect to after processing a valid form."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _set_has_no_columns(self, has_no_column, col_avg, col_last, fields):\n for index, field in has_no_column.items():\n if index == len(has_no_column):\n field_name = \"{field}|{col_last}\".format(\n field=field, col_last=col_last\n )\n has_no_column[index] = self._return_field(field_name, fields)\n else:\n field_name = \"{field}|{col_avg}\".format(\n field=field, col_avg=col_avg\n )\n has_no_column[index] = self._return_field(field_name, fields)\n return has_no_column", "response": "Regenerate has_no_column by adding the amount of columns at the end of the has_no_column."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncalculate the average of all columns and return a rounded down number.", "response": "def _calc_avg_and_last_val(self, has_no_column, sum_existing_columns):\n \"\"\"\n Calculate the average of all columns and return a rounded down number.\n Store the remainder and add it to the last row. Could be implemented\n better. If the enduser wants more control, he can also just add the\n amount of columns. 
Will work fine with small number (<4) of items in a\n row.\n\n :param has_no_column:\n :param sum_existing_columns:\n :return: average, columns_for_last_element\n \"\"\"\n sum_no_columns = len(has_no_column)\n columns_left = self.ALLOWED_COLUMNS - sum_existing_columns\n\n if sum_no_columns == 0:\n columns_avg = columns_left\n else:\n columns_avg = int(columns_left / sum_no_columns)\n\n remainder = columns_left - (columns_avg * sum_no_columns)\n columns_for_last_element = columns_avg + remainder\n return columns_avg, columns_for_last_element"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _split_str(self, field):\n field_items = field.split(\"|\")\n if len(field_items) == 2:\n return field_items[0], field_items[1]\n elif len(field_items) == 1:\n return field_items[0], None", "response": "Split a string like 'title|7' into ('title', '7')."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a url link for sorting the given field.", "response": "def ordering_url(self, field_name):\n \"\"\"\n Creates a url link for sorting the given field.\n\n The direction of sorting will be either ascending, if the field is not\n yet sorted, or the opposite of the current sorting if sorted.\n \"\"\"\n path = self.request.path\n direction = \"\"\n query_params = self.request.GET.copy()\n ordering = self.request.GET.get(\"order\", \"\").split(\",\")\n field = self._get_ordering_field_lookup(field_name)\n if not ordering:\n ordering = self.get_default_ordering()\n merged_ordering = list(ordering) # copy the list\n\n for ordering_field in self.get_ordering_fields_lookups():\n if (ordering_field.lstrip(\"-\") not in ordering) and (\n (\"-\" + ordering_field.lstrip(\"-\")) not in ordering\n ):\n merged_ordering.append(ordering_field)\n\n new_ordering = []\n for item in merged_ordering:\n if item.lstrip(\"-\") == field.lstrip(\"-\"):\n if (item[0] == \"-\") or not (item in ordering):\n if 
item in ordering:\n direction = \"desc\"\n new_ordering.insert(0, item.lstrip(\"-\"))\n else:\n direction = \"asc\"\n new_ordering.insert(0, \"-\" + item)\n\n query_params[\"order\"] = \",\".join(new_ordering)\n\n return (path + \"?\" + query_params.urlencode(safe=\",\"), direction)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nhook to dynamically change the fields that will be displayed", "response": "def get_fields(self, strip_labels=False):\n \"\"\"\n Hook to dynamically change the fields that will be displayed\n \"\"\"\n if strip_labels:\n return [\n f[0] if type(f) in (tuple, list) else f for f in self.fields\n ]\n return self.fields"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_ordering_fields_lookups(self):\n ordering_field = []\n for field_name in self.get_ordering_fields():\n ordering_field.append(self._get_ordering_field_lookup(field_name))\n return ordering_field", "response": "Gets the real model fields to order by."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_ordering_field_lookup(self, field_name):\n field = field_name\n get_field = getattr(self, \"get_%s_ordering_field\" % field_name, None)\n if get_field:\n field = get_field()\n return field", "response": "Get the real model field to order by."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_advanced_search_form(self, data):\n if self.get_advanced_search_form_class():\n self._advanced_search_form = self.get_advanced_search_form_class()(\n data=data\n )\n return self._advanced_search_form", "response": "Hook to dynamically change the advanced search form"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsynchronize all the roles defined in the settings to the database.", "response": "def sync(cls):\n \"\"\"\n Save all the roles defined in the 
settings that are not yet in the db\n this is needed to create a foreign key relation between a user and a\n role. Roles that are no longer specified in settings are set as\n inactive.\n \"\"\"\n try:\n settings_roles = set(settings.ARCTIC_ROLES.keys())\n except AttributeError:\n settings_roles = set()\n\n saved_roles = set(Role.objects.values_list(\"name\", flat=True))\n unsaved_roles = settings_roles - saved_roles\n unused_roles = saved_roles - settings_roles - set([cls.ADMIN])\n\n # ensure that admin is not defined in settings\n if cls.ADMIN in settings_roles:\n raise ImproperlyConfigured(\n '\"' + cls.ADMIN + '\" role is reserved '\n \"and cannot be defined in settings\"\n )\n\n # ensure that admin exists in the database\n if cls.ADMIN not in saved_roles:\n Role(name=cls.ADMIN, is_active=True).save()\n\n # check if the role defined in settings already exists in the database\n # and if it does ensure it is enabled.\n for role in saved_roles:\n if role in settings_roles:\n saved_role = Role.objects.get(name=role)\n if not saved_role.is_active:\n saved_role.is_active = True\n saved_role.save()\n\n for role in unsaved_roles:\n Role(name=role).save()\n\n for role in unused_roles:\n unused_role = Role.objects.get(name=role)\n unused_role.is_active = False\n unused_role.save()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_permission_required(cls):\n if cls.permission_required is None:\n raise ImproperlyConfigured(\n \"{0} is missing the permission_required attribute. \"\n \"Define {0}.permission_required, or override \"\n \"{0}.get_permission_required().\".format(cls.__name__)\n )\n if isinstance(cls.permission_required, six.string_types):\n if cls.permission_required != \"\":\n perms = (cls.permission_required,)\n else:\n perms = ()\n else:\n perms = cls.permission_required\n\n return perms", "response": "Get permission required property. 
Must return an iterable."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef has_permission(cls, user):\n # no login is needed, so its always fine\n if not cls.requires_login:\n return True\n\n # if user is somehow not logged in\n if not user.is_authenticated:\n return False\n\n # attribute permission_required is mandatory, returns tuple\n perms = cls.get_permission_required()\n # if perms are defined and empty, we skip checking\n if not perms:\n return True\n\n # get role of user, skip admin role\n role = user.urole.role.name\n if role == cls.ADMIN:\n return True\n\n # check if at least one permissions is valid\n for permission in perms:\n if cls.check_permission(role, permission):\n return True\n\n # permission denied\n return False", "response": "Returns True if the user has permission for the given user."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef check_permission(cls, role, permission):\n result = permission in settings.ARCTIC_ROLES[role]\n # will try to call a method with the same name as the permission\n # to enable an object level permission check.\n if result:\n try:\n return getattr(cls, permission)(role)\n except AttributeError:\n pass\n return result", "response": "Check if role contains permission"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef str_to_bool(val):\n if isinstance(val, str):\n val = val.lower()\n\n return val in [\"true\", \"on\", \"yes\", True]", "response": "Helper function to turn a string representation of true into\n boolean True."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a Page object with pagination bar.", "response": "def arctic_paginate(parser, token):\n \"\"\"\n Renders a Page object with pagination bar.\n Example::\n {% arctic_paginate page_obj paginator=page_obj.paginator range=10 %}\n Named Parameters::\n 
range - The size of the pagination bar (ie, if set to 10 then, at most,\n 10 page numbers will display at any given time) Defaults to\n None, which shows all pages.\n show_first_last - Accepts \"true\" or \"false\". Determines whether or not\n to show the first and last page links. Defaults to\n \"false\"\n \"\"\"\n bits = token.split_contents()\n if len(bits) < 2:\n raise TemplateSyntaxError(\n \"'%s' takes at least one argument\"\n \" (Page object reference)\" % bits[0]\n )\n page = parser.compile_filter(bits[1])\n kwargs = {}\n bits = bits[2:]\n\n kwarg_re = re.compile(r\"(\\w+)=(.+)\")\n\n if len(bits):\n for bit in bits:\n match = kwarg_re.match(bit)\n if not match:\n raise TemplateSyntaxError(\n \"Malformed arguments to bootstrap_pagination paginate tag\"\n )\n name, value = match.groups()\n kwargs[name] = parser.compile_filter(value)\n\n return PaginationNode(page, kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a dictionary which is a frendlier format for a menu definition.", "response": "def menu(menu_config=None, **kwargs):\n \"\"\"\n Tranforms a menu definition into a dictionary which is a frendlier format\n to parse in a template.\n \"\"\"\n request = kwargs.pop(\"request\", None)\n user = kwargs.pop(\"user\", None)\n url_full_name = \":\".join(\n [request.resolver_match.namespace, request.resolver_match.url_name]\n )\n\n if not menu_config:\n menu_config = settings.ARCTIC_MENU\n\n menu_dict = OrderedDict()\n for menu_entry in menu_config:\n if type(menu_entry) in (list, tuple):\n\n # check permission based on named_url\n path = None\n if menu_entry[1]:\n if not view_from_url(menu_entry[1]).has_permission(user):\n continue\n\n path = reverse(menu_entry[1])\n # icons and collapse are optional\n icon = None\n if (len(menu_entry) >= 3) and (\n not type(menu_entry[2]) in (list, tuple)\n ):\n icon = menu_entry[2]\n active_weight = len(path) if path else 0\n menu_dict[menu_entry[0]] = {\n \"url\": 
menu_entry[1],\n \"icon\": icon,\n \"submenu\": None,\n \"active\": is_active(menu_entry, url_full_name),\n \"active_weight\": active_weight,\n }\n\n # check if the last item in a menu entry is a submenu\n submenu = _get_submenu(menu_entry)\n if submenu:\n menu_dict[menu_entry[0]][\"submenu\"] = menu(\n submenu, user=user, request=request\n )\n return menu_clean(menu_dict)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef menu_clean(menu_config):\n max_weight = -1\n for _, value in list(menu_config.items()):\n if value[\"submenu\"]:\n for _, v in list(value[\"submenu\"].items()):\n if v[\"active\"]:\n # parent inherits the weight of the axctive child\n value[\"active\"] = True\n value[\"active_weight\"] = v[\"active_weight\"]\n\n if value[\"active\"]:\n max_weight = max(value[\"active_weight\"], max_weight)\n\n if max_weight > 0:\n # one of the items is active: make items with lesser weight inactive\n for _, value in list(menu_config.items()):\n if value[\"active\"] and value[\"active_weight\"] < max_weight:\n value[\"active\"] = False\n return menu_config", "response": "Clean the menu items to ensure that only the largest weight is active."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef view_from_url(named_url): # noqa\n # code below is `stolen` from django's reverse method.\n resolver = get_resolver(get_urlconf())\n\n if type(named_url) in (list, tuple):\n named_url = named_url[0]\n parts = named_url.split(\":\")\n parts.reverse()\n view = parts[0]\n path = parts[1:]\n current_path = None\n resolved_path = []\n ns_pattern = \"\"\n ns_converters = {}\n\n # if it's a local url permission already given, so we just return true\n if named_url.startswith(\"#\"):\n\n class LocalUrlDummyView:\n @staticmethod\n def has_permission(user):\n return True\n\n return LocalUrlDummyView\n\n while path:\n ns = path.pop()\n current_ns = 
current_path.pop() if current_path else None\n\n # Lookup the name to see if it could be an app identifier\n try:\n app_list = resolver.app_dict[ns]\n # Yes! Path part matches an app in the current Resolver\n if current_ns and current_ns in app_list:\n # If we are reversing for a particular app,\n # use that namespace\n ns = current_ns\n elif ns not in app_list:\n # The name isn't shared by one of the instances\n # (i.e., the default) so just pick the first instance\n # as the default.\n ns = app_list[0]\n except KeyError:\n pass\n\n if ns != current_ns:\n current_path = None\n\n try:\n extra, resolver = resolver.namespace_dict[ns]\n resolved_path.append(ns)\n ns_pattern = ns_pattern + extra\n try:\n ns_converters.update(resolver.pattern.converters)\n except Exception:\n pass\n except KeyError as key:\n if resolved_path:\n raise NoReverseMatch(\n \"%s is not a registered namespace inside '%s'\"\n % (key, \":\".join(resolved_path))\n )\n else:\n raise NoReverseMatch(\"%s is not a registered namespace\" % key)\n if ns_pattern:\n try:\n resolver = get_ns_resolver(\n ns_pattern, resolver, tuple(ns_converters.items())\n )\n except Exception:\n resolver = get_ns_resolver(ns_pattern, resolver)\n\n # custom code, get view from reverse_dict\n reverse_dict = resolver.reverse_dict.dict()\n for key, url_obj in reverse_dict.items():\n if url_obj == reverse_dict[view] and key != view:\n module = importlib.import_module(key.__module__)\n return getattr(module, key.__name__)", "response": "Finds and returns the view class from a named url."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef find_attribute(obj, value):\n if \"__\" in value:\n value_list = value.split(\"__\")\n attr = get_attribute(obj, value_list[0])\n return find_attribute(attr, \"__\".join(value_list[1:]))\n return get_attribute(obj, value)", "response": "Find the attribute connected to the last object in a chain of objects."} {"SOURCE": "codesearchnet", 
"instruction": "How would you implement a function in Python 3 that\ngets the attribute of the object.", "response": "def get_attribute(obj, value):\n \"\"\"\n Normally the result of list_items for listviews is a set of model objects.\n But when you want a GROUP_BY query (with 'values' method), then\n the result will be a dict. This method will help you find an item for\n either objects or dictionaries.\n \"\"\"\n if type(obj) == dict:\n return dict.get(obj, value)\n else:\n return getattr(obj, value)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfinding the meta object that corresponds to the given value.", "response": "def find_field_meta(obj, value):\n \"\"\"\n In a model, finds the attribute meta connected to the last object when\n a chain of connected objects is given in a string separated with double\n underscores.\n \"\"\"\n if \"__\" in value:\n value_list = value.split(\"__\")\n child_obj = obj._meta.get_field(value_list[0]).rel.to\n return find_field_meta(child_obj, \"__\".join(value_list[1:]))\n return obj._meta.get_field(value)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_field_class(qs, field_name):\n try:\n return qs.model._meta.get_field(field_name).__class__.__name__\n # while annotating, it's possible that field does not exist.\n except FieldDoesNotExist:\n return None", "response": "Given a queryset and a field name it will return the field's class"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreversing a named url, in addition to the standard django reverse, it also accepts a list of ('named url', 'field1', 'field2', ...) and will use the value of the supplied fields as arguments. 
When a fallback field is given it will use it as an argument if none other are given.", "response": "def reverse_url(url, obj, fallback_field=None):\n \"\"\"\n Reverses a named url, in addition to the standard django reverse, it also\n accepts a list of ('named url', 'field1', 'field2', ...) and will use the\n value of the supplied fields as arguments.\n When a fallback field is given it will use it as an argument if none other\n are given.\n \"\"\"\n args = []\n if type(url) in (list, tuple):\n named_url = url[0]\n for arg in url[1:]:\n if type(obj) is dict:\n args.append(obj[arg])\n else:\n args.append(find_attribute(obj, arg))\n else:\n if url.startswith(\"#\"): # local url\n return url\n named_url = url\n if obj and fallback_field:\n if type(obj) is dict:\n args = [obj[fallback_field]]\n else:\n args = [get_attribute(obj, fallback_field)]\n\n # Instead of giving NoReverseMatch exception it's more desirable,\n # for field_links in listviews to just ignore the link.\n if fallback_field and not args:\n return \"\"\n\n return reverse(named_url, args=args)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntrying to get a setting from the django settings if not available defaults to the one defined in defaults. 
py", "response": "def arctic_setting(setting_name, valid_options=None):\n \"\"\"\n Tries to get a setting from the django settings, if not available defaults\n to the one defined in defaults.py\n \"\"\"\n try:\n value = getattr(settings, setting_name)\n if valid_options and value not in valid_options:\n error_message = \"Invalid value for {}, must be one of: {}\".format(\n setting_name, str(valid_options)\n )\n raise ImproperlyConfigured(error_message)\n except AttributeError:\n pass\n return getattr(settings, setting_name, getattr(defaults, setting_name))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef offset_limit(func):\n\n def func_wrapper(self, start, stop):\n offset = start\n limit = stop - start\n return func(self, offset, limit)\n\n return func_wrapper", "response": "Decorator that converts python slicing to offset and limit\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck whether the item is list of list", "response": "def is_list_of_list(item):\n \"\"\"\n check whether the item is list (tuple)\n and consist of list (tuple) elements\n \"\"\"\n if (\n type(item) in (list, tuple)\n and len(item)\n and isinstance(item[0], (list, tuple))\n ):\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef generate_id(*s):\n with translation.override(\"en\"):\n generated_id = slugify(\"-\".join([str(i) for i in s]))\n return generated_id", "response": "Generates an id from one or more strings\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrender the current item in the browser.", "response": "def render(self, name, value, attrs=None, renderer=None):\n \"\"\"For django 1.10 compatibility\"\"\"\n if django.VERSION >= (1, 11):\n return super(BetterFileInput, self).render(name, value, attrs)\n\n t = render_to_string(\n template_name=self.template_name,\n 
context=self.get_context(name, value, attrs),\n )\n return mark_safe(t)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_table(ports):\n table = PrettyTable([\"Name\", \"Port\", \"Protocol\", \"Description\"])\n table.align[\"Name\"] = \"l\"\n table.align[\"Description\"] = \"l\"\n table.padding_width = 1\n\n for port in ports:\n table.add_row(port)\n\n return table", "response": "This function returns a pretty table used to display the port results."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsearch ports and numbers and print them.", "response": "def run(port, like, use_json, server):\n \"\"\"Search port names and numbers.\"\"\"\n if not port and not server[0]:\n raise click.UsageError(\"Please specify a port\")\n\n if server[0]:\n app.run(host=server[0], port=server[1])\n return\n\n ports = get_ports(port, like)\n if not ports:\n sys.stderr.write(\"No ports found for '{0}'\\n\".format(port))\n return\n\n if use_json:\n print(json.dumps(ports, indent=4))\n else:\n table = get_table(ports)\n print(table)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_account(self, address, id=None, endpoint=None):\n return self._call_endpoint(GET_ACCOUNT_STATE, params=[address], id=id, endpoint=endpoint)", "response": "This function is used to get the state of an account on the blockchain."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_height(self, id=None, endpoint=None):\n return self._call_endpoint(GET_BLOCK_COUNT, id=id, endpoint=endpoint)", "response": "Get the current height of the blockchain\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget an asset by its hash", "response": "def get_asset(self, asset_hash, id=None, endpoint=None):\n \"\"\"\n Get an asset by its hash\n Args:\n asset_hash: (str) asset to 
lookup, example would be 'c56f33fc6ecfcd0c225c4ab356fee59390af8560be0e930faebe74a6daff7c9b'\n id: (int, optional) id to use for response tracking\n endpoint: (RPCEndpoint, optional) endpoint to specify to use\n\n Returns:\n json object of the result or the error encountered in the RPC call\n \"\"\"\n return self._call_endpoint(GET_ASSET_STATE, params=[asset_hash], id=id, endpoint=endpoint)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_balance(self, asset_hash, id=None, endpoint=None):\n return self._call_endpoint(GET_BALANCE, params=[asset_hash], id=id, endpoint=endpoint)", "response": "Get the balance of a specific asset hash"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the hash of the highest block in the cache", "response": "def get_best_blockhash(self, id=None, endpoint=None):\n \"\"\"\n Get the hash of the highest block\n Args:\n id: (int, optional) id to use for response tracking\n endpoint: (RPCEndpoint, optional) endpoint to specify to use\n Returns:\n json object of the result or the error encountered in the RPC call\n \"\"\"\n return self._call_endpoint(GET_BEST_BLOCK_HASH, id=id, endpoint=endpoint)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget a block by its height or hash.", "response": "def get_block(self, height_or_hash, id=None, endpoint=None):\n \"\"\"\n Look up a block by the height or hash of the block.\n Args:\n height_or_hash: (int or str) either the height of the desired block or its hash in the form '1e67372c158a4cfbb17b9ad3aaae77001a4247a00318e354c62e53b56af4006f'\n id: (int, optional) id to use for response tracking\n endpoint: (RPCEndpoint, optional) endpoint to specify to use\n\n Returns:\n block: a json object or the ``neorpc.Core.Block.Block`` object\n \"\"\"\n return self._call_endpoint(GET_BLOCK, params=[height_or_hash, 1], id=id, endpoint=endpoint)"} {"SOURCE": 
"codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_block_hash(self, height, id=None, endpoint=None):\n return self._call_endpoint(GET_BLOCK_HASH, params=[height], id=id, endpoint=endpoint)", "response": "Get hash of a block by its height"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_block_header(self, block_hash, id=None, endpoint=None):\n return self._call_endpoint(GET_BLOCK_HEADER, params=[block_hash, 1], id=id, endpoint=endpoint)", "response": "Get the block header information according to the specified script hash."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the system fee of a block by height. This is used in calculating gas claims Args: height: (int) height of the block to lookup id: (int, optional) id to use for response tracking endpoint: (RPCEndpoint, optional) endpoint to specify to use Returns: json object of the result or the error encountered in the RPC call", "response": "def get_block_sysfee(self, height, id=None, endpoint=None):\n \"\"\"\n Get the system fee of a block by height. 
This is used in calculating gas claims\n Args:\n height: (int) height of the block to lookup\n id: (int, optional) id to use for response tracking\n endpoint: (RPCEndpoint, optional) endpoint to specify to use\n\n Returns:\n json object of the result or the error encountered in the RPC call\n \"\"\"\n return self._call_endpoint(GET_BLOCK_SYS_FEE, params=[height], id=id, endpoint=endpoint)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the number of nodes connected to the endpoint", "response": "def get_connection_count(self, id=None, endpoint=None):\n \"\"\"\n Gets the number of nodes connected to the endpoint\n Args:\n id: (int, optional) id to use for response tracking\n endpoint: (RPCEndpoint, optional) endpoint to specify to use\n Returns:\n json object of the result or the error encountered in the RPC call\n \"\"\"\n return self._call_endpoint(GET_CONNECTION_COUNT, id=id, endpoint=endpoint)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_contract_state(self, contract_hash, id=None, endpoint=None):\n return self._call_endpoint(GET_CONTRACT_STATE, params=[contract_hash], id=id, endpoint=endpoint)", "response": "Get a contract state object by its hash"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the raw mempool of the endpoint", "response": "def get_raw_mempool(self, id=None, endpoint=None):\n \"\"\"\n Returns the tx that are in the memorypool of the endpoint\n Args:\n id: (int, optional) id to use for response tracking\n endpoint: (RPCEndpoint, optional) endpoint to specify to use\n Returns:\n json object of the result or the error encountered in the RPC call\n \"\"\"\n return self._call_endpoint(GET_RAW_MEMPOOL, id=id, endpoint=endpoint)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting a single transaction by hash.", "response": "def get_transaction(self, tx_hash, id=None, 
endpoint=None):\n \"\"\"\n Look up a transaction by hash.\n Args:\n tx_hash: (str) hash in the form '58c634f81fbd4ae2733d7e3930a9849021840fc19dc6af064d6f2812a333f91d'\n id: (int, optional) id to use for response tracking\n endpoint: (RPCEndpoint, optional) endpoint to specify to use\n\n Returns:\n json: the transaction as a json object\n \"\"\"\n return self._call_endpoint(GET_RAW_TRANSACTION, params=[tx_hash, 1], id=id, endpoint=endpoint)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a storage item of a specified contract and storage key.", "response": "def get_storage(self, contract_hash, storage_key, id=None, endpoint=None):\n \"\"\"\n Returns a storage item of a specified contract\n Args:\n contract_hash: (str) hash of the contract to lookup, for example 'd7678dd97c000be3f33e9362e673101bac4ca654'\n storage_key: (str) storage key to lookup, for example 'totalSupply'\n id: (int, optional) id to use for response tracking\n endpoint: (RPCEndpoint, optional) endpoint to specify to use\n Returns:\n bytearray: bytearray value of the storage item\n \"\"\"\n result = self._call_endpoint(GET_STORAGE, params=[contract_hash, binascii.hexlify(storage_key.encode('utf-8')).decode('utf-8')], id=id, endpoint=endpoint)\n try:\n\n return bytearray(binascii.unhexlify(result.encode('utf-8')))\n except Exception as e:\n raise NEORPCException(\"could not decode result %s \" % e)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget a transaction output by specified transaction hash and vout id", "response": "def get_tx_out(self, tx_hash, vout_id, id=None, endpoint=None):\n \"\"\"\n Gets a transaction output by specified transaction hash and output index\n Args:\n tx_hash: (str) hash in the form '58c634f81fbd4ae2733d7e3930a9849021840fc19dc6af064d6f2812a333f91d'\n vout_id: (int) index of the transaction output in the transaction\n id: (int, optional) id to use for response tracking\n endpoint: 
(RPCEndpoint, optional) endpoint to specify to use\n Returns:\n json object of the result or the error encountered in the RPC call\n \"\"\"\n return self._call_endpoint(GET_TX_OUT, params=[tx_hash, vout_id], id=id, endpoint=endpoint)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef invoke_contract(self, contract_hash, params, id=None, endpoint=None):\n return self._call_endpoint(INVOKE, params=[contract_hash, params], id=id, endpoint=endpoint)", "response": "Invoke a contract on the base class"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef invoke_contract_fn(self, contract_hash, operation, params=None, id=None, endpoint=None):\n return self._call_endpoint(INVOKE_FUNCTION, params=[contract_hash, operation, params if params else []], id=id, endpoint=endpoint)", "response": "Invoke a contract function on the base class"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef invoke_script(self, script, id=None, endpoint=None):\n return self._call_endpoint(INVOKE_SCRIPT, params=[script], id=id, endpoint=endpoint)", "response": "Invoke a script that has been assembled by the caller."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsends a serialized tx to the network endpoint", "response": "def send_raw_tx(self, serialized_tx, id=None, endpoint=None):\n \"\"\"\n Submits a serialized tx to the network\n Args:\n serialized_tx: (str) a hexlified string of a transaction\n id: (int, optional) id to use for response tracking\n endpoint: (RPCEndpoint, optional) endpoint to specify to use\n Returns:\n bool: whether the tx was accepted or not\n \"\"\"\n return self._call_endpoint(SEND_TX, params=[serialized_tx], id=id, endpoint=endpoint)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nvalidate an address string and return whether or not it 
is valid.", "response": "def validate_addr(self, address, id=None, endpoint=None):\n \"\"\"\n returns whether or not addr string is valid\n\n Args:\n address: (str) address to lookup ( in format 'AXjaFSP23Jkbe6Pk9pPGT6NBDs1HVdqaXK')\n id: (int, optional) id to use for response tracking\n endpoint: (RPCEndpoint, optional) endpoint to specify to use\n\n Returns:\n json object of the result or the error encountered in the RPC call\n\n \"\"\"\n return self._call_endpoint(VALIDATE_ADDR, params=[address], id=id, endpoint=endpoint)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_peers(self, id=None, endpoint=None):\n return self._call_endpoint(GET_PEERS, id=id, endpoint=endpoint)", "response": "Get the current peers of a remote node."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_validators(self, id=None, endpoint=None):\n return self._call_endpoint(GET_VALIDATORS, id=id, endpoint=endpoint)", "response": "Returns the current NEO consensus nodes information and voting status."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_version(self, id=None, endpoint=None):\n return self._call_endpoint(GET_VERSION, id=id, endpoint=endpoint)", "response": "Get the current version of the current resource in the specified endpoint."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_new_address(self, id=None, endpoint=None):\n return self._call_endpoint(GET_NEW_ADDRESS, id=id, endpoint=endpoint)", "response": "Create new address in the specified region"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the current wallet index height.", "response": "def get_wallet_height(self, id=None, endpoint=None):\n \"\"\"\n Get the current wallet index height.\n Args:\n id: (int, optional) id to use for response tracking\n 
endpoint: (RPCEndpoint, optional) endpoint to specify to use\n Returns:\n json object of the result or the error encountered in the RPC call\n \"\"\"\n return self._call_endpoint(GET_WALLET_HEIGHT, id=id, endpoint=endpoint)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nlists all the addresses in the current wallet.", "response": "def list_address(self, id=None, endpoint=None):\n \"\"\"\n Lists all the addresses in the current wallet.\n Args:\n id: (int, optional) id to use for response tracking\n endpoint: (RPCEndpoint, optional) endpoint to specify to use\n Returns:\n json object of the result or the error encountered in the RPC call\n \"\"\"\n return self._call_endpoint(LIST_ADDRESS, id=id, endpoint=endpoint)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef send_from(self, asset_id, addr_from, to_addr, value, fee=None, change_addr=None, id=None, endpoint=None):\n params = [asset_id, addr_from, to_addr, value]\n if fee:\n params.append(fee)\n if fee and change_addr:\n params.append(change_addr)\n elif not fee and change_addr:\n params.append(0)\n params.append(change_addr)\n return self._call_endpoint(SEND_FROM, params=params, id=id, endpoint=endpoint)", "response": "Send a value to the specified address."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef send_to_address(self, asset_id, to_addr, value, fee=None, change_addr=None, id=None, endpoint=None):\n params = [asset_id, to_addr, value]\n if fee:\n params.append(fee)\n if fee and change_addr:\n params.append(change_addr)\n elif not fee and change_addr:\n params.append(0)\n params.append(change_addr)\n return self._call_endpoint(SEND_TO_ADDRESS, params=params, id=id, endpoint=endpoint)", "response": "Send a transfer to a specific address."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the 
following Python 3 code\ndef send_many(self, outputs_array, fee=None, change_addr=None, id=None, endpoint=None):\n params = [outputs_array]\n if fee:\n params.append(fee)\n if fee and change_addr:\n params.append(change_addr)\n elif not fee and change_addr:\n params.append(0)\n params.append(change_addr)\n return self._call_endpoint(SEND_MANY, params=params, id=id, endpoint=endpoint)", "response": "Send a list of outputs to the NEO network."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef choose_pool(mpi=False, processes=1, **kwargs):\n\n if mpi:\n if not MPIPool.enabled():\n raise SystemError(\"Tried to run with MPI but MPIPool not enabled.\")\n\n pool = MPIPool(**kwargs)\n if not pool.is_master():\n pool.wait()\n sys.exit(0)\n\n log.info(\"Running with MPI on {0} cores\".format(pool.size))\n return pool\n\n elif processes != 1 and MultiPool.enabled():\n log.info(\"Running with MultiPool on {0} cores\".format(processes))\n return MultiPool(processes=processes, **kwargs)\n\n else:\n log.info(\"Running with SerialPool\")\n return SerialPool(**kwargs)", "response": "Choose between the different pools given options from e. g. argparse.\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nwaiting for a master process and send the result to the callback.", "response": "def wait(self, callback=None):\n \"\"\"Tell the workers to wait and listen for the master process. 
This is\n called automatically when using :meth:`MPIPool.map` and doesn't need to\n be called by the user.\n \"\"\"\n if self.is_master():\n return\n\n worker = self.comm.rank\n status = MPI.Status()\n while True:\n log.log(_VERBOSE, \"Worker {0} waiting for task\".format(worker))\n\n task = self.comm.recv(source=self.master, tag=MPI.ANY_TAG,\n status=status)\n\n if task is None:\n log.log(_VERBOSE, \"Worker {0} told to quit work\".format(worker))\n break\n\n func, arg = task\n log.log(_VERBOSE, \"Worker {0} got task {1} with tag {2}\"\n .format(worker, arg, status.tag))\n\n result = func(arg)\n\n log.log(_VERBOSE, \"Worker {0} sending answer {1} with tag {2}\"\n .format(worker, result, status.tag))\n\n self.comm.ssend(result, self.master, status.tag)\n\n if callback is not None:\n callback()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nevaluate a function or callable on each task in parallel using MPI.", "response": "def map(self, worker, tasks, callback=None):\n \"\"\"Evaluate a function or callable on each task in parallel using MPI.\n\n The callable, ``worker``, is called on each element of the ``tasks``\n iterable. The results are returned in the expected order (symmetric with\n ``tasks``).\n\n Parameters\n ----------\n worker : callable\n A function or callable object that is executed on each element of\n the specified ``tasks`` iterable. This object must be picklable\n (i.e. it can't be a function scoped within a function or a\n ``lambda`` function). This should accept a single positional\n argument and return a single object.\n tasks : iterable\n A list or iterable of tasks. 
Each task can be itself an iterable\n (e.g., tuple) of values or data to pass in to the worker function.\n callback : callable, optional\n An optional callback function (or callable) that is called with the\n result from each worker run and is executed on the master process.\n This is useful for, e.g., saving results to a file, since the\n callback is only called on the master thread.\n\n Returns\n -------\n results : list\n A list of results from the output of each ``worker()`` call.\n \"\"\"\n\n # If not the master just wait for instructions.\n if not self.is_master():\n self.wait()\n return\n\n if callback is None:\n callback = _dummy_callback\n\n workerset = self.workers.copy()\n tasklist = [(tid, (worker, arg)) for tid, arg in enumerate(tasks)]\n resultlist = [None] * len(tasklist)\n pending = len(tasklist)\n\n while pending:\n if workerset and tasklist:\n worker = workerset.pop()\n taskid, task = tasklist.pop()\n log.log(_VERBOSE, \"Sent task %s to worker %s with tag %s\",\n task[1], worker, taskid)\n self.comm.send(task, dest=worker, tag=taskid)\n\n if tasklist:\n flag = self.comm.Iprobe(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)\n if not flag:\n continue\n else:\n self.comm.Probe(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG)\n\n status = MPI.Status()\n result = self.comm.recv(source=MPI.ANY_SOURCE, tag=MPI.ANY_TAG,\n status=status)\n worker = status.source\n taskid = status.tag\n log.log(_VERBOSE, \"Master received from worker %s with tag %s\",\n worker, taskid)\n\n callback(result)\n\n workerset.add(worker)\n resultlist[taskid] = result\n pending -= 1\n\n return resultlist"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ntell all the workers to quit.", "response": "def close(self):\n \"\"\" Tell all the workers to quit.\"\"\"\n if self.is_worker():\n return\n\n for worker in self.workers:\n self.comm.send(None, worker, 0)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nupdates the git revision 
string if the path is being imported directly from a git working copy.", "response": "def update_git_devstr(version, path=None):\n \"\"\"\n Updates the git revision string if and only if the path is being imported\n directly from a git working copy. This ensures that the revision number in\n the version string is accurate.\n \"\"\"\n\n try:\n # Quick way to determine if we're in git or not - returns '' if not\n devstr = get_git_devstr(sha=True, show_warning=False, path=path)\n except OSError:\n return version\n\n if not devstr:\n # Probably not in git so just pass silently\n return version\n\n if 'dev' in version: # update to the current git revision\n version_base = version.split('.dev', 1)[0]\n devstr = get_git_devstr(sha=False, show_warning=False, path=path)\n\n return version_base + '.dev' + devstr\n else:\n # otherwise it's already the true/release version\n return version"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_git_devstr(sha=False, show_warning=True, path=None):\n\n if path is None:\n path = os.getcwd()\n if not _get_repo_path(path, levels=0):\n return ''\n\n if not os.path.isdir(path):\n path = os.path.abspath(os.path.dirname(path))\n\n if sha:\n # Faster for getting just the hash of HEAD\n cmd = ['rev-parse', 'HEAD']\n else:\n cmd = ['rev-list', '--count', 'HEAD']\n\n def run_git(cmd):\n try:\n p = subprocess.Popen(['git'] + cmd, cwd=path,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n stdin=subprocess.PIPE)\n stdout, stderr = p.communicate()\n except OSError as e:\n if show_warning:\n warnings.warn('Error running git: ' + str(e))\n return (None, b'', b'')\n\n if p.returncode == 128:\n if show_warning:\n warnings.warn('No git repository present at {0!r}! 
Using '\n 'default dev version.'.format(path))\n return (p.returncode, b'', b'')\n if p.returncode == 129:\n if show_warning:\n warnings.warn('Your git looks old (does it support {0}?); '\n 'consider upgrading to v1.7.2 or '\n 'later.'.format(cmd[0]))\n return (p.returncode, stdout, stderr)\n elif p.returncode != 0:\n if show_warning:\n warnings.warn('Git failed while determining revision '\n 'count: {0}'.format(_decode_stdio(stderr)))\n return (p.returncode, stdout, stderr)\n\n return p.returncode, stdout, stderr\n\n returncode, stdout, stderr = run_git(cmd)\n\n if not sha and returncode == 129:\n # git returns 129 if a command option failed to parse; in\n # particular this could happen in git versions older than 1.7.2\n # where the --count option is not supported\n # Also use --abbrev-commit and --abbrev=0 to display the minimum\n # number of characters needed per-commit (rather than the full hash)\n cmd = ['rev-list', '--abbrev-commit', '--abbrev=0', 'HEAD']\n returncode, stdout, stderr = run_git(cmd)\n # Fall back on the old method of getting all revisions and counting\n # the lines\n if returncode == 0:\n return str(stdout.count(b'\\n'))\n else:\n return ''\n elif sha:\n return _decode_stdio(stdout)[:40]\n else:\n return _decode_stdio(stdout).strip()", "response": "Returns a string containing the git version info of the current revision."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngiving a file or directory name determine the root of the git repository that corresponds to that file.", "response": "def _get_repo_path(pathname, levels=None):\n \"\"\"\n Given a file or directory name, determine the root of the git repository\n this path is under. 
If given, this won't look any higher than ``levels``\n (that is, if ``levels=0`` then the given path must be the root of the git\n repository and is returned if so.\n\n Returns `None` if the given path could not be determined to belong to a git\n repo.\n \"\"\"\n\n if os.path.isfile(pathname):\n current_dir = os.path.abspath(os.path.dirname(pathname))\n elif os.path.isdir(pathname):\n current_dir = os.path.abspath(pathname)\n else:\n return None\n\n current_level = 0\n\n while levels is None or current_level <= levels:\n if os.path.exists(os.path.join(current_dir, '.git')):\n return current_dir\n\n current_level += 1\n if current_dir == os.path.dirname(current_dir):\n break\n\n current_dir = os.path.dirname(current_dir)\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef map(self, func, iterable, callback=None):\n return self._call_callback(callback, map(func, iterable))", "response": "A wrapper around the built - in map function that provides consistent interface with the other Pool classes."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntranslating a single codon into a amino acid or stop M", "response": "def translate_codon(codon, aa_pos):\n \"\"\"Translate a single codon into a single amino acid or stop '*'\n\n Parameters\n ----------\n codon : str\n Expected to be of length 3\n aa_pos : int\n Codon/amino acid offset into the protein (starting from 0)\n \"\"\"\n # not handling rare Leucine or Valine starts!\n if aa_pos == 0 and codon in START_CODONS:\n return \"M\"\n elif codon in STOP_CODONS:\n return \"*\"\n else:\n return DNA_CODON_TABLE[codon]"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntranslates a nucleotide sequence into amino acid protein sequence.", "response": "def translate(\n nucleotide_sequence,\n first_codon_is_start=True,\n to_stop=True,\n truncate=False):\n \"\"\"Translates cDNA coding sequence into 
an amino acid protein sequence.\n\n Should typically start with a start codon, but we allow non-methionine\n first residues since the CDS we're translating might have been affected\n by a start loss mutation.\n\n The sequence may include the 3' UTR but will stop translation at the first\n encountered stop codon.\n\n Parameters\n ----------\n nucleotide_sequence : BioPython Seq\n cDNA sequence\n\n first_codon_is_start : bool\n Treat the beginning of nucleotide_sequence as a start codon and\n translate it to methionine (default = True)\n\n to_stop : bool\n Stop translation at the first stop codon (default = True)\n\n truncate : bool\n Truncate sequence if it's not a multiple of 3 (default = False)\n\n Returns BioPython Seq of amino acids\n \"\"\"\n if not isinstance(nucleotide_sequence, Seq):\n nucleotide_sequence = Seq(nucleotide_sequence)\n\n if truncate:\n # if sequence isn't a multiple of 3, truncate it so BioPython\n # doesn't complain\n n_nucleotides = int(len(nucleotide_sequence) / 3) * 3\n nucleotide_sequence = nucleotide_sequence[:n_nucleotides]\n else:\n n_nucleotides = len(nucleotide_sequence)\n\n assert n_nucleotides % 3 == 0, \\\n (\"Expected nucleotide sequence to be multiple of 3\"\n \" but got %s of length %d\") % (\n nucleotide_sequence,\n n_nucleotides)\n\n # passing cds=False to translate since we may want to deal with premature\n # stop codons\n protein_sequence = nucleotide_sequence.translate(to_stop=to_stop, cds=False)\n\n if first_codon_is_start and (\n len(protein_sequence) == 0 or protein_sequence[0] != \"M\"):\n if nucleotide_sequence[:3] in START_CODONS:\n # TODO: figure out when these should be made into methionines\n # and when left as whatever amino acid they normally code for\n # e.g. 
Leucine start codons\n # See: DOI: 10.1371/journal.pbio.0020397\n return \"M\" + protein_sequence[1:]\n else:\n raise ValueError(\n (\"Expected first codon of %s to be start codon\"\n \" (one of %s) but got %s\") % (\n protein_sequence[:10],\n START_CODONS,\n nucleotide_sequence))\n\n return protein_sequence"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef find_first_stop_codon(nucleotide_sequence):\n n_mutant_codons = len(nucleotide_sequence) // 3\n for i in range(n_mutant_codons):\n codon = nucleotide_sequence[3 * i:3 * i + 3]\n if codon in STOP_CODONS:\n return i\n return -1", "response": "Given a nucleotide sequence return the index of first stop codon."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef translate_in_frame_mutation(\n transcript,\n ref_codon_start_offset,\n ref_codon_end_offset,\n mutant_codons):\n \"\"\"\n Returns:\n - mutant amino acid sequence\n - offset of first stop codon in the mutant sequence (or -1 if there was none)\n - boolean flag indicating whether any codons from the 3' UTR were used\n\n Parameters\n ----------\n transcript : pyensembl.Transcript\n Reference transcript to which a cDNA mutation should be applied.\n\n ref_codon_start_offset : int\n Starting (base 0) integer offset into codons (character triplets) of the\n transcript's reference coding sequence.\n\n ref_codon_end_offset : int\n Final (base 0) integer offset into codons of the transcript's\n reference coding sequence.\n\n mutant_codons : str\n Nucleotide sequence to replace the reference codons with\n (expected to have length that is a multiple of three)\n \"\"\"\n mutant_stop_codon_index = find_first_stop_codon(mutant_codons)\n\n using_three_prime_utr = False\n\n if mutant_stop_codon_index != -1:\n mutant_codons = mutant_codons[:3 * mutant_stop_codon_index]\n elif ref_codon_end_offset > len(transcript.protein_sequence):\n # if the mutant 
codons didn't contain a stop but did mutate the\n # true reference stop codon then the translated sequence might involve\n # the 3' UTR\n three_prime_utr = transcript.three_prime_utr_sequence\n n_utr_codons = len(three_prime_utr) // 3\n # trim the 3' UTR sequence to have a length that is a multiple of 3\n truncated_utr_sequence = three_prime_utr[:n_utr_codons * 3]\n\n # note the offset of the first stop codon in the combined\n # nucleotide sequence of both the end of the CDS and the 3' UTR\n first_utr_stop_codon_index = find_first_stop_codon(truncated_utr_sequence)\n\n if first_utr_stop_codon_index > 0:\n # if there is a stop codon in the 3' UTR sequence and it's not the\n # very first codon\n using_three_prime_utr = True\n n_mutant_codons_before_utr = len(mutant_codons) // 3\n mutant_stop_codon_index = n_mutant_codons_before_utr + first_utr_stop_codon_index\n # combine the in-frame mutant codons with the truncated sequence of\n # the 3' UTR\n mutant_codons += truncated_utr_sequence[:first_utr_stop_codon_index * 3]\n elif first_utr_stop_codon_index == -1:\n # if there is no stop codon in the 3' UTR sequence\n using_three_prime_utr = True\n mutant_codons += truncated_utr_sequence\n\n amino_acids = translate(\n mutant_codons,\n first_codon_is_start=(ref_codon_start_offset == 0))\n\n return amino_acids, mutant_stop_codon_index, using_three_prime_utr", "response": "Translate the mutant codons of an in-frame cDNA mutation into a mutant amino acid sequence."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\na script which loads variants and annotates them with overlapping genes. 
Example usage: varcode-genes --vcf mutect.vcf \\ --vcf strelka.vcf \\ --maf tcga_brca.maf \\ --variant chr1 498584 C G \\ --json-variants more_variants.json", "response": "def main(args_list=None):\n \"\"\"\n Script which loads variants and annotates them with overlapping genes.\n\n Example usage:\n varcode-genes\n --vcf mutect.vcf \\\n --vcf strelka.vcf \\\n --maf tcga_brca.maf \\\n --variant chr1 498584 C G \\\n --json-variants more_variants.json\n \"\"\"\n print_version_info()\n if args_list is None:\n args_list = sys.argv[1:]\n args = arg_parser.parse_args(args_list)\n variants = variant_collection_from_args(args)\n variants_dataframe = variants.to_dataframe()\n logger.info('\\n%s', variants_dataframe)\n if args.output_csv:\n variants_dataframe.to_csv(args.output_csv, index=False)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef load_maf_dataframe(path, nrows=None, raise_on_error=True, encoding=None):\n require_string(path, \"Path to MAF\")\n\n n_basic_columns = len(MAF_COLUMN_NAMES)\n\n # pylint: disable=no-member\n # pylint gets confused by read_csv\n df = pandas.read_csv(\n path,\n comment=\"#\",\n sep=\"\\t\",\n low_memory=False,\n skip_blank_lines=True,\n header=0,\n encoding=encoding)\n\n if len(df.columns) < n_basic_columns:\n error_message = (\n \"Too few columns in MAF file %s, expected %d but got %d : %s\" % (\n path, n_basic_columns, len(df.columns), df.columns))\n if raise_on_error:\n raise ValueError(error_message)\n else:\n logging.warn(error_message)\n\n # check each pair of expected/actual column names to make sure they match\n for expected, actual in zip(MAF_COLUMN_NAMES, df.columns):\n if expected != actual:\n # MAFs in the wild have capitalization differences in their\n # column names, normalize them to always use the names above\n if expected.lower() == actual.lower():\n # using DataFrame.rename in Python 2.7.x doesn't seem to\n # work for some files, possibly because Pandas treats\n # unicode 
vs. str columns as different?\n df[expected] = df[actual]\n del df[actual]\n else:\n error_message = (\n \"Expected column %s but got %s\" % (expected, actual))\n if raise_on_error:\n raise ValueError(error_message)\n else:\n logging.warn(error_message)\n\n return df", "response": "Load the guaranteed columns of a TCGA MAF file into a DataFrame."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef load_maf(\n path,\n optional_cols=[],\n sort_key=variant_ascending_position_sort_key,\n distinct=True,\n raise_on_error=True,\n encoding=None):\n \"\"\"\n Load reference name and Variant objects from MAF filename.\n\n Parameters\n ----------\n\n path : str\n Path to MAF (*.maf).\n\n optional_cols : list, optional\n A list of MAF columns to include as metadata if they are present in the MAF.\n Does not result in an error if those columns are not present.\n\n sort_key : fn\n Function which maps each element to a sorting criterion.\n Set to None to not to sort the variants.\n\n distinct : bool\n Don't keep repeated variants\n\n raise_on_error : bool\n Raise an exception upon encountering an error or just log a warning.\n\n encoding : str, optional\n Encoding to use for UTF when reading MAF file.\n \"\"\"\n # pylint: disable=no-member\n # pylint gets confused by read_csv inside load_maf_dataframe\n maf_df = load_maf_dataframe(path, raise_on_error=raise_on_error, encoding=encoding)\n\n if len(maf_df) == 0 and raise_on_error:\n raise ValueError(\"Empty MAF file %s\" % path)\n\n ensembl_objects = {}\n variants = []\n metadata = {}\n for _, x in maf_df.iterrows():\n contig = x.Chromosome\n if isnull(contig):\n error_message = \"Invalid contig name: %s\" % (contig,)\n if raise_on_error:\n raise ValueError(error_message)\n else:\n logging.warn(error_message)\n continue\n\n start_pos = x.Start_Position\n ref = x.Reference_Allele\n\n # it's possible in a MAF file to have multiple Ensembl releases\n # mixed in a single MAF file 
(the genome assembly is\n # specified by the NCBI_Build column)\n ncbi_build = x.NCBI_Build\n if ncbi_build in ensembl_objects:\n ensembl = ensembl_objects[ncbi_build]\n else:\n if isinstance(ncbi_build, int):\n reference_name = \"B%d\" % ncbi_build\n else:\n reference_name = str(ncbi_build)\n ensembl = infer_genome(reference_name)\n ensembl_objects[ncbi_build] = ensembl\n\n # have to try both Tumor_Seq_Allele1 and Tumor_Seq_Allele2\n # to figure out which is different from the reference allele\n if x.Tumor_Seq_Allele1 != ref:\n alt = x.Tumor_Seq_Allele1\n else:\n if x.Tumor_Seq_Allele2 == ref:\n error_message = (\n \"Both tumor alleles agree with reference %s: %s\" % (\n ref, x,))\n if raise_on_error:\n raise ValueError(error_message)\n else:\n logging.warn(error_message)\n continue\n alt = x.Tumor_Seq_Allele2\n\n variant = Variant(\n contig,\n start_pos,\n str(ref),\n str(alt),\n ensembl=ensembl)\n\n # keep metadata about the variant and its TCGA annotation\n metadata[variant] = {\n 'Hugo_Symbol': x.Hugo_Symbol,\n 'Center': x.Center,\n 'Strand': x.Strand,\n 'Variant_Classification': x.Variant_Classification,\n 'Variant_Type': x.Variant_Type,\n 'dbSNP_RS': x.dbSNP_RS,\n 'dbSNP_Val_Status': x.dbSNP_Val_Status,\n 'Tumor_Sample_Barcode': x.Tumor_Sample_Barcode,\n 'Matched_Norm_Sample_Barcode': x.Matched_Norm_Sample_Barcode,\n }\n for optional_col in optional_cols:\n if optional_col in x:\n metadata[variant][optional_col] = x[optional_col]\n\n variants.append(variant)\n\n return VariantCollection(\n variants=variants,\n source_to_metadata_dict={path: metadata},\n sort_key=sort_key,\n distinct=distinct)", "response": "Loads the ensembl objects from a MAF file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\napply function to specified field of effect if it is not None otherwise return default.", "response": "def apply_to_field_if_exists(effect, field_name, fn, default):\n \"\"\"\n Apply function to specified field of effect if it is 
not None,\n otherwise return default.\n \"\"\"\n value = getattr(effect, field_name, None)\n if value is None:\n return default\n else:\n return fn(value)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\napplies function to transcript associated with effect if it exists otherwise return default.", "response": "def apply_to_transcript_if_exists(effect, fn, default):\n \"\"\"\n Apply function to transcript associated with effect,\n if it exists, otherwise return default.\n \"\"\"\n return apply_to_field_if_exists(\n effect=effect,\n field_name=\"transcript\",\n fn=fn,\n default=default)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef number_exons_in_associated_transcript(effect):\n return apply_to_transcript_if_exists(\n effect=effect,\n fn=lambda t: len(t.exons),\n default=0)", "response": "Returns the number of exons in an associated transcript."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the length of the coding sequence of the associated transcript.", "response": "def cds_length_of_associated_transcript(effect):\n \"\"\"\n Length of coding sequence of transcript associated with effect,\n if there is one (otherwise return 0).\n \"\"\"\n return apply_to_transcript_if_exists(\n effect=effect,\n fn=lambda t: len(t.coding_sequence) if (t.complete and t.coding_sequence) else 0,\n default=0)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef length_of_associated_transcript(effect):\n return apply_to_transcript_if_exists(\n effect=effect,\n fn=lambda t: len(t.sequence),\n default=0)", "response": "Returns the length of spliced mRNA sequence of transcript associated with effect."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the name of the associated transcript.", "response": "def name_of_associated_transcript(effect):\n \"\"\"\n 
Name of transcript associated with effect,\n if there is one (otherwise return \"\").\n \"\"\"\n return apply_to_transcript_if_exists(\n effect=effect,\n fn=lambda t: t.name,\n default=\"\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning True if effect has complete transcript False otherwise", "response": "def effect_has_complete_transcript(effect):\n \"\"\"\n Parameters\n ----------\n effect : subclass of MutationEffect\n\n Returns True if effect has transcript and that transcript has complete CDS\n \"\"\"\n return apply_to_transcript_if_exists(\n effect=effect,\n fn=lambda t: t.complete,\n default=False)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef effect_associated_with_protein_coding_gene(effect):\n return apply_to_gene_if_exists(\n effect=effect,\n fn=lambda g: g.biotype == \"protein_coding\",\n default=False)", "response": "Returns True if effect is associated with a gene and that gene has a protein_coding biotype."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef effect_associated_with_protein_coding_transcript(effect):\n return apply_to_transcript_if_exists(\n effect=effect,\n fn=lambda t: t.biotype == \"protein_coding\",\n default=False)", "response": "Returns True if effect is associated with a protein coding transcript."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse_transcript_number(effect):\n name = name_of_associated_transcript(effect)\n if \"-\" not in name:\n return 0\n parts = name.split(\"-\")\n last_part = parts[-1]\n if last_part.isdigit():\n return int(last_part)\n else:\n return 0", "response": "Try to parse the number at the end of a transcript name associated with\n an effect. 
Returns 0 if the number is not found."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef multi_gene_effect_sort_key(effect):\n\n return tuple([\n effect_priority(effect),\n effect_associated_with_protein_coding_gene(effect),\n effect_associated_with_protein_coding_transcript(effect),\n effect_has_complete_transcript(effect),\n cds_length_of_associated_transcript(effect),\n length_of_associated_transcript(effect),\n number_exons_in_associated_transcript(effect),\n transcript_name_ends_with_01(effect),\n -parse_transcript_number(effect)\n ])", "response": "This function returns a sort key for choosing the highest priority of an effect across multiple genes."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef select_between_exonic_splice_site_and_alternate_effect(effect):\n if effect.__class__ is not ExonicSpliceSite:\n return effect\n if effect.alternate_effect is None:\n return effect\n splice_priority = effect_priority(effect)\n alternate_priority = effect_priority(effect.alternate_effect)\n if splice_priority > alternate_priority:\n return effect\n else:\n return effect.alternate_effect", "response": "Select the most appropriate effect between an ExonicSpliceSite and an alternate effect."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef keep_max_priority_effects(effects):\n priority_values = map(effect_priority, effects)\n max_priority = max(priority_values)\n return [e for (e, p) in zip(effects, priority_values) if p == max_priority]", "response": "Given a list of MutationEffect subclasses and only keep the ones with the maximum priority\n effect type."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\napply each filter to the effect list sequentially.", "response": "def filter_pipeline(effects, filters):\n \"\"\"\n Apply each filter to the effect list sequentially. 
If any filter\n returns zero values then ignore it. As soon as only one effect is left,\n return it.\n\n Parameters\n ----------\n effects : list of MutationEffect subclass instances\n\n filters : list of functions\n Each function takes a list of effects and returns a list of effects\n\n Returns list of effects\n \"\"\"\n for filter_fn in filters:\n filtered_effects = filter_fn(effects)\n if len(filtered_effects) == 1:\n return filtered_effects\n elif len(filtered_effects) > 1:\n effects = filtered_effects\n return effects"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the first non-empty mutation effect that is on a canonical transcript with both the maximum length CDS and maximum length full transcript sequence.", "response": "def top_priority_effect_for_single_gene(effects):\n \"\"\"\n For effects which are from the same gene, check to see if there\n is a canonical transcript with both the maximum length CDS\n and maximum length full transcript sequence.\n\n If not, then use number of exons and transcript name as tie-breaking\n features.\n\n Parameters\n ----------\n effects : list of MutationEffect subclass instances\n\n Returns single effect object\n \"\"\"\n\n # first filter effects to keep those on\n # 1) maximum priority effects\n # 2) protein coding genes\n # 3) protein coding transcripts\n # 4) complete transcripts\n #\n # If any of these filters drop all the effects then we move on to the next\n # filtering step.\n effects = filter_pipeline(\n effects=effects,\n filters=[\n keep_max_priority_effects,\n keep_effects_on_protein_coding_genes,\n keep_effects_on_protein_coding_transcripts,\n keep_effects_on_complete_transcripts,\n ],\n )\n\n if len(effects) == 1:\n return effects[0]\n\n # compare CDS length and transcript lengths of remaining effects\n # if one effect has the maximum of both categories then return it\n cds_lengths = [cds_length_of_associated_transcript(e) for e in effects]\n max_cds_length = 
max(cds_lengths)\n\n # get set of indices of all effects with maximum CDS length\n max_cds_length_indices = {\n i\n for (i, l) in enumerate(cds_lengths)\n if l == max_cds_length\n }\n\n seq_lengths = [length_of_associated_transcript(e) for e in effects]\n max_seq_length = max(seq_lengths)\n\n # get set of indices for all effects whose associated transcript\n # has maximum sequence length\n max_seq_length_indices = {\n i\n for (i, l) in enumerate(seq_lengths)\n if l == max_seq_length\n }\n\n # which effects have transcripts with both the longest CDS and\n # longest full transcript sequence?\n intersection_of_indices = \\\n max_cds_length_indices.intersection(max_seq_length_indices)\n\n n_candidates = len(intersection_of_indices)\n if n_candidates == 1:\n best_index = intersection_of_indices.pop()\n return effects[best_index]\n\n elif n_candidates == 0:\n # if set of max CDS effects and max sequence length effects is disjoint\n # then let's try to do the tie-breaking sort over their union\n union_of_indices = max_cds_length_indices.union(max_seq_length_indices)\n candidate_effects = [effects[i] for i in union_of_indices]\n\n else:\n # if multiple effects have transcripts with the max CDS length and\n # the max full sequence length then run the tie-breaking sort\n # over all these candidates\n candidate_effects = [effects[i] for i in intersection_of_indices]\n\n # break ties by number of exons, whether name of transcript ends if \"01\",\n # and all else being equal, prefer transcript names that end with lower\n # numbers\n return max(\n candidate_effects,\n key=tie_breaking_sort_key_for_single_gene_effects)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngives a collection of variant transcript effects return the highest priority effect.", "response": "def top_priority_effect(effects):\n \"\"\"\n Given a collection of variant transcript effects,\n return the top priority object. 
ExonicSpliceSite variants require special\n treatment since they actually represent two effects -- the splicing modification\n and whatever else would happen to the exonic sequence if nothing else gets\n changed. In cases where multiple transcripts give rise to multiple\n effects, use a variety of filtering and sorting heuristics to pick\n the canonical transcript.\n \"\"\"\n if len(effects) == 0:\n raise ValueError(\"List of effects cannot be empty\")\n\n effects = map(\n select_between_exonic_splice_site_and_alternate_effect,\n effects)\n\n effects_grouped_by_gene = apply_groupby(\n effects, fn=gene_id_of_associated_transcript, skip_none=False)\n\n if None in effects_grouped_by_gene:\n effects_without_genes = effects_grouped_by_gene.pop(None)\n else:\n effects_without_genes = []\n\n # if we had any effects associated with genes then choose one of those\n if len(effects_grouped_by_gene) > 0:\n effects_with_genes = [\n top_priority_effect_for_single_gene(gene_effects)\n for gene_effects in effects_grouped_by_gene.values()\n ]\n return max(effects_with_genes, key=multi_gene_effect_sort_key)\n else:\n # if all effects were without genes then choose the best among those\n assert len(effects_without_genes) > 0\n return max(effects_without_genes, key=multi_gene_effect_sort_key)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef to_dict(self):\n return dict(\n variants=self.variants,\n distinct=self.distinct,\n sort_key=self.sort_key,\n sources=self.sources,\n source_to_metadata_dict=self.source_to_metadata_dict)", "response": "Returns a dictionary with the state of the object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates another VariantCollection of the same class and with new elements.", "response": "def clone_with_new_elements(self, new_elements):\n \"\"\"\n Create another VariantCollection of the same class and with\n same state (including metadata) but 
possibly different entries.\n\n Warning: metadata is a dictionary keyed by variants. This method\n leaves that dictionary as-is, which may result in extraneous entries\n or missing entries.\n \"\"\"\n kwargs = self.to_dict()\n kwargs[\"variants\"] = new_elements\n return self.from_dict(kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns EffectCollection containing all effects of this Variant on a transcript.", "response": "def effects(self, raise_on_error=True):\n \"\"\"\n Parameters\n ----------\n raise_on_error : bool, optional\n If exception is raised while determining effect of variant on a\n transcript, should it be raised? This default is True, meaning\n errors result in raised exceptions, otherwise they are only logged.\n\n \"\"\"\n return EffectCollection([\n effect\n for variant in self\n for effect in variant.effects(raise_on_error=raise_on_error)\n ])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef gene_counts(self):\n return {\n gene_name: len(group)\n for (gene_name, group)\n in self.groupby_gene_name().items()\n }", "response": "Returns the number of elements overlapping each gene name. 
Expects the sequence to have\n an implementation of groupby_gene_name."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef filter_by_transcript_expression(\n self,\n transcript_expression_dict,\n min_expression_value=0.0):\n \"\"\"\n Filters variants down to those which overlap a transcript whose\n expression value in the transcript_expression_dict argument is greater\n than min_expression_value.\n\n Parameters\n ----------\n transcript_expression_dict : dict\n Dictionary mapping Ensembl transcript IDs to expression estimates\n (either FPKM or TPM)\n\n min_expression_value : float\n Threshold above which we'll keep a variant in the result collection\n \"\"\"\n return self.filter_any_above_threshold(\n multi_key_fn=lambda variant: variant.transcript_ids,\n value_dict=transcript_expression_dict,\n threshold=min_expression_value)", "response": "Returns a new VariantCollection containing only those variants which overlap a transcript whose expression value in the transcript_expression_dict argument is greater than min_expression_value."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a new Collection containing only variants which overlap a gene whose expression value in the gene_expression_dict argument is greater than min_expression_value.", "response": "def filter_by_gene_expression(\n self,\n gene_expression_dict,\n min_expression_value=0.0):\n \"\"\"\n Filters variants down to those which overlap a gene whose\n expression value in the gene_expression_dict argument is greater\n than min_expression_value.\n\n Parameters\n ----------\n gene_expression_dict : dict\n Dictionary mapping Ensembl gene IDs to expression estimates\n (either FPKM or TPM)\n\n min_expression_value : float\n Threshold above which we'll keep a variant in the result collection\n \"\"\"\n return self.filter_any_above_threshold(\n 
multi_key_fn=lambda effect: effect.gene_ids,\n value_dict=gene_expression_dict,\n threshold=min_expression_value)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns True if the variants in this collection are exactly equal.", "response": "def exactly_equal(self, other):\n '''\n Comparison between VariantCollection instances that takes into account\n the info field of Variant instances.\n\n Returns\n ----------\n True if the variants in this collection equal the variants in the other\n collection. The Variant.info fields are included in the comparison.\n '''\n return (\n self.__class__ == other.__class__ and\n len(self) == len(other) and\n all(x.exactly_equal(y) for (x, y) in zip(self, other)))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a single VariantCollection from multiple VariantCollection objects.", "response": "def _combine_variant_collections(cls, combine_fn, variant_collections, kwargs):\n \"\"\"\n Create a single VariantCollection from multiple different collections.\n\n Parameters\n ----------\n\n cls : class\n Should be VariantCollection\n\n combine_fn : function\n Function which takes any number of sets of variants and returns\n some combination of them (typically union or intersection).\n\n variant_collections : tuple of VariantCollection\n\n kwargs : dict\n Optional dictionary of keyword arguments to pass to the initializer\n for VariantCollection.\n \"\"\"\n kwargs[\"variants\"] = combine_fn(*[set(vc) for vc in variant_collections])\n kwargs[\"source_to_metadata_dict\"] = cls._merge_metadata_dictionaries(\n [vc.source_to_metadata_dict for vc in variant_collections])\n kwargs[\"sources\"] = set.union(*([vc.sources for vc in variant_collections]))\n for key, value in variant_collections[0].to_dict().items():\n # If some optional parameter isn't explicitly specified as an\n # argument to union() or intersection() then use the same value\n # as the first 
VariantCollection.\n #\n # I'm doing this so that the meaning of VariantCollection.union\n # and VariantCollection.intersection with a single argument is\n # the identity function (rather than setting optional parameters\n # to their default values.\n if key not in kwargs:\n kwargs[key] = value\n return cls(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the union of two VariantCollection objects.", "response": "def union(self, *others, **kwargs):\n \"\"\"\n Returns the union of variants in a several VariantCollection objects.\n \"\"\"\n return self._combine_variant_collections(\n combine_fn=set.union,\n variant_collections=(self,) + others,\n kwargs=kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the intersection of two VariantCollection objects.", "response": "def intersection(self, *others, **kwargs):\n \"\"\"\n Returns the intersection of variants in several VariantCollection objects.\n \"\"\"\n return self._combine_variant_collections(\n combine_fn=set.intersection,\n variant_collections=(self,) + others,\n kwargs=kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef to_dataframe(self):\n def row_from_variant(variant):\n return OrderedDict([\n (\"chr\", variant.contig),\n (\"start\", variant.original_start),\n (\"ref\", variant.original_ref),\n (\"alt\", variant.original_alt),\n (\"gene_name\", \";\".join(variant.gene_names)),\n (\"gene_id\", \";\".join(variant.gene_ids))\n ])\n rows = [row_from_variant(v) for v in self]\n if len(rows) == 0:\n # TODO: return a DataFrame with the appropriate columns\n return pd.DataFrame()\n return pd.DataFrame.from_records(rows, columns=rows[0].keys())", "response": "Build a DataFrame from this variant collection"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreusing the `trim_shared_prefix` function above to implement similar 
functionality for string suffixes. Given ref='ABC' and alt='BC', we first revese both strings: reverse_ref = 'CBA' reverse_alt = 'CB' and then the result of calling trim_shared_prefix will be: ('A', '', 'CB') We then reverse all three of the result strings to get back the shared suffix and both prefixes leading up to it: ('A', '', 'BC')", "response": "def trim_shared_suffix(ref, alt):\n \"\"\"\n Reuse the `trim_shared_prefix` function above to implement similar\n functionality for string suffixes.\n\n Given ref='ABC' and alt='BC', we first revese both strings:\n reverse_ref = 'CBA'\n reverse_alt = 'CB'\n and then the result of calling trim_shared_prefix will be:\n ('A', '', 'CB')\n We then reverse all three of the result strings to get back\n the shared suffix and both prefixes leading up to it:\n ('A', '', 'BC')\n \"\"\"\n n_ref = len(ref)\n n_alt = len(alt)\n n_min = min(n_ref, n_alt)\n i = 0\n while i < n_min and ref[-i - 1] == alt[-i - 1]:\n i += 1\n\n # i is length of shared suffix.\n if i == 0:\n return (ref, alt, '')\n return (ref[:-i], alt[:-i], ref[-i:])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngiving two nucleotide or amino acid strings identify the common prefix and suffix of the two nucleotide strings and return the unique components along with the prefix and suffix.", "response": "def trim_shared_flanking_strings(ref, alt):\n \"\"\"\n Given two nucleotide or amino acid strings, identify\n if they have a common prefix, a common suffix, and return\n their unique components along with the prefix and suffix.\n\n For example, if the input ref = \"SYFFQGR\" and alt = \"SYMLLFIFQGR\"\n then the result will be:\n (\"F\", \"MLLFI\", \"SY\", \"FQGR\")\n \"\"\"\n ref, alt, prefix = trim_shared_prefix(ref, alt)\n ref, alt, suffix = trim_shared_suffix(ref, alt)\n return ref, alt, prefix, suffix"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nscript which loads variants and annotates them with overlapping 
genes and predicted coding effects. Example usage: varcode --vcf mutect.vcf \\ --vcf strelka.vcf \\ --maf tcga_brca.maf \\ --variant chr1 498584 C G \\ --json-variants more_variants.json", "response": "def main(args_list=None):\n \"\"\"\n Script which loads variants and annotates them with overlapping genes\n and predicted coding effects.\n\n Example usage:\n varcode\n --vcf mutect.vcf \\\n --vcf strelka.vcf \\\n --maf tcga_brca.maf \\\n --variant chr1 498584 C G \\\n --json-variants more_variants.json\n \"\"\"\n print_version_info()\n if args_list is None:\n args_list = sys.argv[1:]\n\n args = arg_parser.parse_args(args_list)\n variants = variant_collection_from_args(args)\n effects = variants.effects()\n if args.only_coding:\n effects = effects.drop_silent_and_noncoding()\n if args.one_per_variant:\n variant_to_effect_dict = effects.top_priority_effect_per_variant()\n effects = effects.clone_with_new_elements(list(variant_to_effect_dict.values()))\n\n effects_dataframe = effects.to_dataframe()\n logger.info('\\n%s', effects)\n if args.output_csv:\n effects_dataframe.to_csv(args.output_csv, index=False)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_codons(\n variant,\n trimmed_cdna_ref,\n trimmed_cdna_alt,\n sequence_from_start_codon,\n cds_offset):\n \"\"\"\n Returns indices of first and last reference codons affected by the variant,\n as well as the actual sequence of the mutated codons which replace those\n reference codons.\n\n Parameters\n ----------\n variant : Variant\n\n trimmed_cdna_ref : str\n Trimmed reference cDNA nucleotides affected by the variant\n\n trimmed_cdna_alt : str\n Trimmed alternate cDNA nucleotides which replace the reference\n\n sequence_from_start_codon : str\n cDNA nucleotide coding sequence\n\n cds_offset : int\n Integer offset into the coding sequence where ref is replaced with alt\n \"\"\"\n # index (starting from 0) of first affected reference codon\n 
ref_codon_start_offset = cds_offset // 3\n # which nucleotide of the first codon got changed?\n nucleotide_offset_into_first_ref_codon = cds_offset % 3\n\n n_ref_nucleotides = len(trimmed_cdna_ref)\n if n_ref_nucleotides == 0:\n if nucleotide_offset_into_first_ref_codon == 2:\n # if we're inserting between codons\n ref_codon_end_offset = ref_codon_start_offset\n else:\n # inserting inside a reference codon\n ref_codon_end_offset = ref_codon_start_offset + 1\n ref_codons = sequence_from_start_codon[\n ref_codon_start_offset * 3:ref_codon_end_offset * 3]\n # split the reference codon into nucleotides before/after insertion\n prefix = ref_codons[:nucleotide_offset_into_first_ref_codon + 1]\n suffix = ref_codons[nucleotide_offset_into_first_ref_codon + 1:]\n else:\n ref_codon_end_offset = (cds_offset + n_ref_nucleotides - 1) // 3 + 1\n # codons in the reference sequence\n ref_codons = sequence_from_start_codon[\n ref_codon_start_offset * 3:ref_codon_end_offset * 3]\n\n # We construct the new codons by taking the unmodified prefix\n # of the first ref codon, the unmodified suffix of the last ref codon\n # and sticking the alt nucleotides in between.\n # Since this is supposed to be an in-frame mutation, the concatenated\n # nucleotide string is expected to have a length that is a multiple of\n # three.\n prefix = ref_codons[:nucleotide_offset_into_first_ref_codon]\n\n offset_in_last_ref_codon = (cds_offset + n_ref_nucleotides - 1) % 3\n\n if offset_in_last_ref_codon == 0:\n suffix = ref_codons[-2:]\n elif offset_in_last_ref_codon == 1:\n suffix = ref_codons[-1:]\n else:\n suffix = \"\"\n mutant_codons = prefix + trimmed_cdna_alt + suffix\n assert len(mutant_codons) % 3 == 0, \\\n \"Expected in-frame mutation but got %s (length = %d)\" % (\n mutant_codons, len(mutant_codons))\n return ref_codon_start_offset, ref_codon_end_offset, mutant_codons", "response": "Returns the indices of first and last reference codons affected by the variant and the actual sequence of the 
mutated codons."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\npredicts the coding effect of an in-frame nucleotide change.", "response": "def predict_in_frame_coding_effect(\n variant,\n transcript,\n trimmed_cdna_ref,\n trimmed_cdna_alt,\n sequence_from_start_codon,\n cds_offset):\n \"\"\"Coding effect of an in-frame nucleotide change\n\n Parameters\n ----------\n variant : Variant\n\n transcript : Transcript\n\n trimmed_cdna_ref : str\n Reference nucleotides from the coding sequence of the transcript\n\n trimmed_cdna_alt : str\n Nucleotides to insert in place of the reference nucleotides\n\n sequence_from_start_codon : Bio.Seq or str\n Transcript sequence from the CDS start codon (including the 3' UTR).\n This sequence includes the 3' UTR since a mutation may delete the stop\n codon and we'll have to translate past the normal end of the CDS to\n determine the new protein sequence.\n\n cds_offset : int\n Index of first ref nucleotide, starting from 0 = beginning of coding\n sequence. 
If variant is a pure insertion (no ref nucleotides) then this\n argument indicates the offset *after* which to insert the alt\n nucleotides.\n \"\"\"\n ref_codon_start_offset, ref_codon_end_offset, mutant_codons = get_codons(\n variant=variant,\n trimmed_cdna_ref=trimmed_cdna_ref,\n trimmed_cdna_alt=trimmed_cdna_alt,\n sequence_from_start_codon=sequence_from_start_codon,\n cds_offset=cds_offset)\n\n mutation_affects_start_codon = (ref_codon_start_offset == 0)\n\n if mutation_affects_start_codon and mutant_codons[:3] not in START_CODONS:\n # if we changed a start codon to something else then\n # we no longer know where the protein begins (or even in\n # what frame).\n # TODO: use the Kozak consensus sequence or a predictive model\n # to identify the most likely start site\n return StartLoss(\n variant=variant,\n transcript=transcript)\n\n # rely on Ensembl's annotation of the protein sequence since we can't\n # easily predict whether the starting nucleotide is a methionine\n # (most common) or leucine\n aa_ref = transcript.protein_sequence[ref_codon_start_offset:ref_codon_end_offset]\n\n reference_protein_length = len(transcript.protein_sequence)\n\n aa_alt, mutant_stop_codon_index, using_three_prime_utr = \\\n translate_in_frame_mutation(\n transcript=transcript,\n ref_codon_start_offset=ref_codon_start_offset,\n ref_codon_end_offset=ref_codon_end_offset,\n mutant_codons=mutant_codons)\n\n mutant_codons_contain_stop = mutant_stop_codon_index != -1\n\n # trim shared subsequences at the start and end of reference\n # and mutated amino acid sequences\n aa_ref, aa_alt, shared_prefix, shared_suffix = \\\n trim_shared_flanking_strings(\n aa_ref,\n aa_alt)\n\n n_aa_ref = len(aa_ref)\n n_aa_alt = len(aa_alt)\n n_aa_shared = len(shared_prefix)\n\n is_insertion = (ref_codon_start_offset == ref_codon_end_offset)\n\n # index of first amino acid which is different from the reference\n aa_mutation_start_offset = (\n ref_codon_start_offset + n_aa_shared + is_insertion)\n\n if 
mutant_codons_contain_stop:\n mutant_stop_codon_index += n_aa_shared\n\n if mutation_affects_start_codon and (aa_ref == aa_alt):\n # Substitution between start codons gets special treatment since,\n # though superficially synonymous, this could still potentially\n # cause a start loss / change in reading frame and might be worth\n # closer scrutiny\n return AlternateStartCodon(\n variant=variant,\n transcript=transcript,\n ref_codon=transcript.sequence[:3],\n alt_codon=mutant_codons[:3])\n\n n_ref_amino_acids_after_mutated_site = (\n reference_protein_length - aa_mutation_start_offset - 1)\n\n if mutant_codons_contain_stop and (\n n_aa_alt <= n_ref_amino_acids_after_mutated_site):\n # if the new coding sequence contains a stop codon, then this is a\n # PrematureStop mutation if it decreases the length of the protein\n return PrematureStop(\n variant=variant,\n transcript=transcript,\n aa_mutation_start_offset=aa_mutation_start_offset,\n aa_ref=aa_ref,\n aa_alt=aa_alt)\n\n if (aa_mutation_start_offset > reference_protein_length) or (\n n_aa_ref == n_aa_alt == 0):\n # if inserted nucleotides go after original stop codon or if nothing\n # is changed in the amino acid sequence then this is a Silent variant\n return Silent(\n variant=variant,\n transcript=transcript,\n aa_pos=aa_mutation_start_offset,\n aa_ref=shared_prefix + shared_suffix)\n elif using_three_prime_utr:\n # if non-silent mutation is at the end of the protein then\n # should be a stop-loss\n return StopLoss(\n variant,\n transcript,\n aa_ref=aa_ref,\n aa_alt=aa_alt)\n elif n_aa_alt == 0:\n return Deletion(\n variant,\n transcript,\n aa_mutation_start_offset=aa_mutation_start_offset,\n aa_ref=aa_ref)\n elif n_aa_ref == 0:\n return Insertion(\n variant,\n transcript,\n aa_mutation_start_offset=aa_mutation_start_offset,\n aa_alt=aa_alt)\n elif n_aa_ref == n_aa_alt == 1:\n # simple substitution e.g. 
p.V600E\n return Substitution(\n variant,\n transcript,\n aa_mutation_start_offset=aa_mutation_start_offset,\n aa_ref=aa_ref,\n aa_alt=aa_alt)\n else:\n # multiple amino acids were substituted e.g. p.VQQ39FF\n return ComplexSubstitution(\n variant,\n transcript,\n aa_mutation_start_offset=aa_mutation_start_offset,\n aa_ref=aa_ref,\n aa_alt=aa_alt)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_variant_args(arg_parser):\n variant_arg_group = arg_parser.add_argument_group(\n title=\"Variants\",\n description=\"Genomic variant files\")\n\n variant_arg_group.add_argument(\n \"--vcf\",\n default=[],\n action=\"append\",\n help=\"Genomic variants in VCF format\")\n\n variant_arg_group.add_argument(\n \"--maf\",\n default=[],\n action=\"append\",\n help=\"Genomic variants in TCGA's MAF format\",)\n\n variant_arg_group.add_argument(\n \"--variant\",\n default=[],\n action=\"append\",\n nargs=4,\n metavar=(\"CHR\", \"POS\", \"REF\", \"ALT\"),\n help=(\n \"Individual variant as 4 arguments giving chromosome, position, ref,\"\n \" and alt. Example: chr1 3848 C G. Use '.' to indicate empty alleles\"\n \" for insertions or deletions.\"))\n\n variant_arg_group.add_argument(\n \"--genome\",\n type=str,\n help=(\n \"What reference assembly your variant coordinates are using. \"\n \"Examples: 'hg19', 'GRCh38', or 'mm9'. \"\n \"This argument is ignored for MAF files, since each row includes \"\n \"the reference. \"\n \"For VCF files, this is used if specified, and otherwise is guessed from \"\n \"the header. For variants specified on the command line with --variant, \"\n \"this option is required.\"))\n\n variant_arg_group.add_argument(\n \"--download-reference-genome-data\",\n action=\"store_true\",\n default=False,\n help=(\n (\"Automatically download genome reference data required for \"\n \"annotation using PyEnsembl. 
Otherwise you must first run \"\n \"'pyensembl install' for the release/species corresponding \"\n \"to the genome used in your VCF.\")))\n\n variant_arg_group.add_argument(\n \"--json-variants\",\n default=[],\n action=\"append\",\n help=\"Path to Varcode.VariantCollection object serialized as a JSON file.\")\n\n return variant_arg_group", "response": "Adds the command-line arguments for specifying variants."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef insert_before(sequence, offset, new_residues):\n assert 0 < offset <= len(sequence), \\\n \"Invalid position %d for sequence of length %d\" % (\n offset, len(sequence))\n prefix = sequence[:offset]\n suffix = sequence[offset:]\n return prefix + new_residues + suffix", "response": "Mutate the given sequence by inserting the string new_residues before offset."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef insert_after(sequence, offset, new_residues):\n assert 0 <= offset < len(sequence), \\\n \"Invalid position %d for sequence of length %d\" % (\n offset, len(sequence))\n prefix = sequence[:offset + 1]\n suffix = sequence[offset + 1:]\n return prefix + new_residues + suffix", "response": "Mutate the given sequence by inserting new_residues after offset."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef substitute(sequence, offset, ref, alt):\n n_ref = len(ref)\n sequence_ref = sequence[offset:offset + n_ref]\n assert str(sequence_ref) == str(ref), \\\n \"Reference %s at offset %d != expected reference %s\" % \\\n (sequence_ref, offset, ref)\n prefix = sequence[:offset]\n suffix = sequence[offset + n_ref:]\n return prefix + alt + suffix", "response": "Mutate a sequence by substituting alt in place of ref at the given offset."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngiving a 
list of assemblies identify the most recent one", "response": "def _most_recent_assembly(assembly_names):\n \"\"\"\n Given list of (in this case, matched) assemblies, identify the most recent\n (\"recency\" here is determined by sorting based on the numeric element of the assembly name)\n \"\"\"\n match_recency = [\n int(re.search('\\d+', assembly_name).group())\n for assembly_name in assembly_names\n ]\n most_recent = [\n x for (y, x) in sorted(zip(match_recency, assembly_names), reverse=True)][0]\n return most_recent"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ninfer a canonical name from a string containing a reference name or path.", "response": "def infer_reference_name(reference_name_or_path):\n \"\"\"\n Given a string containing a reference name (such as a path to\n that reference's FASTA file), return its canonical name\n as used by Ensembl.\n \"\"\"\n # identify all cases where reference name or path matches candidate aliases\n reference_file_name = os.path.basename(reference_name_or_path)\n matches = {'file_name': list(), 'full_path': list()}\n for assembly_name in reference_alias_dict.keys():\n candidate_list = [assembly_name] + reference_alias_dict[assembly_name]\n for candidate in candidate_list:\n if candidate.lower() in reference_file_name.lower():\n matches['file_name'].append(assembly_name)\n elif candidate.lower() in reference_name_or_path.lower():\n matches['full_path'].append(assembly_name)\n # remove duplicate matches (happens due to overlapping aliases)\n matches['file_name'] = list(set(matches['file_name']))\n matches['full_path'] = list(set(matches['full_path']))\n # given set of existing matches, choose one to return\n # (first select based on file_name, then full path. 
If multiples, use most recent)\n if len(matches['file_name']) == 1:\n match = matches['file_name'][0]\n elif len(matches['file_name']) > 1:\n # separate logic for >1 vs 1 to give informative warning\n match = _most_recent_assembly(matches['file_name'])\n warn(\n ('More than one reference ({}) matches path in header ({}); '\n 'the most recent one ({}) was used.').format(\n ','.join(matches['file_name']), reference_file_name, match))\n elif len(matches['full_path']) >= 1:\n # combine full-path logic since warning is the same\n match = _most_recent_assembly(matches['full_path'])\n warn((\n 'Reference could not be matched against filename ({}); '\n 'using best match against full path ({}).').format(\n reference_name_or_path, match))\n else:\n raise ValueError(\n \"Failed to infer genome assembly name for %s\" % reference_name_or_path)\n return match"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninfer the Ensembl version of a given genome.", "response": "def infer_genome(genome_object_string_or_int):\n \"\"\"\n If given an integer, return associated human EnsemblRelease for that\n Ensembl version.\n\n If given a string, return latest EnsemblRelease which has a reference\n of the same name.\n\n If given a PyEnsembl Genome, simply return it.\n \"\"\"\n if isinstance(genome_object_string_or_int, Genome):\n return genome_object_string_or_int\n if is_integer(genome_object_string_or_int):\n return cached_release(genome_object_string_or_int)\n elif is_string(genome_object_string_or_int):\n # first infer the canonical reference name, e.g. 
mapping hg19 -> GRCh37\n # and then get the associated PyEnsembl Genome object\n reference_name = infer_reference_name(genome_object_string_or_int)\n return genome_for_reference_name(reference_name)\n else:\n raise TypeError(\n (\"Expected genome to be an int, string, or pyensembl.Genome \"\n \"instance, got %s : %s\") % (\n str(genome_object_string_or_int),\n type(genome_object_string_or_int)))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a dictionary representation of the object.", "response": "def to_dict(self):\n \"\"\"\n We want the original (un-normalized) field values while\n serializing since normalization will happen in __init__.\n \"\"\"\n return dict(\n contig=self.original_contig,\n start=self.original_start,\n ref=self.original_ref,\n alt=self.original_alt,\n ensembl=self.ensembl,\n allow_extended_nucleotides=self.allow_extended_nucleotides,\n normalize_contig_name=self.normalize_contig_name)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a short description of the genomic variant.", "response": "def short_description(self):\n \"\"\"\n HGVS nomenclature for genomic variants\n More info: http://www.hgvs.org/mutnomen/\n \"\"\"\n if self.is_insertion:\n return \"chr%s g.%d_%dins%s\" % (\n self.contig,\n self.start,\n self.start + 1,\n self.alt)\n elif self.is_deletion:\n return \"chr%s g.%d_%ddel%s\" % (\n self.contig,\n self.start,\n self.end,\n self.ref)\n elif self.ref == self.alt:\n return \"chr%s g.%d%s\" % (self.contig, self.start, self.ref)\n else:\n # substitution\n return \"chr%s g.%d%s>%s\" % (\n self.contig,\n self.start,\n self.ref,\n self.alt)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef genes(self):\n if self._genes is None:\n self._genes = self.ensembl.genes_at_locus(\n self.contig, self.start, self.end)\n return self._genes", "response": "Return Gene objects for all genes which 
overlap this variant."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef gene_ids(self):\n return self.ensembl.gene_ids_at_locus(\n self.contig, self.start, self.end)", "response": "Returns a list of gene IDs that overlap this variant."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef gene_names(self):\n return self.ensembl.gene_names_at_locus(\n self.contig, self.start, self.end)", "response": "Return a list of all gene names that overlap this variant."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning True if this variant represents an insertion of nucleotides into the reference genome.", "response": "def is_insertion(self):\n \"\"\"\n Does this variant represent the insertion of nucleotides into the\n reference genome?\n \"\"\"\n # An insertion would appear in a VCF like C>CT, so that the\n # alternate allele starts with the reference nucleotides.\n # Since the nucleotide strings may be normalized in the constructor,\n # it's worth noting that the normalized form of this variant would be\n # ''>'T', so that 'T'.startswith('') still holds.\n return (len(self.ref) < len(self.alt)) and self.alt.startswith(self.ref)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function to\ndetermine whether this variant represents the deletion of nucleotides from the reference genome.", "response": "def is_deletion(self):\n \"\"\"\n Does this variant represent the deletion of nucleotides from the\n reference genome?\n \"\"\"\n # A deletion would appear in a VCF like CT>C, so that the\n # reference allele starts with the alternate nucleotides.\n # This is true even in the normalized case, where the alternate\n # nucleotides are an empty string.\n return (len(self.ref) > len(self.alt)) and self.ref.startswith(self.alt)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchecks whether the variant is a single nucleotide 
variant", "response": "def is_snv(self):\n \"\"\"Is the variant a single nucleotide variant\"\"\"\n return (len(self.ref) == len(self.alt) == 1) and (self.ref != self.alt)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndetermine whether this variant is a pyrimidine to pyrimidine or purine to purine change.", "response": "def is_transition(self):\n \"\"\"Is this variant a pyrimidine to pyrimidine change or purine to purine change\"\"\"\n return self.is_snv and is_purine(self.ref) == is_purine(self.alt)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nis this variant a pyrimidine to purine change or vice versa?", "response": "def is_transversion(self):\n \"\"\"Is this variant a pyrimidine to purine change or vice versa\"\"\"\n return self.is_snv and is_purine(self.ref) != is_purine(self.alt)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef variant_overlaps_interval(\n variant_start,\n n_ref_bases,\n interval_start,\n interval_end):\n \"\"\"\n Does a variant overlap a given interval on the same chromosome?\n\n Parameters\n ----------\n variant_start : int\n Inclusive base-1 position of variant's starting location\n (or location before an insertion)\n\n n_ref_bases : int\n Number of reference bases affected by variant (used to compute\n end coordinate or determine whether variant is an insertion)\n\n interval_start : int\n Interval's inclusive base-1 start position\n\n interval_end : int\n Interval's inclusive base-1 end position\n \"\"\"\n\n if n_ref_bases == 0:\n # insertions only overlap intervals which start before and\n # end after the insertion point, they must be fully contained\n # by the other interval\n return interval_start <= variant_start and interval_end >= variant_start\n variant_end = variant_start + n_ref_bases\n # overlap means other interval starts before this 
variant ends\n # and the interval ends after this variant starts\n return interval_start <= variant_end and interval_end >= variant_start", "response": "Returns True if the given variant overlaps the given base-1 interval on the same chromosome."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef changes_exonic_splice_site(\n transcript_offset,\n transcript,\n transcript_ref,\n transcript_alt,\n exon_start_offset,\n exon_end_offset,\n exon_number):\n \"\"\"Does the given exonic mutation of a particular transcript change a\n splice site?\n\n Parameters\n ----------\n transcript_offset : int\n Offset from start of transcript of first reference nucleotide\n (or the last nucleotide before an insertion)\n\n transcript : pyensembl.Transcript\n\n transcript_ref : str\n Reference nucleotides\n\n transcript_alt : str\n Alternate nucleotides\n\n exon_start_offset : int\n Start offset of exon relative to beginning of transcript\n\n exon_end_offset : int\n End offset of exon relative to beginning of transcript\n\n exon_number : int\n Which exon in the order they form the transcript\n \"\"\"\n # first we're going to make sure the variant doesn't disrupt the\n # splicing sequences we got from Divina et 
al's\n # Ab initio prediction of mutation-induced cryptic\n # splice-site activation and exon skipping\n # (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2947103/)\n #\n # 5' splice site: MAG|GURAGU consensus\n # M is A or C; R is purine; | is the exon-intron boundary\n #\n # 3' splice site: YAG|R\n #\n if exon_number > 1 and transcript_offset == exon_start_offset:\n # if this is any exon past the first, check to see if it lost\n # the purine on its left side\n #\n # the 3' splice site sequence has just a single purine on\n # the exon side\n if len(transcript_ref) > 0 and transcript_ref[0] in PURINE_NUCLEOTIDES:\n if len(transcript_alt) > 0:\n if transcript_alt[0] not in PURINE_NUCLEOTIDES:\n return True\n else:\n # if the mutation is a deletion, are there ref nucleotides\n # afterward?\n offset_after_deletion = transcript_offset + len(transcript_ref)\n if len(transcript.sequence) > offset_after_deletion:\n next_base = transcript.sequence[offset_after_deletion]\n if next_base not in PURINE_NUCLEOTIDES:\n return True\n\n if exon_number < len(transcript.exons):\n # if the mutation affects an exon whose right end gets spliced\n # to a next exon, check if the variant alters the exon side of\n # 5' consensus splicing sequence\n #\n # splicing sequence:\n # MAG|GURAGU\n # M is A or C; R is purine; | is the exon-intron boundary\n #\n # TODO: check for overlap of two intervals instead of just\n # seeing if the mutation starts inside the exonic splice site\n if variant_overlaps_interval(\n variant_start=transcript_offset,\n n_ref_bases=len(transcript_ref),\n interval_start=exon_end_offset - 2,\n interval_end=exon_end_offset):\n end_of_reference_exon = transcript.sequence[\n exon_end_offset - 2:exon_end_offset + 1]\n\n if matches_exon_end_pattern(end_of_reference_exon):\n # if the last three nucleotides conform to the consensus\n # sequence then treat any deviation as an ExonicSpliceSite\n # mutation\n end_of_variant_exon = end_of_reference_exon\n if 
matches_exon_end_pattern(end_of_variant_exon):\n # end of exon matches splicing signal, check if it still\n # does after the mutation\n return True", "response": "Does the given exonic mutation of a particular transcript change a splice site?"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks whether the nucleotide is a purine", "response": "def is_purine(nucleotide, allow_extended_nucleotides=False):\n \"\"\"Is the nucleotide a purine\"\"\"\n if not allow_extended_nucleotides and nucleotide not in STANDARD_NUCLEOTIDES:\n raise ValueError(\n \"{} is a non-standard nucleotide, neither purine nor pyrimidine\".format(nucleotide))\n return nucleotide in PURINE_NUCLEOTIDES"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nnormalizes a nucleotide string by converting all letters to upper case and checking that all letters in the string are actually nucleotides.", "response": "def normalize_nucleotide_string(\n nucleotides,\n allow_extended_nucleotides=False,\n empty_chars=\".-\",\n treat_nan_as_empty=True):\n \"\"\"\n Normalizes a nucleotide string by converting various ways of encoding empty\n strings into \"\", making all letters upper case, and checking to make sure\n all letters in the string are actually nucleotides.\n\n Parameters\n ----------\n nucleotides : str\n Sequence of nucleotides, e.g. 
\"ACCTG\"\n\n allow_extended_nucleotides : bool\n Allow non-canonical nucleotide characters like 'X' for unknown base\n\n empty_chars : str\n Characters which encode empty strings, such as \".\" used in VCF format\n or \"-\" used in MAF format\n\n treat_nan_as_empty : bool\n Some MAF files represent deletions/insertions with NaN ref/alt values\n \"\"\"\n if nucleotides in empty_chars:\n return \"\"\n elif treat_nan_as_empty and isinstance(nucleotides, float) and np.isnan(nucleotides):\n return \"\"\n\n require_string(nucleotides, name=\"nucleotide string\")\n\n nucleotides = nucleotides.upper()\n\n if allow_extended_nucleotides:\n valid_nucleotides = EXTENDED_NUCLEOTIDES\n else:\n valid_nucleotides = STANDARD_NUCLEOTIDES\n\n if not set(nucleotides) <= valid_nucleotides:\n raise ValueError(\n \"Invalid character(s) in nucleotide string: %s\" % (\n \",\".join(set(nucleotides) - valid_nucleotides),))\n\n return nucleotides"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nload a VCF file into memory.", "response": "def load_vcf(\n path,\n genome=None,\n reference_vcf_key=\"reference\",\n only_passing=True,\n allow_extended_nucleotides=False,\n include_info=True,\n chunk_size=10 ** 5,\n max_variants=None,\n sort_key=variant_ascending_position_sort_key,\n distinct=True):\n \"\"\"\n Load reference name and Variant objects from the given VCF filename.\n\n Currently only local files are supported by this function (no http). 
If you\n call this on an HTTP URL, the file will be downloaded to a temporary\n location and loaded from there.\n\n Parameters\n ----------\n\n path : str\n Path to VCF (*.vcf) or compressed VCF (*.vcf.gz).\n\n genome : {pyensembl.Genome, reference name, Ensembl version int}, optional\n Optionally pass in a PyEnsembl Genome object, name of reference, or\n PyEnsembl release version to specify the reference associated with a\n VCF (otherwise infer reference from VCF using reference_vcf_key)\n\n reference_vcf_key : str, optional\n Name of metadata field which contains path to reference FASTA\n file (default = 'reference')\n\n only_passing : boolean, optional\n If true, any entries whose FILTER field is not one of \".\" or \"PASS\" are\n dropped.\n\n allow_extended_nucleotides : boolean, default False\n Allow characters other than A,C,T,G in the ref and alt strings.\n\n include_info : boolean, default True\n Whether to parse the INFO and per-sample columns. If you don't need\n these, set to False for faster parsing.\n\n chunk_size: int, optional\n Number of records to load in memory at once.\n\n max_variants : int, optional\n If specified, return only the first max_variants variants.\n\n sort_key : fn\n Function which maps each element to a sorting criterion.\n Set to None to leave the variants unsorted.\n\n distinct : boolean, default True\n Don't keep repeated variants\n \"\"\"\n\n require_string(path, \"Path or URL to VCF\")\n parsed_path = parse_url_or_path(path)\n\n if parsed_path.scheme and parsed_path.scheme.lower() != \"file\":\n # pandas.read_table nominally supports HTTP, but it tends to crash on\n # large files and does not support gzip. Switching to the python-based\n # implementation of read_table (with engine=\"python\") helps with some\n # issues but introduces a new set of problems (e.g. the dtype parameter\n # is not accepted). 
For these reasons, we're currently not attempting\n # to load VCFs over HTTP with pandas directly, and instead download it\n # to a temporary file and open that.\n\n (filename, headers) = urllib.request.urlretrieve(path)\n try:\n # The downloaded file has no file extension, which confuses pyvcf\n # for gzipped files in Python 3. We rename it to have the correct\n # file extension.\n new_filename = \"%s.%s\" % (\n filename, parsed_path.path.split(\".\")[-1])\n os.rename(filename, new_filename)\n filename = new_filename\n return load_vcf(\n filename,\n genome=genome,\n reference_vcf_key=reference_vcf_key,\n only_passing=only_passing,\n allow_extended_nucleotides=allow_extended_nucleotides,\n include_info=include_info,\n chunk_size=chunk_size,\n max_variants=max_variants,\n sort_key=sort_key,\n distinct=distinct)\n finally:\n logger.info(\"Removing temporary file: %s\", filename)\n os.unlink(filename)\n\n # Loading a local file.\n # The file will be opened twice: first to parse the header with pyvcf, then\n # by pandas to read the data.\n\n # PyVCF reads the metadata immediately and stops at the first line with\n # data. We can close the file after that.\n handle = PyVCFReaderFromPathOrURL(path)\n handle.close()\n genome = infer_genome_from_vcf(\n genome,\n handle.vcf_reader,\n reference_vcf_key)\n\n df_iterator = read_vcf_into_dataframe(\n path,\n include_info=include_info,\n sample_names=handle.vcf_reader.samples if include_info else None,\n chunk_size=chunk_size)\n\n if include_info:\n def sample_info_parser(unparsed_sample_info_strings, format_string):\n \"\"\"\n Given a format string like \"GT:AD:ADP:DP:FS\"\n and a list of sample info strings where each entry is like\n \"0/1:3,22:T=3,G=22:25:33\", return a dict that maps:\n sample name -> field name -> value. 
Uses pyvcf to parse the fields.\n \"\"\"\n return pyvcf_calls_to_sample_info_list(\n handle.vcf_reader._parse_samples(\n unparsed_sample_info_strings, format_string, None))\n else:\n sample_info_parser = None\n\n return dataframes_to_variant_collection(\n df_iterator,\n source_path=path,\n info_parser=handle.vcf_reader._parse_info if include_info else None,\n only_passing=only_passing,\n max_variants=max_variants,\n sample_names=handle.vcf_reader.samples if include_info else None,\n sample_info_parser=sample_info_parser,\n variant_kwargs={\n 'ensembl': genome,\n 'allow_extended_nucleotides': allow_extended_nucleotides},\n variant_collection_kwargs={\n 'sort_key': sort_key,\n 'distinct': distinct})"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef pyvcf_calls_to_sample_info_list(calls):\n return OrderedDict(\n (call.sample, call.data._asdict()) for call in calls)", "response": "Given a list of pyvcf.model._Call instances, return a dict mapping each sample name to its per-sample info."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads a VariantCollection from an iterable of pandas dataframes.", "response": "def dataframes_to_variant_collection(\n dataframes,\n source_path,\n info_parser=None,\n only_passing=True,\n max_variants=None,\n sample_names=None,\n sample_info_parser=None,\n variant_kwargs={},\n variant_collection_kwargs={}):\n \"\"\"\n Load a VariantCollection from an iterable of pandas dataframes.\n\n This takes an iterable of dataframes instead of a single dataframe to avoid\n having to load huge dataframes at once into memory. If you have a single\n dataframe, just pass it in a single-element list.\n\n Parameters\n ----------\n dataframes\n Iterable of dataframes (e.g. a generator). Expected columns are:\n [\"CHROM\", \"POS\", \"ID\", \"REF\", \"ALT\", \"QUAL\", \"FILTER\"]\n and 'INFO' if `info_parser` is not None. 
Columns must be in this\n order.\n\n source_path : str\n Path of VCF file from which DataFrame chunks were generated.\n\n info_parser : string -> object, optional\n Callable to parse INFO strings.\n\n only_passing : boolean, optional\n If true, any entries whose FILTER field is not one of \".\" or \"PASS\" are\n dropped.\n\n max_variants : int, optional\n If specified, return only the first max_variants variants.\n\n sample_names : list of strings, optional\n Sample names. The final columns of the dataframe should match these.\n If specified, the per-sample info columns will be parsed. You must\n also specify sample_info_parser.\n\n sample_info_parser : string list * string -> dict, optional\n Callable to parse per-sample info columns.\n\n variant_kwargs : dict, optional\n Additional keyword parameters to pass to Variant.__init__\n\n variant_collection_kwargs : dict, optional\n Additional keyword parameters to pass to VariantCollection.__init__.\n \"\"\"\n\n expected_columns = (\n [\"CHROM\", \"POS\", \"ID\", \"REF\", \"ALT\", \"QUAL\", \"FILTER\"] +\n ([\"INFO\"] if info_parser else []))\n\n if info_parser and sample_names:\n if sample_info_parser is None:\n raise TypeError(\n \"Must specify sample_info_parser if specifying sample_names\")\n expected_columns.append(\"FORMAT\")\n expected_columns.extend(sample_names)\n\n variants = []\n metadata = {}\n try:\n for chunk in dataframes:\n assert chunk.columns.tolist() == expected_columns,\\\n \"dataframe columns (%s) do not match expected columns (%s)\" % (\n chunk.columns, expected_columns)\n\n for tpl in chunk.itertuples():\n (i, chrom, pos, id_, ref, alts, qual, flter) = tpl[:8]\n if flter == \".\":\n flter = None\n elif flter == \"PASS\":\n flter = []\n elif only_passing:\n continue\n else:\n flter = flter.split(';')\n if id_ == \".\":\n id_ = None\n qual = float(qual) if qual != \".\" else None\n alt_num = 0\n info = sample_info = None\n for alt in alts.split(\",\"):\n if alt != \".\":\n if info_parser is not None 
and info is None:\n info = info_parser(tpl[8]) # INFO column\n if sample_names:\n # Sample name -> field -> value dict.\n sample_info = sample_info_parser(\n list(tpl[10:]), # sample info columns\n tpl[9], # FORMAT column\n )\n\n variant = Variant(\n chrom,\n int(pos), # want a Python int not numpy.int64\n ref,\n alt,\n **variant_kwargs)\n variants.append(variant)\n metadata[variant] = {\n 'id': id_,\n 'qual': qual,\n 'filter': flter,\n 'info': info,\n 'sample_info': sample_info,\n 'alt_allele_index': alt_num,\n }\n if max_variants and len(variants) > max_variants:\n raise StopIteration\n alt_num += 1\n except StopIteration:\n pass\n\n return VariantCollection(\n variants=variants,\n source_to_metadata_dict={source_path: metadata},\n **variant_collection_kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nread the contents of a VCF file into a pandas dataframe.", "response": "def read_vcf_into_dataframe(\n path,\n include_info=False,\n sample_names=None,\n chunk_size=None):\n \"\"\"\n Load the data of a VCF into a pandas dataframe. All headers are ignored.\n\n Parameters\n ----------\n path : str\n Path to local file. HTTP and other protocols are not implemented.\n\n include_info : boolean, default False\n If true, the INFO field is not parsed, but is included as a string in\n the resulting data frame. If false, the INFO field is omitted.\n\n sample_names: string list, optional\n Sample names. The final columns of the dataframe should match these.\n If specified (and include_info is also specified), the FORMAT and\n per-sample info columns will be included in the result dataframe.\n\n chunk_size : int, optional\n If buffering is desired, the number of rows per chunk.\n\n Returns\n ---------\n If chunk_size is None (the default), a dataframe with the contents of the\n VCF file. 
Otherwise, an iterable of dataframes, each with chunk_size rows.\n\n \"\"\"\n vcf_field_types = OrderedDict()\n vcf_field_types['CHROM'] = str\n vcf_field_types['POS'] = int\n vcf_field_types['ID'] = str\n vcf_field_types['REF'] = str\n vcf_field_types['ALT'] = str\n vcf_field_types['QUAL'] = str\n vcf_field_types['FILTER'] = str\n if include_info:\n vcf_field_types['INFO'] = str\n if sample_names:\n vcf_field_types['FORMAT'] = str\n for name in sample_names:\n vcf_field_types[name] = str\n\n parsed_path = parse_url_or_path(path)\n if not parsed_path.scheme or parsed_path.scheme.lower() == \"file\":\n path = parsed_path.path\n else:\n raise NotImplementedError(\"Only local files are supported.\")\n\n compression = None\n if path.endswith(\".gz\"):\n compression = \"gzip\"\n elif path.endswith(\".bz2\"):\n compression = \"bz2\"\n\n reader = pandas.read_table(\n path,\n compression=compression,\n comment=\"#\",\n chunksize=chunk_size,\n dtype=vcf_field_types,\n names=list(vcf_field_types),\n usecols=range(len(vcf_field_types)))\n return reader"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef stream_gzip_decompress_lines(stream):\n dec = zlib.decompressobj(zlib.MAX_WBITS | 16)\n previous = \"\"\n for compressed_chunk in stream:\n chunk = dec.decompress(compressed_chunk).decode()\n if chunk:\n lines = (previous + chunk).split(\"\\n\")\n previous = lines.pop()\n for line in lines:\n yield line\n yield previous", "response": "Decompress a gzip stream into lines of text."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef infer_genome_from_vcf(genome, vcf_reader, reference_vcf_key):\n if genome:\n return infer_genome(genome)\n elif reference_vcf_key not in vcf_reader.metadata:\n raise ValueError(\"Unable to infer reference genome for %s\" % (\n vcf_reader.filename,))\n else:\n reference_path = vcf_reader.metadata[reference_vcf_key]\n return 
infer_genome(reference_path)", "response": "Helper function to make a pyensembl.Genome instance from a VCF file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_frameshift_effect(\n mutated_codon_index,\n sequence_from_mutated_codon,\n variant,\n transcript):\n \"\"\"\n Determine frameshift effect within a coding sequence (possibly affecting\n either the start or stop codons, or anything in between)\n\n Parameters\n ----------\n mutated_codon_index : int\n Codon offset (starting from 0 = start codon) of first non-reference\n amino acid in the variant protein\n\n sequence_from_mutated_codon: Bio.Seq\n Sequence of mutated cDNA, starting from first mutated codon, until\n the end of the transcript\n\n variant : Variant\n\n transcript : transcript\n \"\"\"\n\n assert transcript.protein_sequence is not None, \\\n \"Expect transcript %s to have protein sequence\" % transcript\n\n original_protein_sequence = transcript.protein_sequence\n original_protein_length = len(original_protein_sequence)\n\n mutant_protein_suffix = translate(\n nucleotide_sequence=sequence_from_mutated_codon,\n first_codon_is_start=False,\n to_stop=True,\n truncate=True)\n\n if mutated_codon_index == 0:\n # TODO: scan through sequence_from_mutated_codon for\n # Kozak sequence + start codon to choose the new start\n return StartLoss(variant=variant, transcript=transcript)\n\n # the frameshifted sequence may contain some amino acids which are\n # the same as the original protein!\n _, mutant_protein_suffix, unchanged_amino_acids = trim_shared_prefix(\n ref=original_protein_sequence[mutated_codon_index:],\n alt=mutant_protein_suffix)\n n_unchanged_amino_acids = len(unchanged_amino_acids)\n offset_to_first_different_amino_acid = mutated_codon_index + n_unchanged_amino_acids\n # miraculously, this frameshift left the protein unchanged,\n # most likely by turning one stop codon into another stop codon\n if 
n_unchanged_amino_acids == 0:\n aa_ref = \"\"\n else:\n aa_ref = original_protein_sequence[-n_unchanged_amino_acids:]\n if offset_to_first_different_amino_acid >= original_protein_length:\n # frameshift is either extending the protein or leaving it unchanged\n if len(mutant_protein_suffix) == 0:\n\n return Silent(\n variant=variant,\n transcript=transcript,\n aa_pos=mutated_codon_index,\n aa_ref=aa_ref)\n else:\n # When all the amino acids are the same as the original, we either\n # have the original protein or we've extended it.\n # If we've extended it, it means we must have lost our stop codon.\n return StopLoss(\n variant=variant,\n transcript=transcript,\n aa_ref=aa_ref,\n aa_alt=mutant_protein_suffix)\n # original amino acid at the mutated codon before the frameshift occurred\n aa_ref = original_protein_sequence[offset_to_first_different_amino_acid]\n\n # TODO: what if all the shifted amino acids were the same and the protein\n # ended up the same length? Add a Silent case?\n if len(mutant_protein_suffix) == 0:\n # if a frameshift doesn't create any new amino acids, then\n # it must immediately have hit a stop codon\n return FrameShiftTruncation(\n variant=variant,\n transcript=transcript,\n stop_codon_offset=offset_to_first_different_amino_acid)\n return FrameShift(\n variant=variant,\n transcript=transcript,\n aa_mutation_start_offset=offset_to_first_different_amino_acid,\n shifted_sequence=str(mutant_protein_suffix))", "response": "Create a new sequence that is part of a frameshift effect within a sequence of mutated codons."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the index of mutated codon and nucleotide sequence after the insertion.", "response": "def cdna_codon_sequence_after_insertion_frameshift(\n sequence_from_start_codon,\n cds_offset_before_insertion,\n inserted_nucleotides):\n \"\"\"\n Returns index of mutated codon and nucleotide sequence starting at the first\n mutated codon.\n \"\"\"\n # special logic 
for insertions\n coding_sequence_after_insertion = \\\n sequence_from_start_codon[cds_offset_before_insertion + 1:]\n\n if cds_offset_before_insertion % 3 == 2:\n # insertion happens after last nucleotide in a codon,\n # doesn't disrupt the existing codon from cds_offset-2 to cds_offset\n mutated_codon_index = cds_offset_before_insertion // 3 + 1\n nucleotides_before = \"\"\n elif cds_offset_before_insertion % 3 == 1:\n # insertion happens after 2nd nucleotide of a codon\n # codon positions:\n # 1) cds_offset - 1\n # 2) cds_offset\n # <----- Insertion\n # 3) cds_offset + 1\n mutated_codon_index = cds_offset_before_insertion // 3\n # the first codon in the returned sequence will contain two reference\n # nucleotides before the insertion\n nucleotides_before = sequence_from_start_codon[\n cds_offset_before_insertion - 1:cds_offset_before_insertion + 1]\n elif cds_offset_before_insertion % 3 == 0:\n # insertion happens after 1st nucleotide of a codon\n # codon positions:\n # 1) cds_offset\n # <----- Insertion\n # 2) cds_offset + 1\n # 3) cds_offset + 2\n mutated_codon_index = cds_offset_before_insertion // 3\n # the first codon in the returned sequence will contain one reference\n # nucleotide before the insertion\n nucleotides_before = sequence_from_start_codon[cds_offset_before_insertion]\n sequence_from_mutated_codon = (\n nucleotides_before +\n inserted_nucleotides +\n coding_sequence_after_insertion)\n return mutated_codon_index, sequence_from_mutated_codon"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef cdna_codon_sequence_after_deletion_or_substitution_frameshift(\n sequence_from_start_codon,\n cds_offset,\n trimmed_cdna_ref,\n trimmed_cdna_alt):\n \"\"\"\n Logic for any frameshift which isn't an insertion.\n\n We have insertions as a special case since our base-inclusive\n indexing means something different for insertions:\n cds_offset = base before insertion\n Whereas in this case:\n 
cds_offset = first reference base affected by a variant\n\n Returns index of first modified codon and sequence from that codon\n onward.\n \"\"\"\n mutated_codon_index = cds_offset // 3\n # get the sequence starting from the first modified codon until the end\n # of the transcript.\n sequence_after_mutated_codon = \\\n sequence_from_start_codon[mutated_codon_index * 3:]\n\n # the variant's ref nucleotides should start either 0, 1, or 2 nucleotides\n # into `sequence_after_mutated_codon`\n offset_into_mutated_codon = cds_offset % 3\n\n sequence_from_mutated_codon = substitute(\n sequence=sequence_after_mutated_codon,\n offset=offset_into_mutated_codon,\n ref=trimmed_cdna_ref,\n alt=trimmed_cdna_alt)\n return mutated_codon_index, sequence_from_mutated_codon", "response": "This function returns the index of the first mutated codon and the sequence from that codon onward, for any frameshift caused by a deletion or substitution (i.e. not an insertion)."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef predict_frameshift_coding_effect(\n variant,\n transcript,\n trimmed_cdna_ref,\n trimmed_cdna_alt,\n cds_offset,\n sequence_from_start_codon):\n \"\"\"\n Coding effect of a frameshift mutation.\n\n Parameters\n ----------\n variant : Variant\n\n transcript : Transcript\n\n trimmed_cdna_ref : nucleotide sequence\n Reference nucleotides in the coding sequence of the given transcript.\n\n trimmed_cdna_alt : nucleotide sequence\n Alternate nucleotides introduced by mutation\n\n cds_offset : int\n Offset into the CDS of first ref nucleotide. 
For insertions, this\n is the offset of the last ref nucleotide before the insertion.\n\n sequence_from_start_codon : nucleotide sequence\n Nucleotides of the coding sequence and 3' UTR\n\n \"\"\"\n if len(trimmed_cdna_ref) != 0:\n mutated_codon_index, sequence_from_mutated_codon = \\\n cdna_codon_sequence_after_deletion_or_substitution_frameshift(\n sequence_from_start_codon=sequence_from_start_codon,\n cds_offset=cds_offset,\n trimmed_cdna_ref=trimmed_cdna_ref,\n trimmed_cdna_alt=trimmed_cdna_alt)\n else:\n mutated_codon_index, sequence_from_mutated_codon = \\\n cdna_codon_sequence_after_insertion_frameshift(\n sequence_from_start_codon=sequence_from_start_codon,\n cds_offset_before_insertion=cds_offset,\n inserted_nucleotides=trimmed_cdna_alt)\n return create_frameshift_effect(\n mutated_codon_index=mutated_codon_index,\n sequence_from_mutated_codon=sequence_from_mutated_codon,\n variant=variant,\n transcript=transcript)", "response": "Predicts the coding effect of a frameshift mutation."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndetermine, given a minimal cDNA ref/alt nucleotide string pair and an offset into a transcript, the coding effect of this substitution on the translated protein.", "response": "def predict_variant_coding_effect_on_transcript(\n variant,\n transcript,\n trimmed_cdna_ref,\n trimmed_cdna_alt,\n transcript_offset):\n \"\"\"\n Given a minimal cDNA ref/alt nucleotide string pair and an offset into a\n given transcript, determine the coding effect of this nucleotide substitution\n on the translated protein.\n\n Parameters\n ----------\n variant : Variant\n\n transcript : Transcript\n\n trimmed_cdna_ref : str\n Reference nucleotides we expect to find in the transcript's CDS\n\n trimmed_cdna_alt : str\n Alternate nucleotides we're replacing the reference with\n\n transcript_offset : int\n Offset into the full transcript sequence of the ref->alt substitution\n \"\"\"\n if not 
transcript.complete:\n raise ValueError(\n (\"Can't annotate coding effect for %s\"\n \" on incomplete transcript %s\" % (variant, transcript)))\n\n sequence = transcript.sequence\n\n n_ref = len(trimmed_cdna_ref)\n n_alt = len(trimmed_cdna_alt)\n\n # reference nucleotides found on the transcript, if these don't match\n # what we were told to expect from the variant then raise an exception\n ref_nucleotides_from_transcript = str(\n sequence[transcript_offset:transcript_offset + n_ref])\n\n # Make sure that the reference sequence agrees with what we expected\n # from the VCF\n assert ref_nucleotides_from_transcript == trimmed_cdna_ref, \\\n \"%s: expected ref '%s' at offset %d of %s, transcript has '%s'\" % (\n variant,\n trimmed_cdna_ref,\n transcript_offset,\n transcript,\n ref_nucleotides_from_transcript)\n\n start_codon_offset = transcript.first_start_codon_spliced_offset\n stop_codon_offset = transcript.last_stop_codon_spliced_offset\n\n cds_len = stop_codon_offset - start_codon_offset + 1\n\n if cds_len < 3:\n raise ValueError(\n \"Coding sequence for %s is too short: '%s'\" % (\n transcript,\n transcript.sequence[start_codon_offset:stop_codon_offset + 1]))\n\n if n_ref == 0 and transcript.strand == \"-\":\n # By convention, genomic insertions happen *after* their base 1 position on\n # a chromosome. 
On the reverse strand, however, an insertion has to go\n # before the nucleotide at some transcript offset.\n # Example:\n # chromosome sequence:\n # TTT|GATCTCGTA|CCC\n # transcript on reverse strand:\n # CCC|ATGCTCTAG|TTT\n # where the CDS is emphasized:\n # ATGCTCTAG\n # If we have a genomic insertion g.6insATT\n # the genomic sequence becomes:\n # TTT|GAT_ATT_CTCGTA|CCC\n # (insert the \"ATT\" after the \"T\" at position 6)\n # On the reverse strand this becomes:\n # CCC|ATGCTC_TTA_TAG|TTT\n # (insert the \"ATT\" *before* the \"T\" at position 10)\n #\n # To preserve the interpretation of the start offset as the base\n # before the insertion, need to subtract one\n cds_offset = transcript_offset - start_codon_offset - 1\n else:\n cds_offset = transcript_offset - start_codon_offset\n\n assert cds_offset < cds_len, \\\n \"Expected CDS offset (%d) < |CDS| (%d) for %s on %s\" % (\n cds_offset, cds_len, variant, transcript)\n\n sequence_from_start_codon = str(sequence[start_codon_offset:])\n\n # is this an in-frame mutations?\n if (n_ref - n_alt) % 3 == 0:\n return predict_in_frame_coding_effect(\n variant=variant,\n transcript=transcript,\n trimmed_cdna_ref=trimmed_cdna_ref,\n trimmed_cdna_alt=trimmed_cdna_alt,\n cds_offset=cds_offset,\n sequence_from_start_codon=sequence_from_start_codon)\n else:\n return predict_frameshift_coding_effect(\n variant=variant,\n transcript=transcript,\n trimmed_cdna_ref=trimmed_cdna_ref,\n trimmed_cdna_alt=trimmed_cdna_alt,\n cds_offset=cds_offset,\n sequence_from_start_codon=sequence_from_start_codon)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npredicting the effects of a variant on any transcripts it overlaps.", "response": "def predict_variant_effects(variant, raise_on_error=False):\n \"\"\"Determine the effects of a variant on any transcripts it overlaps.\n Returns an EffectCollection object.\n\n Parameters\n ----------\n variant : Variant\n\n raise_on_error : bool\n Raise an exception if we 
encounter an error while trying to\n determine the effect of this variant on a transcript, or simply\n log the error and continue.\n \"\"\"\n # if this variant isn't overlapping any genes, return a\n # Intergenic effect\n # TODO: look for nearby genes and mark those as Upstream and Downstream\n # effects\n if len(variant.gene_ids) == 0:\n effects = [Intergenic(variant)]\n else:\n # list of all MutationEffects for all genes & transcripts\n effects = []\n\n # group transcripts by their gene ID\n transcripts_grouped_by_gene = groupby_field(variant.transcripts, 'gene_id')\n\n # want effects in the list grouped by the gene they come from\n for gene_id in sorted(variant.gene_ids):\n if gene_id not in transcripts_grouped_by_gene:\n # intragenic variant overlaps a gene but not any transcripts\n gene = variant.ensembl.gene_by_id(gene_id)\n effects.append(Intragenic(variant, gene))\n else:\n # gene ID has transcripts overlapped by this variant\n for transcript in transcripts_grouped_by_gene[gene_id]:\n if raise_on_error:\n effect = predict_variant_effect_on_transcript(\n variant=variant,\n transcript=transcript)\n else:\n effect = predict_variant_effect_on_transcript_or_failure(\n variant=variant,\n transcript=transcript)\n effects.append(effect)\n return EffectCollection(effects)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef predict_variant_effect_on_transcript_or_failure(variant, transcript):\n try:\n return predict_variant_effect_on_transcript(\n variant=variant,\n transcript=transcript)\n except (AssertionError, ValueError) as error:\n logger.warn(\n \"Encountered error annotating %s for %s: %s\",\n variant,\n transcript,\n error)\n return Failure(variant, transcript)", "response": "Predict the effect of a variant on a particular transcript or return a Failure object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef 
predict_variant_effect_on_transcript(variant, transcript):\n\n if transcript.__class__ is not Transcript:\n raise TypeError(\n \"Expected %s : %s to have type Transcript\" % (\n transcript, type(transcript)))\n\n # check for non-coding transcripts first, since\n # every non-coding transcript is \"incomplete\".\n if not transcript.is_protein_coding:\n return NoncodingTranscript(variant, transcript)\n\n if not transcript.complete:\n return IncompleteTranscript(variant, transcript)\n\n # since we're using inclusive base-1 coordinates,\n # checking for overlap requires special logic for insertions\n is_insertion = variant.is_insertion\n\n # determine if any exons are deleted, and if not,\n # what is the closest exon and how far is this variant\n # from that exon (overlapping the exon = 0 distance)\n completely_lost_exons = []\n\n # list of which (exon #, Exon) pairs this mutation overlaps\n overlapping_exon_numbers_and_exons = []\n\n distance_to_nearest_exon = float(\"inf\")\n\n start_in_exon = False\n end_in_exon = False\n\n nearest_exon = None\n\n variant_start = variant.trimmed_base1_start\n variant_end = variant.trimmed_base1_end\n\n for i, exon in enumerate(transcript.exons):\n if variant_start <= exon.start and variant_end >= exon.end:\n completely_lost_exons.append(exon)\n\n if is_insertion and exon.strand == \"+\" and variant_end == exon.end:\n # insertions after an exon don't overlap the exon\n distance = 1\n elif is_insertion and exon.strand == \"-\" and variant_start == exon.start:\n distance = 1\n else:\n distance = exon.distance_to_interval(variant_start, variant_end)\n\n if distance == 0:\n overlapping_exon_numbers_and_exons.append((i + 1, exon))\n # start is contained in current exon\n if exon.start <= variant_start <= exon.end:\n start_in_exon = True\n # end is contained in current exon\n if exon.end >= variant_end >= exon.start:\n end_in_exon = True\n elif distance < distance_to_nearest_exon:\n distance_to_nearest_exon = distance\n nearest_exon = 
exon\n\n    if len(overlapping_exon_numbers_and_exons) == 0:\n        intronic_effect_class = choose_intronic_effect_class(\n            variant=variant,\n            nearest_exon=nearest_exon,\n            distance_to_exon=distance_to_nearest_exon)\n        return intronic_effect_class(\n            variant=variant,\n            transcript=transcript,\n            nearest_exon=nearest_exon,\n            distance_to_exon=distance_to_nearest_exon)\n    elif len(completely_lost_exons) > 0 or (\n            len(overlapping_exon_numbers_and_exons) > 1):\n        # if spanning multiple exons, or completely deleted an exon\n        # then consider that an ExonLoss mutation\n        exons = [exon for (_, exon) in overlapping_exon_numbers_and_exons]\n        return ExonLoss(variant, transcript, exons)\n\n    assert len(overlapping_exon_numbers_and_exons) == 1\n\n    exon_number, exon = overlapping_exon_numbers_and_exons[0]\n\n    exonic_effect_annotation = exonic_transcript_effect(\n        variant, exon, exon_number, transcript)\n\n    # simple case: both start and end are in the same exon\n    if start_in_exon and end_in_exon:\n        return exonic_effect_annotation\n    elif isinstance(exonic_effect_annotation, ExonicSpliceSite):\n        # if mutation bleeds over into the intron but even just\n        # the exonic portion got annotated as an exonic splice site\n        # then return it\n        return exonic_effect_annotation\n\n    return ExonicSpliceSite(\n        variant=variant,\n        transcript=transcript,\n        exon=exon,\n        alternate_effect=exonic_effect_annotation)", "response": "Predicts the transcript effect that results from applying this variant to a particular transcript."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchooses the intronic effect class for a given variant.", "response": "def choose_intronic_effect_class(\n        variant,\n        nearest_exon,\n        distance_to_exon):\n    \"\"\"\n    Infer effect of variant which does not overlap any exon of\n    the given transcript.\n    \"\"\"\n    assert distance_to_exon > 0, \\\n        \"Expected intronic effect to have distance_to_exon > 0, got %d\" % (\n            distance_to_exon,)\n\n    if nearest_exon.strand == \"+\":\n        
# if exon on positive strand\n start_before = variant.trimmed_base1_start < nearest_exon.start\n start_same = variant.trimmed_base1_start == nearest_exon.start\n before_exon = start_before or (variant.is_insertion and start_same)\n else:\n # if exon on negative strand\n end_after = variant.trimmed_base1_end > nearest_exon.end\n end_same = variant.trimmed_base1_end == nearest_exon.end\n before_exon = end_after or (variant.is_insertion and end_same)\n\n # distance cutoffs based on consensus splice sequences from\n # http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2947103/\n # 5' splice site: MAG|GURAGU consensus\n # M is A or C; R is purine; | is the exon-intron boundary\n # 3' splice site: YAG|R\n if distance_to_exon <= 2:\n if before_exon:\n # 2 last nucleotides of intron before exon are the splice\n # acceptor site, typically \"AG\"\n return SpliceAcceptor\n else:\n # 2 first nucleotides of intron after exon are the splice donor\n # site, typically \"GT\"\n return SpliceDonor\n elif not before_exon and distance_to_exon <= 6:\n # variants in nucleotides 3-6 at start of intron aren't as certain\n # to cause problems as nucleotides 1-2 but still implicated in\n # alternative splicing\n return IntronicSpliceSite\n elif before_exon and distance_to_exon <= 3:\n # nucleotide -3 before exon is part of the 3' splicing\n # motif but allows for more degeneracy than the -2, -1 nucleotides\n return IntronicSpliceSite\n else:\n # intronic mutation unrelated to splicing\n return Intronic"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef exonic_transcript_effect(variant, exon, exon_number, transcript):\n\n genome_ref = variant.trimmed_ref\n genome_alt = variant.trimmed_alt\n variant_start = variant.trimmed_base1_start\n variant_end = variant.trimmed_base1_end\n\n # clip mutation to only affect the current exon\n if variant_start < exon.start:\n # if mutation starts before current exon then only look\n # at nucleotides which overlap the 
exon\n logger.info('Mutation in variant %s starts before exon %s', variant, exon)\n assert len(genome_ref) > 0, \"Unexpected insertion into intron\"\n n_skip_start = exon.start - variant_start\n genome_ref = genome_ref[n_skip_start:]\n genome_alt = genome_alt[n_skip_start:]\n genome_start = exon.start\n else:\n genome_start = variant_start\n\n if variant_end > exon.end:\n # if mutation goes past exon end then only look at nucleotides\n # which overlap the exon\n logger.info('Mutation in variant %s ends after exon %s', variant, exon)\n n_skip_end = variant_end - exon.end\n genome_ref = genome_ref[:-n_skip_end]\n genome_alt = genome_alt[:len(genome_ref)]\n genome_end = exon.end\n else:\n genome_end = variant_end\n\n transcript_offset = interval_offset_on_transcript(\n genome_start, genome_end, transcript)\n\n if transcript.on_backward_strand:\n cdna_ref = reverse_complement(genome_ref)\n cdna_alt = reverse_complement(genome_alt)\n else:\n cdna_ref = genome_ref\n cdna_alt = genome_alt\n\n n_ref = len(cdna_ref)\n\n expected_ref = str(\n transcript.sequence[transcript_offset:transcript_offset + n_ref])\n\n if cdna_ref != expected_ref:\n raise ValueError(\n (\"Found ref nucleotides '%s' in sequence\"\n \" of %s at offset %d (chromosome positions %d:%d)\"\n \" but variant %s has '%s'\") % (\n expected_ref,\n transcript,\n transcript_offset,\n genome_start,\n genome_end,\n variant,\n cdna_ref))\n\n utr5_length = min(transcript.start_codon_spliced_offsets)\n\n # does the variant start inside the 5' UTR?\n if utr5_length > transcript_offset:\n # does the variant end after the 5' UTR, within the coding region?\n if utr5_length < transcript_offset + n_ref:\n # TODO: we *might* lose the Kozak sequence or the start codon\n # but without looking at the modified sequence how can we tell\n # for sure that this is a start-loss variant?\n return StartLoss(variant, transcript)\n else:\n # if variant contained within 5' UTR\n return FivePrimeUTR(variant, transcript)\n\n utr3_offset = 
max(transcript.stop_codon_spliced_offsets) + 1\n\n    if transcript_offset >= utr3_offset:\n        return ThreePrimeUTR(variant, transcript)\n\n    exon_start_offset = interval_offset_on_transcript(\n        exon.start, exon.end, transcript)\n    exon_end_offset = exon_start_offset + len(exon) - 1\n\n    # Further below we're going to try to predict exonic splice site\n    # modifications, which will take this effect_annotation as their\n    # alternative hypothesis for what happens if splicing doesn't change.\n    # If the mutation doesn't affect an exonic splice site, then\n    # we'll just return this effect.\n    coding_effect_annotation = predict_variant_coding_effect_on_transcript(\n        variant=variant,\n        transcript=transcript,\n        trimmed_cdna_ref=cdna_ref,\n        trimmed_cdna_alt=cdna_alt,\n        transcript_offset=transcript_offset)\n\n    if changes_exonic_splice_site(\n            transcript=transcript,\n            transcript_ref=cdna_ref,\n            transcript_alt=cdna_alt,\n            transcript_offset=transcript_offset,\n            exon_start_offset=exon_start_offset,\n            exon_end_offset=exon_end_offset,\n            exon_number=exon_number):\n        return ExonicSpliceSite(\n            variant=variant,\n            transcript=transcript,\n            exon=exon,\n            alternate_effect=coding_effect_annotation)\n    return coding_effect_annotation", "response": "This function returns the effect of this variant on a Transcript assuming we already know that this variant overlaps some exon of the transcript."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef apply_groupby(records, fn, skip_none=False):\n\n    # create an empty list for every new key\n    groups = defaultdict(list)\n    for record in records:\n        value = fn(record)\n        if value is not None or not skip_none:\n            groups[value].append(record)\n    return dict(groups)", "response": "Given a list of objects, group them into a dictionary by applying fn to each one and keying on the resulting value."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngiving a list 
of objects group them into a dictionary by a given field name.", "response": "def groupby_field(records, field_name, skip_none=True):\n \"\"\"\n Given a list of objects, group them into a dictionary by\n the unique values of a given field name.\n \"\"\"\n return apply_groupby(\n records,\n lambda obj: getattr(obj, field_name),\n skip_none=skip_none)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef memoize(fn):\n memoized_values = {}\n\n @wraps(fn)\n def wrapped_fn(*args, **kwargs):\n key = (args, tuple(sorted(kwargs.items())))\n try:\n return memoized_values[key]\n except KeyError:\n memoized_values[key] = fn(*args, **kwargs)\n return memoized_values[key]\n\n return wrapped_fn", "response": "Memoization decorator for functions and methods."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef interval_offset_on_transcript(start, end, transcript):\n # ensure that start_pos:end_pos overlap with transcript positions\n if start > end:\n raise ValueError(\n \"start_pos %d shouldn't be greater than end_pos %d\" % (\n start, end))\n if start > transcript.end:\n raise ValueError(\n \"Range %d:%d starts after transcript %s (%d:%d)\" % (\n start,\n end,\n transcript,\n transcript.start,\n transcript.end))\n if end < transcript.start:\n raise ValueError(\n \"Range %d:%d ends before transcript %s (%d:%d)\" % (\n start,\n end,\n transcript,\n transcript.start,\n transcript.end))\n # trim the start position to the beginning of the transcript\n if start < transcript.start:\n start = transcript.start\n # trim the end position to the end of the transcript\n if end > transcript.end:\n end = transcript.end\n # return earliest offset into the spliced transcript\n return min(\n transcript.spliced_offset(start),\n transcript.spliced_offset(end))", "response": "Given an interval [ start end ) and a particular transcript return the start offset of the interval 
relative to the chromosomal positions of the transcript."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef variants_to_vcf(variants, variant_to_metadata, out=sys.stdout):\n\n # TODO: The variant metadata dictionary (in the `VariantCollection`)\n # contains different data depending on the original input file format (VCF,\n # MAF, CSV). It's easy to output variants originally in VCF format, but we\n # might want to consider how fields from MAF (for example) map to those in\n # VCF.\n\n # TODO: The VCF file we output doesn't contain any VCF metadata headers, as\n # the original headers were thrown away when the VCF file was parsed. We\n # may want to keep some of that information and/or infer some of the\n # headers based on the variants themselves. The former is difficult because\n # merge conflicts will inevitably occur; the latter is difficult because\n # the variants themselves don't contain all the information required for\n # these metadata headers (e.g., descriptions).\n #\n # As a side note, adding headers for certain fields will make them parse\n # correctly -- into an integer instead of a string (the default), for\n # example.\n\n # TODO: If we maintain headers (see above TODO), what should happen if\n # variant sources use different reference genomes?\n #\n # If we don't maintain headers, what should the default reference genome\n # be? 
This code chose one fairly arbitrarily.\n\n # TODO: If we end up needing more functions to \"build\" VCF record fields\n # (see functions below), we may want to abstract away the individual\n # functions and create a mapping from field to format function.\n\n def get_metadata_field(key, variant, default='.'):\n \"\"\"Retrieve field from variant metadata dictionary.\"\"\"\n val = variant_to_metadata[variant].get(key)\n if val is None:\n return default\n return val\n\n def build_filter_field(variant):\n \"\"\"Build the filter field from the given variant.\n\n The `filter` field, as stored in the variants metadata dictionary,\n comes in 3 flavors:\n\n - empty list: 1+ filters were run and none failed\n - non-empty list: 1+ filters were run and 1+ failed\n - `None`: no filters were run\n\n This function maps each of these internal representations to their\n corresponding VCF representations.\n \"\"\"\n filter_metadata = get_metadata_field('filter', variant)\n if type(filter_metadata) == list:\n return 'PASS' if filter_metadata == [] else ';'.join(filter_metadata)\n else:\n # TODO: Can the filter field ever be something other than the 3\n # possibilities described in this function's doc comment?\n bad_value_msg = (\n 'Filter metadata field took on unexpected value `{}`. 
Update '\n 'code to account for this value.').format(str(filter_metadata))\n assert filter_metadata == '.', bad_value_msg\n return filter_metadata\n\n def build_info_field(variant):\n \"\"\"Build the info field from the given variant.\n\n Format is `<key>=<val>,...;<key>=<val>,...;<key>=<val>,...`.\n \"\"\"\n\n def build_info_pair(key, val):\n \"\"\"Build key/val pair for info field.\"\"\"\n # Note: Different from `val == True`, which returns True when `val == 1`.\n if val is True:\n return key\n\n if type(val) == list:\n val = ','.join(map(str, val))\n else:\n val = str(val)\n return '{}={}'.format(key, val)\n\n info_dict = get_metadata_field('info', variant, default={})\n if not info_dict:\n return '.'\n\n return ';'.join(build_info_pair(k, v) for (k, v) in info_dict.items())\n\n def build_format_field(variant):\n \"\"\"Build the sample format string from the given variant.\n\n Each sample column follows this format for the specified variant.\n \"\"\"\n sample_info = get_metadata_field('sample_info', variant, default={})\n return ':'.join(list(sample_info.values())[0].keys()) if sample_info else '.'\n\n def build_sample_fields(variant):\n \"\"\"Build the sample fields for the given variant.\"\"\"\n\n def build_sample_field(sample):\n \"\"\"Build a specific sample's field (i.e., one sample column).\"\"\"\n sample_vals = sample_info[sample].values()\n return ':'.join(build_sample_val(val) for val in sample_vals)\n\n def build_sample_val(sample_val):\n \"\"\"Build a specific value for a sample (i.e., one value for one column).\"\"\"\n if type(sample_val) is list:\n return ','.join(map(str, sample_val))\n elif sample_val is not None:\n return str(sample_val)\n else:\n return '.'\n\n sample_info = get_metadata_field('sample_info', variant, default={})\n return list(build_sample_field(sample) for sample in sample_info)\n\n def build_vcf_record(variant, add_sample_info):\n \"\"\"Return a list of all the variant's VCF record fields.\"\"\"\n record = [\n str(variant.original_contig),\n 
str(variant.original_start),\n get_metadata_field('id', variant),\n variant.original_ref,\n variant.original_alt,\n str(get_metadata_field('qual', variant)),\n build_filter_field(variant),\n build_info_field(variant),\n ]\n if add_sample_info:\n record.append(build_format_field(variant))\n record.extend(build_sample_fields(variant))\n return record\n\n def merge_duplicate_variants():\n \"\"\"Merge duplicate variants (according to ID) and return *list* of merged, sorted variants.\n\n Multiple `Variant`s can have the same VCF id (e.g., those variants which\n originally had > 1 alternate base), but we can't output a VCF file with multiple\n records having the same id. This function merges those duplicate variants.\n \"\"\"\n\n def construct_id2variants():\n \"\"\"Construct dictionary which maps variant IDs to `Variant`s.\"\"\"\n id2variants = defaultdict(list)\n for variant in variants:\n # Note: For variants without IDs (e.g., those that came from\n # MAF files), we assign the missing value.\n variant_id = get_metadata_field('id', variant)\n id2variants[variant_id].append(variant)\n return id2variants\n\n def merge_variant_list(duplicates):\n \"\"\"Merge duplicate variant list into one.\"\"\"\n\n # TODO: Currently assumes that only alternate bases differ, but we may\n # want or need to merge variants that have the same ID but different\n # contigs, positions, or reference bases. 
If merging isn't possible (which\n # I think is the case for many situations), it may be easiest to assign\n # the \"missing value\" to the IDs and move on.\n\n assert len(duplicates) > 0\n assert all(v.original_contig == duplicates[0].original_contig and\n v.original_start == duplicates[0].original_start and\n v.original_ref == duplicates[0].original_ref for v in duplicates)\n import copy\n merged = copy.copy(duplicates[0])\n merged.original_alt = ','.join(duplicate.original_alt for duplicate in duplicates)\n return merged\n\n id2variants = construct_id2variants()\n variants_no_id = id2variants.pop('.', []) # don't want to merge variants w/ no id\n merged_variants = list(map(merge_variant_list, id2variants.values())) + variants_no_id\n\n # groups variants by contig; relative ordering of contigs doesn't matter\n return sorted(\n merged_variants,\n key=lambda v: (str(v.original_contig), str(v.original_start)))\n\n def get_sample_names():\n \"\"\"Return the sample names for all variants.\"\"\"\n\n # TODO: For now, ensures that every variant has the same samples. If they didn't,\n # we'd have to use the missing value for VCF records with no data on a given\n # sample. In and of itself, that isn't a problem until the format field contains\n # GT (genotype), which is required for all samples if present in the format field.\n #\n # A couple of ways to handle this:\n #\n # 1) Ignore this requirement of VCF files. 
I'd assume this would still be\n # compatible with varcode, but may not be with other VCF parsers.\n # 2) Remove the GT field from those records where this rule would be violated.\n # This is sad because we lose information.\n #\n # It's also important to note that if/when we combine all samples, we'll have to\n # add a `.` (missing value) for those samples in which a variant has no data.\n # Check the VCF spec to make sure this is valid; if not, we may have to write\n # `.:.:.:.` -- one `.` for each field in the format string.\n #\n # Moreover, we'll want to note whether we care about maintaining the relative\n # ordering of the sample names in the original VCF files. This probably isn't\n # important, but the code below does not do this (it effectively alphabetizes the\n # sample names because of the use of `set`).\n\n sample_names = set()\n for variant in variants:\n sample_info = get_metadata_field('sample_info', variant, default={})\n addl_sample_names = set(sample_info.keys())\n\n # Ensures all variants have the same samples.\n if sample_names and sample_names != addl_sample_names:\n return []\n\n sample_names.update(addl_sample_names)\n\n return list(sample_names)\n\n headers = ['#CHROM', 'POS', 'ID', 'REF', 'ALT', 'QUAL', 'FILTER', 'INFO']\n sample_names = get_sample_names()\n if sample_names:\n headers += ['FORMAT'] + sample_names\n\n unique_variants_list = merge_duplicate_variants()\n\n # usually we have just one reference genome for the whole variant collection\n # but if variants from multiple sources have been merged then we might\n # not be able to write out a VCF since the individual variants may be using\n # different coordinate systems\n genome_names = list({v.ensembl.reference_name for v in unique_variants_list})\n if len(genome_names) > 1:\n raise ValueError(\n \"Cannot create VCF for variants with multiple reference genomes: %s\" % (\n genome_names,))\n genome_name = genome_names[0]\n print('##fileformat=VCFv4.2', file=out)\n 
print('##reference=%s' % genome_name, file=out)\n print('\\t'.join(headers), file=out)\n for variant in unique_variants_list:\n record = build_vcf_record(variant, add_sample_info=bool(sample_names))\n print('\\t'.join(record), file=out)", "response": "Output a VCF file from a list of variants."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef filter_by_transcript_expression(\n self,\n transcript_expression_dict,\n min_expression_value=0.0):\n \"\"\"\n Filters effects to those which have an associated transcript whose\n expression value in the transcript_expression_dict argument is greater\n than min_expression_value.\n\n Parameters\n ----------\n transcript_expression_dict : dict\n Dictionary mapping Ensembl transcript IDs to expression estimates\n (either FPKM or TPM)\n\n min_expression_value : float\n Threshold above which we'll keep an effect in the result collection\n \"\"\"\n return self.filter_above_threshold(\n key_fn=lambda effect: effect.transcript_id,\n value_dict=transcript_expression_dict,\n threshold=min_expression_value)", "response": "Returns a new effect collection where the transcript_id of the associated object is greater than min_expression_value."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef filter_by_gene_expression(\n self,\n gene_expression_dict,\n min_expression_value=0.0):\n \"\"\"\n Filters effects to those which have an associated gene whose\n expression value in the gene_expression_dict argument is greater\n than min_expression_value.\n\n Parameters\n ----------\n gene_expression_dict : dict\n Dictionary mapping Ensembl gene IDs to expression estimates\n (either FPKM or TPM)\n\n min_expression_value : float\n Threshold above which we'll keep an effect in the result collection\n \"\"\"\n return self.filter_above_threshold(\n key_fn=lambda effect: effect.gene_id,\n value_dict=gene_expression_dict,\n 
threshold=min_expression_value)", "response": "Returns a new effect collection containing only effects which have an associated gene whose\n expression value in the gene_expression_dict argument is greater than min_expression_value."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef filter_by_effect_priority(self, min_priority_class):\n min_priority = transcript_effect_priority_dict[min_priority_class]\n return self.filter(\n lambda effect: effect_priority(effect) >= min_priority)", "response": "Create a new EffectCollection containing only effects whose priority\n is at least that of the given class."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a string with all transcript effects for each mutation.", "response": "def detailed_string(self):\n \"\"\"\n Create a long string with all transcript effects for each mutation,\n grouped by gene (if a mutation affects multiple genes).\n \"\"\"\n lines = []\n # TODO: annoying to always write `groupby_result.items()`,\n # consider making a GroupBy class which iterates over pairs\n # and also common helper methods like `map_values`.\n for variant, variant_effects in self.groupby_variant().items():\n lines.append(\"\\n%s\" % variant)\n\n gene_effects_groups = variant_effects.groupby_gene_id()\n for (gene_id, gene_effects) in gene_effects_groups.items():\n if gene_id:\n gene_name = variant.ensembl.gene_name_of_gene_id(gene_id)\n lines.append(\" Gene: %s (%s)\" % (gene_name, gene_id))\n # place transcript effects with more significant impact\n # on top (e.g. 
FrameShift should go before NoncodingTranscript)\n for effect in sorted(\n gene_effects,\n key=effect_priority,\n reverse=True):\n lines.append(\" -- %s\" % effect)\n\n # if we only printed one effect for this gene then\n # it's redundant to print it again as the highest priority effect\n if len(variant_effects) > 1:\n best = variant_effects.top_priority_effect()\n lines.append(\" Highest Priority Effect: %s\" % best)\n return \"\\n\".join(lines)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a dictionary mapping each transcript effect to an expression level.", "response": "def effect_expression(self, expression_levels):\n \"\"\"\n Parameters\n ----------\n expression_levels : dict\n Dictionary mapping transcript IDs to length-normalized expression\n levels (either FPKM or TPM)\n\n Returns dictionary mapping each transcript effect to an expression\n quantity. Effects that don't have an associated transcript\n (e.g. Intergenic) will not be included.\n \"\"\"\n return OrderedDict(\n (effect, expression_levels.get(effect.transcript.id, 0.0))\n for effect in self\n if effect.transcript is not None)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the highest expression effect that has the highest expression level.", "response": "def top_expression_effect(self, expression_levels):\n \"\"\"\n Return effect whose transcript has the highest expression level.\n If none of the effects are expressed or have associated transcripts,\n then return None. 
In case of ties, add lexicographical sorting by\n effect priority and transcript length.\n \"\"\"\n effect_expression_dict = self.effect_expression(expression_levels)\n\n if len(effect_expression_dict) == 0:\n return None\n\n def key_fn(effect_fpkm_pair):\n \"\"\"\n Sort effects primarily by their expression level\n and secondarily by the priority logic used in\n `top_priority_effect`.\n \"\"\"\n (effect, fpkm) = effect_fpkm_pair\n return (fpkm, multi_gene_effect_sort_key(effect))\n\n return max(effect_expression_dict.items(), key=key_fn)[0]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef to_dataframe(self):\n # list of properties to extract from Variant objects if they're\n # not None\n variant_properties = [\n \"contig\",\n \"start\",\n \"ref\",\n \"alt\",\n \"is_snv\",\n \"is_transversion\",\n \"is_transition\"\n ]\n\n def row_from_effect(effect):\n row = OrderedDict()\n\n row['variant'] = str(effect.variant.short_description)\n for field_name in variant_properties:\n # if effect.variant is None then this column value will be None\n row[field_name] = getattr(effect.variant, field_name, None)\n row['gene_id'] = effect.gene_id\n row['gene_name'] = effect.gene_name\n row['transcript_id'] = effect.transcript_id\n row['transcript_name'] = effect.transcript_name\n\n row['effect_type'] = effect.__class__.__name__\n row['effect'] = effect.short_description\n return row\n return pd.DataFrame.from_records([row_from_effect(effect) for effect in self])", "response": "Build a dataframe from the effect collection"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef random_variants(\n count,\n genome_name=\"GRCh38\",\n deletions=True,\n insertions=True,\n random_seed=None):\n \"\"\"\n Generate a VariantCollection with random variants that overlap\n at least one complete coding transcript.\n \"\"\"\n rng = random.Random(random_seed)\n ensembl = genome_for_reference_name(genome_name)\n\n 
if ensembl in _transcript_ids_cache:\n transcript_ids = _transcript_ids_cache[ensembl]\n else:\n transcript_ids = ensembl.transcript_ids()\n _transcript_ids_cache[ensembl] = transcript_ids\n\n variants = []\n\n # we should finish way before this loop is over but just in case\n # something is wrong with PyEnsembl we want to avoid an infinite loop\n for _ in range(count * 100):\n if len(variants) < count:\n transcript_id = rng.choice(transcript_ids)\n transcript = ensembl.transcript_by_id(transcript_id)\n\n if not transcript.complete:\n continue\n\n exon = rng.choice(transcript.exons)\n base1_genomic_position = rng.randint(exon.start, exon.end)\n transcript_offset = transcript.spliced_offset(base1_genomic_position)\n seq = transcript.sequence\n\n ref = str(seq[transcript_offset])\n if transcript.on_backward_strand:\n ref = reverse_complement(ref)\n\n alt_nucleotides = [x for x in STANDARD_NUCLEOTIDES if x != ref]\n\n if insertions:\n nucleotide_pairs = [\n x + y\n for x in STANDARD_NUCLEOTIDES\n for y in STANDARD_NUCLEOTIDES\n ]\n alt_nucleotides.extend(nucleotide_pairs)\n if deletions:\n alt_nucleotides.append(\"\")\n alt = rng.choice(alt_nucleotides)\n variant = Variant(\n transcript.contig,\n base1_genomic_position,\n ref=ref,\n alt=alt,\n ensembl=ensembl)\n variants.append(variant)\n else:\n return VariantCollection(variants)\n raise ValueError(\n (\"Unable to generate %d random variants, \"\n \"there may be a problem with PyEnsembl\") % count)", "response": "Generate a VariantCollection with random variants that overlap at least one complete coding transcript."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nperform bind on the server and grabs the DSA info object.", "response": "def get_server_info(self, anonymous = True):\n\t\t\"\"\"\n\t\tPerforms bind on the server and grabs the DSA info object.\n\t\tIf anonymous is set to true, then it will perform anonymous bind, not using user credentials\n\t\tOtherwise it will use the 
credentials set in the object constructor.\n\t\t\"\"\"\n\t\tif anonymous == True:\n\t\t\tlogger.debug('Getting server info via Anonymous BIND on server %s' % self.target_server.get_host())\n\t\t\tserver = Server(self.target_server.get_host(), use_ssl=self.target_server.is_ssl(), get_info=ALL)\n\t\t\tconn = Connection(server, auto_bind=True)\n\t\t\tlogger.debug('Got server info')\n\t\telse:\n\t\t\tlogger.debug('Getting server info via credentials supplied on server %s' % self.target_server.get_host())\n\t\t\tserver = Server(self.target_server.get_host(), use_ssl=self.target_server.is_ssl(), get_info=ALL)\n\t\t\tif self.use_sspi == True:\n\t\t\t\tconn = self.monkeypatch()\n\t\t\telse:\n\t\t\t\tconn = Connection(self._srv, user=self.login_credential.get_msuser(), password=self.login_credential.get_password(), authentication=self.login_credential.get_authmethod())\n\t\t\tlogger.debug('Performing BIND to server %s' % self.target_server.get_host())\n\t\t\tif not self._con.bind():\n\t\t\t\tif 'description' in self._con.result:\n\t\t\t\t\traise Exception('Failed to bind to server! Reason: %s' % conn.result['description'])\n\t\t\t\traise Exception('Failed to bind to server! 
Reason: %s' % conn.result)\n\t\t\tlogger.debug('Connected to server!')\n\t\treturn server.info"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nperform a paged search on the AD using the filter and attributes as a normal query does.", "response": "def pagedsearch(self, ldap_filter, attributes):\n\t\t\"\"\"\n\t\tPerforms a paged search on the AD, using the filter and attributes as a normal query does.\n\t\tNeeds to connect to the server first!\n\t\tldap_filter: str : LDAP query filter\n\t\tattributes: list : Attributes list to receive in the result\n\t\t\"\"\"\n\t\tlogger.debug('Paged search, filter: %s attributes: %s' % (ldap_filter, ','.join(attributes)))\n\t\tctr = 0\n\t\tentries = self._con.extend.standard.paged_search(self._tree, ldap_filter, attributes = attributes, paged_size = self.ldap_query_page_size)\n\t\tfor entry in entries:\n\t\t\tif 'raw_attributes' in entry and 'attributes' in entry:\n\t\t\t\t# TODO: return ldapuser object\n\t\t\t\tctr += 1\n\t\t\t\tif ctr % self.ldap_query_page_size == 0:\n\t\t\t\t\tlogger.info('New page requested. 
Result count: %d' % ctr)\n\t\t\t\tyield entry"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfetching all user objects from the AD and returns MSADUser object", "response": "def get_all_user_objects(self):\n\t\t\"\"\"\n\t\tFetches all user objects from the AD, and returns MSADUser object\n\t\t\"\"\"\n\t\tlogger.debug('Polling AD for all user objects')\n\t\tldap_filter = r'(objectClass=user)'\n\n\t\tattributes = MSADUser.ATTRS\n\t\tfor entry in self.pagedsearch(ldap_filter, attributes):\n\t\t\t# TODO: return ldapuser object\n\t\t\tyield MSADUser.from_ldap(entry, self._ldapinfo)\n\t\tlogger.debug('Finished polling for entries!')"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfetch one user object from the AD based on the sAMAccountName attribute.", "response": "def get_user(self, sAMAccountName):\n\t\t\"\"\"\n\t\tFetches one user object from the AD, based on the sAMAccountName attribute (read: username) \n\t\t\"\"\"\n\t\tlogger.debug('Polling AD for user %s'% sAMAccountName)\n\t\tldap_filter = r'(&(objectClass=user)(sAMAccountName=%s))' % sAMAccountName\n\t\tattributes = MSADUser.ATTRS\n\t\tfor entry in self.pagedsearch(ldap_filter, attributes):\n\t\t\t# TODO: return ldapuser object\n\t\t\tyield MSADUser.from_ldap(entry, self._ldapinfo)\n\t\tlogger.debug('Finished polling for entries!')"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\npolls for basic AD information", "response": "def get_ad_info(self):\n\t\t\"\"\"\n\t\tPolls for basic AD information (needed for determine password usage characteristics!)\n\t\t\"\"\"\n\t\tlogger.debug('Polling AD for basic info')\n\t\tldap_filter = r'(distinguishedName=%s)' % self._tree\n\t\tattributes = MSADInfo.ATTRS\n\t\tfor entry in self.pagedsearch(ldap_filter, attributes):\n\t\t\tself._ldapinfo = MSADInfo.from_ldap(entry)\n\t\t\treturn self._ldapinfo\n\n\t\tlogger.debug('Poll finished!')"} {"SOURCE": "codesearchnet", "instruction": 
"Can you write a function in Python 3 where it\nfetches all service user objects from the AD and returns MSADUser object.", "response": "def get_all_service_user_objects(self, include_machine = False):\n\t\t\"\"\"\n\t\tFetches all service user objects from the AD, and returns MSADUser object.\n\t\tService user refers to a user with the SPN (servicePrincipalName) attribute set\n\t\t\"\"\"\n\t\tlogger.debug('Polling AD for all user objects, machine accounts included: %s'% include_machine)\n\t\tif include_machine == True:\n\t\t\tldap_filter = r'(servicePrincipalName=*)'\n\t\telse:\n\t\t\tldap_filter = r'(&(servicePrincipalName=*)(!(sAMAccountName = *$)))'\n\n\t\tattributes = MSADUser.ATTRS\n\t\tfor entry in self.pagedsearch(ldap_filter, attributes):\n\t\t\t# TODO: return ldapuser object\n\t\t\tyield MSADUser.from_ldap(entry, self._ldapinfo)\n\t\tlogger.debug('Finished polling for entries!')"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_all_knoreq_user_objects(self, include_machine = False):\n\t\tlogger.debug('Polling AD for all user objects, machine accounts included: %s'% include_machine)\n\t\tif include_machine == True:\n\t\t\tldap_filter = r'(userAccountControl:1.2.840.113556.1.4.803:=4194304)'\n\t\telse:\n\t\t\tldap_filter = r'(&(userAccountControl:1.2.840.113556.1.4.803:=4194304)(!(sAMAccountName = *$)))'\n\n\t\tattributes = MSADUser.ATTRS\n\t\tfor entry in self.pagedsearch(ldap_filter, attributes):\n\t\t\t# TODO: return ldapuser object\n\t\t\tyield MSADUser.from_ldap(entry, self._ldapinfo)\n\t\tlogger.debug('Finished polling for entries!')", "response": "Fetches all user objects from the AD and returns MSADUser object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the value of x, or None if x is an empty list", "response": "def vn(x):\n\t\"\"\"\n\tvalue or none, returns none if x is an empty list\n\t\"\"\"\n\tif x == []:\n\t\treturn None\n\tif isinstance(x, 
list):\n\t\treturn '|'.join(x)\n\tif isinstance(x, datetime):\n\t\treturn x.isoformat()\n\treturn x"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert AWS inconvenient tags model of a list of {\"Key\": <key>, \"Value\": <value>} pairs to a dict of {<key>: <value>} pairs", "response": "def convert_tags_to_dict(item):\n \"\"\"\n Convert AWS inconvenient tags model of a list of {\"Key\": <key>, \"Value\": <value>} pairs\n to a dict of {<key>: <value>} for easier querying.\n\n This returns a proxied object over given item to return a different tags format as the tags\n attribute is read-only and we cannot modify it directly.\n \"\"\"\n if hasattr(item, 'tags'):\n tags = item.tags\n if isinstance(tags, list):\n tags_dict = {}\n for kv_dict in tags:\n if isinstance(kv_dict, dict) and 'Key' in kv_dict and 'Value' in kv_dict:\n tags_dict[kv_dict['Key']] = kv_dict['Value']\n return ObjectProxy(item, tags=tags_dict)\n return item"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load_tables(self, query, meta):\n try:\n for table in meta.tables:\n self.load_table(table)\n except NoCredentialsError:\n help_link = 'http://boto3.readthedocs.io/en/latest/guide/configuration.html'\n raise QueryError('Unable to locate AWS credential. 
'\n 'Please see {0} on how to configure AWS credential.'.format(help_link))", "response": "Load necessary resources tables into db to execute given query."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef load_table(self, table):\n region = table.database if table.database else self.default_region\n resource_name, collection_name = table.table.split('_', 1)\n # we use underscore \"_\" instead of dash \"-\" for region name but boto3 need dash\n boto_region_name = region.replace('_', '-')\n resource = self.boto3_session.resource(resource_name, region_name=boto_region_name)\n if not hasattr(resource, collection_name):\n raise QueryError(\n 'Unknown collection <{0}> of resource <{1}>'.format(collection_name, resource_name))\n\n self.attach_region(region)\n self.refresh_table(region, table.table, resource, getattr(resource, collection_name))", "response": "Load resources as specified by given table into our db."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef json_serialize(obj):\n if isinstance(obj, datetime):\n return obj.isoformat()\n\n if hasattr(obj, 'id'):\n return jsonify(obj.id)\n\n if hasattr(obj, 'name'):\n return jsonify(obj.name)\n\n raise TypeError('{0} is not JSON serializable'.format(obj))", "response": "Simple generic JSON serializer for common objects."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a table in given database with given list of column names.", "response": "def create_table(db, schema_name, table_name, columns):\n \"\"\"\n Create a table, schema_name.table_name, in given database with given list of column names.\n \"\"\"\n table = '{0}.{1}'.format(schema_name, table_name) if schema_name else table_name\n db.execute('DROP TABLE IF EXISTS {0}'.format(table))\n columns_list = ', '.join(columns)\n db.execute('CREATE TABLE {0} ({1})'.format(table, columns_list))"} {"SOURCE": 
"codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef insert_all(db, schema_name, table_name, columns, items):\n table = '{0}.{1}'.format(schema_name, table_name) if schema_name else table_name\n columns_list = ', '.join(columns)\n values_list = ', '.join(['?'] * len(columns))\n query = 'INSERT INTO {table} ({columns}) VALUES ({values})'.format(\n table=table, columns=columns_list, values=values_list)\n for item in items:\n values = [getattr(item, col) for col in columns]\n db.execute(query, values)", "response": "Insert all items in given items list into the specified table."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a full ComicDataWrapper object for this creator.", "response": "def get_comics(self, *args, **kwargs):\n \"\"\"\n Returns a full ComicDataWrapper object for this creator.\n \n /creators/{creatorId}/comics\n \n :returns: ComicDataWrapper -- A new request to API. Contains full results set.\n \"\"\"\n from .comic import Comic, ComicDataWrapper\n return self.get_related_resource(Comic, ComicDataWrapper, args, kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a full EventDataWrapper object for this creator.", "response": "def get_events(self, *args, **kwargs):\n \"\"\"\n Returns a full EventDataWrapper object for this creator.\n\n /creators/{creatorId}/events\n\n :returns: EventDataWrapper -- A new request to API. 
Contains full results set.\n \"\"\"\n from .event import Event, EventDataWrapper\n return self.get_related_resource(Event, EventDataWrapper, args, kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_series(self, *args, **kwargs):\n from .series import Series, SeriesDataWrapper\n return self.get_related_resource(Series, SeriesDataWrapper, args, kwargs)", "response": "Returns a full SeriesDataWrapper object for this creator."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a full StoriesDataWrapper object for this creator.", "response": "def get_stories(self, *args, **kwargs):\n \"\"\"\n Returns a full StoryDataWrapper object for this creator.\n\n /creators/{creatorId}/stories\n\n :returns: StoriesDataWrapper -- A new request to API. Contains full results set.\n \"\"\"\n from .story import Story, StoryDataWrapper\n return self.get_related_resource(Story, StoryDataWrapper, args, kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_to_instance_list(_self, _list, _Class):\n items = []\n for item in _list:\n items.append(_Class(_self.marvel, item))\n return items", "response": "Takes a list of resource dicts and returns a list of resource instances defined by the _Class param."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_related_resource(_self, _Class, _ClassDataWrapper, *args, **kwargs):\n url = \"%s/%s/%s\" % (_self.resource_url(), _self.id, _Class.resource_url())\n response = json.loads(_self.marvel._call(url, _self.marvel._params(kwargs)).text)\n return _ClassDataWrapper(_self.marvel, response)", "response": "Returns a related resource Class \n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a new CharacterDataWrapper containing the next set of characters.", "response": "def 
next(self):\n \"\"\"\n Returns new CharacterDataWrapper\n TODO: Don't raise offset past count - limit\n \"\"\"\n self.params['offset'] = str(int(self.params['offset']) + int(self.params['limit']))\n return self.marvel.get_characters(self.marvel, (), **self.params)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a new CharacterDataWrapper containing only the previous set of characters.", "response": "def previous(self):\n \"\"\"\n Returns new CharacterDataWrapper\n TODO: Don't lower offset below 0\n \"\"\"\n self.params['offset'] = str(int(self.params['offset']) - int(self.params['limit']))\n return self.marvel.get_characters(self.marvel, (), **self.params)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _call(self, resource_url, params=None):\n \n url = \"%s%s\" % (self._endpoint(), resource_url)\n if params:\n url += \"?%s&%s\" % (params, self._auth())\n else:\n url += \"?%s\" % self._auth()\n return requests.get(url)", "response": "Calls the Marvel API endpoint and returns the response"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates hash from api keys and returns all required parameters", "response": "def _auth(self):\n \"\"\"\n Creates hash from api keys and returns all required parameters\n \n :returns: str -- URL encoded query parameters containing \"ts\", \"apikey\", and \"hash\"\n \"\"\"\n ts = datetime.datetime.now().strftime(\"%Y-%m-%d%H:%M:%S\")\n hash_string = hashlib.md5(\"%s%s%s\" % (ts, self.private_key, self.public_key)).hexdigest()\n return \"ts=%s&apikey=%s&hash=%s\" % (ts, self.public_key, hash_string)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfetch a single character by id.", "response": "def get_character(self, id):\n \"\"\"Fetches a single character by id.\n\n get /v1/public/characters\n\n :param id: ID of Character\n :type params: int\n \n 
:returns: CharacterDataWrapper\n\n >>> m = Marvel(public_key, private_key)\n >>> cdw = m.get_character(1009718)\n >>> print cdw.data.count\n 1\n >>> print cdw.data.results[0].name\n Wolverine\n\n \"\"\"\n url = \"%s/%s\" % (Character.resource_url(), id)\n response = json.loads(self._call(url).text)\n return CharacterDataWrapper(self, response)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfetch list of comic characters with optional filters.", "response": "def get_characters(self, *args, **kwargs):\n \"\"\"Fetches lists of comic characters with optional filters.\n\n get /v1/public/characters\n\n :returns: CharacterDataWrapper\n\n >>> m = Marvel(public_key, private_key)\n >>> cdw = m.get_characters(orderBy=\"name,-modified\", limit=\"5\", offset=\"15\")\n >>> print cdw.data.count\n 1401\n >>> for result in cdw.data.results:\n ... print result.name\n Aginar\n Air-Walker (Gabriel Lan)\n Ajak\n Ajaxis\n Akemi\n \n \"\"\"\n #pass url string and params string to _call\n response = json.loads(self._call(Character.resource_url(), self._params(kwargs)).text)\n return CharacterDataWrapper(self, response, kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_comic(self, id):\n \n url = \"%s/%s\" % (Comic.resource_url(), id)\n response = json.loads(self._call(url).text)\n return ComicDataWrapper(self, response)", "response": "Fetches a single comic by id."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_comics(self, *args, **kwargs):\n\n response = json.loads(self._call(Comic.resource_url(), self._params(kwargs)).text)\n return ComicDataWrapper(self, response)", "response": "Fetches list of comics."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfetches a single creator by id.", "response": "def get_creator(self, id):\n \"\"\"Fetches a single creator by id.\n\n get 
/v1/public/creators/{creatorId}\n\n :param id: ID of Creator\n :type params: int\n \n :returns: CreatorDataWrapper\n\n >>> m = Marvel(public_key, private_key)\n >>> cdw = m.get_creator(30)\n >>> print cdw.data.count\n 1\n >>> print cdw.data.result.fullName\n Stan Lee\n \"\"\"\n\n url = \"%s/%s\" % (Creator.resource_url(), id)\n response = json.loads(self._call(url).text)\n return CreatorDataWrapper(self, response)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_creators(self, *args, **kwargs):\n \n response = json.loads(self._call(Creator.resource_url(), self._params(kwargs)).text)\n return CreatorDataWrapper(self, response)", "response": "Fetches lists of creators.\n \n get / v1 / public / creators\n \n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_event(self, id):\n\n url = \"%s/%s\" % (Event.resource_url(), id)\n response = json.loads(self._call(url).text)\n return EventDataWrapper(self, response)", "response": "Fetches a single event by id."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_events(self, *args, **kwargs):\n\n response = json.loads(self._call(Event.resource_url(), self._params(kwargs)).text)\n return EventDataWrapper(self, response)", "response": "Fetches lists of events. EventDataWrapper\n get / v1. 
public / events\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfetching a single comic series by id.", "response": "def get_single_series(self, id):\n \"\"\"Fetches a single comic series by id.\n\n get /v1/public/series/{seriesId}\n\n :param id: ID of Series\n :type params: int\n \n :returns: SeriesDataWrapper\n \n >>> m = Marvel(public_key, private_key)\n >>> response = m.get_single_series(12429)\n >>> print response.data.result.title\n 5 Ronin (2010)\n \"\"\"\n\n url = \"%s/%s\" % (Series.resource_url(), id)\n response = json.loads(self._call(url).text)\n return SeriesDataWrapper(self, response)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfetch lists of events. get / v1 / public / events", "response": "def get_series(self, *args, **kwargs):\n \"\"\"Fetches lists of events.\n\n get /v1/public/events\n \n :returns: SeriesDataWrapper\n \n >>> #Find all the series that involved Wolverine\n >>> #wolverine's id: 1009718\n >>> m = Marvel(public_key, private_key)\n >>> response = m.get_series(characters=\"1009718\")\n >>> print response.data.total\n 435\n >>> series = response.data.results\n >>> print series[0].title\n 5 Ronin (2010)\n \"\"\"\n\n response = json.loads(self._call(Series.resource_url(), self._params(kwargs)).text)\n return SeriesDataWrapper(self, response)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_story(self, id):\n\n url = \"%s/%s\" % (Story.resource_url(), id)\n response = json.loads(self._call(url).text)\n return StoryDataWrapper(self, response)", "response": "Fetches a single story by id."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_stories(self, *args, **kwargs):\n\n response = json.loads(self._call(Story.resource_url(), self._params(kwargs)).text)\n return StoryDataWrapper(self, response)", "response": "Fetches lists of 
stories.\n get / v1 / public / stories\n \n "} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a full CreatorDataWrapper object for this story.", "response": "def get_creators(self, *args, **kwargs):\n \"\"\"\n Returns a full CreatorDataWrapper object for this story.\n\n /stories/{storyId}/creators\n\n :returns: CreatorDataWrapper -- A new request to API. Contains full results set.\n \"\"\"\n from .creator import Creator, CreatorDataWrapper\n return self.get_related_resource(Creator, CreatorDataWrapper, args, kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a full CharacterDataWrapper object for this story.", "response": "def get_characters(self, *args, **kwargs):\n \"\"\"\n Returns a full CharacterDataWrapper object for this story.\n\n /stories/{storyId}/characters\n\n :returns: CharacterDataWrapper -- A new request to API. Contains full results set.\n \"\"\"\n from .character import Character, CharacterDataWrapper\n return self.get_related_resource(Character, CharacterDataWrapper, args, kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the available ffmpeg version as string", "response": "def ffmpeg_version():\n \"\"\"Returns the available ffmpeg version\n\n Returns\n ----------\n version : str\n version number as string\n \"\"\"\n\n cmd = [\n 'ffmpeg',\n '-version'\n ]\n\n output = sp.check_output(cmd)\n aac_codecs = [\n x for x in\n output.splitlines() if \"ffmpeg version \" in str(x)\n ][0]\n hay = aac_codecs.decode('ascii')\n match = re.findall(r'ffmpeg version (\\d+\\.)?(\\d+\\.)?(\\*|\\d+)', hay)\n if match:\n return \"\".join(match[0])\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef cli(inargs=None):\n\n parser = argparse.ArgumentParser()\n\n parser.add_argument(\n '--version', '-V',\n action='version',\n 
version='%%(prog)s %s' % __version__\n )\n\n parser.add_argument(\n 'filename',\n metavar=\"filename\",\n help=\"Input STEM file\"\n )\n\n parser.add_argument(\n '--id',\n metavar='id',\n type=int,\n nargs='+',\n help=\"A list of stem_ids\"\n )\n\n parser.add_argument(\n '-s',\n type=float,\n nargs='?',\n help=\"start offset in seconds\"\n )\n\n parser.add_argument(\n '-t',\n type=float,\n nargs='?',\n help=\"read duration\"\n )\n\n parser.add_argument(\n 'outdir',\n metavar='outdir',\n nargs='?',\n help=\"Output folder\"\n )\n\n args = parser.parse_args(inargs)\n stem2wav(args.filename, args.outdir, args.id, args.s, args.t)", "response": "Commandline interface for receiving stem files"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef check_available_aac_encoders():\n\n cmd = [\n 'ffmpeg',\n '-v', 'error',\n '-codecs'\n ]\n\n output = sp.check_output(cmd)\n aac_codecs = [\n x for x in\n output.splitlines() if \"AAC (Advanced Audio Coding)\" in str(x)\n ][0]\n hay = aac_codecs.decode('ascii')\n match = re.findall(r'\\(encoders: ([^\\)]*) \\)', hay)\n if match:\n return match[0].split(\" \")\n else:\n return None", "response": "Returns the available AAC encoders Returns None if no AAC encoders are available"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef write_stems(\n audio,\n filename,\n rate=44100,\n bitrate=256000,\n codec=None,\n ffmpeg_params=None\n):\n \"\"\"Write stems from numpy Tensor\n\n Parameters\n ----------\n audio : array_like\n The tensor of Matrix of stems. The data shape is formatted as\n :code:`stems x channels x samples`.\n filename : str\n Output file_name of the stems file\n rate : int\n Output samplerate. Defaults to 44100 Hz.\n bitrate : int\n AAC Bitrate in Bits per second. Defaults to 256 Kbit/s\n codec : str\n AAC codec used. 
Defaults to `None` which automatically selects\n either `libfdk_aac` or `aac` in that order, determined by availability.\n ffmpeg_params : list(str)\n List of additional ffmpeg parameters\n\n Notes\n -----\n\n Output is written as 16bit/44.1 kHz\n\n \"\"\"\n if int(stempeg.ffmpeg_version()[0]) < 3:\n warnings.warn(\n \"Writing STEMS with FFMPEG version < 3 is unsupported\", UserWarning\n )\n\n if codec is None:\n avail = check_available_aac_encoders()\n\n if avail is not None:\n if 'libfdk_aac' in avail:\n codec = 'libfdk_aac'\n else:\n codec = 'aac'\n warnings.warn(\"For better quality, please install libfdk_aac\")\n else:\n codec = 'aac'\n warnings.warn(\"For better quality, please install libfdk_aac\")\n\n tmps = [\n tmp.NamedTemporaryFile(delete=False, suffix='.wav')\n for t in range(audio.shape[0])\n ]\n\n if audio.shape[1] % 1024 != 0:\n warnings.warn(\n \"Number of samples does not divide by 1024, be aware that \"\n \"the AAC encoder adds silence to the input signal\"\n )\n\n for k in range(audio.shape[0]):\n sf.write(tmps[k].name, audio[k], rate)\n\n cmd = (\n [\n 'ffmpeg', '-y',\n \"-f\", 's%dle' % (16),\n \"-acodec\", 'pcm_s%dle' % (16),\n '-ar', \"%d\" % rate,\n '-ac', \"%d\" % 2\n ] +\n list(chain.from_iterable(\n [['-i', i.name] for i in tmps]\n )) +\n list(chain.from_iterable(\n [['-map', str(k)] for k, _ in enumerate(tmps)]\n )) +\n [\n '-vn',\n '-acodec', codec,\n '-ar', \"%d\" % rate,\n '-strict', '-2',\n '-loglevel', 'error'\n ] +\n (['-ab', str(bitrate)] if (bitrate is not None) else []) +\n (ffmpeg_params if ffmpeg_params else []) +\n [filename]\n )\n sp.call(cmd)", "response": "Write STEMS from numpy Tensor of Matrix of Stems."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nextract FFMPEG info and returns info as JSON", "response": "def read_info(\n filename\n):\n \"\"\"Extracts FFMPEG info and returns info as JSON\n\n Returns\n -------\n info : Dict\n JSON info dict\n \"\"\"\n\n cmd = [\n 'ffprobe',\n 
filename,\n '-v', 'error',\n '-print_format', 'json',\n '-show_format', '-show_streams',\n ]\n\n out = sp.check_output(cmd)\n info = json.loads(out.decode('utf-8'))\n return info"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreads STEMS format into numpy Tensor of STEMS.", "response": "def read_stems(\n filename,\n out_type=np.float_,\n stem_id=None,\n start=0,\n duration=None,\n info=None\n):\n \"\"\"Read STEMS format into numpy Tensor\n\n Parameters\n ----------\n filename : str\n Filename of STEMS format. Typically `filename.stem.mp4`.\n out_type : type\n Output type. Defaults to 32bit float aka `np.float32`.\n stem_id : int\n Stem ID (Stream ID) to read. Defaults to `None`, which reads all\n available stems.\n start : float\n Start position (seek) in seconds, defaults to 0.\n duration : float\n Read `duration` seconds. End position then is `start + duration`.\n Defaults to `None`: read till the end.\n info : object\n provide info object, useful if read_stems is called frequently on\n file with same configuration (#streams, #channels, samplerate).\n Returns\n -------\n stems : array_like\n The tensor of Matrix of stems. 
The data shape is formatted as\n :code:`stems x channels x samples`.\n\n Notes\n -----\n\n Input is expected to be in 16bit/44.1 kHz\n\n \"\"\"\n if info is None:\n FFinfo = Info(filename)\n else:\n FFinfo = info\n\n if stem_id is not None:\n substreams = stem_id\n else:\n substreams = FFinfo.audio_stream_idx()\n\n if not isinstance(substreams, list):\n substreams = [substreams]\n\n stems = []\n tmps = [\n tmp.NamedTemporaryFile(delete=False, suffix='.wav')\n for t in substreams\n ]\n for tmp_id, stem in enumerate(substreams):\n rate = FFinfo.rate(stem)\n channels = FFinfo.channels(stem)\n cmd = [\n 'ffmpeg',\n '-y',\n '-vn',\n '-i', filename,\n '-map', '0:' + str(stem),\n '-acodec', 'pcm_s16le',\n '-ar', str(rate),\n '-ac', str(channels),\n '-loglevel', 'error',\n tmps[tmp_id].name\n ]\n if start:\n cmd.insert(3, '-ss')\n cmd.insert(4, str(start))\n\n if duration is not None:\n cmd.insert(-1, '-t')\n cmd.insert(-1, str(duration))\n\n sp.call(cmd)\n # read wav files\n audio, rate = sf.read(tmps[tmp_id].name)\n tmps[tmp_id].close()\n os.remove(tmps[tmp_id].name)\n stems.append(audio)\n\n # check if all stems have the same duration\n stem_durations = np.array([t.shape[0] for t in stems])\n if not (stem_durations == stem_durations[0]).all():\n warnings.warn(\"Stems differ in length and were shortened\")\n min_length = np.min(stem_durations)\n stems = [t[:min_length, :] for t in stems]\n\n stems = np.array(stems)\n stems = np.squeeze(stems).astype(out_type)\n return stems, rate"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef nDims(self):\n result = self._index.ndim\n assert result == 1, \"Expected index to be 1D, got: {}D\".format(result)\n return result", "response": "The number of dimensions of the index. 
Will always be 1."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nstring representation of the element type.", "response": "def elementTypeName(self):\n \"\"\" String representation of the element type.\n \"\"\"\n if self._ndFrame is None:\n return super(AbstractPandasNDFrameRti, self).elementTypeName\n else:\n try:\n return str(self._ndFrame.dtype) # Series\n except AttributeError:\n return ''"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _createIndexRti(self, index, nodeName):\n return PandasIndexRti(index=index, nodeName=nodeName, fileName=self.fileName,\n iconColor=self._iconColor)", "response": "Auxiliary method that creates a PandasIndexRti."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfetching all children of the object.", "response": "def _fetchAllChildren(self):\n \"\"\" Fetches the index if the showIndex member is True\n\n Descendants can override this function to add the subdivisions.\n \"\"\"\n assert self.isSliceable, \"No underlying pandas object: self._ndFrame is None\"\n childItems = []\n if self._standAlone:\n childItems.append(self._createIndexRti(self._ndFrame.index, 'index'))\n return childItems"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfetching all children items.", "response": "def _fetchAllChildren(self):\n \"\"\" Fetches children items.\n\n If this is stand-alone DataFrame the index, column etc are added as PandasIndexRti obj.\n \"\"\"\n assert self.isSliceable, \"No underlying pandas object: self._ndFrame is None\"\n\n childItems = []\n\n for subName in self._ndFrame.columns: # Note that this is not the first dimension!\n childItem = PandasSeriesRti(self._ndFrame[subName], nodeName=subName,\n fileName=self.fileName, iconColor=self._iconColor,\n standAlone=False)\n childItems.append(childItem)\n\n if self._standAlone:\n childItems.append(self._createIndexRti(self._ndFrame.index, 
'index'))\n childItems.append(self._createIndexRti(self._ndFrame.columns, 'columns'))\n\n return childItems"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _fetchAllChildren(self):\n assert self.isSliceable, \"No underlying pandas object: self._ndFrame is None\"\n\n childItems = []\n\n for subName in self._ndFrame.items:\n childItem = PandasDataFrameRti(self._ndFrame[subName], nodeName=subName,\n fileName=self.fileName, iconColor=self._iconColor,\n standAlone=False)\n childItems.append(childItem)\n\n if self._standAlone:\n childItems.append(self._createIndexRti(self._ndFrame.items, 'items'))\n childItems.append(self._createIndexRti(self._ndFrame.major_axis, 'major_axis'))\n childItems.append(self._createIndexRti(self._ndFrame.minor_axis, 'minor_axis'))\n\n return childItems", "response": "Fetches all children items."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef setCurrentInspectorRegItem(self, regItem):\n check_class(regItem, InspectorRegItem, allow_none=True)\n self.inspectorTab.setCurrentRegItem(regItem)", "response": "Sets the current inspector given an InspectorRegItem"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef createEditor(self, delegate, parent, _option):\n return GroupCtiEditor(self, delegate, parent=parent)", "response": "Creates a hidden widget that can be used to edit the group cti."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef log_dictionary(dictionary, msg='', logger=None, level='debug', item_prefix=' '):\n level_nr = logging.getLevelName(level.upper())\n\n if logger is None:\n logger = logging.getLogger('main')\n\n if msg :\n logger.log(level_nr, \"Logging dictionary: {}\".format(msg))\n\n if not dictionary:\n logger.log(level_nr,\"{}\".format(item_prefix))\n return\n\n 
max_key_len = max([len(k) for k in dictionary.keys()])\n\n for key, value in sorted(dictionary.items()):\n logger.log(level_nr, \"{0}{1:<{2}s} = {3}\".format(item_prefix, key, max_key_len, value))", "response": "Writes a log message with the given dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_quoted(s):\n return (s.startswith(\"'\") and s.endswith(\"'\")) or (s.startswith('\"') and s.endswith('\"'))", "response": "Returns True if the string begins and ends with quotes."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef string_to_identifier(s, white_space_becomes='_'):\n import re\n s = s.lower()\n s = re.sub(r\"\\s+\", white_space_becomes, s) # replace whitespace with underscores\n s = re.sub(r\"-\", \"_\", s) # replace hyphens with underscores\n s = re.sub(r\"[^A-Za-z0-9_]\", \"\", s) # remove everything that's not a character, a digit or a _\n return s", "response": "Takes a string and makes it suitable for use as an identifier."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nregistering an icon SVG file given a glyph and optionally the open or close state.", "response": "def registerIcon(self, fileName, glyph, isOpen=None):\n \"\"\" Register an icon SVG file given a glyph, and optionally the open/close state.\n\n :param fileName: filename to the SVG file.\n If the filename is a relative path, the ICONS_DIRECTORY will be prepended.\n :param glyph: a string describing the glyph (e.g. 
'file', 'array')\n :param isOpen: boolean that indicates if the RTI is open or closed.\n If None, the icon will be registered for both the open and closed states\n :return: QIcon\n \"\"\"\n check_class(isOpen, bool, allow_none=True)\n\n if fileName and not os.path.isabs(fileName):\n fileName = os.path.join(self.ICONS_DIRECTORY, fileName)\n\n if isOpen is None:\n # Register both opened and closed variants\n self._registry[(glyph, True)] = fileName\n self._registry[(glyph, False)] = fileName\n else:\n self._registry[(glyph, isOpen)] = fileName"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a QIcon given a glyph name open or closed state and color.", "response": "def getIcon(self, glyph, isOpen, color=None):\n \"\"\" Returns a QIcon given a glyph name, open/closed state and color.\n\n The resulting icon is cached so that it only needs to be rendered once.\n\n :param glyph: name of a registered glyph (e.g. 'file', 'array')\n :param isOpen: boolean that indicates if the RTI is open or closed.\n :param color: '#RRGGBB' string (e.g. '#FF0000' for red)\n :return: QtGui.QIcon\n \"\"\"\n try:\n fileName = self._registry[(glyph, isOpen)]\n except KeyError:\n logger.warn(\"Unregistered icon glyph: {} (open={})\".format(glyph, isOpen))\n from argos.utils.misc import log_dictionary\n log_dictionary(self._registry, \"registry\", logger=logger)\n raise\n\n return self.loadIcon(fileName, color=color)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nread SVG from a file name and create a QIcon from it.", "response": "def loadIcon(self, fileName, color=None):\n \"\"\" Reads SVG from a file name and creates a QIcon from it.\n\n Optionally replaces the color. Caches the created icons.\n\n :param fileName: absolute path to an icon file.\n If False/empty/None, None returned, which yields no icon.\n :param color: '#RRGGBB' string (e.g. 
'#FF0000' for red)\n :return: QtGui.QIcon\n \"\"\"\n if not fileName:\n return None\n\n key = (fileName, color)\n if key not in self._icons:\n try:\n with open(fileName, 'r') as input:\n svg = input.read()\n\n self._icons[key] = self.createIconFromSvg(svg, color=color)\n except Exception as ex:\n # It's preferable to show no icon in case of an error rather than letting\n # the application fail. Icons are a (very) nice to have.\n logger.warn(\"Unable to read icon: {}\".format(ex))\n if DEBUGGING:\n raise\n else:\n return None\n\n return self._icons[key]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef createIconFromSvg(self, svg, color=None, colorsToBeReplaced=None):\n if colorsToBeReplaced is None:\n colorsToBeReplaced = self.colorsToBeReplaced\n\n if color:\n for oldColor in colorsToBeReplaced:\n svg = svg.replace(oldColor, color)\n\n # From http://stackoverflow.com/questions/15123544/change-the-color-of-an-svg-in-qt\n qByteArray = QtCore.QByteArray()\n qByteArray.append(svg)\n svgRenderer = QtSvg.QSvgRenderer(qByteArray)\n icon = QtGui.QIcon()\n for size in self.renderSizes:\n pixMap = QtGui.QPixmap(QtCore.QSize(size, size))\n pixMap.fill(Qt.transparent)\n pixPainter = QtGui.QPainter(pixMap)\n pixPainter.setRenderHint(QtGui.QPainter.TextAntialiasing, True)\n pixPainter.setRenderHint(QtGui.QPainter.Antialiasing, True)\n svgRenderer.render(pixPainter)\n pixPainter.end()\n icon.addPixmap(pixMap)\n\n return icon", "response": "Creates a QIcon from an SVG string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the data stored under the given role for the item referred to by the index.", "response": "def data(self, index, role=Qt.DisplayRole):\n \"\"\" Returns the data stored under the given role for the item referred to by the index.\n \"\"\"\n if not index.isValid():\n return None\n\n if role not in (Qt.DisplayRole, self.SORT_ROLE, Qt.ForegroundRole):\n 
return None\n\n row = index.row()\n col = index.column()\n item = self.registry.items[row]\n attrName = self.attrNames[col]\n\n if role == Qt.DisplayRole:\n return str(getattr(item, attrName))\n\n elif role == self.SORT_ROLE:\n # Use the fullName column as a tie-breaker\n return (getattr(item, attrName), item.fullName)\n\n elif role == Qt.ForegroundRole:\n if item.successfullyImported is None:\n return self.notImportedBrush\n elif item.successfullyImported:\n return self.regularBrush\n else:\n return self.errorBrush\n else:\n raise ValueError(\"Invalid role: {}\".format(role))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the data for the header for a particular section.", "response": "def headerData(self, section, orientation, role):\n \"\"\" Returns the header for a section (row or column depending on orientation).\n Reimplemented from QAbstractTableModel to make the headers start at 0.\n \"\"\"\n if role == QtCore.Qt.DisplayRole:\n if orientation == Qt.Horizontal:\n return self.attrNames[section]\n else:\n return str(section)\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef indexFromItem(self, regItem, col=0):\n if col < 0:\n col = len(self.attrNames) - col\n try:\n row = self.registry.items.index(regItem)\n except ValueError:\n return QtCore.QModelIndex()\n else:\n return self.index(row, col)", "response": "Gets the index of the item in the registry that contains the regItem"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nemit the dataChanged signal for the regItem", "response": "def emitDataChanged(self, regItem):\n \"\"\" Emits the dataChanged signal for the regItem\n \"\"\"\n leftIndex = self.indexFromItem(regItem, col=0)\n rightIndex = self.indexFromItem(regItem, col=-1)\n\n logger.debug(\"Data changed: {} ...{}\".format(self.data(leftIndex), self.data(rightIndex)))\n 
self.dataChanged.emit(leftIndex, rightIndex)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning True if the source row is accepted by the sourceModel.", "response": "def filterAcceptsRow(self, sourceRow, sourceParent):\n \"\"\" If onlyShowImported is True, regItems that were not (successfully) imported are\n filtered out.\n \"\"\"\n if not self.onlyShowImported:\n return True\n\n item = self.sourceModel().registry.items[sourceRow]\n return bool(item.successfullyImported)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning true if the value of the item referred to by the given index left is less than the value of the item referred to by the given index right.", "response": "def lessThan(self, leftIndex, rightIndex):\n \"\"\" Returns true if the value of the item referred to by the given index left is less than\n the value of the item referred to by the given index right, otherwise returns false.\n \"\"\"\n leftData = self.sourceModel().data(leftIndex, RegistryTableModel.SORT_ROLE)\n rightData = self.sourceModel().data(rightIndex, RegistryTableModel.SORT_ROLE)\n\n return leftData < rightData"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef itemFromIndex(self, index):\n sourceIndex = self.mapToSource(index)\n return self.sourceModel().itemFromIndex(sourceIndex)", "response": "Gets the item given the model index"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef indexFromItem(self, regItem, col=0):\n sourceIndex = self.sourceModel().indexFromItem(regItem, col=col)\n return self.mapFromSource(sourceIndex)", "response": "Gets the index of the item in the registry"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef emitDataChanged(self, regItem):\n #self.sourceModel().emitDataChanged(regItem) # Does this work?\n leftIndex = 
self.indexFromItem(regItem, col=0)\n rightIndex = self.indexFromItem(regItem, col=-1)\n self.dataChanged.emit(leftIndex, rightIndex)", "response": "Emits the dataChanged signal for the regItem\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setCurrentRegItem(self, regItem):\n rowIndex = self.model().indexFromItem(regItem)\n if not rowIndex.isValid():\n logger.warn(\"Can't select {!r} in table\".format(regItem))\n self.setCurrentIndex(rowIndex)", "response": "Sets the current registry item."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef persistentRegisterInspector(fullName, fullClassName, pythonPath=''):\n registry = InspectorRegistry()\n registry.loadOrInitSettings()\n registry.registerInspector(fullName, fullClassName, pythonPath=pythonPath)\n registry.saveSettings()", "response": "Registers an inspector in the persistent registry"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nopening the main window for the given files and returns the main window with the given settings.", "response": "def browse(fileNames=None,\n inspectorFullName=None,\n select=None,\n profile=DEFAULT_PROFILE,\n resetProfile=False, # TODO: should probably be moved to the main program\n resetAllProfiles=False, # TODO: should probably be moved to the main program\n resetRegistry=False): # TODO: should probably be moved to the main program\n \"\"\" Opens the main window(s) for the persistent settings of the given profile,\n and executes the application.\n\n :param fileNames: List of file names that will be added to the repository\n :param inspectorFullName: The full path name of the inspector that will be loaded\n :param select: a path of the repository item that will be selected at start up.\n :param profile: the name of the profile that will be loaded\n :param resetProfile: if True, the profile will be reset to its standard settings.\n :param resetAllProfiles: if True, 
all profiles will be reset to it standard settings.\n :param resetRegistry: if True, the registry will be reset to it standard settings.\n :return:\n \"\"\"\n # Imported here so this module can be imported without Qt being installed.\n from argos.qt import QtWidgets, QtCore\n from argos.application import ArgosApplication\n from argos.repo.testdata import createArgosTestData\n\n try:\n QtWidgets.QApplication.setAttribute(QtCore.Qt.AA_UseHighDpiPixmaps)\n except Exception as ex:\n logger.debug(\"AA_UseHighDpiPixmaps not available in PyQt4: {}\".format(ex))\n\n # Create\n argosApp = ArgosApplication()\n\n if resetProfile:\n argosApp.deleteProfile(profile)\n if resetAllProfiles:\n argosApp.deleteAllProfiles()\n if resetRegistry:\n argosApp.deleteRegistries()\n\n # Must be called before opening the files so that file formats are auto-detected.\n argosApp.loadOrInitRegistries()\n\n # Load data in common repository before windows are created.\n argosApp.loadFiles(fileNames)\n if DEBUGGING:\n argosApp.repo.insertItem(createArgosTestData())\n\n # Create windows for this profile.\n argosApp.loadProfile(profile=profile, inspectorFullName=inspectorFullName)\n\n if select:\n for mainWindow in argosApp.mainWindows:\n mainWindow.trySelectRtiByPath(select)\n\n return argosApp.execute()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef printInspectors():\n # Imported here so this module can be imported without Qt being installed.\n from argos.application import ArgosApplication\n\n argosApp = ArgosApplication()\n argosApp.loadOrInitRegistries()\n for regItem in argosApp.inspectorRegistry.items:\n print(regItem.fullName)", "response": "Prints a list of inspectors"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nstarting the main function of the main function of the main function of the main function of the main function of the main function of the main function of the main function of the main 
function.", "response": "def main():\n \"\"\" Starts Argos main window\n \"\"\"\n about_str = \"{} version: {}\".format(PROJECT_NAME, VERSION)\n parser = argparse.ArgumentParser(description = about_str)\n\n parser.add_argument('fileNames', metavar='FILE', nargs='*', help='Input files')\n\n parser.add_argument('-i', '--inspector', dest='inspector',\n help=\"\"\"The identifier or fullName of the inspector that will be opened at start up.\n E.g. 'Qt/Table'\"\"\")\n\n parser.add_argument('--list-inspectors', dest='list_inspectors', action = 'store_true',\n help=\"\"\"Prints a list of available inspectors for the -i option\"\"\")\n\n\n parser.add_argument('-s', '--select', dest='selectPath',\n help=\"\"\"Full path name of a repository tree item that will be selected at start-up.\n E.g. 'file/var/fieldname'\"\"\")\n\n parser.add_argument('-p', '--profile', dest='profile', default=DEFAULT_PROFILE,\n help=\"Can be used to have different persistent settings for different use cases.\")\n\n parser.add_argument('--reset', '--reset-profile', dest='reset_profile', action = 'store_true',\n help=\"If set, persistent settings will be reset for the current profile.\")\n\n parser.add_argument('--reset-all-profiles', dest='reset_all_profiles', action = 'store_true',\n help=\"If set, persistent settings will be reset for the all profiles.\")\n\n parser.add_argument('--reset-registry', dest='reset_registry', action = 'store_true',\n help=\"If set, the registry will be reset to contain only the default plugins.\")\n\n parser.add_argument('--version', action = 'store_true',\n help=\"Prints the program version.\")\n\n parser.add_argument('-l', '--log-level', dest='log_level', default='warning',\n help=\"Log level. Only log messages with a level higher or equal than this will be printed. 
\"\n \"Default: 'warning'\",\n choices=('debug', 'info', 'warning', 'error', 'critical'))\n\n args = parser.parse_args(remove_process_serial_number(sys.argv[1:]))\n\n if DEBUGGING:\n logger.info(\"Setting log level to: {}\".format(args.log_level.upper()))\n logger.setLevel(args.log_level.upper())\n\n if DEBUGGING:\n logger.warn(\"Debugging flag is on!\")\n\n if args.version:\n print(about_str)\n sys.exit(0)\n\n if args.list_inspectors:\n printInspectors()\n sys.exit(0)\n\n logger.info('Started {} {}'.format(PROJECT_NAME, VERSION))\n logger.info(\"Python version: {}\".format(sys.version).replace('\\n', ''))\n #logger.info('Using: {}'.format('PyQt' if USE_PYQT else 'PySide'))\n\n # Imported here so this module can be imported without Qt being installed.\n from argos.qt.misc import ABOUT_QT_BINDINGS\n logger.info(\"Using {}\".format(ABOUT_QT_BINDINGS))\n\n # Browse will create an ArgosApplication with one MainWindow\n browse(fileNames = args.fileNames,\n inspectorFullName=args.inspector,\n select=args.selectPath,\n profile=args.profile,\n resetProfile=args.reset_profile,\n resetAllProfiles=args.reset_all_profiles,\n resetRegistry=args.reset_registry)\n logger.info('Done {}'.format(PROJECT_NAME))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the model of the tree view", "response": "def setModel(self, model):\n \"\"\" Sets the model.\n Checks that the model is a BaseTreeModel.\n \"\"\"\n check_class(model, BaseTreeModel)\n super(ArgosTreeView, self).setModel(model)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the current item to be the item at currentIndex.", "response": "def setCurrentIndex(self, currentIndex):\n \"\"\" Sets the current item to be the item at currentIndex.\n Also selects the row, so as to give consistent user feedback.\n See also the notes at the top of this module on current item vs selected item(s).\n \"\"\"\n selectionModel = self.selectionModel()\n selectionFlags = 
(QtCore.QItemSelectionModel.ClearAndSelect |\n QtCore.QItemSelectionModel.Rows)\n selectionModel.setCurrentIndex(currentIndex, selectionFlags)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getRowCurrentIndex(self):\n curIndex = self.currentIndex()\n col0Index = curIndex.sibling(curIndex.row(), 0)\n return col0Index", "response": "Returns the index of column 0 of the current item in the underlying model."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfinding the current tree item and its index while we're at it. Returns a tuple with the current item and its index.", "response": "def getCurrentItem(self): # TODO: rename? getCurrentItemAndIndex? getCurrentTuple? getCurrent?\n \"\"\" Find the current tree item (and the current index while we're at it)\n Returns a tuple with the current item, and its index. The item may be None.\n See also the notes at the top of this module on current item vs selected item(s).\n \"\"\"\n currentIndex = self.getRowCurrentIndex()\n currentItem = self.model().getItem(currentIndex)\n return currentItem, currentIndex"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfollow the path and expand all nodes along the way.", "response": "def expandPath(self, path):\n \"\"\" Follows the path and expands all nodes along the way.\n Returns (item, index) tuple of the last node in the path (the leaf node). This can be\n reused e.g. 
to select it.\n \"\"\"\n iiPath = self.model().findItemAndIndexPath(path)\n for (item, index) in iiPath[1:]: # skip invisible root\n assert index.isValid(), \"Sanity check: invalid index in path for item: {}\".format(item)\n self.expand(index)\n\n leaf = iiPath[-1]\n return leaf"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nexpand or collapse the node at the index and all its descendants.", "response": "def expandBranch(self, index=None, expanded=True):\n \"\"\" Expands or collapses the node at the index and all its descendants.\n\n If expanded is True the nodes will be expanded, if False they will be collapsed.\n\n If parentIndex is None, the invisible root will be used (i.e. the complete forest will\n be expanded).\n \"\"\"\n treeModel = self.model()\n if index is None:\n index = QtCore.QModelIndex()\n\n if index.isValid():\n self.setExpanded(index, expanded)\n\n for rowNr in range(treeModel.rowCount(index)):\n childIndex = treeModel.index(rowNr, 0, parentIndex=index)\n self.expandBranch(index=childIndex, expanded=expanded)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nblocking or unblocking the signals of the child combo boxes and spin boxes.", "response": "def blockChildrenSignals(self, block):\n \"\"\" If block equals True, the signals of the combo boxes and spin boxes are blocked.\n Returns the old blocking state.\n \"\"\"\n logger.debug(\"Blocking collector signals\")\n for spinBox in self._spinBoxes:\n spinBox.blockSignals(block)\n for comboBox in self._comboBoxes:\n comboBox.blockSignals(block)\n result = self._signalsBlocked\n self._signalsBlocked = block\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _setColumnCountForContents(self):\n numRtiDims = self.rti.nDims if self.rti and self.rti.isSliceable else 0\n\n colCount = self.COL_FIRST_COMBO + max(numRtiDims, len(self.axisNames))\n 
self.tree.model().setColumnCount(colCount)\n return colCount", "response": "Sets the column count given the current axes and selected RTI. Returns the newly set column count."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove all comboboxes and set the axes names to axesNames.", "response": "def clearAndSetComboBoxes(self, axesNames):\n \"\"\" Removes all comboboxes and sets the axes names to axesNames.\n \"\"\"\n logger.debug(\"Collector clearAndSetComboBoxes: {}\".format(axesNames))\n check_is_a_sequence(axesNames)\n row = 0\n self._deleteComboBoxes(row)\n self.clear()\n self._setAxesNames(axesNames)\n self._createComboBoxes(row)\n self._updateWidgets()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _setAxesNames(self, axisNames):\n for col, _ in enumerate(self._fullAxisNames, self.COL_FIRST_COMBO):\n self._setHeaderLabel(col, '')\n\n self._axisNames = tuple(axisNames)\n self._fullAxisNames = tuple([axName + self.AXIS_POST_FIX for axName in axisNames])\n\n for col, label in enumerate(self._fullAxisNames, self.COL_FIRST_COMBO):\n self._setHeaderLabel(col, label)", "response": "Sets the axes names and combobox labels and updates the headers. 
Removes old values first."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nset the header label of column col to text.", "response": "def _setHeaderLabel(self, col, text):\n \"\"\" Sets the header of column col to text.\n Will increase the number of columns if col is larger than the current number.\n \"\"\"\n model = self.tree.model()\n item = model.horizontalHeaderItem(col)\n if item:\n item.setText(text)\n else:\n model.setHorizontalHeaderItem(col, QtGui.QStandardItem(text))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates the current VisItem from the contents of the repo tree item.", "response": "def setRti(self, rti):\n \"\"\" Updates the current VisItem from the contents of the repo tree item.\n\n Is a slot but the signal is usually connected to the Collector, which then calls\n this function directly.\n \"\"\"\n check_class(rti, BaseRti)\n #assert rti.isSliceable, \"RTI must be sliceable\" # TODO: maybe later\n\n self._rti = rti\n self._updateWidgets()\n self._updateRtiInfo()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupdate the combo and spin boxes given the new rti or axes.", "response": "def _updateWidgets(self):\n \"\"\" Updates the combo and spin boxes given the new rti or axes.\n Emits the sigContentsChanged signal.\n \"\"\"\n row = 0\n model = self.tree.model()\n\n # Create path label\n nodePath = '' if self.rti is None else self.rti.nodePath\n pathItem = QtGui.QStandardItem(nodePath)\n pathItem.setToolTip(nodePath)\n pathItem.setEditable(False)\n if self.rti is not None:\n pathItem.setIcon(self.rti.decoration)\n model.setItem(row, 0, pathItem)\n\n self._deleteSpinBoxes(row)\n self._populateComboBoxes(row)\n self._createSpinBoxes(row)\n self._updateRtiInfo()\n\n self.tree.resizeColumnsToContents(startCol=self.COL_FIRST_COMBO)\n\n logger.debug(\"{} sigContentsChanged signal (_updateWidgets)\"\n .format(\"Blocked\" if 
self.signalsBlocked() else \"Emitting\"))\n self.sigContentsChanged.emit(UpdateReason.RTI_CHANGED)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a combo box for each of the fullAxisNames", "response": "def _createComboBoxes(self, row):\n \"\"\" Creates a combo box for each of the fullAxisNames\n \"\"\"\n tree = self.tree\n model = self.tree.model()\n self._setColumnCountForContents()\n\n for col, _ in enumerate(self._axisNames, self.COL_FIRST_COMBO):\n logger.debug(\"Adding combobox at ({}, {})\".format(row, col))\n comboBox = QtWidgets.QComboBox()\n comboBox.setSizeAdjustPolicy(QtWidgets.QComboBox.AdjustToContents)\n comboBox.activated.connect(self._comboBoxActivated)\n self._comboBoxes.append(comboBox)\n\n #editor = LabeledWidget(QtWidgets.QLabel(comboLabel), comboBox)\n tree.setIndexWidget(model.index(row, col), comboBox)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _deleteComboBoxes(self, row):\n tree = self.tree\n model = self.tree.model()\n\n for col in range(self.COL_FIRST_COMBO, self.maxCombos):\n logger.debug(\"Removing combobox at: ({}, {})\".format(row, col))\n tree.setIndexWidget(model.index(row, col), None)\n\n self._comboBoxes = []", "response": "Deletes all comboboxes of a row"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\npopulating the comboboxes with values of the repo tree item .", "response": "def _populateComboBoxes(self, row):\n \"\"\" Populates the combo boxes with values of the repo tree item\n \"\"\"\n logger.debug(\"_populateComboBoxes\")\n for comboBox in self._comboBoxes:\n comboBox.clear()\n\n if not self.rtiIsSliceable:\n # Add an empty item to the combo boxes so that resize to contents works.\n for comboBoxNr, comboBox in enumerate(self._comboBoxes):\n comboBox.addItem('', userData=None)\n comboBox.setEnabled(False)\n return\n\n nDims = self._rti.nDims\n nCombos = len(self._comboBoxes)\n\n for 
comboBoxNr, comboBox in enumerate(self._comboBoxes):\n # Add a fake dimension of length 1\n comboBox.addItem(FAKE_DIM_NAME, userData = FAKE_DIM_OFFSET + comboBoxNr)\n\n for dimNr in range(nDims):\n comboBox.addItem(self._rti.dimensionNames[dimNr], userData=dimNr)\n\n # Set combobox current index\n if nDims >= nCombos:\n # We set the nth combo-box index to the last item - n. This is because the\n # NetCDF-CF conventions have the preferred dimension order of T, Z, Y, X.\n # The +1 below is from the fake dimension.\n curIdx = nDims + 1 - nCombos + comboBoxNr\n else:\n # If there are fewer dimensions in the RTI than the inspector can show, we fill\n # the comboboxes starting at the leftmost and set the remaining comboboxes to the\n # fake dimension. This means that a table inspector will have one column and many\n # rows, which is the most convenient.\n curIdx = comboBoxNr + 1 if comboBoxNr < nDims else 0\n\n assert 0 <= curIdx <= nDims + 1, \\\n \"curIdx should be <= {}, got {}\".format(nDims + 1, curIdx)\n\n comboBox.setCurrentIndex(curIdx)\n comboBox.setEnabled(True)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns True if the dimension is selected in one of the combo boxes.", "response": "def _dimensionSelectedInComboBox(self, dimNr):\n \"\"\" Returns True if the dimension is selected in one of the combo boxes.\n \"\"\"\n for combobox in self._comboBoxes:\n if self._comboBoxDimensionIndex(combobox) == dimNr:\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _createSpinBoxes(self, row):\n assert len(self._spinBoxes) == 0, \"Spinbox list not empty. 
Call _deleteSpinBoxes first\"\n\n if not self.rtiIsSliceable:\n return\n\n logger.debug(\"_createSpinBoxes, array shape: {}\".format(self._rti.arrayShape))\n\n self._setColumnCountForContents()\n\n tree = self.tree\n model = self.tree.model()\n col = self.COL_FIRST_COMBO + self.maxCombos\n\n for dimNr, dimSize in enumerate(self._rti.arrayShape):\n\n if self._dimensionSelectedInComboBox(dimNr):\n continue\n\n self._setHeaderLabel(col, '')\n\n spinBox = CollectorSpinBox()\n self._spinBoxes.append(spinBox)\n\n spinBox.setKeyboardTracking(False)\n spinBox.setCorrectionMode(QtWidgets.QAbstractSpinBox.CorrectToNearestValue)\n\n spinBox.setMinimum(0)\n spinBox.setMaximum(dimSize - 1)\n spinBox.setSingleStep(1)\n spinBox.setValue(dimSize // 2) # select the middle of the slice\n spinBox.setPrefix(\"{}: \".format(self._rti.dimensionNames[dimNr]))\n spinBox.setSuffix(\"/{}\".format(spinBox.maximum()))\n spinBox.setProperty(\"dim_nr\", dimNr)\n #spinBox.adjustSize() # necessary?\n\n # This must be done after setValue to prevent emitting too many signals\n spinBox.valueChanged[int].connect(self._spinboxValueChanged)\n\n tree.setIndexWidget(model.index(row, col), spinBox)\n col += 1\n\n # Resize the spinbox columns to their new contents\n self.tree.resizeColumnsToContents(startCol=self.COL_FIRST_COMBO + self.maxCombos)", "response": "Creates a list of SpinBox objects for each dimension that is not selected in a combo box."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving all spinboxes from the tree.", "response": "def _deleteSpinBoxes(self, row):\n \"\"\" Removes all spinboxes\n \"\"\"\n tree = self.tree\n model = self.tree.model()\n\n for col, spinBox in enumerate(self._spinBoxes, self.COL_FIRST_COMBO + self.maxCombos):\n spinBox.valueChanged[int].disconnect(self._spinboxValueChanged)\n tree.setIndexWidget(model.index(row, col), None)\n self._spinBoxes = []\n\n self._setColumnCountForContents()"} {"SOURCE": "codesearchnet", "instruction": 
"Can you tell what is the following Python 3 function doing\ndef _comboBoxActivated(self, index, comboBox=None):\n if comboBox is None:\n comboBox = self.sender()\n assert comboBox, \"comboBox not defined and not the sender\"\n\n blocked = self.blockChildrenSignals(True)\n\n # If one of the other combo boxes has the same value, set it to the fake dimension\n curDimIdx = self._comboBoxDimensionIndex(comboBox)\n if curDimIdx < FAKE_DIM_OFFSET:\n otherComboBoxes = [cb for cb in self._comboBoxes if cb is not comboBox]\n for otherComboBox in otherComboBoxes:\n if otherComboBox.currentIndex() == comboBox.currentIndex():\n #newIdx = otherComboBox.findData(FAKE_DIM_IDX)\n #otherComboBox.setCurrentIndex(newIdx)\n otherComboBox.setCurrentIndex(0) # Fake dimension is always the first\n\n # Show only spin boxes that are not selected\n row = 0\n self._deleteSpinBoxes(row)\n self._createSpinBoxes(row)\n self._updateRtiInfo()\n\n self.blockChildrenSignals(blocked)\n\n logger.debug(\"{} sigContentsChanged signal (comboBox)\"\n .format(\"Blocked\" if self.signalsBlocked() else \"Emitting\"))\n self.sigContentsChanged.emit(UpdateReason.COLLECTOR_COMBO_BOX)", "response": "Called when a combo box value has changed by the user."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncalling when a spin box value has changed.", "response": "def _spinboxValueChanged(self, index, spinBox=None):\n \"\"\" Is called when a spin box value was changed.\n\n Updates the spin boxes and sets other combo boxes having the same index to\n the fake dimension of length 1.\n \"\"\"\n if spinBox is None:\n spinBox = self.sender()\n assert spinBox, \"spinBox not defined and not the sender\"\n\n logger.debug(\"{} sigContentsChanged signal (spinBox)\"\n .format(\"Blocked\" if self.signalsBlocked() else \"Emitting\"))\n self.sigContentsChanged.emit(UpdateReason.COLLECTOR_SPIN_BOX)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it 
does\ndef getSlicedArray(self, copy=True):\n #logger.debug(\"getSlicedArray() called\")\n\n if not self.rtiIsSliceable:\n return None\n\n # The dimensions that are selected in the combo boxes will be set to slice(None),\n # the values from the spin boxes will be set as a single integer value\n nDims = self.rti.nDims\n sliceList = [slice(None)] * nDims\n\n for spinBox in self._spinBoxes:\n dimNr = spinBox.property(\"dim_nr\")\n sliceList[dimNr] = spinBox.value()\n\n # Make the array slicer. It needs to be a tuple, a list of only integers will be\n # interpreted as an index. With a tuple, array[(exp1, exp2, ..., expN)] is equivalent to\n # array[exp1, exp2, ..., expN].\n # See: http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html\n logger.debug(\"Array slice list: {}\".format(str(sliceList)))\n slicedArray = self.rti[tuple(sliceList)]\n\n # Make a copy to prevent inspectors from modifying the underlying array.\n if copy:\n slicedArray = ma.copy(slicedArray)\n\n # If there are no comboboxes the sliceList will contain no Slices objects, only ints. Then\n # the resulting slicedArray will be a usually a scalar (only structured fields may yield an\n # array). We convert this scalar to a zero-dimensional Numpy array so that inspectors\n # always get an array (having the same number of dimensions as the dimensionality of the\n # inspector, i.e. 
the number of comboboxes).\n if self.maxCombos == 0:\n slicedArray = ma.MaskedArray(slicedArray)\n\n # Post-condition type check\n check_is_an_array(slicedArray, np.ndarray)\n\n # Enforce the return type to be a masked array.\n if not isinstance(slicedArray, ma.MaskedArray):\n slicedArray = ma.MaskedArray(slicedArray)\n\n # Add fake dimensions of length 1 so that result.ndim will equal the number of combo boxes\n for dimNr in range(slicedArray.ndim, self.maxCombos):\n #logger.debug(\"Adding fake dimension: {}\".format(dimNr))\n slicedArray = ma.expand_dims(slicedArray, dimNr)\n\n # Post-condition dimension check\n assert slicedArray.ndim == self.maxCombos, \\\n \"Bug: getSlicedArray should return a {:d}D array, got: {}D\" \\\n .format(self.maxCombos, slicedArray.ndim)\n\n # Convert to ArrayWithMask class for working around issues with the numpy maskedarray\n awm = ArrayWithMask.createFromMaskedArray(slicedArray)\n del slicedArray\n\n # Shuffle the dimensions to be in the order as specified by the combo boxes\n comboDims = [self._comboBoxDimensionIndex(cb) for cb in self._comboBoxes]\n permutations = np.argsort(comboDims)\n logger.debug(\"slicedArray.shape: {}\".format(awm.data.shape))\n logger.debug(\"Transposing dimensions: {}\".format(permutations))\n awm = awm.transpose(permutations)\n\n awm.checkIsConsistent()\n\n return awm", "response": "Slice the RTI using a tuple of slices made from the values of the combo boxes and spin boxes."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a string representation of the slices that are used to get the sliced array.", "response": "def getSlicesString(self):\n \"\"\" Returns a string representation of the slices that are used to get the sliced array.\n For example returns '[:, 5]' if the combo box selects dimension 0 and the spin box 5.\n \"\"\"\n if not self.rtiIsSliceable:\n return ''\n\n # The dimensions that are selected in the combo boxes will be set to 
slice(None),\n # the values from the spin boxes will be set as a single integer value\n nDims = self.rti.nDims\n sliceList = [':'] * nDims\n\n for spinBox in self._spinBoxes:\n dimNr = spinBox.property(\"dim_nr\")\n sliceList[dimNr] = str(spinBox.value())\n\n # No need to shuffle combobox dimensions like in getSlicedArray; all combobox dimensions\n # yield a colon.\n\n return \"[\" + \", \".join(sliceList) + \"]\""} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the _rtiInfo property when a new RTI is set or the comboboxes value change.", "response": "def _updateRtiInfo(self):\n \"\"\" Updates the _rtiInfo property when a new RTI is set or the comboboxes value change.\n \"\"\"\n logger.debug(\"Updating self._rtiInfo\")\n # Info about the dependent dimension\n rti = self.rti\n if rti is None:\n info = {'slices': '',\n 'name': '',\n 'path': '',\n 'file-name': '',\n 'dir-name': '',\n 'base-name': '',\n 'unit': '',\n 'raw-unit': ''}\n else:\n dirName, baseName = os.path.split(rti.fileName)\n info = {'slices': self.getSlicesString(),\n 'name': rti.nodeName,\n 'path': rti.nodePath,\n 'file-name': rti.fileName,\n 'dir-name': dirName,\n 'base-name': baseName,\n 'unit': '({})'.format(rti.unit) if rti.unit else '',\n 'raw-unit': rti.unit}\n\n # Add the info of the independent dimensions (appended with the axis name of that dim).\n for axisName, comboBox in zip(self._axisNames, self._comboBoxes):\n dimName = comboBox.currentText()\n key = '{}-dim'.format(axisName.lower())\n info[key] = dimName\n\n self._rtiInfo = info"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating an RTI given an object.", "response": "def _createFromObject(obj, *args, **kwargs):\n \"\"\" Creates an RTI given an object. 
Auto-detects which RTI class to return.\n The *args and **kwargs parameters are passed to the RTI constructor.\n It is therefore important that all memory RTIs accept the same parameters in the\n constructor (with the exception of the FieldRti, which is not auto-detected).\n \"\"\"\n if is_a_sequence(obj):\n return SequenceRti(obj, *args, **kwargs)\n elif is_a_mapping(obj):\n return MappingRti(obj, *args, **kwargs)\n elif is_an_array(obj):\n return ArrayRti(obj, *args, **kwargs)\n elif isinstance(obj, bytearray):\n return ArrayRti(np.array(obj), *args, **kwargs)\n else:\n return ScalarRti(obj, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _subArrayShape(self):\n fieldName = self.nodeName\n fieldDtype = self._array.dtype.fields[fieldName][0]\n return fieldDtype.shape", "response": "Returns the shape of the sub-array."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef elementTypeName(self):\n if self._array is None:\n return super(FieldRti, self).elementTypeName\n else:\n fieldName = self.nodeName\n return str(self._array.dtype.fields[fieldName][0])", "response": "String representation of the element type."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a list with the dimension names of the underlying NCDF variable", "response": "def dimensionNames(self):\n \"\"\" Returns a list with the dimension names of the underlying NCDF variable\n \"\"\"\n mainArrayDims = ['Dim{}'.format(dimNr) for dimNr in range(self._array.ndim)]\n nSubDims = len(self._subArrayShape)\n subArrayDims = ['SubDim{}'.format(dimNr) for dimNr in range(nSubDims)]\n return mainArrayDims + subArrayDims"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the value to indicate missing data.", "response": "def missingDataValue(self):\n \"\"\" Returns the value to indicate missing 
data.\n \"\"\"\n value = getMissingDataValue(self._array)\n fieldNames = self._array.dtype.names\n\n # If the missing value attribute is a list with the same length as the number of fields,\n # return the missing value for the field that equals self.nodeName.\n if hasattr(value, '__len__') and len(value) == len(fieldNames):\n idx = fieldNames.index(self.nodeName)\n return value[idx]\n else:\n return value"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef elementTypeName(self):\n if self._array is None:\n return super(ArrayRti, self).elementTypeName\n else:\n dtype = self._array.dtype\n return '' if dtype.names else str(dtype)", "response": "String representation of the element type."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfetching all the children of this variable.", "response": "def _fetchAllChildren(self):\n \"\"\" Fetches all fields that this variable contains.\n Only variables with a structured data type can have fields.\n \"\"\"\n assert self.canFetchChildren(), \"canFetchChildren must be True\"\n\n childItems = []\n\n # Add fields in case of an array of structured type.\n if self._isStructured:\n for fieldName in self._array.dtype.names:\n childItem = FieldRti(self._array, nodeName=fieldName, fileName=self.fileName)\n childItem._iconColor = self.iconColor\n childItems.append(childItem)\n\n return childItems"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nevaluates the function and sets the internal array to the resulting array.", "response": "def _openResources(self):\n \"\"\" Evaluates the function to obtain an array\n \"\"\"\n arr = self._fun()\n check_is_an_array(arr)\n self._array = arr"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a child item for each 
column\n \"\"\"\n childItems = []\n for nr, elem in enumerate(self._sequence):\n childItem = _createFromObject(elem, \"elem-{}\".format(nr), self.fileName)\n childItem._iconColor = self.iconColor\n childItems.append(childItem)\n return childItems"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding a child item for each item", "response": "def _fetchAllChildren(self):\n \"\"\" Adds a child item for each item\n \"\"\"\n childItems = []\n logger.debug(\"{!r} _fetchAllChildren {!r}\".format(self, self.fileName))\n\n if self.hasChildren():\n for key, value in sorted(self._dictionary.items()):\n # TODO: pass the attributes to the children? (probably not)\n childItem = _createFromObject(value, str(key), self.fileName)\n childItem._iconColor = self.iconColor\n childItems.append(childItem)\n\n return childItems"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the string with debugging information about the current item.", "response": "def debugInfo(self):\n \"\"\" Returns the string with debugging information\n \"\"\"\n return (\"enabled = {}, min = {}, max = {}, step = {}, specVal = {}\"\n .format(self.enabled, self.minValue, self.maxValue, self.stepSize, self.specialValueText))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef createEditor(self, delegate, parent, option):\n return IntCtiEditor(self, delegate, parent=parent)", "response": "Creates a IntCtiEditor object for the given delegate and parent."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncall at clean up.", "response": "def finalize(self):\n \"\"\" Called at clean up. 
Is used to disconnect signals.\n \"\"\"\n self.spinBox.valueChanged.disconnect(self.commitChangedValue)\n super(IntCtiEditor, self).finalize()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndisconnect signals before the main window is destroyed.", "response": "def finalize(self):\n \"\"\" Is called before destruction (when closing).\n Can be used to clean-up resources.\n \"\"\"\n logger.debug(\"Finalizing: {}\".format(self))\n\n # Disconnect signals\n self.collector.sigContentsChanged.disconnect(self.collectorContentsChanged)\n self._configTreeModel.sigItemChanged.disconnect(self.configContentsChanged)\n self.sigInspectorChanged.disconnect(self.inspectorSelectionPane.updateFromInspectorRegItem)\n self.customContextMenuRequested.disconnect(self.showContextMenu)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates the UI widgets.", "response": "def __setupViews(self):\n \"\"\" Creates the UI widgets.\n \"\"\"\n self._collector = Collector(self.windowNumber)\n self.configWidget = ConfigWidget(self._configTreeModel)\n self.repoWidget = RepoWidget(self.argosApplication.repo, self.collector)\n # self._configTreeModel.insertItem(self.repoWidget.repoTreeView.config) # No configurable items yet\n\n # Define a central widget that will be the parent of the inspector widget.\n # We don't set the inspector directly as the central widget to retain the size when the\n # inspector is changed.\n widget = QtWidgets.QWidget()\n layout = QtWidgets.QVBoxLayout(widget)\n layout.setContentsMargins(CENTRAL_MARGIN, CENTRAL_MARGIN, CENTRAL_MARGIN, CENTRAL_MARGIN)\n layout.setSpacing(CENTRAL_SPACING)\n self.setCentralWidget(widget)\n\n # Must be after setInspector since that already draws the inspector\n self.collector.sigContentsChanged.connect(self.collectorContentsChanged)\n self._configTreeModel.sigItemChanged.connect(self.configContentsChanged)"} {"SOURCE": "codesearchnet", "instruction": "Can you 
generate a brief explanation for the following Python 3 code\ndef __setupMenus(self):\n if True:\n # Don't use self.menuBar(), on OS-X this is not shared across windows.\n # See: http://qt-project.org/doc/qt-4.8/qmenubar.html#details\n # And:http://qt-project.org/doc/qt-4.8/qmainwindow.html#menuBar\n menuBar = QtWidgets.QMenuBar() # Make a menu without parent.\n self.setMenuBar(menuBar)\n else:\n menuBar = self.menuBar()\n\n ### File Menu ###\n\n fileMenu = menuBar.addMenu(\"&File\")\n\n fileMenu.addAction(\"&New Window\", self.cloneWindow, QtGui.QKeySequence(\"Ctrl+N\"))\n fileMenu.addAction(\"Close &Window\", self.close, QtGui.QKeySequence.Close)\n\n fileMenu.addSeparator()\n\n action = fileMenu.addAction(\"Browse Directory...\",\n lambda: self.openFiles(fileMode = QtWidgets.QFileDialog.Directory))\n action.setShortcut(QtGui.QKeySequence(\"Ctrl+B\"))\n\n action = fileMenu.addAction(\"&Open Files...\",\n lambda: self.openFiles(fileMode = QtWidgets.QFileDialog.ExistingFiles))\n action.setShortcut(QtGui.QKeySequence(\"Ctrl+O\"))\n\n openAsMenu = fileMenu.addMenu(\"Open As\")\n for rtiRegItem in self.argosApplication.rtiRegistry.items:\n #rtiRegItem.tryImportClass()\n def createTrigger():\n \"Function to create a closure with the regItem\"\n _rtiRegItem = rtiRegItem # keep reference in closure\n return lambda: self.openFiles(rtiRegItem=_rtiRegItem,\n fileMode = QtWidgets.QFileDialog.ExistingFiles,\n caption=\"Open {}\".format(_rtiRegItem.name))\n\n action = QtWidgets.QAction(\"{}...\".format(rtiRegItem.name), self,\n enabled=True, # Since this is only executed at start-up, it must be static\n #enabled=bool(rtiRegItem.successfullyImported), # TODO: make this work?\n triggered=createTrigger())\n openAsMenu.addAction(action)\n\n fileMenu.addSeparator()\n\n # for action in self.repoWidget.repoTreeView.topLevelItemActionGroup.actions():\n # fileMenu.addAction(action)\n #\n # for action in self.repoWidget.repoTreeView.currentItemActionGroup.actions():\n # 
fileMenu.addAction(action)\n\n fileMenu.addSeparator()\n fileMenu.addAction(\"E&xit\", self.argosApplication.closeAllWindows, QtGui.QKeySequence.Quit)\n\n ### View Menu ###\n self.viewMenu = menuBar.addMenu(\"&View\")\n action = self.viewMenu.addAction(\"Installed &Plugins...\", self.execPluginsDialog)\n action.setShortcut(QtGui.QKeySequence(\"Ctrl+P\"))\n self.viewMenu.addSeparator()\n\n ### Inspector Menu ###\n self.execInspectorDialogAction = QtWidgets.QAction(\"&Browse Inspectors...\", self,\n triggered=self.execInspectorDialog)\n self.execInspectorDialogAction.setShortcut(QtGui.QKeySequence(\"Ctrl+i\"))\n\n self.inspectorActionGroup = self.__createInspectorActionGroup(self)\n self.inspectorMenu = menuBar.addMenu(\"Inspector\")\n addInspectorActionsToMenu(self.inspectorMenu, self.execInspectorDialogAction,\n self.inspectorActionGroup)\n\n ### Window Menu ###\n self.windowMenu = menuBar.addMenu(\"&Window\")\n\n # The action added to the menu in the repopulateWindowMenu method, which is called by\n # the ArgosApplication object every time a window is added or removed.\n self.activateWindowAction = QtWidgets.QAction(\"Window #{}\".format(self.windowNumber),\n self, triggered=self.activateAndRaise,\n checkable=True)\n if self.windowNumber <= 9:\n self.activateWindowAction.setShortcut(QtGui.QKeySequence(\"Alt+{}\"\n .format(self.windowNumber)))\n ### Help Menu ###\n menuBar.addSeparator()\n helpMenu = menuBar.addMenu(\"&Help\")\n helpMenu.addAction('&About...', self.about)\n\n if DEBUGGING:\n helpMenu.addSeparator()\n helpMenu.addAction(\"&Test\", self.myTest, \"Alt+T\")\n helpMenu.addAction(\"Add Test Data\", self.addTestData, \"Alt+A\")\n\n ### Context menu ###\n\n # Note that the dock-widgets have a Qt.PreventContextMenu context menu policy.\n # Therefor the context menu is only shown in the center widget.\n self.setContextMenuPolicy(Qt.CustomContextMenu)\n self.customContextMenuRequested.connect(self.showContextMenu)", "response": "Sets up the main menu."} 
{"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef __createInspectorActionGroup(self, parent):\n actionGroup = QtWidgets.QActionGroup(parent)\n actionGroup.setExclusive(True)\n\n sortedItems = sorted(self.argosApplication.inspectorRegistry.items,\n key=lambda item: item.identifier)\n shortCutNr = 1\n for item in sortedItems:\n logger.debug(\"item: {}\".format(item.identifier))\n setAndDrawFn = partial(self.setAndDrawInspectorById, item.identifier)\n action = QtWidgets.QAction(item.name, self, triggered=setAndDrawFn, checkable=True)\n action.setData(item.identifier)\n if shortCutNr <= 9 and \"debug\" not in item.identifier: # TODO: make configurable by the user\n action.setShortcut(QtGui.QKeySequence(\"Ctrl+{}\".format(shortCutNr)))\n shortCutNr += 1\n\n actionGroup.addAction(action)\n\n return actionGroup", "response": "Creates an action group with set-inspector actions for all installed inspectors."}
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef __setupDockWidgets(self):\n #self.dockWidget(self.currentInspectorPane, \"Current Inspector\", Qt.LeftDockWidgetArea)\n\n self.inspectorSelectionPane = InspectorSelectionPane(self.execInspectorDialogAction,\n self.inspectorActionGroup)\n self.sigInspectorChanged.connect(self.inspectorSelectionPane.updateFromInspectorRegItem)\n self.dockWidget(self.inspectorSelectionPane, \"Current Inspector\",\n area=Qt.LeftDockWidgetArea)\n\n self.dockWidget(self.repoWidget, \"Data Repository\", Qt.LeftDockWidgetArea)\n self.dockWidget(self.collector, \"Data Collector\", Qt.TopDockWidgetArea)\n # TODO: if the title == \"Settings\" it won't be added to the view menu.\n self.dockWidget(self.configWidget, \"Application Settings\", Qt.RightDockWidgetArea)\n\n self.viewMenu.addSeparator()\n\n propertiesPane = PropertiesPane(self.repoWidget.repoTreeView)\n self.dockDetailPane(propertiesPane, area=Qt.LeftDockWidgetArea)\n\n 
attributesPane = AttributesPane(self.repoWidget.repoTreeView)\n self.dockDetailPane(attributesPane, area=Qt.LeftDockWidgetArea)\n\n dimensionsPane = DimensionsPane(self.repoWidget.repoTreeView)\n self.dockDetailPane(dimensionsPane, area=Qt.LeftDockWidgetArea)\n\n # Add an extra separator on mac because OS-X adds an 'Enter Full Screen' item\n if sys.platform.startswith('darwin'):\n self.viewMenu.addSeparator()", "response": "Sets up the dock widgets."}
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef repopulateWinowMenu(self, actionGroup):\n for action in self.windowMenu.actions():\n self.windowMenu.removeAction(action)\n\n for action in actionGroup.actions():\n self.windowMenu.addAction(action)", "response": "Re-creates the window menu with the actions of the actionGroup."}
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef showContextMenu(self, pos):\n contextMenu = QtWidgets.QMenu()\n addInspectorActionsToMenu(contextMenu, self.execInspectorDialogAction,\n self.inspectorActionGroup)\n contextMenu.exec_(self.mapToGlobal(pos))", "response": "Shows the context menu at the given position."}
{"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a widget as a docked widget. Returns the added dockWidget", "response": "def dockWidget(self, widget, title, area):\n \"\"\" Adds a widget as a docked widget.\n Returns the added dockWidget\n \"\"\"\n assert widget.parent() is None, \"Widget already has a parent\"\n\n dockWidget = QtWidgets.QDockWidget(title, parent=self)\n dockWidget.setObjectName(\"dock_\" + string_to_identifier(title))\n dockWidget.setWidget(widget)\n\n # Prevent parent context menu (with e.g. 
'set inspector' option) to be displayed.\n dockWidget.setContextMenuPolicy(Qt.PreventContextMenu)\n\n self.addDockWidget(area, dockWidget)\n self.viewMenu.addAction(dockWidget.toggleViewAction())\n return dockWidget"}
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef dockDetailPane(self, detailPane, title=None, area=None):\n title = detailPane.classLabel() if title is None else title\n area = Qt.LeftDockWidgetArea if area is None else area\n dockWidget = self.dockWidget(detailPane, title, area)\n # TODO: undockDetailPane to disconnect\n dockWidget.visibilityChanged.connect(detailPane.dockVisibilityChanged)\n if len(self._detailDockWidgets) > 0:\n self.tabifyDockWidget(self._detailDockWidgets[-1], dockWidget)\n self._detailDockWidgets.append(dockWidget)\n return dockWidget", "response": "Creates a dockWidget for the detailPane and adds it to the window."}
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the window title from the window number, inspector, etc. Also updates the Window Menu", "response": "def updateWindowTitle(self):\n \"\"\" Updates the window title from the window number, inspector, etc\n Also updates the Window Menu\n \"\"\"\n self.setWindowTitle(\"{} #{} | {}-{}\".format(self.inspectorName, self.windowNumber,\n PROJECT_NAME, self.argosApplication.profile))\n #self.activateWindowAction.setText(\"{} window\".format(self.inspectorName, self.windowNumber))\n self.activateWindowAction.setText(\"{} window\".format(self.inspectorName))"}
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef execInspectorDialog(self):\n dialog = OpenInspectorDialog(self.argosApplication.inspectorRegistry, parent=self)\n dialog.setCurrentInspectorRegItem(self.inspectorRegItem)\n dialog.exec_()\n if dialog.result():\n inspectorRegItem = dialog.getCurrentInspectorRegItem()\n if inspectorRegItem is not None:\n 
self.getInspectorActionById(inspectorRegItem.identifier).trigger()", "response": "Opens the inspector dialog box to change the current inspector."}
{"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the inspector action that has the given identifier", "response": "def getInspectorActionById(self, identifier):\n \"\"\" Returns the inspector action with the given identifier.\n Raises a KeyError if no action with that identifier is found.\n \"\"\"\n for action in self.inspectorActionGroup.actions():\n if action.data() == identifier:\n return action\n raise KeyError(\"No action found with ID: {!r}\".format(identifier))"}
{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setAndDrawInspectorById(self, identifier):\n self.setInspectorById(identifier)\n\n # Show dialog box if import was unsuccessful.\n regItem = self.inspectorRegItem\n if regItem and not regItem.successfullyImported:\n msg = \"Unable to import {} inspector.\\n{}\".format(regItem.identifier, regItem.exception)\n QtWidgets.QMessageBox.warning(self, \"Warning\", msg)\n logger.warn(msg)\n\n self.drawInspectorContents(reason=UpdateReason.INSPECTOR_CHANGED)", "response": "Sets the inspector and draws the contents."}
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting the central inspector widget given an ID.", "response": "def setInspectorById(self, identifier):\n \"\"\" Sets the central inspector widget given an inspector ID.\n\n If identifier is None, the inspector will be unset. Otherwise it will look up the\n inspector class in the registry. It will raise a KeyError if the ID is not found there.\n\n It will do an import of the inspector code if it's loaded for the first time. 
If the\n inspector class cannot be imported a warning is logged and the inspector is unset.\n\n NOTE: does not draw the new inspector, this is the responsibility of the caller.\n Also, the corresponding action is not triggered.\n\n Emits the sigInspectorChanged(self.inspectorRegItem)\n \"\"\"\n logger.info(\"Setting inspector: {}\".format(identifier))\n\n # Use the identifier to find a registered inspector and set self.inspectorRegItem.\n # Then create an inspector object from it.\n\n oldInspectorRegItem = self.inspectorRegItem\n oldInspector = self.inspector\n\n if not identifier:\n inspector = None\n self._inspectorRegItem = None\n else:\n inspectorRegistry = self.argosApplication.inspectorRegistry\n inspectorRegItem = inspectorRegistry.getItemById(identifier)\n\n self._inspectorRegItem = inspectorRegItem\n if inspectorRegItem is None:\n inspector = None\n else:\n try:\n inspector = inspectorRegItem.create(self.collector, tryImport=True)\n except ImportError as ex:\n # Only log the error. No dialog box or user interaction here because this\n # function may be called at startup.\n logger.exception(\"Clearing inspector. 
Unable to create {!r} because {}\"\n .format(inspectorRegItem.identifier, ex))\n inspector = None\n self.getInspectorActionById(identifier).setEnabled(False)\n\n if DEBUGGING:\n raise\n\n ######################\n # Set self.inspector #\n ######################\n\n check_class(inspector, AbstractInspector, allow_none=True)\n\n logger.debug(\"Disabling updates.\")\n self.setUpdatesEnabled(False)\n try:\n centralLayout = self.centralWidget().layout()\n\n # Delete old inspector\n if oldInspector is None: # can be None at start-up\n oldConfigPosition = None # Last top level element in the config tree.\n else:\n self._updateNonDefaultsForInspector(oldInspectorRegItem, oldInspector)\n\n # Remove old inspector configuration from tree\n oldConfigPosition = oldInspector.config.childNumber()\n configPath = oldInspector.config.nodePath\n _, oldConfigIndex = self._configTreeModel.findItemAndIndexPath(configPath)[-1]\n self._configTreeModel.deleteItemAtIndex(oldConfigIndex)\n\n oldInspector.finalize() # TODO: before removing config\n centralLayout.removeWidget(oldInspector)\n oldInspector.deleteLater()\n\n # Set new inspector\n self._inspector = inspector\n\n # Update collector widgets and the config tree\n oldBlockState = self.collector.blockSignals(True)\n try:\n if self.inspector is None:\n self.collector.clearAndSetComboBoxes([])\n else:\n # Add and apply config values to the inspector\n key = self.inspectorRegItem.identifier\n nonDefaults = self._inspectorsNonDefaults.get(key, {})\n logger.debug(\"Setting non defaults: {}\".format(nonDefaults))\n self.inspector.config.setValuesFromDict(nonDefaults)\n self._configTreeModel.insertItem(self.inspector.config, oldConfigPosition)\n self.configWidget.configTreeView.expandBranch()\n self.collector.clearAndSetComboBoxes(self.inspector.axesNames())\n centralLayout.addWidget(self.inspector)\n finally:\n self.collector.blockSignals(oldBlockState)\n finally:\n logger.debug(\"Enabling updates.\")\n self.setUpdatesEnabled(True)\n\n 
self.updateWindowTitle()\n\n logger.debug(\"Emitting sigInspectorChanged({})\".format(self.inspectorRegItem))\n self.sigInspectorChanged.emit(self.inspectorRegItem)"}
{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _updateNonDefaultsForInspector(self, inspectorRegItem, inspector):\n if inspectorRegItem and inspector:\n key = inspectorRegItem.identifier\n logger.debug(\"_updateNonDefaultsForInspector: {} {}\"\n .format(key, type(inspector)))\n self._inspectorsNonDefaults[key] = inspector.config.getNonDefaultsDict()\n else:\n logger.debug(\"_updateNonDefaultsForInspector: no inspector\")", "response": "Updates the non-default config values for the current inspector in a local dictionary."}
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nshowing the plugins dialog with the registered plugins", "response": "def execPluginsDialog(self):\n \"\"\" Shows the plugins dialog with the registered plugins\n \"\"\"\n pluginsDialog = PluginsDialog(parent=self,\n inspectorRegistry=self.argosApplication.inspectorRegistry,\n rtiRegistry=self.argosApplication.rtiRegistry)\n pluginsDialog.exec_()"}
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncall when a config tree item has been changed.", "response": "def configContentsChanged(self, configTreeItem):\n \"\"\" Slot that is called when an item has been changed by setData of the ConfigTreeModel.\n Will draw the window contents.\n \"\"\"\n logger.debug(\"configContentsChanged: {}\".format(configTreeItem))\n self.drawInspectorContents(reason=UpdateReason.CONFIG_CHANGED,\n origin=configTreeItem)"}
{"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndrawing all contents of this window's inspector.", "response": "def drawInspectorContents(self, reason, origin=None):\n \"\"\" Draws all contents of this window's inspector.\n The reason and origin parameters are passed on to the inspector's updateContents 
method.\n\n :param reason: string describing the reason for the redraw.\n Should preferably be one of the UpdateReason enumeration class, but new values may\n be used (which are then ignored by existing inspectors).\n :param origin: object with extra info on the reason\n \"\"\"\n logger.debug(\"\")\n logger.debug(\"-------- Drawing inspector of window: {} --------\".format(self.windowTitle()))\n if self.inspector:\n self.inspector.updateContents(reason=reason, initiator=origin)\n else:\n logger.debug(\"No inspector selected\")\n logger.debug(\"Finished draw inspector.\\n\")"}
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nletting the user select one or more files and opening them.", "response": "def openFiles(self, fileNames=None, rtiRegItem=None, caption=None, fileMode=None):\n \"\"\" Lets the user select one or more files and opens them.\n\n :param fileNames: If None an open-file dialog allows the user to select files,\n otherwise the files are opened directly.\n :param rtiRegItem: Open the files as this type of registered RTI. 
None=autodetect.\n :param caption: Optional caption for the file dialog.\n :param fileMode: is passed to the file dialog.\n :rtype fileMode: QtWidgets.QFileDialog.FileMode constant\n \"\"\"\n if fileNames is None:\n dialog = QtWidgets.QFileDialog(self, caption=caption)\n\n if rtiRegItem is None:\n nameFilter = 'All files (*);;' # Default show all files.\n nameFilter += self.argosApplication.rtiRegistry.getFileDialogFilter()\n else:\n nameFilter = rtiRegItem.getFileDialogFilter()\n nameFilter += ';;All files (*)'\n dialog.setNameFilter(nameFilter)\n\n if fileMode:\n dialog.setFileMode(fileMode)\n\n if dialog.exec_() == QtWidgets.QFileDialog.Accepted:\n fileNames = dialog.selectedFiles()\n else:\n fileNames = []\n\n fileRootIndex = None\n for fileName in fileNames:\n rtiClass = rtiRegItem.getClass(tryImport=True) if rtiRegItem else None\n fileRootIndex = self.argosApplication.repo.loadFile(fileName, rtiClass=rtiClass)\n self.repoWidget.repoTreeView.setExpanded(fileRootIndex, True)\n\n # Select last opened file\n if fileRootIndex is not None:\n self.repoWidget.repoTreeView.setCurrentIndex(fileRootIndex)"}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ntry to select a repository tree item given a path.", "response": "def trySelectRtiByPath(self, path):\n \"\"\" Selects a repository tree item given a path, expanding nodes along the way if needed.\n\n Returns (item, index) if the path was selected successfully, else a warning is logged\n and (None, None) is returned.\n \"\"\"\n try:\n lastItem, lastIndex = self.repoWidget.repoTreeView.expandPath(path)\n self.repoWidget.repoTreeView.setCurrentIndex(lastIndex)\n return lastItem, lastIndex\n except Exception as ex:\n logger.warn(\"Unable to select {!r} because: {}\".format(path, ex))\n if DEBUGGING:\n raise\n return None, None"}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreading the persistent program settings and restoring the header state.", "response": "def readViewSettings(self, settings=None): # TODO: rename to readProfile?\n \"\"\" Reads the persistent program settings\n\n :param settings: optional QSettings object which can have a group already opened.\n :returns: True if the header state was restored, otherwise returns False\n \"\"\"\n if settings is None:\n settings = QtCore.QSettings()\n logger.debug(\"Reading settings from: {}\".format(settings.group()))\n\n self.restoreGeometry(settings.value(\"geometry\"))\n self.restoreState(settings.value(\"state\"))\n\n self.repoWidget.repoTreeView.readViewSettings('repo_tree/header_state', settings)\n self.configWidget.configTreeView.readViewSettings('config_tree/header_state', settings)\n\n #self._configTreeModel.readModelSettings('config_model', settings)\n settings.beginGroup('cfg_inspectors')\n try:\n for key in settings.childKeys():\n json = settings.value(key)\n self._inspectorsNonDefaults[key] = ctiLoads(json)\n finally:\n settings.endGroup()\n\n identifier = settings.value(\"inspector\", None)\n try:\n if identifier:\n self.setInspectorById(identifier)\n except KeyError as ex:\n logger.warn(\"No inspector with ID {!r}: {}\".format(identifier, ex))"}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nwriting the view settings to the persistent store", "response": "def saveProfile(self, settings=None):\n \"\"\" Writes the view settings to the persistent store\n \"\"\"\n self._updateNonDefaultsForInspector(self.inspectorRegItem, self.inspector)\n\n if settings is None:\n settings = QtCore.QSettings()\n logger.debug(\"Writing settings to: {}\".format(settings.group()))\n\n settings.beginGroup('cfg_inspectors')\n try:\n for key, nonDefaults in self._inspectorsNonDefaults.items():\n if nonDefaults:\n settings.setValue(key, ctiDumps(nonDefaults))\n logger.debug(\"Writing non defaults for {}: {}\".format(key, nonDefaults))\n finally:\n settings.endGroup()\n\n 
self.configWidget.configTreeView.saveProfile(\"config_tree/header_state\", settings)\n self.repoWidget.repoTreeView.saveProfile(\"repo_tree/header_state\", settings)\n\n settings.setValue(\"geometry\", self.saveGeometry())\n settings.setValue(\"state\", self.saveState())\n\n identifier = self.inspectorRegItem.identifier if self.inspectorRegItem else ''\n settings.setValue(\"inspector\", identifier)"}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a new window with the same inspector as the current window.", "response": "def cloneWindow(self):\n \"\"\" Opens a new window with the same inspector as the current window.\n \"\"\"\n # Save current window settings.\n settings = QtCore.QSettings()\n settings.beginGroup(self.argosApplication.windowGroupName(self.windowNumber))\n try:\n self.saveProfile(settings)\n\n # Create new window with the freshly baked settings of the current window.\n name = self.inspectorRegItem.fullName\n newWindow = self.argosApplication.addNewMainWindow(settings=settings,\n inspectorFullName=name)\n finally:\n settings.endGroup()\n\n # Select the current item in the new window.\n currentItem, _currentIndex = self.repoWidget.repoTreeView.getCurrentItem()\n if currentItem:\n newWindow.trySelectRtiByPath(currentItem.nodePath)\n\n # Move the new window 24 pixels to the bottom right and raise it to the front.\n newGeomRect = newWindow.geometry()\n logger.debug(\"newGeomRect: x={}\".format(newGeomRect.x()))\n newGeomRect.moveTo(newGeomRect.x() + 24, newGeomRect.y() + 24)\n\n newWindow.setGeometry(newGeomRect)\n logger.debug(\"newGeomRect: x={}\".format(newGeomRect.x()))\n\n newWindow.raise_()"}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nactivating and raising the window.", "response": "def activateAndRaise(self):\n \"\"\" Activates and raises the window.\n \"\"\"\n logger.debug(\"Activating and raising window: {}\".format(self.windowNumber))\n self.activateWindow()\n self.raise_()"}
{"SOURCE": 
"codesearchnet", "instruction": "How would you code a function in Python 3 to\ndetect the WindowActivate event and pass all events through to the super class.", "response": "def event(self, ev):\n \"\"\" Detects the WindowActivate event. Passes all events through to the super class.\n \"\"\"\n if ev.type() == QtCore.QEvent.WindowActivate:\n logger.debug(\"Window activated: {}\".format(self.windowNumber))\n self.activateWindowAction.setChecked(True)\n\n return super(MainWindow, self).event(ev)"}
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef closeEvent(self, event):\n logger.debug(\"closeEvent\")\n self.argosApplication.saveSettingsIfNeeded()\n self.finalize()\n self.argosApplication.removeMainWindow(self)\n event.accept()\n logger.debug(\"closeEvent accepted\")", "response": "Called when the window is closed; saves the settings and removes the window from the application."}
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nshowing the about message window.", "response": "def about(self):\n \"\"\" Shows the about message window.\n \"\"\"\n aboutDialog = AboutDialog(parent=self)\n aboutDialog.show()\n aboutDialog.addDependencyInfo()"}
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef profileGroupName(self, profile=None):\n profile = profile if profile else self.profile\n profGroupName = '_debug_' if DEBUGGING else ''\n profGroupName += string_to_identifier(profile)\n return profGroupName", "response": "Returns the name of the QSetting group for the given profile."}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the name of the QSetting group for this window in the current profile.", "response": "def windowGroupName(self, windowNumber, profile=None):\n \"\"\" Returns the name of the QSetting group for this window in the given profile.\n\n :param windowNumber: int\n :param profile: profile name. 
If None the current profile is used.\n \"\"\"\n return \"{}/window-{:02d}\".format(self.profileGroupName(profile=profile), windowNumber)"}
{"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef deleteProfile(self, profile):\n profGroupName = self.profileGroupName(profile)\n logger.debug(\"Resetting profile settings: {}\".format(profGroupName))\n settings = QtCore.QSettings()\n settings.remove(profGroupName)", "response": "Removes a profile from the persistent settings"}
{"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndeleting all profiles from the persistent settings", "response": "def deleteAllProfiles(self):\n \"\"\" Deletes all profiles from the persistent settings\n \"\"\"\n settings = QtCore.QSettings()\n for profGroupName in QtCore.QSettings().childGroups():\n settings.remove(profGroupName)"}
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef loadProfile(self, profile, inspectorFullName=None):\n settings = QtCore.QSettings()\n logger.info(\"Reading profile {!r} from: {}\".format(profile, settings.fileName()))\n\n self._profile = profile\n profGroupName = self.profileGroupName(profile)\n\n # Instantiate windows from groups\n settings.beginGroup(profGroupName)\n try:\n for windowGroupName in settings.childGroups():\n if windowGroupName.startswith('window'):\n settings.beginGroup(windowGroupName)\n try:\n self.addNewMainWindow(settings=settings)\n finally:\n settings.endGroup()\n finally:\n settings.endGroup()\n\n if inspectorFullName is not None:\n windows = [win for win in self._mainWindows\n if win.inspectorFullName == inspectorFullName]\n if len(windows) == 0:\n logger.info(\"Creating window for inspector: {!r}\".format(inspectorFullName))\n try:\n win = self.addNewMainWindow(inspectorFullName=inspectorFullName)\n except KeyError:\n logger.warn(\"No inspector found with ID: {}\".format(inspectorFullName))\n else:\n for win in windows:\n 
win.raise_()\n\n if len(self.mainWindows) == 0:\n logger.info(\"No open windows in profile (creating one).\")\n self.addNewMainWindow(inspectorFullName=DEFAULT_INSPECTOR)", "response": "Reads the persistent program settings for the current profile."}
{"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef saveProfile(self):\n if not self.profile:\n logger.warning(\"No profile defined (no settings saved)\")\n return\n\n settings = QtCore.QSettings()\n logger.debug(\"Writing settings to: {}\".format(settings.fileName()))\n\n profGroupName = self.profileGroupName()\n settings.remove(profGroupName) # start with a clean slate\n\n assert self.mainWindows, \"no main windows found\"\n for winNr, mainWindow in enumerate(self.mainWindows):\n settings.beginGroup(self.windowGroupName(winNr))\n try:\n mainWindow.saveProfile(settings)\n finally:\n settings.endGroup()", "response": "Writes the current profile settings to the persistent store."}
{"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsave the persistent settings. Only saves the profile.", "response": "def saveSettings(self):\n \"\"\" Saves the persistent settings. 
Only saves the profile.\n \"\"\"\n try:\n self.saveProfile()\n except Exception as ex:\n # Continue, even if saving the settings fails.\n logger.warn(ex)\n if DEBUGGING:\n raise\n finally:\n self._settingsSaved = True"}
{"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nloading files into the repository as tree items of class rtiClass.", "response": "def loadFiles(self, fileNames, rtiClass=None):\n \"\"\" Loads files into the repository as repo tree items of class rtiClass.\n Auto-detects using the extensions when rtiClass is None\n \"\"\"\n for fileName in fileNames:\n self.repo.loadFile(fileName, rtiClass=rtiClass)"}
{"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef addNewMainWindow(self, settings=None, inspectorFullName=None):\n mainWindow = MainWindow(self)\n self.mainWindows.append(mainWindow)\n\n self.windowActionGroup.addAction(mainWindow.activateWindowAction)\n self.repopulateAllWindowMenus()\n\n if settings:\n mainWindow.readViewSettings(settings)\n\n if inspectorFullName:\n inspectorId = nameToIdentifier(inspectorFullName)\n mainWindow.setInspectorById(inspectorId)\n\n if mainWindow.inspectorRegItem: # can be None at start\n inspectorId = mainWindow.inspectorRegItem.identifier\n mainWindow.getInspectorActionById(inspectorId).setChecked(True)\n logger.info(\"Created new window with inspector: {}\"\n .format(mainWindow.inspectorRegItem.fullName))\n else:\n logger.info(\"Created new window without inspector\")\n\n mainWindow.drawInspectorContents(reason=UpdateReason.NEW_MAIN_WINDOW)\n mainWindow.show()\n\n if sys.platform.startswith('darwin'):\n # Calling raise before the QApplication.exec_ only shows the last window\n # that was added. Therefore we also call activeWindow. However, this may not\n # always be desirable. 
TODO: make optional?\n mainWindow.raise_()\n pass\n\n return mainWindow", "response": "Creates and shows a new MainWindow."}
{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef removeMainWindow(self, mainWindow):\n logger.debug(\"removeMainWindow called\")\n\n self.windowActionGroup.removeAction(mainWindow.activateWindowAction)\n self.repopulateAllWindowMenus()\n\n self.mainWindows.remove(mainWindow)", "response": "Removes the mainWindow from the list of windows."}
{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef raiseAllWindows(self):\n logger.debug(\"raiseAllWindows called\")\n for mainWindow in self.mainWindows:\n logger.debug(\"Raising {}\".format(mainWindow._instanceNr))\n mainWindow.raise_()", "response": "Raises all application windows."}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nquitting the application (called when the last window is closed)", "response": "def quit(self):\n \"\"\" Quits the application (called when the last window is closed)\n \"\"\"\n logger.debug(\"ArgosApplication.quit called\")\n assert len(self.mainWindows) == 0, \\\n \"Bug: still {} windows present at application quit!\".format(len(self.mainWindows))\n self.qApplication.quit()"}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nexecute all main windows by starting the Qt main application.", "response": "def execute(self):\n \"\"\" Executes all main windows by starting the Qt main application\n \"\"\"\n logger.info(\"Starting Argos event loop...\")\n exitCode = self.qApplication.exec_()\n logger.info(\"Argos event loop finished with exit code: {}\".format(exitCode))\n return exitCode"}
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _openResources(self):\n with Image.open(self._fileName) as image:\n self._array = 
np.asarray(image)\n self._bands = image.getbands()\n\n # Fill attributes. For now assume that the info items are not overridden by\n # the Image items.\n self._attributes = dict(image.info)\n self._attributes['Format'] = image.format\n self._attributes['Mode'] = image.mode\n self._attributes['Size'] = image.size\n self._attributes['Width'] = image.width\n self._attributes['Height'] = image.height", "response": "Opens the underlying file and reads the image array and bands."}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding the bands as separate fields so they can be inspected easily.", "response": "def _fetchAllChildren(self):\n \"\"\" Adds the bands as separate fields so they can be inspected easily.\n \"\"\"\n bands = self._bands\n if len(bands) != self._array.shape[-1]:\n logger.warn(\"No bands added, bands != last_dim_length ({} != {})\"\n .format(len(bands), self._array.shape[-1]))\n return []\n\n childItems = []\n for bandNr, band in enumerate(bands):\n bandItem = PillowBandRti(self._array[..., bandNr],\n nodeName=band, fileName=self.fileName,\n iconColor=self.iconColor, attributes=self._attributes)\n childItems.append(bandItem)\n return childItems"}
{"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a list of dimension names for this PillowFileRti.", "response": "def dimensionNames(self):\n \"\"\" Returns ['Y', 'X', 'Band'].\n The underlying array is expected to be 3-dimensional. If this is not the case we fall\n back on the default dimension names ['Dim-0', 'Dim-1', ...]\n \"\"\"\n if self._array is None:\n return []\n\n if self._array.ndim == 2:\n return ['Y', 'X']\n elif self._array.ndim == 3:\n return ['Y', 'X', 'Band']\n else:\n # Defensive programming: fall back on default names\n msg = \"Expected 3D image. 
Got: {}\".format(self._array.ndim)\n if DEBUGGING:\n raise ValueError(msg)\n logger.warn(msg)\n return super(PillowFileRti, self).dimensionNames"}
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef createEditor(self, delegate, parent, option):\n return UntypedCtiEditor(self, delegate, parent=parent)", "response": "Creates an UntypedCtiEditor object."}
{"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the width of the label text in pixels.", "response": "def labelTextWidth(label):\n \"\"\" Returns the width of the label text in pixels.\n\n IMPORTANT: does not work when the labels are styled using style sheets.\n\n Unfortunately it is not possible to retrieve the settings (e.g. padding) that were set by the\n style sheet without parsing the style sheet as text.\n \"\"\"\n # The Qt source shows that fontMetrics().size calls fontMetrics().boundingRect with\n # the TextLongestVariant included in the flags. TextLongestVariant is an internal flag\n # which is used to force selecting the longest string in a multi-length string.\n # See: http://stackoverflow.com/a/8638114/625350\n fontMetrics = label.fontMetrics()\n #contentsWidth = label.fontMetrics().boundingRect(label.text()).width()\n contentsWidth = fontMetrics.size(label.alignment(), label.text()).width()\n\n # If indent is negative, or if no indent has been set, the label computes the effective indent\n # as follows: If frameWidth() is 0, the effective indent becomes 0. 
If frameWidth() is greater\n # than 0, the effective indent becomes half the width of the \"x\" character of the widget's\n # current font().\n # See http://doc.qt.io/qt-4.8/qlabel.html#indent-prop\n if label.indent() < 0 and label.frameWidth(): # no indent, but we do have a frame\n indent = fontMetrics.width('x') / 2 - label.margin()\n indent *= 2 # the indent seems to be added to the other side as well\n else:\n indent = label.indent()\n\n result = contentsWidth + indent + 2 * label.frameWidth() + 2 * label.margin()\n\n if 1:\n #print (\"contentsMargins: {}\".format(label.getContentsMargins()))\n #print (\"contentsRect: {}\".format(label.contentsRect()))\n #print (\"frameRect: {}\".format(label.frameRect()))\n print (\"contentsWidth: {}\".format(contentsWidth))\n print (\"lineWidth: {}\".format(label.lineWidth()))\n print (\"midLineWidth: {}\".format(label.midLineWidth()))\n print (\"frameWidth: {}\".format(label.frameWidth()))\n print (\"margin: {}\".format(label.margin()))\n print (\"indent: {}\".format(label.indent()))\n print (\"actual indent: {}\".format(indent))\n print (\"RESULT: {}\".format(result))\n print ()\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the maximum width of the labels in the given list of objects.", "response": "def harmonizeLabelsTextWidth(labels, width=None):\n \"\"\" Sets the the maximum width of the labels\n If width is None, the maximum width is calculated using labelsMaxTextWidth()\n \"\"\"\n if width is None:\n width = labelsMaxTextWidth(labels)\n\n for label in labels:\n #label.setFixedWidth(width)\n label.setMinimumWidth(width)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef jsonAsCti(dct):\n if '_class_'in dct:\n full_class_name = dct['_class_'] # TODO: how to handle the full_class_name?\n cls = import_symbol(full_class_name)\n return cls.createFromJsonDict(dct)\n else:\n return dct", "response": "Returns a CTI 
given a dictionary of attributes."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef enableBranch(self, enabled):\n self.enabled = enabled\n for child in self.childItems:\n child.enableBranch(enabled)", "response": "Sets the enabled member to True or False for a node and all its children."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef resetToDefault(self, resetChildren=True):\n self.data = self.defaultData\n if resetChildren:\n for child in self.childItems:\n child.resetToDefault(resetChildren=True)", "response": "Resets the data to the default data."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrefresh the configuration tree from the target.", "response": "def refreshFromTarget(self, level=0):\n \"\"\" Refreshes the configuration tree from the target it monitors (if present).\n Recursively call _refreshNodeFromTarget for itself and all children. 
Subclasses should\n typically override _refreshNodeFromTarget instead of this function.\n During updateTarget's execution refreshFromTarget is blocked to avoid loops.\n \"\"\"\n if self.getRefreshBlocked():\n logger.debug(\"_refreshNodeFromTarget blocked\")\n return\n\n if False and level == 0:\n logger.debug(\"refreshFromTarget: {}\".format(self.nodePath))\n\n self._refreshNodeFromTarget()\n for child in self.childItems:\n child.refreshFromTarget(level=level + 1)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef updateTarget(self, level=0):\n #if level == 0:\n # logger.debug(\"updateTarget: {}\".format(self.nodePath))\n\n self._updateTargetFromNode()\n for child in self.childItems:\n child.updateTarget(level = level + 1)", "response": "Applies the configuration to the target it monitors (if present)."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _nodeGetNonDefaultsDict(self):\n dct = {}\n isEditable = bool(int(self.valueColumnItemFlags) & Qt.ItemIsEditable)\n if (self.data != self.defaultData and self.enabled and isEditable):\n dct['data'] = self.data\n return dct", "response": "Returns this node's values as a dictionary to be used for persistence."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getNonDefaultsDict(self):\n dct = self._nodeGetNonDefaultsDict()\n\n childList = []\n for childCti in self.childItems:\n childDct = childCti.getNonDefaultsDict()\n if childDct:\n childList.append(childDct)\n if childList:\n dct['childItems'] = childList\n\n if dct:\n dct['nodeName'] = self.nodeName\n\n return dct", "response": "Recursively retrieves values as a dictionary to be used for persistence."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef setValuesFromDict(self, dct):\n if 'nodeName' not in 
dct:\n return\n\n nodeName = dct['nodeName']\n if nodeName != self.nodeName:\n msg = \"nodeName mismatch: expected {!r}, got {!r}\".format(self.nodeName, nodeName)\n if DEBUGGING:\n raise ValueError(msg)\n else:\n logger.warn(msg)\n return\n\n self._nodeSetValuesFromDict(dct)\n\n for childDct in dct.get('childItems', []):\n key = childDct['nodeName']\n try:\n childCti = self.childByNodeName(key)\n except IndexError as _ex:\n logger.warn(\"Unable to set values for: {}\".format(key))\n else:\n childCti.setValuesFromDict(childDct)", "response": "Recursively sets values for this node and all its descendants."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncall at cleanup when the editor is closed.", "response": "def finalize(self):\n \"\"\" Called at clean up, when the editor is closed. Can be used to disconnect signals.\n This is often called after the client (e.g. the inspector) is updated. If you want to\n take action before the update, override prepareCommit instead.\n Be sure to call the finalize of the super class if you override this function.\n \"\"\"\n for subEditor in self._subEditors:\n self.removeSubEditor(subEditor)\n\n self.cti.model.sigItemChanged.disconnect(self.modelItemChanged)\n self.resetButton.clicked.disconnect(self.resetEditorValue)\n self.cti = None # just to make sure it's not used again.\n self.delegate = None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef addSubEditor(self, subEditor, isFocusProxy=False):\n self.hBoxLayout.insertWidget(len(self._subEditors), subEditor)\n self._subEditors.append(subEditor)\n\n subEditor.installEventFilter(self)\n subEditor.setFocusPolicy(Qt.StrongFocus)\n\n if isFocusProxy:\n self.setFocusProxy(subEditor)\n\n return subEditor", "response": "Adds a sub editor to the layout at the right but before the reset button."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nremove the 
subEditor from the layout and removes the event filter.", "response": "def removeSubEditor(self, subEditor):\n \"\"\" Removes the subEditor from the layout and removes the event filter.\n \"\"\"\n if subEditor is self.focusProxy():\n self.setFocusProxy(None)\n\n subEditor.removeEventFilter(self)\n self._subEditors.remove(subEditor)\n self.hBoxLayout.removeWidget(subEditor)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef eventFilter(self, watchedObject, event):\n if event.type() == QtCore.QEvent.KeyPress:\n key = event.key()\n if key in (Qt.Key_Tab, Qt.Key_Backtab):\n self.commitAndClose()\n return True\n else:\n return False\n\n return super(AbstractCtiEditor, self).eventFilter(watchedObject, event)", "response": "Calls commitAndClose when tab and back - tab are pressed."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef modelItemChanged(self, cti):\n if cti is not self.cti:\n logger.debug(\"Another config tree item has changed: {}. Closing editor for {}\"\n .format(cti, self.cti))\n self.delegate.closeEditor.emit(self, QtWidgets.QAbstractItemDelegate.NoHint) # CLOSES SELF!\n else:\n logger.debug(\"Cti of this editor has changed: {}\".format(cti))", "response": "Called when the model item of the config tree item has changed."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncommits the data of the sub editor and closes the current ctiEditor.", "response": "def commitAndClose(self):\n \"\"\" Commits the data of the sub editor and instructs the delegate to close this ctiEditor.\n\n The delegate will emit the closeEditor signal which is connected to the closeEditor\n method of the ConfigTreeView class. This, in turn will, call the finalize method of\n this object so that signals can be disconnected and resources can be freed. 
This is\n complicated but I don't see a simpler solution.\n \"\"\"\n if self.delegate:\n self.delegate.commitData.emit(self)\n self.delegate.closeEditor.emit(self, QtWidgets.QAbstractItemDelegate.NoHint) # CLOSES SELF!\n else:\n # QAbstractItemView.closeEditor is sometimes called directly, without the\n # QAbstractItemDelegate.closeEditor signal being emitted, e.g. when the currentItem\n # changes. Therefore the commitAndClose method can be called twice, if we call it\n # explicitly as well (e.g. in FontCtiEditor.execFontDialog()). We guard against this.\n logger.debug(\"AbstractCtiEditor.commitAndClose: editor already closed (ignored).\")"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef resetEditorValue(self, checked=False):\n # Block all signals to prevent duplicate inspector updates.\n # No need to restore, the editors will be deleted after the reset.\n for subEditor in self._subEditors:\n subEditor.blockSignals(True)\n\n self.cti.resetToDefault(resetChildren=True)\n # This will commit the children as well.\n self.setData(self.cti.defaultData)\n self.commitAndClose()", "response": "Resets the editor to the default value."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef setError(self, msg=None, title=None):\n if msg is not None:\n self.messageLabel.setText(msg)\n\n if title is not None:\n self.titleLabel.setText(title)", "response": "Shows an error message."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef resizeAllSections(header, sectionSize):\n for idx in range(header.length()):\n header.resizeSection(idx, sectionSize)", "response": "Sets all sections of a header to the same size."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef makeReplacementField(formatSpec, altFormatSpec='', testValue=None):\n 
check_is_a_string(formatSpec)\n check_is_a_string(altFormatSpec)\n fmt = altFormatSpec if not formatSpec else formatSpec\n\n if is_quoted(fmt):\n fmt = fmt[1:-1] # remove quotes\n else:\n if fmt and ':' not in fmt and '!' not in fmt:\n fmt = ':' + fmt\n fmt = '{' + fmt + '}'\n\n # Test resulting replacement field\n if testValue is not None:\n try:\n _dummy = fmt.format(testValue)\n except Exception:\n msg = (\"Format specifier failed: replacement-field={!r}, test-value={!r}\"\n .format(fmt, testValue))\n logger.error(msg)\n raise ValueError(msg)\n\n logger.debug(\"Resulting replacement field: {!r}\".format(fmt))\n return fmt", "response": "Creates a replacement field from a format specifier."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _refreshNodeFromTarget(self):\n tableModel = self.tableInspector.model\n\n # Disable row height and column width settings for large headers (too slow otherwise)\n if tableModel.rowCount() >= RESET_HEADERS_AT_SIZE:\n self.autoRowHeightCti.data = False\n self.model.emitDataChanged(self.autoRowHeightCti)\n self.autoRowHeightCti.enable = False\n else:\n self.autoRowHeightCti.enable = True\n\n if tableModel.columnCount() >= RESET_HEADERS_AT_SIZE:\n self.autoColWidthCti.data = False\n self.model.emitDataChanged(self.autoColWidthCti)\n self.autoColWidthCti.enable = False\n else:\n self.autoColWidthCti.enable = True", "response": "Refreshes the TableInspectorCti from the TableInspector target it monitors."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndrawing the contents of the collected repo tree item.", "response": "def _drawContents(self, reason=None, initiator=None):\n \"\"\" Draws the table contents from the sliced array of the collected repo tree item.\n\n See AbstractInspector.updateContents for the reason and initiator description.\n \"\"\"\n logger.debug(\"TableInspector._drawContents: {}\".format(self))\n\n oldTableIndex = 
self.tableView.currentIndex()\n if oldTableIndex.isValid():\n selectionWasValid = True # Defensive programming, keep old value just in case\n oldRow = oldTableIndex.row()\n oldCol = oldTableIndex.column()\n else:\n selectionWasValid = False\n oldRow = 0\n oldCol = 0\n\n self.model.updateState(self.collector.getSlicedArray(),\n self.collector.rtiInfo,\n self.configValue('separate fields'))\n\n self.model.encoding = self.config.encodingCti.configValue\n self.model.horAlignment = self.config.horAlignCti.configValue\n self.model.verAlignment = self.config.verAlignCti.configValue\n self.model.dataColor = self.config.dataColorCti.configValue\n self.model.missingColor = self.config.missingColorCti.configValue\n\n self.model.strFormat = makeReplacementField(self.config.strFormatCti.configValue,\n testValue='my_string')\n self.model.intFormat = makeReplacementField(self.config.intFormatCti.configValue,\n testValue=0)\n self.model.numFormat = makeReplacementField(self.config.numFormatCti.configValue,\n testValue=0.0)\n self.model.otherFormat = makeReplacementField(self.config.otherFormatCti.configValue,\n testValue=None)\n self.model.maskFormat = makeReplacementField(self.config.maskFormatCti.configValue,\n testValue=None)\n\n scrollMode = self.configValue(\"scroll\")\n self.tableView.setHorizontalScrollMode(scrollMode)\n self.tableView.setVerticalScrollMode(scrollMode)\n self.tableView.setWordWrap(self.configValue('word wrap'))\n\n # Update the model font from the font config item (will call self.setFont)\n self.config.updateTarget()\n\n verHeader = self.tableView.verticalHeader()\n if (self.config.autoRowHeightCti.configValue\n and self.model.rowCount() < RESET_HEADERS_AT_SIZE):\n\n numCells = self.model.rowCount() * self.model.columnCount()\n if numCells <= OPTIMIZE_RESIZE_AT_SIZE:\n logger.debug(\"Setting vertical resize mode to ResizeToContents\")\n verHeader.setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents)\n else:\n # ResizeToContents can be very slow 
because it gets all rows.\n # Work around: resize only first row and columns and apply this to all rows/cols\n # TODO: perhaps use SizeHintRole?\n logger.warning(\"Performance work around: for tables with more than 1000 cells \" +\n \"only the current row is resized and the others use that size.\")\n verHeader.setSectionResizeMode(QtWidgets.QHeaderView.Interactive)\n self.tableView.resizeRowToContents(oldRow)\n resizeAllSections(verHeader, self.tableView.rowHeight(oldRow))\n else:\n logger.debug(\"Setting vertical resize mode to Interactive and reset header\")\n verHeader.setSectionResizeMode(QtWidgets.QHeaderView.Interactive)\n verHeader.setDefaultSectionSize(self.config.defaultRowHeightCti.configValue)\n verHeader.reset()\n\n horHeader = self.tableView.horizontalHeader()\n if (self.config.autoColWidthCti.configValue\n and self.model.columnCount() < RESET_HEADERS_AT_SIZE):\n\n numCells = self.model.rowCount() * self.model.columnCount()\n if numCells <= OPTIMIZE_RESIZE_AT_SIZE:\n logger.debug(\"setting horizontal resize mode to ResizeToContents\")\n horHeader.setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents)\n else:\n # ResizeToContents can be very slow because it gets all rows.\n # Work around: resize only first row and columns and apply this to all rows/cols\n # TODO: perhaps use SizeHintRole?\n logger.warning(\"Performance work around: for tables with more than 1000 cells \" +\n \"only the current column is resized and the others use that size.\")\n horHeader.setSectionResizeMode(QtWidgets.QHeaderView.Interactive)\n self.tableView.resizeColumnToContents(oldCol)\n resizeAllSections(horHeader, self.tableView.columnWidth(oldCol))\n else:\n logger.debug(\"Setting horizontal resize mode to Interactive and reset header\")\n horHeader.setSectionResizeMode(QtWidgets.QHeaderView.Interactive)\n resizeAllSections(horHeader, self.config.defaultColWidthCti.configValue)\n horHeader.reset()\n\n # Restore selection after select\n if selectionWasValid:\n newIndex = 
self.model.index(oldRow, oldCol)\n if newIndex.isValid():\n logger.debug(\"Restoring selection: row={}, col={}\".format(oldRow, oldCol))\n self.tableView.setCurrentIndex(newIndex)\n else:\n logger.debug(\"Can't restore selection\")"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nupdates the state of the record in the table.", "response": "def updateState(self, slicedArray, rtiInfo, separateFields):\n \"\"\" Sets the slicedArray and rtiInfo and other members. This will reset the model.\n\n Will be called from the tableInspector._drawContents.\n \"\"\"\n self.beginResetModel()\n try:\n # The sliced array can be a masked array or a (regular) numpy array.\n # The table works fine with masked arrays, no need to replace the masked values.\n self._slicedArray = slicedArray\n if slicedArray is None:\n self._nRows = 0\n self._nCols = 0\n self._fieldNames = []\n else:\n self._nRows, self._nCols = self._slicedArray.shape\n if self._slicedArray.data.dtype.names:\n self._fieldNames = self._slicedArray.data.dtype.names\n else:\n self._fieldNames = []\n\n self._rtiInfo = rtiInfo\n self._separateFields = separateFields\n\n # Don't put numbers in the header if the record is of structured type, fields are\n # placed in separate cells and the fake dimension is selected (combo index 0)\n if self._separateFields and self._fieldNames:\n\n if self._rtiInfo['x-dim'] == FAKE_DIM_NAME:\n self._separateFieldOrientation = Qt.Horizontal\n self._numbersInHeader = False\n elif self._rtiInfo['y-dim'] == FAKE_DIM_NAME:\n self._separateFieldOrientation = Qt.Vertical\n self._numbersInHeader = False\n else:\n self._separateFieldOrientation = Qt.Horizontal\n self._numbersInHeader = True\n else:\n self._separateFieldOrientation = None\n self._numbersInHeader = True\n\n finally:\n self.endResetModel()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _cellValue(self, index):\n row = index.row()\n col 
= index.column()\n if (row < 0 or row >= self.rowCount() or col < 0 or col >= self.columnCount()):\n return None\n\n # The check above should have returned None if the sliced array is None\n assert self._slicedArray is not None, \"Sanity check failed.\"\n\n nFields = len(self._fieldNames)\n\n data = self._slicedArray.data\n if self._separateFieldOrientation == Qt.Horizontal:\n dataValue = data[row, col // nFields][self._fieldNames[col % nFields]]\n elif self._separateFieldOrientation == Qt.Vertical:\n dataValue = data[row // nFields, col][self._fieldNames[row % nFields]]\n else:\n dataValue = data[row, col]\n\n return dataValue", "response": "Returns the data value of the cell at the given index."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the data mask of the cell at the given index.", "response": "def _cellMask(self, index):\n \"\"\" Returns the data mask of the cell at the index (without any string conversion)\n \"\"\"\n row = index.row()\n col = index.column()\n if (row < 0 or row >= self.rowCount() or col < 0 or col >= self.columnCount()):\n return None\n\n # The check above should have returned None if the sliced array is None\n assert self._slicedArray is not None, \"Sanity check failed.\"\n\n nFields = len(self._fieldNames)\n mask = self._slicedArray.mask\n if is_an_array(mask):\n if self._separateFieldOrientation == Qt.Horizontal:\n maskValue = mask[row, col // nFields][self._fieldNames[col % nFields]]\n elif self._separateFieldOrientation == Qt.Vertical:\n maskValue = mask[row // nFields, col][self._fieldNames[row % nFields]]\n else:\n maskValue = mask[row, col]\n else:\n maskValue = mask\n\n # Here maskValue can still be a list in case of structured arrays. 
It can even still be\n # a numpy array in case of a structured array with sub arrays as fields\n if is_an_array(maskValue):\n allMasked = np.all(maskValue)\n else:\n try:\n allMasked = all(maskValue)\n except TypeError as ex:\n allMasked = bool(maskValue)\n\n return allMasked"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef data(self, index, role = Qt.DisplayRole):\n try:\n if role == Qt.DisplayRole:\n return to_string(self._cellValue(index), masked=self._cellMask(index),\n decode_bytes=self.encoding, maskFormat=self.maskFormat,\n strFormat=self.strFormat, intFormat=self.intFormat,\n numFormat=self.numFormat, otherFormat=self.otherFormat)\n\n elif role == Qt.FontRole:\n #assert self._font, \"Font undefined\"\n return self._font\n\n elif role == Qt.TextColorRole:\n masked = self._cellMask(index)\n if not is_an_array(masked) and masked:\n return self.missingColor\n else:\n return self.dataColor\n\n elif role == Qt.TextAlignmentRole:\n if self.horAlignment == ALIGN_SMART:\n cellContainsNumber = isinstance(self._cellValue(index), numbers.Number)\n horAlign = Qt.AlignRight if cellContainsNumber else Qt.AlignLeft\n return horAlign | self.verAlignment\n else:\n return self.horAlignment | self.verAlignment\n\n else:\n return None\n except Exception as ex:\n logger.error(\"Slot is not exception-safe.\")\n logger.exception(ex)\n if DEBUGGING:\n raise", "response": "Returns the data at an index for a certain role"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the header data for a given section and orientation.", "response": "def headerData(self, section, orientation, role):\n \"\"\" Returns the header for a section (row or column depending on orientation).\n Reimplemented from QAbstractTableModel to make the headers start at 0.\n \"\"\"\n if role == Qt.DisplayRole:\n if self._separateFieldOrientation == orientation:\n\n nFields = len(self._fieldNames)\n varNr = section // 
nFields\n fieldNr = section % nFields\n header = str(varNr) + ' : ' if self._numbersInHeader else ''\n header += self._fieldNames[fieldNr]\n return header\n else:\n return str(section)\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef rowCount(self, parent=None):\n if self._separateFieldOrientation == Qt.Vertical:\n return self._nRows * len(self._fieldNames)\n else:\n return self._nRows", "response": "Returns the number of rows in the sliced array."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef columnCount(self, parent=None):\n if self._separateFieldOrientation == Qt.Horizontal:\n return self._nCols * len(self._fieldNames)\n else:\n return self._nCols", "response": "Returns the number of columns in the sliced array."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setFont(self, font):\n check_class(font, QtGui.QFont, allow_none=True)\n self._font = font", "response": "Sets the font that will be returned when data is called with the Qt. FontRole.\n Can be a QFont or None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncalculating the range of an axis of a viewBox.", "response": "def viewBoxAxisRange(viewBox, axisNumber):\n \"\"\" Calculates the range of an axis of a viewBox.\n \"\"\"\n rect = viewBox.childrenBoundingRect() # taken from viewBox.autoRange()\n if rect is not None:\n if axisNumber == X_AXIS:\n return rect.left(), rect.right()\n elif axisNumber == Y_AXIS:\n return rect.bottom(), rect.top()\n else:\n raise ValueError(\"axisNumber should be 0 or 1, got: {}\".format(axisNumber))\n else:\n # Does this happen? Probably when the plot is empty.\n raise AssertionError(\"No children bbox. 
Plot range not updated.\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncalculate the range from the sliced array.", "response": "def inspectorDataRange(inspector, percentage):\n \"\"\" Calculates the range from the inspectors' sliced array. Discards percentage of the minimum\n and percentage of the maximum values of the inspector.slicedArray\n\n Meant to be used with functools.partial for filling the autorange methods combobox.\n The first parameter is an inspector, it's not an array, because we would then have to\n regenerate the range function every time sliced array of an inspector changes.\n \"\"\"\n logger.debug(\"Discarding {}% from id: {}\".format(percentage, id(inspector.slicedArray)))\n return maskedNanPercentile(inspector.slicedArray, (percentage, 100-percentage) )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef defaultAutoRangeMethods(inspector, intialItems=None):\n rangeFunctions = OrderedDict({} if intialItems is None else intialItems)\n rangeFunctions['use all data'] = partial(inspectorDataRange, inspector, 0.0)\n for percentage in [0.1, 0.2, 0.5, 1, 2, 5, 10, 20]:\n label = \"discard {}%\".format(percentage)\n rangeFunctions[label] = partial(inspectorDataRange, inspector, percentage)\n return rangeFunctions", "response": "Returns an ordered dict with default autorange methods for an inspector."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setXYAxesAutoRangeOn(commonCti, xAxisRangeCti, yAxisRangeCti, axisNumber):\n assert axisNumber in VALID_AXIS_NUMBERS, \\\n \"Axis number should be one of {}, got {}\".format(VALID_AXIS_NUMBERS, axisNumber)\n\n logger.debug(\"setXYAxesAutoRangeOn, axisNumber: {}\".format(axisNumber))\n if axisNumber == X_AXIS or axisNumber == BOTH_AXES:\n xAxisRangeCti.autoRangeCti.data = True\n\n if axisNumber == Y_AXIS or axisNumber == BOTH_AXES:\n yAxisRangeCti.autoRangeCti.data = True\n\n 
commonCti.model.sigItemChanged.emit(commonCti)", "response": "Turns on the auto range of the X and Y axes simultaneously."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate the config settings of the node from the target node.", "response": "def _refreshNodeFromTarget(self):\n \"\"\" Updates the config settings\n \"\"\"\n for key, value in self.viewBox.state.items():\n if key != \"limits\":\n childItem = self.childByNodeName(key)\n childItem.data = value\n else:\n # limits contains a dictionary as well\n for limitKey, limitValue in value.items():\n limitChildItem = self.limitsItem.childByNodeName(limitKey)\n limitChildItem.data = limitValue"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef autoRangeMethod(self):\n if self.methodCti:\n return self.methodCti.configValue\n else:\n assert len(self._rangeFunctions) == 1, \\\n \"Assumed only one _rangeFunctions. Got: {}\".format(self._rangeFunctions)\n return list(self._rangeFunctions.keys())[0]", "response": "Returns the auto range method for the current CTI."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrefreshing the min max config values from the axes state.", "response": "def _forceRefreshMinMax(self):\n \"\"\" Refreshes the min max config values from the axes' state.\n \"\"\"\n #logger.debug(\"_forceRefreshMinMax\", stack_info=True)\n\n # Set the precision by looking at how many decimals are needed to show the difference\n # between the minimum and maximum, given the maximum. E.g. if min = 0.04 and max = 0.07,\n # we would only need zero decimals behind the point as we can write the range as\n # [4e-2, 7e-2]. 
However if min = 1.04 and max = 1.07, we need 2 decimals behind the point.\n # So, while the range is the same size we need more decimals because we are not zoomed in\n # around zero.\n rangeMin, rangeMax = self.getTargetRange() # [[xmin, xmax], [ymin, ymax]]\n maxOrder = np.log10(np.abs(max(rangeMax, rangeMin)))\n diffOrder = np.log10(np.abs(rangeMax - rangeMin))\n\n extraDigits = 2 # add some extra digits to make each pan/zoom action show a new value.\n precisionF = np.clip(abs(maxOrder - diffOrder) + extraDigits, extraDigits + 1, 25)\n precision = int(precisionF) if np.isfinite(precisionF) else extraDigits + 1\n #logger.debug(\"maxOrder: {}, diffOrder: {}, precision: {}\"\n # .format(maxOrder, diffOrder, precision))\n\n self.rangeMinCti.precision = precision\n self.rangeMaxCti.precision = precision\n self.rangeMinCti.data, self.rangeMaxCti.data = rangeMin, rangeMax\n\n # Update values in the tree\n self.model.emitDataChanged(self.rangeMinCti)\n self.model.emitDataChanged(self.rangeMaxCti)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nrefresh the min max config values from the axes state.", "response": "def refreshMinMax(self):\n \"\"\" Refreshes the min max config values from the axes' state.\n Does nothing when self.getRefreshBlocked() returns True.\n \"\"\"\n if self.getRefreshBlocked():\n logger.debug(\"refreshMinMax blocked for {}\".format(self.nodeName))\n return\n\n self._forceRefreshMinMax()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nforces refreshing the auto range.", "response": "def _forceRefreshAutoRange(self):\n \"\"\" The min and max config items will be disabled if auto range is on.\n \"\"\"\n enabled = self.autoRangeCti and self.autoRangeCti.configValue\n self.rangeMinCti.enabled = not enabled\n self.rangeMaxCti.enabled = not enabled\n self.model.emitDataChanged(self)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nturns off 
the auto range checkbox.", "response": "def setAutoRangeOff(self):\n \"\"\" Turns off the auto range checkbox.\n Calls _refreshNodeFromTarget, not _updateTargetFromNode, because setting auto range off\n does not require a redraw of the target.\n \"\"\"\n # TODO: catch exceptions. How?\n # /argos/hdf-eos/DeepBlue-SeaWiFS-1.0_L3_20100101_v002-20110527T191319Z.h5/aerosol_optical_thickness_stddev_ocean\n if self.getRefreshBlocked():\n logger.debug(\"setAutoRangeOff blocked for {}\".format(self.nodeName))\n return\n\n if self.autoRangeCti:\n self.autoRangeCti.data = False\n\n self._forceRefreshAutoRange()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef setAutoRangeOn(self):\n if self.getRefreshBlocked():\n logger.debug(\"Set autorange on blocked for {}\".format(self.nodeName))\n return\n\n if self.autoRangeCti:\n self.autoRangeCti.data = True\n self.model.sigItemChanged.emit(self)", "response": "Turns on the auto range checkbox for the equivalent axes\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef calculateRange(self):\n if not self.autoRangeCti or not self.autoRangeCti.configValue:\n return (self.rangeMinCti.data, self.rangeMaxCti.data)\n else:\n rangeFunction = self._rangeFunctions[self.autoRangeMethod]\n return rangeFunction()", "response": "Calculates the range depending on the autoRangeCti and autoRangeMethod settings."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupdate the target axis to match the current configuration.", "response": "def _updateTargetFromNode(self):\n \"\"\" Applies the configuration to the target axis.\n \"\"\"\n if not self.autoRangeCti or not self.autoRangeCti.configValue:\n padding = 0\n elif self.paddingCti.configValue == -1: # specialValueText\n # PyQtGraph dynamic padding: between 0.02 and 0.1 dep. 
on the size of the ViewBox\n padding = None\n else:\n padding = self.paddingCti.configValue / 100\n\n targetRange = self.calculateRange()\n #logger.debug(\"axisRange: {}, padding={}\".format(targetRange, padding))\n if not np.all(np.isfinite(targetRange)):\n logger.warn(\"New target range is not finite. Plot range not updated\")\n return\n\n self.setTargetRange(targetRange, padding=padding)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndisconnect signals. Is called by self. finalize when the cti is deleted.", "response": "def _closeResources(self):\n \"\"\" Disconnects signals.\n Is called by self.finalize when the cti is deleted.\n \"\"\"\n self.viewBox.sigRangeChangedManually.disconnect(self.setAutoRangeOff)\n self.viewBox.sigRangeChanged.disconnect(self.refreshMinMax)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the range of the target.", "response": "def setTargetRange(self, targetRange, padding=None):\n \"\"\" Sets the range of the target.\n \"\"\"\n # viewBox.setRange doesn't accept an axis number :-(\n if self.axisNumber == X_AXIS:\n xRange, yRange = targetRange, None\n else:\n xRange, yRange = None, targetRange\n\n # Do not set disableAutoRange to True in setRange; it triggers 'one last' auto range.\n # This is why the viewBox' autorange must be False at construction.\n self.viewBox.setRange(xRange = xRange, yRange=yRange, padding=padding,\n update=False, disableAutoRange=False)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndisconnect signals. Is called by self. 
finalize when the cti is deleted.", "response": "def _closeResources(self):\n \"\"\" Disconnects signals.\n Is called by self.finalize when the cti is deleted.\n \"\"\"\n self.histLutItem.sigLevelsChanged.disconnect(self.setAutoRangeOff)\n self.histLutItem.sigLevelsChanged.disconnect(self.refreshMinMax)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setTargetRange(self, targetRange, padding=None):\n rangeMin, rangeMax = targetRange\n self.histLutItem.setLevels(rangeMin, rangeMax)", "response": "Sets the range of the histogram LUT item."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\napplying the configuration to its target axis.", "response": "def _updateTargetFromNode(self):\n \"\"\" Applies the configuration to its target axis\n \"\"\"\n self.viewBox.setAspectLocked(lock=self.configValue, ratio=self.aspectRatioCti.configValue)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _updateTargetFromNode(self):\n if self.axisNumber == X_AXIS:\n self.viewBox.invertX(self.configValue)\n else:\n self.viewBox.invertY(self.configValue)", "response": "Applies the configuration to its target axis."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _updateTargetFromNode(self):\n rtiInfo = self.collector.rtiInfo\n self.plotItem.setLabel(self.axisPosition, self.configValue.format(**rtiInfo))\n self.plotItem.showLabel(self.axisPosition, self.configValue != self.NO_LABEL)", "response": "Applies the configuration to the target axis it monitors."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _updateTargetFromNode(self):\n logger.debug(\"showAxis: {}, {}\".format(self.axisPosition, self.configValue))\n self.plotItem.showAxis(self.axisPosition, show=self.configValue)", "response": "Applies the 
configuration to its target axis"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _updateTargetFromNode(self):\n if self.axisNumber == X_AXIS:\n xMode, yMode = self.configValue, None\n else:\n xMode, yMode = None, self.configValue\n\n self.plotItem.setLogMode(x=xMode, y=yMode)", "response": "Applies the configuration to its target axis and updates the plotItem s log mode."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _updateTargetFromNode(self):\n self.plotItem.showGrid(x=self.xGridCti.configValue, y=self.yGridCti.configValue,\n alpha=self.alphaCti.configValue)\n self.plotItem.updateGrid()", "response": "Updates the grid of the target of the item."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef createPlotDataItem(self):\n antialias = self.antiAliasCti.configValue\n\n color = self.penColor\n if self.lineCti.configValue:\n pen = QtGui.QPen()\n pen.setCosmetic(True)\n pen.setColor(color)\n pen.setWidthF(self.lineWidthCti.configValue)\n pen.setStyle(self.lineStyleCti.configValue)\n shadowCti = self.lineCti.findByNodePath('shadow')\n shadowPen = shadowCti.createPen(altStyle=pen.style(), altWidth=2.0 * pen.widthF())\n else:\n pen = None\n shadowPen = None\n\n drawSymbols = self.symbolCti.configValue\n symbolShape = self.symbolShapeCti.configValue if drawSymbols else None\n symbolSize = self.symbolSizeCti.configValue if drawSymbols else 0.0\n symbolPen = None # otherwise the symbols will also have dotted/solid line.\n symbolBrush = QtGui.QBrush(color) if drawSymbols else None\n\n plotDataItem = pg.PlotDataItem(antialias=antialias, pen=pen, shadowPen=shadowPen,\n symbol=symbolShape, symbolSize=symbolSize,\n symbolPen=symbolPen, symbolBrush=symbolBrush)\n return plotDataItem", "response": "Creates a PyQtGraph PlotDataItem from the config values."} {"SOURCE": "codesearchnet", 
"instruction": "Can you tell what is the following Python 3 function doing\ndef _drawContents(self, currentRti=None):\n table = self.table\n table.setUpdatesEnabled(False)\n try:\n table.clearContents()\n verticalHeader = table.verticalHeader()\n verticalHeader.setSectionResizeMode(QtWidgets.QHeaderView.Fixed)\n\n if currentRti is None:\n return\n\n # Each column in the repo tree corresponds to a row in this detail pane.\n repoModel = self._repoTreeView.model()\n propNames = RepoTreeModel.HEADERS\n table.setRowCount(len(propNames))\n\n for row, propName in enumerate(propNames):\n nameItem = QtWidgets.QTableWidgetItem(propName)\n nameItem.setToolTip(propName)\n table.setItem(row, self.COL_PROP_NAME, nameItem)\n propValue = repoModel.itemData(currentRti, row)\n propItem = QtWidgets.QTableWidgetItem(propValue)\n propItem.setToolTip(propValue)\n table.setItem(row, self.COL_VALUE, propItem)\n table.resizeRowToContents(row)\n\n verticalHeader.setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents)\n\n finally:\n table.setUpdatesEnabled(True)", "response": "Draw the attributes of the currentRTI and the details pane."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef itemData(self, treeItem, column, role=Qt.DisplayRole):\n if role == Qt.DisplayRole:\n if column == self.COL_NODE_NAME:\n return treeItem.nodeName\n elif column == self.COL_NODE_PATH:\n return treeItem.nodePath\n elif column == self.COL_SHAPE:\n if treeItem.isSliceable:\n return \" x \".join(str(elem) for elem in treeItem.arrayShape)\n else:\n return \"\"\n elif column == self.COL_IS_OPEN:\n # Only show for RTIs that actually open resources.\n # TODO: this must be clearer. Use CanFetchChildren? 
Set is Open to None by default?\n if treeItem.hasChildren():\n return str(treeItem.isOpen)\n else:\n return \"\"\n elif column == self.COL_ELEM_TYPE:\n return treeItem.elementTypeName\n elif column == self.COL_FILE_NAME:\n return treeItem.fileName if hasattr(treeItem, 'fileName') else ''\n elif column == self.COL_UNIT:\n return treeItem.unit\n elif column == self.COL_MISSING_DATA:\n return to_string(treeItem.missingDataValue, noneFormat='') # empty str for Nones\n elif column == self.COL_RTI_TYPE:\n return type_name(treeItem)\n elif column == self.COL_EXCEPTION:\n return str(treeItem.exception) if treeItem.exception else ''\n else:\n raise ValueError(\"Invalid column: {}\".format(column))\n\n elif role == Qt.ToolTipRole:\n if treeItem.exception:\n return str(treeItem.exception)\n if column == self.COL_NODE_NAME:\n return treeItem.nodePath # Also path when hovering over the name\n elif column == self.COL_NODE_PATH:\n return treeItem.nodePath\n elif column == self.COL_SHAPE:\n if treeItem.isSliceable:\n return \" x \".join(str(elem) for elem in treeItem.arrayShape)\n else:\n return \"\"\n elif column == self.COL_UNIT:\n return treeItem.unit\n elif column == self.COL_MISSING_DATA:\n return to_string(treeItem.missingDataValue, noneFormat='') # empty str for Nones\n elif column == self.COL_RTI_TYPE:\n return type_name(treeItem)\n elif column == self.COL_ELEM_TYPE:\n return treeItem.elementTypeName\n elif column == self.COL_FILE_NAME:\n return treeItem.fileName if hasattr(treeItem, 'fileName') else ''\n else:\n return None\n else:\n return super(RepoTreeModel, self).itemData(treeItem, column, role=role)", "response": "Returns the data stored under the given role for the item."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef canFetchMore(self, parentIndex):\n parentItem = self.getItem(parentIndex)\n if not parentItem:\n return False\n\n return parentItem.canFetchChildren()", "response": "Returns true 
if there is more data available for the parent item ; otherwise returns false."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef fetchMore(self, parentIndex): # TODO: Make LazyLoadRepoTreeModel?\n parentItem = self.getItem(parentIndex)\n if not parentItem:\n return\n\n if not parentItem.canFetchChildren():\n return\n\n # TODO: implement InsertItems to optimize?\n for childItem in parentItem.fetchChildren():\n self.insertItem(childItem, parentIndex=parentIndex)\n\n # Check that Rti implementation correctly sets canFetchChildren\n assert not parentItem.canFetchChildren(), \\\n \"not all children fetched: {}\".format(parentItem)", "response": "Fetches any available data for the items with the parent specified by the parent index."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef findFileRtiIndex(self, childIndex):\n parentIndex = childIndex.parent()\n if not parentIndex.isValid():\n return childIndex\n else:\n parentItem = self.getItem(parentIndex)\n childItem = self.getItem(childIndex)\n if parentItem.fileName == childItem.fileName:\n return self.findFileRtiIndex(parentIndex)\n else:\n return childIndex", "response": "Finds the tree item at the given index and returns the index of the file that represents the item at the given index."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef reloadFileAtIndex(self, itemIndex, rtiClass=None):\n fileRtiParentIndex = itemIndex.parent()\n fileRti = self.getItem(itemIndex)\n position = fileRti.childNumber()\n fileName = fileRti.fileName\n\n if rtiClass is None:\n rtiClass = type(fileRti)\n\n # Delete old RTI and Insert a new one instead.\n self.deleteItemAtIndex(itemIndex) # this will close the items resources.\n return self.loadFile(fileName, rtiClass, position=position, parentIndex=fileRtiParentIndex)", "response": "Reloads the item at the index by removing the old 
one and inserting a new one."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading a file in the repository as a repo tree item of class rtiClass.", "response": "def loadFile(self, fileName, rtiClass=None,\n position=None, parentIndex=QtCore.QModelIndex()):\n \"\"\" Loads a file in the repository as a repo tree item of class rtiClass.\n Autodetects the RTI type if rtiClass is None.\n If position is None the child will be appended as the last child of the parent.\n Returns the index of the newly inserted RTI\n \"\"\"\n logger.info(\"Loading data from: {!r}\".format(fileName))\n if rtiClass is None:\n repoTreeItem = createRtiFromFileName(fileName)\n else:\n repoTreeItem = rtiClass.createFromFileName(fileName)\n assert repoTreeItem.parentItem is None, \"repoTreeItem {!r}\".format(repoTreeItem)\n return self.insertItem(repoTreeItem, position=position, parentIndex=parentIndex)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndisconnects signals and frees resources", "response": "def finalize(self):\n \"\"\" Disconnects signals and frees resources\n \"\"\"\n self.model().sigItemChanged.disconnect(self.repoTreeItemChanged)\n\n selectionModel = self.selectionModel() # need to store reference to prevent crash in PySide\n selectionModel.currentChanged.disconnect(self.currentItemChanged)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef contextMenuEvent(self, event):\n menu = QtWidgets.QMenu(self)\n\n for action in self.actions():\n menu.addAction(action)\n\n openAsMenu = self.createOpenAsMenu(parent=menu)\n menu.insertMenu(self.closeItemAction, openAsMenu)\n\n menu.exec_(event.globalPos())", "response": "Creates and executes the context menu for the tree view"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef createOpenAsMenu(self, parent=None):\n openAsMenu = 
QtWidgets.QMenu(parent=parent)\n openAsMenu.setTitle(\"Open Item As\")\n\n registry = globalRtiRegistry()\n for rtiRegItem in registry.items:\n #rtiRegItem.tryImportClass()\n def createTrigger():\n \"\"\"Function to create a closure with the regItem\"\"\"\n _rtiRegItem = rtiRegItem # keep reference in closure\n return lambda: self.reloadFileOfCurrentItem(_rtiRegItem)\n\n action = QtWidgets.QAction(\"{}\".format(rtiRegItem.name), self,\n enabled=bool(rtiRegItem.successfullyImported is not False),\n triggered=createTrigger())\n openAsMenu.addAction(action)\n\n return openAsMenu", "response": "Creates the menu for the Open Item As choice"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nopens the current item in the repository.", "response": "def openCurrentItem(self):\n \"\"\" Opens the current item in the repository.\n \"\"\"\n logger.debug(\"openCurrentItem\")\n _currentItem, currentIndex = self.getCurrentItem()\n if not currentIndex.isValid():\n return\n\n # Expanding the node will call indirectly call RepoTreeModel.fetchMore which will call\n # BaseRti.fetchChildren, which will call BaseRti.open and thus open the current RTI.\n # BaseRti.open will emit the self.model.sigItemChanged signal, which is connected to\n # RepoTreeView.onItemChanged.\n self.expand(currentIndex)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncloses the current item in the repository.", "response": "def closeCurrentItem(self):\n \"\"\" Closes the current item in the repository.\n All its children will be unfetched and closed.\n \"\"\"\n logger.debug(\"closeCurrentItem\")\n currentItem, currentIndex = self.getCurrentItem()\n if not currentIndex.isValid():\n return\n\n # First we remove all the children, this will close them as well.\n self.model().removeAllChildrenAtIndex(currentIndex)\n\n # Close the current item. 
BaseRti.close will emit the self.model.sigItemChanged signal,\n # which is connected to RepoTreeView.onItemChanged.\n currentItem.close()\n self.dataChanged(currentIndex, currentIndex)\n self.collapse(currentIndex)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef removeCurrentItem(self):\n logger.debug(\"removeCurrentFile\")\n currentIndex = self.getRowCurrentIndex()\n if not currentIndex.isValid():\n return\n\n self.model().deleteItemAtIndex(currentIndex)", "response": "Removes the current item from the repository tree."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef reloadFileOfCurrentItem(self, rtiRegItem=None):\n logger.debug(\"reloadFileOfCurrentItem, rtiClass={}\".format(rtiRegItem))\n\n\n currentIndex = self.getRowCurrentIndex()\n if not currentIndex.isValid():\n return\n\n currentItem, _ = self.getCurrentItem()\n oldPath = currentItem.nodePath\n\n fileRtiIndex = self.model().findFileRtiIndex(currentIndex)\n isExpanded = self.isExpanded(fileRtiIndex)\n\n if rtiRegItem is None:\n rtiClass = None\n else:\n rtiRegItem.tryImportClass()\n rtiClass = rtiRegItem.cls\n\n newRtiIndex = self.model().reloadFileAtIndex(fileRtiIndex, rtiClass=rtiClass)\n\n try:\n # Expand and select the name with the old path\n _lastItem, lastIndex = self.expandPath(oldPath)\n self.setCurrentIndex(lastIndex)\n return lastIndex\n except Exception as ex:\n # The old path may not exist anymore. 
In that case select file RTI\n logger.warning(\"Unable to select {!r} because of: {}\".format(oldPath, ex))\n self.setExpanded(newRtiIndex, isExpanded)\n self.setCurrentIndex(newRtiIndex)\n return newRtiIndex", "response": "Reloads the file of the current item."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncalling when the repo tree item has changed", "response": "def repoTreeItemChanged(self, rti):\n \"\"\" Called when repo tree item has changed (the item itself, not a new selection)\n\n If the item is the currently selected item, the collector (inspector) and\n metadata widgets are updated.\n \"\"\"\n logger.debug(\"onItemChanged: {}\".format(rti))\n currentItem, _currentIndex = self.getCurrentItem()\n\n if rti == currentItem:\n self.currentRepoTreeItemChanged()\n else:\n logger.debug(\"Ignoring changed item as it is not the current item: {}\".format(rti))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncalling when a repo tree item has changed or a new one was selected.", "response": "def currentRepoTreeItemChanged(self):\n \"\"\" Called to update the GUI when a repo tree item has changed or a new one was selected.\n \"\"\"\n # When the model is empty the current index may be invalid and the currentItem may be None.\n currentItem, currentIndex = self.getCurrentItem()\n\n hasCurrent = currentIndex.isValid()\n assert hasCurrent == (currentItem is not None), \\\n \"If current index is valid, currentItem may not be None\" # sanity check\n\n # Set the item in the collector, which will subsequently update the inspector.\n if hasCurrent:\n logger.info(\"Adding rti to collector: {}\".format(currentItem.nodePath))\n self.collector.setRti(currentItem)\n #if rti.asArray is not None: # TODO: maybe later, first test how robust it is now\n # self.collector.setRti(rti)\n\n # Update context menus in the repo tree\n self.currentItemActionGroup.setEnabled(hasCurrent)\n isTopLevel = hasCurrent and
self.model().isTopLevelIndex(currentIndex)\n self.topLevelItemActionGroup.setEnabled(isTopLevel)\n self.openItemAction.setEnabled(currentItem is not None\n and currentItem.hasChildren()\n and not currentItem.isOpen)\n self.closeItemAction.setEnabled(currentItem is not None\n and currentItem.hasChildren()\n and currentItem.isOpen)\n\n # Emit sigRepoItemChanged signal so that, for example, details panes can update.\n logger.debug(\"Emitting sigRepoItemChanged: {}\".format(currentItem))\n self.sigRepoItemChanged.emit(currentItem)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef initQApplication():\n # PyQtGraph recommends raster graphics system for OS-X.\n if 'darwin' in sys.platform:\n graphicsSystem = \"raster\" # raster, native or opengl\n os.environ.setdefault('QT_GRAPHICSSYSTEM', graphicsSystem)\n logger.info(\"Setting QT_GRAPHICSSYSTEM to: {}\".format(graphicsSystem))\n\n app = QtWidgets.QApplication(sys.argv)\n initArgosApplicationSettings(app)\n return app", "response": "Initializes the QtWidgets.QApplication instance. Creates one if it doesn't exist."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ninitializes the Argos application settings.", "response": "def initArgosApplicationSettings(app): # TODO: this is Argos specific. Move somewhere else.\n \"\"\" Sets Argos specific attributes, such as the OrganizationName, so that the application\n persistent settings are read/written to the correct settings file/winreg. It is therefore\n important to call this function at startup. The ArgosApplication constructor does this.\n \"\"\"\n assert app, \\\n \"app undefined.
Call QtWidgets.QApplication.instance() or QtCore.QApplication.instance() first.\"\n\n logger.debug(\"Setting Argos QApplication settings.\")\n app.setApplicationName(info.REPO_NAME)\n app.setApplicationVersion(info.VERSION)\n app.setOrganizationName(info.ORGANIZATION_NAME)\n app.setOrganizationDomain(info.ORGANIZATION_DOMAIN)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nremove a group from the persistent settings", "response": "def removeSettingsGroup(groupName, settings=None):\n \"\"\" Removes a group from the persistent settings\n \"\"\"\n logger.debug(\"Removing settings group: {}\".format(groupName))\n settings = QtCore.QSettings() if settings is None else settings\n settings.remove(groupName)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef containsSettingsGroup(groupName, settings=None):\n def _containsPath(path, settings):\n \"Aux function for containsSettingsGroup. Does the actual recursive search.\"\n if len(path) == 0:\n return True\n else:\n head = path[0]\n tail = path[1:]\n if head not in settings.childGroups():\n return False\n else:\n settings.beginGroup(head)\n try:\n return _containsPath(tail, settings)\n finally:\n settings.endGroup()\n\n # Body starts here\n path = os.path.split(groupName)\n logger.debug(\"Looking for path: {}\".format(path))\n settings = QtCore.QSettings() if settings is None else settings\n return _containsPath(path, settings)", "response": "Returns True if the settings contains a group with the name groupName."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef printAllWidgets(qApplication, ofType=None):\n print (\"Application's widgets {}\".format(('of type: ' + str(ofType)) if ofType else ''))\n for widget in qApplication.allWidgets():\n if ofType is None or isinstance(widget, ofType):\n print (\" {!r}\".format(widget))", "response": "Prints list of all widgets in the given
application to stdout."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the rectangle of a check box drawn as a sub element of widget", "response": "def widgetSubCheckBoxRect(widget, option):\n \"\"\" Returns the rectangle of a check box drawn as a sub element of widget\n \"\"\"\n opt = QtWidgets.QStyleOption()\n opt.initFrom(widget)\n style = widget.style()\n return style.subElementRect(QtWidgets.QStyle.SE_ViewItemCheckIndicator, opt, widget)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef resizeEvent(self, event):\n result = super(ResizeDetailsMessageBox, self).resizeEvent(event)\n\n details_box = self.findChild(QtWidgets.QTextEdit)\n if details_box is not None:\n #details_box.setFixedSize(details_box.sizeHint())\n details_box.setFixedSize(QtCore.QSize(self.detailsBoxWidth, self.detailBoxHeight))\n\n return result", "response": "Resizes the details box if present."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _drawContents(self, currentRti=None):\n #logger.debug(\"_drawContents: {}\".format(currentRti))\n table = self.table\n table.setUpdatesEnabled(False)\n try:\n table.clearContents()\n verticalHeader = table.verticalHeader()\n verticalHeader.setSectionResizeMode(QtWidgets.QHeaderView.Fixed)\n\n attributes = currentRti.attributes if currentRti is not None else {}\n table.setRowCount(len(attributes))\n\n for row, (attrName, attrValue) in enumerate(sorted(attributes.items())):\n attrStr = to_string(attrValue, decode_bytes='utf-8')\n\n try:\n type_str = type_name(attrValue)\n except Exception as ex:\n logger.exception(ex)\n type_str = \"\"\n\n nameItem = QtWidgets.QTableWidgetItem(attrName)\n nameItem.setToolTip(attrName)\n table.setItem(row, self.COL_ATTR_NAME, nameItem)\n valItem = QtWidgets.QTableWidgetItem(attrStr)\n valItem.setToolTip(attrStr)\n table.setItem(row, self.COL_VALUE, valItem)\n 
table.setItem(row, self.COL_ELEM_TYPE, QtWidgets.QTableWidgetItem(type_str))\n table.resizeRowToContents(row)\n\n verticalHeader.setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents)\n\n finally:\n table.setUpdatesEnabled(True)", "response": "Draw the contents of the current RTI and all of its children."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nraises ValueError if the reason is not one of the valid enumerations.", "response": "def checkValid(cls, reason):\n \"\"\" Raises ValueError if the reason is not one of the valid enumerations\n \"\"\"\n if reason not in cls.__VALID_REASONS:\n raise ValueError(\"reason must be one of {}, got {}\".format(cls.__VALID_REASONS, reason))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nupdates the contents of the current item in the configuration tree.", "response": "def updateContents(self, reason=None, initiator=None): # TODO: reason mandatory?\n \"\"\" Tries to draw the widget contents with the updated RTI.\n Shows the error page in case an exception is raised while drawing the contents.\n Descendants should override _drawContents, not updateContents.\n\n During the call of _drawContents, the updating of the configuration tree is blocked to\n avoid circular effects. After that, a call to self.config.refreshFromTarget() is\n made to refresh the configuration tree with possible new values from the inspector\n (the inspector is the configuration's target, hence the name).\n\n The reason parameter is a string (one of the UpdateReason values) that indicates why\n the inspector contents was updated. This can, for instance, be used to optimize\n drawing the inspector contents. Note that the reason may be undefined (None).\n\n The initiator may contain the object that initiated the update. The type depends on the\n reason. At the moment the initiator is only implemented for the \"config changed\" reason.
In\n this case the initiator will be the Config Tree Item (the CTI that has changed).\n \"\"\"\n UpdateReason.checkValid(reason)\n logger.debug(\"---- Inspector updateContents, reason: {}, initiator: {}\"\n .format(reason, initiator))\n logger.debug(\"Inspector: {}\".format(self))\n logger.debug(\"RTI: {}\".format(self.collector.rti))\n try:\n self.setCurrentIndex(self.CONTENTS_PAGE_IDX)\n\n wasBlocked = self.config.model.setRefreshBlocked(True)\n try:\n self._drawContents(reason=reason, initiator=initiator)\n logger.debug(\"_drawContents finished successfully\")\n\n # Update the config tree from the (possibly) new state of the PgLinePlot1d inspector,\n # e.g. the axis range may have changed while drawing.\n # self.config.updateTarget() # TODO: enable this here (instead of doing it in the inspector._drawContents when needed)?\n finally:\n self.config.model.setRefreshBlocked(wasBlocked)\n\n # Call refreshFromTarget in case the newly applied configuration resulted in a change\n # of the state of the configuration's target (i.e.
the inspector state)\n logger.debug(\"_drawContents finished successfully, calling refreshFromTarget...\")\n self.config.refreshFromTarget()\n logger.debug(\"refreshFromTarget finished successfully\")\n\n except InvalidDataError as ex:\n logger.info(\"Unable to draw the inspector contents: {}\".format(ex))\n\n except Exception as ex:\n if DEBUGGING: # TODO: enable\n raise\n logger.error(\"Error while drawing the inspector: {} ----\".format(ex))\n logger.exception(ex)\n self._clearContents()\n self.setCurrentIndex(self.ERROR_PAGE_IDX)\n self._showError(msg=str(ex), title=type_name(ex))\n else:\n logger.debug(\"---- updateContents finished successfully\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nshows an error message.", "response": "def _showError(self, msg=\"\", title=\"Error\"):\n \"\"\" Shows an error message.\n \"\"\"\n self.errorWidget.setError(msg=msg, title=title)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _enforceDataType(self, data):\n idx = int(data)\n if idx < 0:\n idx += len(self._displayValues)\n assert 0 <= idx < len(self._displayValues), \\\n \"Index should be >= 0 and < {}. Got {}\".format(len(self._displayValues), idx)\n return idx", "response": "Converts the data to int so that this CTI always stores that type."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _nodeGetNonDefaultsDict(self):\n dct = super(ChoiceCti, self)._nodeGetNonDefaultsDict()\n if self._configValues != self._defaultConfigValues:\n dct['choices'] = self._configValues\n\n return dct", "response": "Returns this node's values as a dictionary to be used for persistence.
getNonDefaultsDict\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _nodeSetValuesFromDict(self, dct):\n if 'choices' in dct:\n self._configValues = list(dct['choices'])\n self._displayValues = list(dct['choices'])\n super(ChoiceCti, self)._nodeSetValuesFromDict(dct)", "response": "Sets values from a dictionary in the current node."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef createEditor(self, delegate, parent, option):\n return ChoiceCtiEditor(self, delegate, parent=parent)", "response": "Creates a ChoiceCtiEditor object for the given option."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef insertValue(self, pos, configValue, displayValue=None):\n self._configValues.insert(pos, configValue)\n self._displayValues.insert(pos, displayValue if displayValue is not None else configValue)", "response": "Insert a value into the configValues and displayValues list at the given position."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndisconnect signals and restores the object.", "response": "def finalize(self):\n \"\"\" Is called when the editor is closed. Disconnect signals.\n \"\"\"\n self._comboboxListView.removeEventFilter(self)\n self.comboBox.model().rowsInserted.disconnect(self.comboBoxRowsInserted)\n self.comboBox.activated.disconnect(self.comboBoxActivated)\n super(ChoiceCtiEditor, self).finalize()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncalling when the user has entered a new value in the combobox. 
This method inserts the new value into the cti.", "response": "def comboBoxRowsInserted(self, _parent, start, end):\n \"\"\" Called when the user has entered a new value in the combobox.\n Puts the combobox values back into the cti.\n \"\"\"\n assert start == end, \"Bug, please report: more than one row inserted\"\n configValue = self.comboBox.itemText(start)\n logger.debug(\"Inserting {!r} at position {} in {}\"\n .format(configValue, start, self.cti.nodePath))\n self.cti.insertValue(start, configValue)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves an item from an editable combobox when the delete or backspace key is pressed in the combobox list view.", "response": "def eventFilter(self, watchedObject, event):\n \"\"\" Deletes an item from an editable combobox when the delete or backspace key is pressed\n in the list of items, or when ctrl-delete or ctrl-back space is pressed in the\n line-edit.\n\n When the combobox is not editable the filter does nothing.\n \"\"\"\n if self.comboBox.isEditable() and event.type() == QtCore.QEvent.KeyPress:\n key = event.key()\n if key in (Qt.Key_Delete, Qt.Key_Backspace):\n if (watchedObject == self._comboboxListView\n or (watchedObject == self.comboBox\n and event.modifiers() == Qt.ControlModifier)):\n\n index = self._comboboxListView.currentIndex()\n if index.isValid():\n row = index.row()\n logger.debug(\"Removing item {} from the combobox: {}\"\n .format(row, self._comboboxListView.model().data(index)))\n self.cti.removeValueByIndex(row)\n self.comboBox.removeItem(row)\n return True\n\n # Calling parent event filter, which may filter out other events.\n return super(ChoiceCtiEditor, self).eventFilter(watchedObject, event)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef debug(self, topLeftIndex, bottomRightIndex):\n if topLeftIndex.isValid() and bottomRightIndex.isValid():\n topRow = topLeftIndex.row()\n bottomRow = 
bottomRightIndex.row()\n for row in range(topRow, bottomRow + 1):\n index = topLeftIndex.sibling(row, 0)\n childItem = self.getItem(index)\n logger.debug(\"Data changed in: {}\".format(childItem.nodePath))", "response": "Test the dataChanged signal."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the item flags for the given index.", "response": "def flags(self, index):\n \"\"\" Returns the item flags for the given index.\n \"\"\"\n if not index.isValid():\n return 0\n\n cti = self.getItem(index)\n\n result = Qt.ItemIsSelectable\n\n if cti.enabled:\n result |= Qt.ItemIsEnabled\n\n if index.column() == self.COL_VALUE:\n result |= cti.valueColumnItemFlags\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ninserts a top level group tree item.", "response": "def insertTopLevelGroup(self, groupName, position=None):\n \"\"\" Inserts a top level group tree item.\n Used to group all config nodes of (for instance) the current inspector,\n Returns the newly created CTI\n \"\"\"\n groupCti = GroupCti(groupName)\n return self._invisibleRootItem.insertChild(groupCti, position=position)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the data stored under the given role for the item.", "response": "def itemData(self, treeItem, column, role=Qt.DisplayRole):\n \"\"\" Returns the data stored under the given role for the item.\n \"\"\"\n if role == Qt.DisplayRole:\n if column == self.COL_NODE_NAME:\n return treeItem.nodeName\n elif column == self.COL_NODE_PATH:\n return treeItem.nodePath\n elif column == self.COL_VALUE:\n return treeItem.displayValue\n elif column == self.COL_DEF_VALUE:\n return treeItem.displayDefaultValue\n elif column == self.COL_CTI_TYPE:\n return type_name(treeItem)\n elif column == self.COL_DEBUG:\n return treeItem.debugInfo\n else:\n raise ValueError(\"Invalid column: {}\".format(column))\n\n elif role == Qt.EditRole:\n if 
column == self.COL_VALUE:\n return treeItem.data\n else:\n raise ValueError(\"Invalid column: {}\".format(column))\n\n elif role == Qt.ToolTipRole:\n if column == self.COL_NODE_NAME or column == self.COL_NODE_PATH:\n return treeItem.nodePath\n elif column == self.COL_VALUE:\n # Give Access to exact values. In particular in scientific-notation spin boxes\n return repr(treeItem.configValue)\n elif column == self.COL_DEF_VALUE:\n return treeItem.displayDefaultValue\n elif column == self.COL_CTI_TYPE:\n return type_name(treeItem)\n elif column == self.COL_DEBUG:\n return treeItem.debugInfo\n else:\n return None\n\n elif role == Qt.CheckStateRole:\n if column != self.COL_VALUE:\n # The CheckStateRole is called for each cell so return None here.\n return None\n else:\n return treeItem.checkState\n else:\n return super(ConfigTreeModel, self).itemData(treeItem, column, role=role)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the role data for the item at the given index.", "response": "def setItemData(self, treeItem, column, value, role=Qt.EditRole):\n \"\"\" Sets the role data for the item at index to value.\n \"\"\"\n if role == Qt.CheckStateRole:\n if column != self.COL_VALUE:\n return False\n else:\n logger.debug(\"Setting check state (col={}): {!r}\".format(column, value))\n treeItem.checkState = value\n return True\n\n elif role == Qt.EditRole:\n if column != self.COL_VALUE:\n return False\n else:\n logger.debug(\"Set Edit value (col={}): {!r}\".format(column, value))\n treeItem.data = value\n return True\n else:\n raise ValueError(\"Unexpected edit role: {}\".format(role))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the expanded flag of the item specified by the index.", "response": "def setExpanded(self, index, expanded):\n \"\"\" Expands the model item specified by the index.\n Overridden from QTreeView to make it persistent (between inspector changes).\n \"\"\"\n if 
index.isValid():\n item = self.getItem(index)\n item.expanded = expanded"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the first column model index and last column model index for a configTreeItem", "response": "def indexTupleFromItem(self, treeItem): # TODO: move to BaseTreeItem?\n \"\"\" Return (first column model index, last column model index) tuple for a configTreeItem\n \"\"\"\n if not treeItem:\n return (QtCore.QModelIndex(), QtCore.QModelIndex())\n\n if not treeItem.parentItem: # TODO: only necessary because of childNumber?\n return (QtCore.QModelIndex(), QtCore.QModelIndex())\n\n # Is there a bug in Qt in QStandardItemModel::indexFromItem?\n # It passes the parent in createIndex. TODO: investigate\n\n row = treeItem.childNumber()\n return (self.createIndex(row, 0, treeItem),\n self.createIndex(row, self.columnCount() - 1, treeItem))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nemits the data changed for the model indices for this treeItem", "response": "def emitDataChanged(self, treeItem): # TODO: move to BaseTreeItem?\n \"\"\" Emits the data changed for the model indices (all columns) for this treeItem\n \"\"\"\n indexLeft, indexRight = self.indexTupleFromItem(treeItem)\n checkItem = self.getItem(indexLeft)\n assert checkItem is treeItem, \"{} != {}\".format(checkItem, treeItem) # TODO: remove\n self.dataChanged.emit(indexLeft, indexRight)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setRefreshBlocked(self, blocked):\n wasBlocked = self._refreshBlocked\n logger.debug(\"Setting refreshBlocked from {} to {}\".format(wasBlocked, blocked))\n self._refreshBlocked = blocked\n return wasBlocked", "response": "Set to True to indicate that the configuration should not be updated."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nuses numpy. 
load to open the underlying file", "response": "def _openResources(self):\n \"\"\" Uses numpy.load to open the underlying file\n \"\"\"\n arr = np.load(self._fileName, allow_pickle=ALLOW_PICKLE)\n check_is_an_array(arr)\n self._array = arr"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nusing numpy. load to open the underlying file", "response": "def _openResources(self):\n \"\"\" Uses numpy.load to open the underlying file\n \"\"\"\n dct = np.load(self._fileName, allow_pickle=ALLOW_PICKLE)\n check_class(dct, NpzFile)\n self._dictionary = dct"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef data(self, data):\n # Descendants should convert the data to the desired type here\n self._data = self._enforceDataType(data)\n\n #logger.debug(\"BoolCti.setData: {} for {}\".format(data, self))\n enabled = self.enabled\n self.enableBranch(enabled and self.data != self.childrenDisabledValue)\n self.enabled = enabled", "response": "Sets the data of this item."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef insertChild(self, childItem, position=None):\n childItem = super(BoolCti, self).insertChild(childItem, position=None)\n\n enableChildren = self.enabled and self.data != self.childrenDisabledValue\n #logger.debug(\"BoolCti.insertChild: {} enableChildren={}\".format(childItem, enableChildren))\n childItem.enableBranch(enableChildren)\n childItem.enabled = enableChildren\n return childItem", "response": "Inserts a child item into the current item."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef checkState(self):\n if self.data is True:\n return Qt.Checked\n elif self.data is False:\n return Qt.Unchecked\n else:\n raise ValueError(\"Unexpected data: {!r}\".format(self.data))", "response": "Returns Qt. Checked or Qt. 
Unchecked."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the data to given a Qt. CheckState.", "response": "def checkState(self, checkState):\n \"\"\" Sets the data to given a Qt.CheckState (Qt.Checked or Qt.Unchecked).\n \"\"\"\n if checkState == Qt.Checked:\n logger.debug(\"BoolCti.checkState setting to True\")\n self.data = True\n elif checkState == Qt.Unchecked:\n logger.debug(\"BoolCti.checkState setting to False\")\n self.data = False\n else:\n raise ValueError(\"Unexpected check state: {!r}\".format(checkState))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the enabled member to True or False for a node and all its children.", "response": "def enableBranch(self, enabled):\n \"\"\" Sets the enabled member to True or False for a node and all it's children\n \"\"\"\n self.enabled = enabled\n\n # Disabled children and further descendants\n enabled = enabled and self.data != self.childrenDisabledValue\n\n for child in self.childItems:\n child.enableBranch(enabled)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck if all children are checked or unchecked and returns Qt. Checked if all children are unchecked else returns Qt. 
PartiallyChecked", "response": "def checkState(self):\n \"\"\" Returns Qt.Checked or Qt.Unchecked if all children are checked or unchecked, else\n returns Qt.PartiallyChecked\n \"\"\"\n #commonData = self.childItems[0].data if self.childItems else Qt.PartiallyChecked\n commonData = None\n\n for child in self.childItems:\n if isinstance(child, BoolCti):\n if commonData is not None and child.data != commonData:\n return Qt.PartiallyChecked\n commonData = child.data\n\n if commonData is True:\n return Qt.Checked\n elif commonData is False:\n return Qt.Unchecked\n else:\n raise AssertionError(\"Please report this bug: commonData: {!r}\".format(commonData))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the data to given a Qt. CheckState.", "response": "def checkState(self, checkState):\n \"\"\" Sets the data to given a Qt.CheckState (Qt.Checked or Qt.Unchecked).\n \"\"\"\n logger.debug(\"checkState setter: {}\".format(checkState))\n if checkState == Qt.Checked:\n commonData = True\n elif checkState == Qt.Unchecked:\n commonData = False\n elif checkState == Qt.PartiallyChecked:\n commonData = None\n # This never occurs, see remarks above in the classes' docstring\n assert False, \"This never happens. 
Please report if it does.\"\n else:\n raise ValueError(\"Unexpected check state: {!r}\".format(checkState))\n\n for child in self.childItems:\n if isinstance(child, BoolCti):\n child.data = commonData"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a BaseRti given a file name.", "response": "def createFromFileName(cls, fileName):\n \"\"\" Creates a BaseRti (or descendant), given a file name.\n \"\"\"\n # See https://julien.danjou.info/blog/2013/guide-python-static-class-abstract-methods\n #logger.debug(\"Trying to create object of class: {!r}\".format(cls))\n basename = os.path.basename(os.path.realpath(fileName)) # strips trailing slashes\n return cls(nodeName=basename, fileName=fileName)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nopening underlying resources and sets isOpen flag.", "response": "def open(self):\n \"\"\" Opens underlying resources and sets isOpen flag.\n It calls _openResources. Descendants should usually override the latter\n function instead of this one.\n \"\"\"\n self.clearException()\n try:\n if self._isOpen:\n logger.warn(\"Resources already open. 
Closing them first before opening.\")\n self._closeResources()\n self._isOpen = False\n\n assert not self._isOpen, \"Sanity check failed: _isOpen should be false\"\n logger.debug(\"Opening {}\".format(self))\n self._openResources()\n self._isOpen = True\n\n if self.model:\n self.model.sigItemChanged.emit(self)\n else:\n logger.warning(\"Model not set yet: {}\".format(self))\n\n except Exception as ex:\n if DEBUGGING:\n raise\n logger.exception(\"Error during tree item open: {}\".format(ex))\n self.setException(ex)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nclosing underlying resources and un - sets the isOpen flag.", "response": "def close(self):\n \"\"\" Closes underlying resources and un-sets the isOpen flag.\n Any exception that occurs is caught and put in the exception property.\n This method calls _closeResources, which does the actual resource cleanup. Descendants\n should typically override the latter instead of this one.\n \"\"\"\n self.clearException()\n try:\n if self._isOpen:\n logger.debug(\"Closing {}\".format(self))\n self._closeResources()\n self._isOpen = False\n else:\n logger.debug(\"Resources already closed (ignored): {}\".format(self))\n\n if self.model:\n self.model.sigItemChanged.emit(self)\n else:\n logger.warning(\"Model not set yet: {}\".format(self))\n\n except Exception as ex:\n if DEBUGGING:\n raise\n logger.error(\"Error during tree item close: {}\".format(ex))\n self.setException(ex)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _checkFileExists(self):\n if self._fileName and not os.path.exists(self._fileName):\n msg = \"File not found: {}\".format(self._fileName)\n logger.error(msg)\n self.setException(IOError(msg))\n return False\n else:\n return True", "response": "Checks that the underlying file exists and sets the _exception attribute if not."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate child 
items and returns them.", "response": "def fetchChildren(self):\n \"\"\" Creates child items and returns them.\n Opens the tree item first if it's not yet open.\n \"\"\"\n assert self._canFetchChildren, \"canFetchChildren must be True\"\n try:\n self.clearException()\n\n if not self.isOpen:\n self.open() # Will set self._exception in case of failure\n\n if not self.isOpen:\n logger.warn(\"Opening item failed during fetch (aborted)\")\n return [] # no need to continue if opening failed.\n\n childItems = []\n try:\n childItems = self._fetchAllChildren()\n assert is_a_sequence(childItems), \"ChildItems must be a sequence\"\n\n except Exception as ex:\n # This can happen, for example, when a NCDF/HDF5 file contains data types that\n # are not supported by the Python library that is used to read them.\n if DEBUGGING:\n raise\n logger.error(\"Unable fetch tree item children: {}\".format(ex))\n self.setException(ex)\n\n return childItems\n finally:\n self._canFetchChildren = False"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef decoration(self):\n rtiIconFactory = RtiIconFactory.singleton()\n\n if self._exception:\n return rtiIconFactory.getIcon(rtiIconFactory.ERROR, isOpen=False,\n color=rtiIconFactory.COLOR_ERROR)\n else:\n return rtiIconFactory.getIcon(self.iconGlyph, isOpen=not self.canFetchChildren(),\n color=self.iconColor)", "response": "The displayed icon.\n\n Shows open icon when node was visited (children are fetched). 
This allows users\n for instance to collapse a directory node but still see that it was visited, which\n may be useful if there is a huge list of directories."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _drawContents(self, currentRti=None):\n table = self.table\n table.setUpdatesEnabled(False)\n\n sizeAlignment = Qt.AlignRight | Qt.AlignVCenter\n\n try:\n table.clearContents()\n verticalHeader = table.verticalHeader()\n verticalHeader.setSectionResizeMode(QtWidgets.QHeaderView.Fixed)\n\n if currentRti is None or not currentRti.isSliceable:\n return\n\n nDims = currentRti.nDims\n dimNames = currentRti.dimensionNames\n dimGroups = currentRti.dimensionGroupPaths\n dimSizes = currentRti.arrayShape\n\n # Sanity check\n assert len(dimNames) == nDims, \"dimNames size {} != {}\".format(len(dimNames), nDims)\n assert len(dimGroups) == nDims, \"dimGroups size {} != {}\".format(len(dimGroups), nDims)\n assert len(dimSizes) == nDims, \"dimSizes size {} != {}\".format(len(dimSizes), nDims)\n\n table.setRowCount(nDims)\n for row, (dimName, dimSize, dimGroup) in enumerate(zip(dimNames, dimSizes, dimGroups)):\n table.setItem(row, self.COL_NAME, QtWidgets.QTableWidgetItem(dimName))\n table.setItem(row, self.COL_SIZE, QtWidgets.QTableWidgetItem(str(dimSize)))\n table.item(row, self.COL_SIZE).setTextAlignment(sizeAlignment)\n table.setItem(row, self.COL_GROUP, QtWidgets.QTableWidgetItem(str(dimGroup)))\n table.resizeRowToContents(row)\n\n verticalHeader.setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents)\n\n finally:\n table.setUpdatesEnabled(True)", "response": "Draws the attributes of the currentRTI and the currentRTI."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef format_float(value): # not used\n string = \"{:g}\".format(value).replace(\"e+\", \"e\")\n string = re.sub(\"e(-?)0*(\\d+)\", r\"e\\1\\2\", string)\n return string", "response": "Modified form of the g format 
specifier.\n "} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the number of small steps that go in a large one.", "response": "def smallStepsPerLargeStep(self, smallStepsPerLargeStep):\n \"\"\" Sets the number of small steps that go in a large one.\n\n \"\"\"\n self._smallStepsPerLargeStep = smallStepsPerLargeStep\n self._smallStepFactor = np.power(self.largeStepFactor, 1.0 / smallStepsPerLargeStep)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfunction that is called whenever the user presses a key.", "response": "def stepBy(self, steps):\n \"\"\" Function that is called whenever the user triggers a step. The steps parameter\n indicates how many steps were taken, e.g. Pressing Qt::Key_Down will trigger a call to\n stepBy(-1), whereas pressing Qt::Key_Prior will trigger a call to stepBy(10).\n \"\"\"\n oldValue = self.value()\n\n if oldValue == 0:\n newValue = steps\n elif steps == 1:\n newValue = self.value() * self.smallStepFactor\n elif steps == -1:\n newValue = self.value() / self.smallStepFactor\n elif steps == 10:\n newValue = self.value() * self.largeStepFactor\n elif steps == -10:\n newValue = self.value() / self.largeStepFactor\n else:\n raise ValueError(\"Invalid step size: {!r}, value={}\".format(steps, oldValue))\n\n newValue = float(newValue)\n\n if newValue < self.minimum():\n newValue = self.minimum()\n\n if newValue > self.maximum():\n newValue = self.maximum()\n\n #logger.debug(\"stepBy {}: {} -> {}\".format(steps, oldValue, newValue))\n try:\n self.setValue(newValue)\n except Exception:\n # TODO: does this ever happen? Better validation (e.g. 
catch underflows)\n logger.warn(\"Unable to set spinbox to: {!r}\".format(newValue))\n self.setValue(oldValue)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nimport the module and sets version information.", "response": "def tryImportModule(self, name):\n \"\"\" Imports the module and sets version information\n If the module cannot be imported, the version is set to empty values.\n \"\"\"\n self._name = name\n try:\n import importlib\n self._module = importlib.import_module(name)\n except ImportError:\n self._module = None\n self._version = ''\n self._packagePath = ''\n else:\n if self._versionAttribute:\n self._version = getattr(self._module, self._versionAttribute, '???')\n if self._pathAttribute:\n self._packagePath = getattr(self._module, self._pathAttribute, '???')"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setPgConfigOptions(**kwargs):\n for key, value in kwargs.items():\n logger.debug(\"Setting PyQtGraph config option: {} = {}\".format(key, value))\n\n pg.setConfigOptions(**kwargs)", "response": "Sets the PyQtGraph config options and emits a log message\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef model(self):\n if self._model is None and self.parentItem is not None:\n self._model = self.parentItem.model\n return self._model", "response": "Returns the ConfigTreeModel this item belongs to."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef nodeName(self, nodeName):\n assert '/' not in nodeName, \"nodeName may not contain slashes\"\n self._nodeName = nodeName\n self._recursiveSetNodePath(self._constructNodePath())", "response": "The name of the node."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting the nodePath property and updates it for all children.", "response": "def _recursiveSetNodePath(self, nodePath):\n \"\"\" Sets the 
nodePath property and updates it for all children.\n \"\"\"\n self._nodePath = nodePath\n for childItem in self.childItems:\n childItem._recursiveSetNodePath(nodePath + '/' + childItem.nodeName)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parentItem(self, value):\n self._parentItem = value\n self._recursiveSetNodePath(self._constructNodePath())", "response": "Sets the parent item of the item"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the first direct child that has the nodeName.", "response": "def childByNodeName(self, nodeName):\n \"\"\" Gets first (direct) child that has the nodeName.\n \"\"\"\n assert '/' not in nodeName, \"nodeName can not contain slashes\"\n for child in self.childItems:\n if child.nodeName == nodeName:\n return child\n\n raise IndexError(\"No child item found having nodeName: {}\".format(nodeName))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfinding the child with the nodePath. Starts at self.", "response": "def findByNodePath(self, nodePath):\n \"\"\" Recursively searches for the child having the nodePath. Starts at self.\n \"\"\"\n def _auxGetByPath(parts, item):\n \"Aux function that does the actual recursive search\"\n #logger.debug(\"_auxGetByPath item={}, parts={}\".format(item, parts))\n\n if len(parts) == 0:\n return item\n\n head, tail = parts[0], parts[1:]\n if head == '':\n # Two consecutive slashes. 
Just go one level deeper.\n return _auxGetByPath(tail, item)\n else:\n childItem = item.childByNodeName(head)\n return _auxGetByPath(tail, childItem)\n\n # The actual body of findByNodePath starts here\n\n check_is_a_string(nodePath)\n assert not nodePath.startswith('/'), \"nodePath may not start with a slash\"\n\n if not nodePath:\n raise IndexError(\"Item not found: {!r}\".format(nodePath))\n\n return _auxGetByPath(nodePath.split('/'), self)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef insertChild(self, childItem, position=None):\n if position is None:\n position = self.nChildren()\n\n assert childItem.parentItem is None, \"childItem already has a parent: {}\".format(childItem)\n assert childItem._model is None, \"childItem already has a model: {}\".format(childItem)\n\n childItem.parentItem = self\n childItem.model = self.model\n self.childItems.insert(position, childItem)\n return childItem", "response": "Inserts a child item into the current item."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef removeChild(self, position):\n assert 0 <= position <= len(self.childItems), \\\n \"position should be 0 < {} <= {}\".format(position, len(self.childItems))\n\n self.childItems[position].finalize()\n self.childItems.pop(position)", "response": "Removes the child at the specified position."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef logBranch(self, indent=0, level=logging.DEBUG):\n if 0:\n print(indent * \" \" + str(self))\n else:\n logger.log(level, indent * \" \" + str(self))\n for childItems in self.childItems:\n childItems.logBranch(indent + 1, level=level)", "response": "Logs the item and all descendants one line per child"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef fetchChildren(self):\n assert self._canFetchChildren, 
\"canFetchChildren must be True\"\n try:\n childItems = self._fetchAllChildren()\n finally:\n self._canFetchChildren = False # Set to True, even if tried and failed.\n\n return childItems", "response": "Fetches children.\n\n The actual work is done by _fetchAllChildren. Descendant classes should typically\n override that method instead of this one."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate an inspector of the registered and passes the collector to the constructor.", "response": "def create(self, collector, tryImport=True):\n \"\"\" Creates an inspector of the registered and passes the collector to the constructor.\n Tries to import the class if tryImport is True.\n Raises ImportError if the class could not be imported.\n \"\"\"\n cls = self.getClass(tryImport=tryImport)\n if not self.successfullyImported:\n raise ImportError(\"Class not successfully imported: {}\".format(self.exception))\n return cls(collector)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nregister an Inspector class.", "response": "def registerInspector(self, fullName, fullClassName, pythonPath=''):\n \"\"\" Registers an Inspector class.\n \"\"\"\n regInspector = InspectorRegItem(fullName, fullClassName, pythonPath=pythonPath)\n self.registerItem(regInspector)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getDefaultItems(self):\n plugins = [\n InspectorRegItem(DEFAULT_INSPECTOR,\n 'argos.inspector.qtplugins.table.TableInspector'),\n InspectorRegItem('Qt/Text',\n 'argos.inspector.qtplugins.text.TextInspector'),\n InspectorRegItem('PyQtGraph/1D Line Plot',\n 'argos.inspector.pgplugins.lineplot1d.PgLinePlot1d'),\n InspectorRegItem('PyQtGraph/2D Image Plot',\n 'argos.inspector.pgplugins.imageplot2d.PgImagePlot2d'),\n ]\n if DEBUGGING:\n plugins.append(InspectorRegItem('Debug Inspector',\n 'argos.inspector.debug.DebugInspector'))\n return 
plugins", "response": "Returns a list with the default plugins in the inspector registry."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _createConfig(self):\n rootItem = MainGroupCti('debug inspector')\n\n if DEBUGGING:\n # Some test config items.\n import numpy as np\n from argos.config.untypedcti import UntypedCti\n from argos.config.stringcti import StringCti\n from argos.config.intcti import IntCti\n from argos.config.floatcti import FloatCti, SnFloatCti\n from argos.config.boolcti import BoolCti, BoolGroupCti\n from argos.config.choicecti import ChoiceCti\n from argos.config.qtctis import PenCti\n\n grpItem = GroupCti(\"group\")\n rootItem.insertChild(grpItem)\n\n lcItem = UntypedCti('line color', 123)\n grpItem.insertChild(lcItem)\n\n disabledItem = rootItem.insertChild(StringCti('disabled', \"Can't touch me\"))\n disabledItem.enabled=False\n\n grpItem.insertChild(IntCti('line-1 color', 7, minValue = -5, stepSize=2,\n prefix=\"@\", suffix=\"%\", specialValueText=\"I'm special\"))\n rootItem.insertChild(StringCti('letter', 'aa', maxLength = 1))\n grpItem.insertChild(FloatCti('width', 2, minValue =5, stepSize=0.45, decimals=3,\n prefix=\"@\", suffix=\"%\", specialValueText=\"so very special\"))\n grpItem.insertChild(SnFloatCti('scientific', defaultData=-np.inf))\n\n gridItem = rootItem.insertChild(BoolGroupCti('grid', True))\n gridItem.insertChild(BoolCti('X-Axis', True))\n gridItem.insertChild(BoolCti('Y-Axis', False))\n\n rootItem.insertChild(ChoiceCti('hobbit', 2, editable=True,\n configValues=['Frodo', 'Sam', 'Pippin', 'Merry']))\n myPen = QtGui.QPen(QtGui.QColor('#1C8857'))\n myPen.setWidth(2)\n myPen.setStyle(Qt.DashDotDotLine)\n rootItem.insertChild(PenCti('line', False, resetTo=myPen))\n\n return rootItem", "response": "Creates a config tree item containing default children."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function 
does\ndef _drawContents(self, reason=None, initiator=None):\n logger.debug(\"DebugInspector._drawContents: {}\".format(self))\n\n slicedArray = self.collector.getSlicedArray()\n if slicedArray is None:\n text = \"\"\n else:\n text = (\"data = {!r}, masked = {!r}, fill_value = {!r} (?= {}: {})\"\n .format(slicedArray.data, slicedArray.mask, slicedArray.fill_value,\n slicedArray.data.item(),\n slicedArray.data.item() == slicedArray.fill_value))\n\n logger.debug(\"_drawContents: {}\".format(text))\n logger.debug(\"_drawContents: {!r}\".format(slicedArray))\n\n if DEBUGGING:\n self.label.setText(text)", "response": "Draw the table contents from the sliced array of the collected repo tree item."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef data(self, index, role=Qt.DisplayRole):\n try:\n if index.isValid():\n item = self.getItem(index, altItem=self.invisibleRootItem)\n return self.itemData(item, index.column(), role=role)\n else:\n return None\n except Exception as ex:\n # This Qt slot is called directly from the event loop so uncaught exception make the\n # application crash (exceptions can come from plugins here). 
Instead of crashing we\n # show the error message in the table/tree and hope the users report the error.\n if not DEBUGGING and role in (Qt.DisplayRole, Qt.EditRole, Qt.ToolTipRole,\n Qt.StatusTipRole, Qt.WhatsThisRole):\n return repr(ex)\n else:\n raise", "response": "Returns the data stored under the given role for the item referred to by the index."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef itemData(self, item, column, role=Qt.DisplayRole):\n if role == Qt.DecorationRole:\n if column == self.COL_DECORATION:\n return item.decoration\n\n elif role == Qt.FontRole:\n return item.font\n\n elif role == Qt.ForegroundRole:\n return item.foregroundBrush\n\n elif role == Qt.BackgroundRole:\n return item.backgroundBrush\n\n elif role == Qt.SizeHintRole:\n return self.cellSizeHint if item.sizeHint is None else item.sizeHint\n\n return None", "response": "Returns the data stored under the given role for the item."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the index of the item in the model specified by the given row column and parentIndex.", "response": "def index(self, row, column, parentIndex=QtCore.QModelIndex()):\n \"\"\" Returns the index of the item in the model specified by the given row, column and parent\n index.\n\n Since each item contains information for an entire row of data, we create a model index\n to uniquely identify it by calling createIndex() it with the row and column numbers and\n a pointer to the item. 
(In the data() function, we will use the item pointer and column\n number to access the data associated with the model index; in this model, the row number\n is not needed to identify data.)\n\n When reimplementing this function in a subclass, call createIndex() to generate\n model indexes that other components can use to refer to items in your model.\n \"\"\"\n# logger.debug(\" called index({}, {}, {}) {}\"\n# .format(parentIndex.row(), parentIndex.column(), parentIndex.isValid(),\n# parentIndex.isValid() and parentIndex.column() != 0))\n\n parentItem = self.getItem(parentIndex, altItem=self.invisibleRootItem)\n #logger.debug(\" Getting row {} from parentItem: {}\".format(row, parentItem))\n\n if not (0 <= row < parentItem.nChildren()):\n # Can happen when deleting the last child.\n #logger.warn(\"Index row {} invalid for parent item: {}\".format(row, parentItem))\n return QtCore.QModelIndex()\n\n if not (0 <= column < self.columnCount()):\n #logger.warn(\"Index column {} invalid for parent item: {}\".format(column, parentItem))\n return QtCore.QModelIndex()\n\n childItem = parentItem.child(row)\n if childItem:\n return self.createIndex(row, column, childItem)\n else:\n logger.warn(\"No child item found at row {} for parent item: {}\".format(row, parentItem))\n return QtCore.QModelIndex()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the parent of the model item with the given index.", "response": "def parent(self, index):\n \"\"\" Returns the parent of the model item with the given index. If the item has no parent,\n an invalid QModelIndex is returned.\n\n A common convention used in models that expose tree data structures is that only items\n in the first column have children. For that case, when reimplementing this function in\n a subclass the column of the returned QModelIndex would be 0. 
(This is done here.)\n\n When reimplementing this function in a subclass, be careful to avoid calling QModelIndex\n member functions, such as QModelIndex.parent(), since indexes belonging to your model\n will simply call your implementation, leading to infinite recursion.\n \"\"\"\n if not index.isValid():\n return QtCore.QModelIndex()\n\n childItem = self.getItem(index, altItem=self.invisibleRootItem)\n parentItem = childItem.parentItem\n\n if parentItem == self.invisibleRootItem:\n return QtCore.QModelIndex()\n\n return self.createIndex(parentItem.childNumber(), 0, parentItem)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the number of rows under the given parent.", "response": "def rowCount(self, parentIndex=QtCore.QModelIndex()):\n \"\"\" Returns the number of rows under the given parent. When the parent is valid it means\n that rowCount is returning the number of children of parent.\n\n Note: When implementing a table based model, rowCount() should return 0 when the parent\n is valid.\n \"\"\"\n parentItem = self.getItem(parentIndex, altItem=self.invisibleRootItem)\n return parentItem.nChildren()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef hasChildren(self, parentIndex=QtCore.QModelIndex()):\n parentItem = self.getItem(parentIndex, altItem=self.invisibleRootItem)\n return parentItem.hasChildren()", "response": "Returns true if the parent has any children ; otherwise returns false."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef setData(self, index, value, role=Qt.EditRole):\n if role != Qt.CheckStateRole and role != Qt.EditRole:\n return False\n\n treeItem = self.getItem(index, altItem=self.invisibleRootItem)\n try:\n result = self.setItemData(treeItem, index.column(), value, role=role)\n if result:\n # Emit dataChanged to update the tree view\n # TODO, update the entire tree?\n # A 
check box can have a tristate checkbox as parent whose state depends\n                # on the state of this child check box. Therefore we update the parentIndex\n                # and the descendants.\n                self.emitDataChanged(treeItem)\n\n                # Emit sigItemChanged to update other widgets.\n                self.sigItemChanged.emit(treeItem)\n            return result\n\n        except Exception as ex:\n            # When does this still happen? Can we remove it?\n            logger.warn(\"Unable to set data: {}\".format(ex))\n            if DEBUGGING:\n                raise\n            return False", "response": "Sets the role data for the item at the given index to value. Returns true if successful otherwise returns false."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the TreeItem for the given index.", "response": "def getItem(self, index, altItem=None):\n        \"\"\" Returns the TreeItem for the given index. Returns the altItem if the index is invalid.\n        \"\"\"\n        if index.isValid():\n            item = index.internalPointer()\n            if item:\n                return item\n\n        #return altItem if altItem is not None else self.invisibleRootItem # TODO: remove\n        return altItem"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef insertItem(self, childItem, position=None, parentIndex=None):\n        if parentIndex is None:\n            parentIndex=QtCore.QModelIndex()\n\n        parentItem = self.getItem(parentIndex, altItem=self.invisibleRootItem)\n\n        nChildren = parentItem.nChildren()\n        if position is None:\n            position = nChildren\n\n        assert 0 <= position <= nChildren, \\\n            \"position should be 0 <= {} <= {}\".format(position, nChildren)\n\n        self.beginInsertRows(parentIndex, position, position)\n        try:\n            parentItem.insertChild(childItem, position)\n        finally:\n            self.endInsertRows()\n\n        childIndex = self.index(position, 0, parentIndex)\n        assert childIndex.isValid(), \"Sanity check failed: childIndex not valid\"\n        return childIndex", "response": "Inserts a childItem at the specified position under the parent index."} {"SOURCE": "codesearchnet", "instruction": "Given the 
following Python 3 function, write the documentation\ndef removeAllChildrenAtIndex(self, parentIndex):\n if not parentIndex.isValid():\n logger.debug(\"No valid item selected for deletion (ignored).\")\n return\n\n parentItem = self.getItem(parentIndex, None)\n logger.debug(\"Removing children of {!r}\".format(parentItem))\n assert parentItem, \"parentItem not found\"\n\n #firstChildRow = self.index(0, 0, parentIndex).row()\n #lastChildRow = self.index(parentItem.nChildren()-1, 0, parentIndex).row()\n #logger.debug(\"Removing rows: {} to {}\".format(firstChildRow, lastChildRow))\n #self.beginRemoveRows(parentIndex, firstChildRow, lastChildRow)\n\n self.beginRemoveRows(parentIndex, 0, parentItem.nChildren()-1)\n try:\n parentItem.removeAllChildren()\n finally:\n self.endRemoveRows()\n\n logger.debug(\"removeAllChildrenAtIndex completed\")", "response": "Removes all children of the item at the parentIndex."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef deleteItemAtIndex(self, itemIndex):\n if not itemIndex.isValid():\n logger.debug(\"No valid item selected for deletion (ignored).\")\n return\n\n item = self.getItem(itemIndex, \"\")\n logger.debug(\"deleteItemAtIndex: removing {}\".format(item))\n\n parentIndex = itemIndex.parent()\n parentItem = self.getItem(parentIndex, altItem=self.invisibleRootItem)\n row = itemIndex.row()\n self.beginRemoveRows(parentIndex, row, row)\n try:\n parentItem.removeChild(row)\n finally:\n self.endRemoveRows()\n\n logger.debug(\"deleteItemAtIndex completed\")", "response": "Removes the item at the itemIndex."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove the item at the itemIndex and inserts a new item instead. 
Returns the index of the item inserted.", "response": "def replaceItemAtIndex(self, newItem, oldItemIndex):\n        \"\"\" Removes the item at the itemIndex and inserts a new item instead.\n        \"\"\"\n        oldItem = self.getItem(oldItemIndex)\n        childNumber = oldItem.childNumber()\n        parentIndex = oldItemIndex.parent()\n        self.deleteItemAtIndex(oldItemIndex)\n        insertedIndex = self.insertItem(newItem, position=childNumber, parentIndex=parentIndex)\n        return insertedIndex"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding the top level ancestor item of the given child item.", "response": "def findTopLevelItemIndex(self, childIndex):\n        \"\"\" Traverses the tree upwards from childItem until its top level ancestor item is found.\n            Top level items are items that are direct children of the (invisible) root item.\n            This function therefore raises an exception when called with the root item.\n        \"\"\"\n        if self.isTopLevelIndex(childIndex):\n            return childIndex\n        else:\n            return self.findTopLevelItemIndex(childIndex.parent())"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef findItemAndIndexPath(self, path, startIndex=None):\n        def _getIndexAndItemByName(nodeName, parentItem, parentIndex):\n            \"\"\" Searches the parent for a direct child having the nodeName.\n                Returns (item, itemIndex) tuple. 
Raises IndexError if the item cannot be found.\n \"\"\"\n if self.canFetchMore(parentIndex):\n self.fetchMore(parentIndex)\n\n for rowNr, childItem in enumerate(parentItem.childItems):\n if childItem.nodeName == nodeName:\n childIndex = self.index(rowNr, 0, parentIndex=parentIndex)\n return (childItem, childIndex)\n raise IndexError(\"Item not found: {!r}\".format(path))\n\n\n def _auxGetByPath(parts, item, index):\n \"Aux function that does the actual recursive search\"\n #logger.debug(\"_auxGetByPath item={}, parts={}\".format(item, parts))\n\n if len(parts) == 0:\n return [(item, index)]\n\n head, tail = parts[0], parts[1:]\n if head == '':\n # Two consecutive slashes. Just go one level deeper.\n return _auxGetByPath(tail, item, index)\n else:\n childItem, childIndex = _getIndexAndItemByName(head, item, index)\n return [(item, index)] + _auxGetByPath(tail, childItem, childIndex)\n\n # The actual body of findItemAndIndexPath starts here\n\n check_is_a_string(path)\n if not path:\n raise IndexError(\"Item not found: {!r}\".format(path))\n\n if startIndex is None or path.startswith('/'):\n startIndex = QtCore.QModelIndex()\n startItem = self.invisibleRootItem\n else:\n startItem = self.getItem(startIndex, None)\n\n if not startItem:\n raise IndexError(\"Item not found: {!r}. 
No start item!\".format(path))\n\n return _auxGetByPath(path.split('/'), startItem, startIndex)", "response": "Searches all the model recursively for an item where the path is given."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nlogging the items in this item.", "response": "def logItems(self, level=logging.DEBUG):\n \"\"\" rootItem\n \"\"\"\n rootItem = self.rootItem()\n if rootItem is None:\n logger.debug(\"No items in: {}\".format(self))\n else:\n rootItem.logBranch(level=level)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd the context menu from using header information.", "response": "def addHeaderContextMenu(self, checked = None, checkable = None, enabled = None):\n \"\"\" Adds the context menu from using header information\n\n checked can be a header_name -> boolean dictionary. If given, headers\n with the key name will get the checked value from the dictionary.\n The corresponding column will be hidden if checked is False.\n\n checkable can be a header_name -> boolean dictionary. If given, header actions\n with the key name will get the checkable value from the dictionary. (Default True)\n\n enabled can be a header_name -> boolean dictionary. If given, header actions\n with the key name will get the enabled value from the dictionary. 
(Default True)\n \"\"\"\n checked = checked if checked is not None else {}\n checkable = checkable if checkable is not None else {}\n enabled = enabled if enabled is not None else {}\n\n horizontal_header = self.horizontalHeader()\n horizontal_header.setContextMenuPolicy(Qt.ActionsContextMenu)\n\n self.toggle_column_actions_group = QtWidgets.QActionGroup(self)\n self.toggle_column_actions_group.setExclusive(False)\n self.__toggle_functions = [] # for keeping references\n\n for col in range(horizontal_header.count()):\n column_label = self.model().headerData(col, Qt.Horizontal, Qt.DisplayRole)\n #logger.debug(\"Adding: col {}: {}\".format(col, column_label))\n action = QtWidgets.QAction(str(column_label),\n self.toggle_column_actions_group,\n checkable = checkable.get(column_label, True),\n enabled = enabled.get(column_label, True),\n toolTip = \"Shows or hides the {} column\".format(column_label))\n func = self.__makeShowColumnFunction(col)\n self.__toggle_functions.append(func) # keep reference\n horizontal_header.addAction(action)\n is_checked = checked.get(column_label, not horizontal_header.isSectionHidden(col))\n horizontal_header.setSectionHidden(col, not is_checked)\n action.setChecked(is_checked)\n action.toggled.connect(func)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef __makeShowColumnFunction(self, column_idx):\n show_column = lambda checked: self.setColumnHidden(column_idx, not checked)\n return show_column", "response": "Creates a function that shows or hides a column."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef readViewSettings(self, key, settings=None):\n #logger.debug(\"Reading view settings for: {}\".format(key))\n if settings is None:\n settings = QtCore.QSettings()\n\n horizontal_header = self.horizontalHeader()\n header_restored = horizontal_header.restoreState(settings.value(key))\n\n # update actions\n for col, action 
in enumerate(horizontal_header.actions()):\n is_checked = not horizontal_header.isSectionHidden(col)\n action.setChecked(is_checked)\n\n return header_restored", "response": "Reads the persistent program settings for the specified key and returns True if the header state was restored otherwise returns False"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef saveProfile(self, key, settings=None):\n #logger.debug(\"Writing view settings for: {}\".format(key))\n if settings is None:\n settings = QtCore.QSettings()\n settings.setValue(key, self.horizontalHeader().saveState())", "response": "Writes the view settings to the persistent store."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ntry to import the registered class. Will set the exception property if and error occurred.", "response": "def tryImportClass(self):\n \"\"\" Tries to import the registered class.\n Will set the exception property if and error occurred.\n \"\"\"\n logger.info(\"Importing: {}\".format(self.fullClassName))\n self._triedImport = True\n self._exception = None\n self._cls = None\n try:\n for pyPath in self.pythonPath.split(':'):\n if pyPath and pyPath not in sys.path:\n logger.debug(\"Appending {!r} to the PythonPath.\".format(pyPath))\n sys.path.append(pyPath)\n self._cls = import_symbol(self.fullClassName) # TODO: check class?\n except Exception as ex:\n self._exception = ex\n logger.warn(\"Unable to import {!r}: {}\".format(self.fullClassName, ex))\n if DEBUGGING:\n raise"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getClass(self, tryImport=True):\n if not self.triedImport and tryImport:\n self.tryImportClass()\n\n return self._cls", "response": "Gets the underlying class. 
Tries to import if tryImport is True."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a ClassRegItem object to the registry.", "response": "def registerItem(self, regItem):\n        \"\"\" Adds a ClassRegItem object to the registry.\n        \"\"\"\n        check_class(regItem, ClassRegItem)\n        if regItem.identifier in self._index:\n            oldRegItem = self._index[regItem.identifier]\n            logger.warn(\"Class key {!r} already registered as {}. Removing old regItem.\"\n                        .format(regItem.identifier, oldRegItem.fullClassName))\n            self.removeItem(oldRegItem)\n\n        logger.info(\"Registering {!r} with {}\".format(regItem.identifier, regItem.fullClassName))\n        self._items.append(regItem)\n        self._index[regItem.identifier] = regItem"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef removeItem(self, regItem):\n        check_class(regItem, ClassRegItem)\n        logger.info(\"Removing {!r} containing {}\".format(regItem.identifier, regItem.fullClassName))\n\n        del self._index[regItem.identifier]\n        idx = self._items.index(regItem)\n        del self._items[idx]", "response": "Removes a ClassRegItem object from the registry."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads the persistent settings store, falls back on the default plugins if there are no settings in the store.", "response": "def loadOrInitSettings(self, groupName=None):\n        \"\"\" Reads the registry items from the persistent settings store, falls back on the\n            default plugins if there are no settings in the store for this registry.\n        \"\"\"\n        groupName = groupName if groupName else self.settingsGroupName\n        settings = QtCore.QSettings()\n\n        #for key in sorted(settings.allKeys()):\n        #    print(key)\n\n        if containsSettingsGroup(groupName, settings):\n            self.loadSettings(groupName)\n        else:\n            logger.info(\"Group {!r} not found, falling back on default settings\".format(groupName))\n            for item in self.getDefaultItems():\n                
self.registerItem(item)\n self.saveSettings(groupName)\n assert containsSettingsGroup(groupName, settings), \\\n \"Sanity check failed. {} not found\".format(groupName)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef loadSettings(self, groupName=None):\n groupName = groupName if groupName else self.settingsGroupName\n settings = QtCore.QSettings()\n logger.info(\"Reading {!r} from: {}\".format(groupName, settings.fileName()))\n\n settings.beginGroup(groupName)\n self.clear()\n try:\n for key in settings.childKeys():\n if key.startswith('item'):\n dct = ast.literal_eval(settings.value(key))\n regItem = self._itemClass.createFromDict(dct)\n self.registerItem(regItem)\n finally:\n settings.endGroup()", "response": "Reads the registry items from the persistent settings store."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef saveSettings(self, groupName=None):\n groupName = groupName if groupName else self.settingsGroupName\n settings = QtCore.QSettings()\n logger.info(\"Saving {} to: {}\".format(groupName, settings.fileName()))\n\n settings.remove(groupName) # start with a clean slate\n settings.beginGroup(groupName)\n try:\n for itemNr, item in enumerate(self.items):\n key = \"item-{:03d}\".format(itemNr)\n value = repr(item.asDict())\n settings.setValue(key, value)\n finally:\n settings.endGroup()", "response": "Writes the items into the persistent settings store."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef deleteSettings(self, groupName=None):\n groupName = groupName if groupName else self.settingsGroupName\n settings = QtCore.QSettings()\n logger.info(\"Deleting {} from: {}\".format(groupName, settings.fileName()))\n removeSettingsGroup(groupName)", "response": "Deletes the registry items from the persistent store."} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 script to\nbe called when the dock widget that this pane contains becomes (in)visible.", "response": "def dockVisibilityChanged(self, visible):\n        \"\"\" Slot to be called when the dock widget that this pane contains becomes (in)visible.\n            Is used to (dis)connect the pane from its repo tree view and so prevent unnecessary\n            and potentially costly updates when the pane is hidden.\n        \"\"\"\n        logger.debug(\"dockVisibilityChanged of {!r}: visible={}\".format(self, visible))\n\n        if visible:\n            self._repoTreeView.sigRepoItemChanged.connect(self.repoItemChanged)\n            self._isConnected = True\n\n            currentRepoItem, _currentIndex = self._repoTreeView.getCurrentItem()\n            self.repoItemChanged(currentRepoItem)\n        else:\n            # At start-up the pane may be hidden but the signals are not connected.\n            # A disconnect would fail in that case so we test for isConnected == True.\n            if self.isConnected:\n                self._repoTreeView.sigRepoItemChanged.disconnect(self.repoItemChanged)\n\n            self._isConnected = False\n            self.errorWidget.setError(msg=\"Contents disabled\", title=\"Error\")\n            self.setCurrentIndex(self.ERROR_PAGE_IDX)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupdating the content when the current repo tree item changes.", "response": "def repoItemChanged(self, rti):\n        \"\"\" Updates the content when the current repo tree item changes.\n            The rti parameter can be None when no RTI is selected in the repository tree.\n        \"\"\"\n        check_class(rti, (BaseRti, int), allow_none=True)\n        assert type(rti) != int, \"rti: {}\".format(rti)\n        try:\n            self._drawContents(rti)\n            self.setCurrentIndex(self.CONTENTS_PAGE_IDX)\n        except Exception as ex:\n            if DEBUGGING:\n                raise\n            logger.exception(ex)\n            self.errorWidget.setError(msg=str(ex), title=get_class_name(ex))\n            self.setCurrentIndex(self.ERROR_PAGE_IDX)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding the given set-inspector actions to the given inspector menu.", "response": "def 
addInspectorActionsToMenu(inspectorMenu, execInspectorDialogAction, inspectorActionGroup):\n    \"\"\" Adds menu items to the inspectorMenu for the given set-inspector actions.\n\n        :param inspectorMenu: inspector menu that will be modified\n        :param execInspectorDialogAction: the \"Browse Inspectors...\" action\n        :param inspectorActionGroup: action group with actions for selecting a new inspector\n        :return: the inspectorMenu, which has been modified.\n    \"\"\"\n    inspectorMenu.addAction(execInspectorDialogAction)\n    inspectorMenu.addSeparator()\n\n    for action in inspectorActionGroup.actions():\n        inspectorMenu.addAction(action)\n\n    return inspectorMenu"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nupdates the label and menuButton from the full name of the InspectorRegItem", "response": "def updateFromInspectorRegItem(self, inspectorRegItem):\n        \"\"\" Updates the label from the full name of the InspectorRegItem\n        \"\"\"\n        library, name = inspectorRegItem.splitName()\n        label = \"{} ({})\".format(name, library) if library else name\n        #self.label.setText(label)\n        self.menuButton.setText(label)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a StringCtiEditor object.", "response": "def createEditor(self, delegate, parent, option):\n        \"\"\" Creates a StringCtiEditor.\n            For the parameters see the AbstractCti constructor documentation.\n        \"\"\"\n        return StringCtiEditor(self, delegate, parent=parent)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the Y range on the histogram plot. This disables auto-scaling.", "response": "def setHistogramRange(self, mn, mx, padding=0.1):\n        \"\"\"Set the Y range on the histogram plot. 
This disables auto-scaling.\"\"\"\n        self.vb.enableAutoRange(self.vb.YAxis, False)\n        self.vb.setYRange(mn, mx, padding)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setImageItem(self, img):\n        self.imageItem = weakref.ref(img)\n        img.sigImageChanged.connect(self.imageChanged)\n        img.setLookupTable(self.getLookupTable)  ## send function pointer, not the result\n        #self.gradientChanged()\n        self.regionChanged()\n        self.imageChanged(autoLevel=True)", "response": "Set an ImageItem to have its levels automatically controlled by this HistogramLUTItem."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getLookupTable(self, img=None, n=None, alpha=None):\n        if n is None:\n            if img.dtype == np.uint8:\n                n = 256\n            else:\n                n = 512\n        if self.lut is None:\n            self.lut = self.gradient.getLookupTable(n, alpha=alpha)\n        return self.lut", "response": "Return a lookup table from the color gradient defined by this HistogramLUTItem."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets the auto-range of the axis on.", "response": "def setAutoRangeOn(self, axisNumber):\n        \"\"\" Sets the auto-range of the axis on.\n\n            :param axisNumber: 0 (X-axis), 1 (Y-axis), 2 (Both X and Y axes).\n        \"\"\"\n        setXYAxesAutoRangeOn(self, self.xAxisRangeCti, self.yAxisRangeCti, axisNumber)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nclosing the resources used by this instance.", "response": "def finalize(self):\n        \"\"\" Is called before destruction. 
Can be used to clean-up resources\n        \"\"\"\n        logger.debug(\"Finalizing: {}\".format(self))\n        self.plotItem.scene().sigMouseMoved.disconnect(self.mouseMoved)\n        self.plotItem.close()\n        self.graphicsLayoutWidget.close()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nclearing the contents of the inspector widget.", "response": "def _clearContents(self):\n        \"\"\" Clears the inspector widget when no valid input is available.\n        \"\"\"\n        self.titleLabel.setText('')\n        self.plotItem.clear()\n        self.plotItem.setLabel('left', '')\n        self.plotItem.setLabel('bottom', '')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _drawContents(self, reason=None, initiator=None):\n        self.slicedArray = self.collector.getSlicedArray()\n\n        if not self._hasValidData():\n            self._clearContents()\n            raise InvalidDataError(\"No data available or it does not contain real numbers\")\n\n        # -- Valid plot data from here on --\n\n        # PyQtGraph doesn't handle masked arrays so we convert the masked values to Nans (missing\n        # data values are replaced by NaNs). 
The PyQtGraph line plot omits the Nans, which is great.\n self.slicedArray.replaceMaskedValueWithNan() # will convert data to float if int\n\n self.plotItem.clear()\n\n # Reset the axes ranges (via the config)\n if (reason == UpdateReason.RTI_CHANGED or\n reason == UpdateReason.COLLECTOR_COMBO_BOX):\n\n # self.config.yAxisRangeCti.setAutoRangeOn() doesn't work as refreshBlocked is True\n # TODO: can refreshBlocked maybe only block the signals to prevent loops?\n self.config.xAxisRangeCti.autoRangeCti.data = True\n self.config.yAxisRangeCti.autoRangeCti.data = True\n\n self.titleLabel.setText(self.configValue('title').format(**self.collector.rtiInfo))\n\n connected = np.isfinite(self.slicedArray.data)\n if is_an_array(self.slicedArray.mask):\n connected = np.logical_and(connected, ~self.slicedArray.mask)\n else:\n connected = np.zeros_like(self.slicedArray.data) if self.slicedArray.mask else connected\n\n plotDataItem = self.config.plotDataItemCti.createPlotDataItem()\n plotDataItem.setData(self.slicedArray.data, connect=connected)\n\n self.plotItem.addItem(plotDataItem)\n\n if self.config.probeCti.configValue:\n self.probeLabel.setVisible(True)\n self.plotItem.addItem(self.crossLineVerShadow, ignoreBounds=True)\n self.plotItem.addItem(self.crossLineVertical, ignoreBounds=True)\n self.plotItem.addItem(self.probeDataItem, ignoreBounds=True)\n self.probeDataItem.setSymbolBrush(QtGui.QBrush(self.config.plotDataItemCti.penColor))\n self.probeDataItem.setSymbolSize(10)\n else:\n self.probeLabel.setVisible(False)\n\n # Update the config tree from the (possibly) new state of the PgLinePlot1d inspector,\n # e.g. 
the axis range may have changed while drawing.\n self.config.updateTarget()", "response": "Draw the contents of the collected repo tree item."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate the probe text with the values under the cursor.", "response": "def mouseMoved(self, viewPos):\n \"\"\" Updates the probe text with the values under the cursor.\n Draws a vertical line and a symbol at the position of the probe.\n \"\"\"\n try:\n check_class(viewPos, QtCore.QPointF)\n self.crossLineVerShadow.setVisible(False)\n self.crossLineVertical.setVisible(False)\n self.probeLabel.setText(\"\")\n self.probeDataItem.clear()\n\n if (self._hasValidData() and self.config.probeCti.configValue and\n self.viewBox.sceneBoundingRect().contains(viewPos)):\n\n scenePos = self.viewBox.mapSceneToView(viewPos)\n index = int(scenePos.x())\n data = self.slicedArray.data\n\n if not 0 <= index < len(data):\n txt = \"no data at cursor\"\n self.probeLabel.setText(txt)\n else:\n valueStr = to_string(data[index], masked=self.slicedArray.maskAt(index),\n maskFormat='<masked>')\n self.probeLabel.setText(\"pos = {!r}, value = {}\".format(index, valueStr))\n if np.isfinite(data[index]):\n self.crossLineVerShadow.setVisible(True)\n self.crossLineVerShadow.setPos(index)\n self.crossLineVertical.setVisible(True)\n self.crossLineVertical.setPos(index)\n if data[index] > 0 or self.config.yLogCti.configValue == False:\n self.probeDataItem.setData((index,), (data[index],))\n\n except Exception as ex:\n # In contrast to _drawContents, this function is a slot and thus must not throw\n # exceptions. The exception is logged. 
Perhaps we should clear the cross plots, but\n            # this could, in turn, raise exceptions.\n            if DEBUGGING:\n                raise\n            else:\n                logger.exception(ex)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef environment_var_to_bool(env_var):\n\n    # Try to see if env_var can be converted to an int\n    try:\n        env_var = int(env_var)\n    except ValueError:\n        pass\n\n    if isinstance(env_var, numbers.Number):\n        return bool(env_var)\n    elif is_a_string(env_var):\n        env_var = env_var.lower().strip()\n        if env_var == \"false\":\n            return False\n        else:\n            return True\n    else:\n        return bool(env_var)", "response": "Converts an environment variable to a boolean."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fill_values_to_nan(masked_array):\n    if masked_array is not None and masked_array.dtype.kind == 'f':\n        check_class(masked_array, ma.masked_array)\n        logger.debug(\"Replacing fill_values by NaNs\")\n        masked_array[:] = ma.filled(masked_array, np.nan)\n        masked_array.set_fill_value(np.nan)\n        return masked_array\n    else:\n        return masked_array", "response": "Replaces the fill_values of the masked array by NaNs."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef setting_str_to_bool(s):\n    if isinstance(s, six.string_types):\n        s = s.lower()\n        if s == 'true':\n            return True\n        elif s == 'false':\n            return False\n        else:\n            raise ValueError('Invalid boolean representation: {!r}'.format(s))\n    else:\n        return s", "response": "Converts a string representation to a boolean."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts a variable to a Python string or unicode string.", "response": "def to_string(var, masked=None, decode_bytes='utf-8', maskFormat='', strFormat='{}',\n              intFormat='{}', numFormat=DEFAULT_NUM_FORMAT, noneFormat='{!r}', otherFormat='{}'):\n    \"\"\" Converts var to a python string or 
unicode string so Qt widgets can display them.\n\n        If var consists of bytes, the decode_bytes is used to decode the bytes.\n\n        If var consists of a numpy.str_, the result will be converted to a regular Python string.\n        This is necessary to display the string in Qt widgets.\n\n        For the possible format string (replacement fields) see:\n            https://docs.python.org/3/library/string.html#format-string-syntax\n\n        :param masked: if True, the element is masked. The maskFormat is used.\n        :param decode_bytes: string containing the expected encoding when var is of type bytes\n        :param strFormat: new style format string used to format strings\n        :param intFormat: new style format string used to format integers\n        :param numFormat: new style format string used to format all numbers except integers.\n        :param noneFormat: new style format string used to format Nones.\n        :param maskFormat: override with this format used if masked is True.\n            If the maskFormat is empty, the format is never overridden.\n        :param otherFormat: new style format string used to format all other types\n    \"\"\"\n    #logger.debug(\"to_string: {!r} ({})\".format(var, type(var)))\n\n    # Decode and select correct format specifier.\n    if is_binary(var):\n        fmt = strFormat\n        try:\n            decodedVar = var.decode(decode_bytes, 'replace')\n        except LookupError as ex:\n            # Add URL to exception message.\n            raise LookupError(\"{}\\n\\nFor a list of encodings in Python see: {}\"\n                              .format(ex, URL_PYTHON_ENCODINGS_DOC))\n    elif is_text(var):\n        fmt = strFormat\n        decodedVar = six.text_type(var)\n    elif is_a_string(var):\n        fmt = strFormat\n        decodedVar = str(var)\n    elif isinstance(var, numbers.Integral):\n        fmt = intFormat\n        decodedVar = var\n    elif isinstance(var, numbers.Number):\n        fmt = numFormat\n        decodedVar = var\n    elif var is None:\n        fmt = noneFormat\n        decodedVar = var\n    else:\n        fmt = otherFormat\n        decodedVar = var\n\n    if maskFormat != '{}':\n        try:\n            allMasked = all(masked)\n        except TypeError as ex:\n            allMasked = bool(masked)\n\n        if 
allMasked:\n fmt = maskFormat\n\n try:\n result = fmt.format(decodedVar)\n except Exception:\n result = \"Invalid format {!r} for: {!r}\".format(fmt, decodedVar)\n\n #if masked:\n # logger.debug(\"to_string (fmt={}): {!r} ({}) -> result = {!r}\".format(maskFormat, var, type(var), result))\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef is_a_string(var, allow_none=False):\n return isinstance(var, six.string_types) or (var is None and allow_none)", "response": "Returns True if var is a string."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_is_a_string(var, allow_none=False):\n if not is_a_string(var, allow_none=allow_none):\n raise TypeError(\"var must be a string, however type(var) is {}\"\n .format(type(var)))", "response": "Checks if a variable is a string."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef is_text(var, allow_none=False):\n return isinstance(var, six.text_type) or (var is None and allow_none)", "response": "Returns True if var is a unicode text object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_binary(var, allow_none=False):\n return isinstance(var, six.binary_type) or (var is None and allow_none)", "response": "Returns True if var is a binary object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns True if var is a list or tuple.", "response": "def is_a_sequence(var, allow_none=False):\n \"\"\" Returns True if var is a list or a tuple (but not a string!)\n \"\"\"\n return isinstance(var, (list, tuple)) or (var is None and allow_none)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking if a variable is a sequence.", "response": "def check_is_a_sequence(var, allow_none=False):\n \"\"\" Calls 
is_a_sequence and raises a type error if the check fails.\n \"\"\"\n if not is_a_sequence(var, allow_none=allow_none):\n raise TypeError(\"var must be a list or tuple, however type(var) is {}\"\n .format(type(var)))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_is_a_mapping(var, allow_none=False):\n if not is_a_mapping(var, allow_none=allow_none):\n raise TypeError(\"var must be a dict, however type(var) is {}\"\n .format(type(var)))", "response": "Checks if a dict is a mapping."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_an_array(var, allow_none=False):\n return isinstance(var, np.ndarray) or (var is None and allow_none)", "response": "Returns True if var is a numpy array."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks if the given variable is a NumPy array.", "response": "def check_is_an_array(var, allow_none=False):\n \"\"\" Calls is_an_array and raises a type error if the check fails.\n \"\"\"\n if not is_an_array(var, allow_none=allow_none):\n raise TypeError(\"var must be a NumPy array, however type(var) is {}\"\n .format(type(var)))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn True if the array contains real numbers.", "response": "def array_has_real_numbers(array):\n \"\"\" Uses the dtype kind of the numpy array to determine if it represents real numbers.\n\n That is, the array kind should be one of: i u f\n\n Possible dtype.kind values.\n b \tboolean\n i \tsigned integer\n u \tunsigned integer\n f \tfloating-point\n c \tcomplex floating-point\n m \ttimedelta\n M \tdatetime\n O \tobject\n S \t(byte-)string\n U \tUnicode\n V \tvoid\n \"\"\"\n kind = array.dtype.kind\n assert kind in 'biufcmMOSUV', \"Unexpected array kind: {}\".format(kind)\n return kind in 'iuf'"} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate the documentation for the following Python 3 function\ndef check_class(obj, target_class, allow_none = False):\n if not isinstance(obj, target_class):\n if not (allow_none and obj is None):\n raise TypeError(\"obj must be a of type {}, got: {}\"\n .format(target_class, type(obj)))", "response": "Checks that the obj is a sub type of target_class. Raises a TypeError if this is not the case."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef import_symbol(full_symbol_name):\n parts = full_symbol_name.rsplit('.', 1)\n if len(parts) == 2:\n module_name, symbol_name = parts\n module_name = str(module_name) # convert from possible unicode\n symbol_name = str(symbol_name)\n #logger.debug(\"From module {} importing {!r}\".format(module_name, symbol_name))\n module = __import__(module_name, fromlist=[symbol_name])\n cls = getattr(module, symbol_name)\n return cls\n elif len(parts) == 1:\n # No module part, only a class name. 
If you want to create a class\n # by using name without module, you should use globals()[symbol_name]\n # We cannot do this here because globals is of the module that defines\n # this function, not of the modules where this function is called.\n raise ImportError(\"full_symbol_name should contain a module\")\n else:\n assert False, \"Bug: parts should have 1 or 2 elements: {}\".format(parts)", "response": "Imports a symbol from a dot separated name."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a ChoiceCti with Qt PenStyles.", "response": "def createPenStyleCti(nodeName, defaultData=0, includeNone=False):\n \"\"\" Creates a ChoiceCti with Qt PenStyles.\n If includeNone is True, the first option will be None.\n \"\"\"\n displayValues=PEN_STYLE_DISPLAY_VALUES\n configValues=PEN_STYLE_CONFIG_VALUES\n if includeNone:\n displayValues = [''] + list(displayValues)\n configValues = [None] + list(configValues)\n return ChoiceCti(nodeName, defaultData,\n displayValues=displayValues, configValues=configValues)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a FloatCti with defaults for configuring a QPen width.", "response": "def createPenWidthCti(nodeName, defaultData=1.0, zeroValueText=None):\n \"\"\" Creates a FloatCti with defaults for configuring a QPen width.\n\n If zeroValueText is set, this string will be displayed when 0.0 is selected.\n If zeroValueText is None, the minValue will be 0.1\n \"\"\"\n # A pen line width of zero indicates a cosmetic pen.
This means that the pen width is\n # always drawn one pixel wide, independent of the transformation set on the painter.\n # Note that line widths other than 1 may be slow when anti aliasing is on.\n return FloatCti(nodeName, defaultData=defaultData, specialValueText=zeroValueText,\n minValue=0.1 if zeroValueText is None else 0.0,\n maxValue=100, stepSize=0.1, decimals=1)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef fontFamilyIndex(qFont, families):\n try:\n return families.index(qFont.family())\n except ValueError:\n if False and DEBUGGING:\n raise\n else:\n logger.warn(\"{} not found in font families, using default.\".format(qFont.family()))\n return families.index(qFont.defaultFamily())", "response": "Returns the index of the qFont. family in the list."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fontWeightIndex(qFont, weights):\n try:\n return weights.index(qFont.weight())\n except ValueError:\n if False and DEBUGGING:\n raise\n else:\n logger.warn(\"{} not found in font weights, using normal.\".format(qFont.weight()))\n return weights.index(QtGui.QFont.Normal)", "response": "Returns the index of the first element in the list of font weights."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts to str so that this CTI always stores that type.", "response": "def _enforceDataType(self, data):\n \"\"\" Converts to str so that this CTI always stores that type.\n \"\"\"\n qColor = QtGui.QColor(data) # TODO: store a RGB string?\n if not qColor.isValid():\n raise ValueError(\"Invalid color specification: {!r}\".format(data))\n return qColor"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns this nodes values as a dictionary to be used for persistence. 
getNonDefaultsDict", "response": "def _nodeGetNonDefaultsDict(self):\n \"\"\" Retrieves this nodes` values as a dictionary to be used for persistence.\n Non-recursive auxiliary function for getNonDefaultsDict\n \"\"\"\n dct = {}\n if self.data != self.defaultData:\n dct['data'] = self.data.name()\n return dct"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a ColorCtiEditor object for the given delegate and parent.", "response": "def createEditor(self, delegate, parent, option):\n \"\"\" Creates a ColorCtiEditor.\n For the parameters see the AbstractCti constructor documentation.\n \"\"\"\n return ColorCtiEditor(self, delegate, parent=parent)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncalls when the editor is closed. Disconnects signals.", "response": "def finalize(self):\n \"\"\" Is called when the editor is closed. Disconnect signals.\n \"\"\"\n self.pickButton.clicked.disconnect(self.openColorDialog)\n super(ColorCtiEditor, self).finalize()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef openColorDialog(self):\n try:\n currentColor = self.getData()\n except InvalidInputError:\n currentColor = self.cti.data\n\n qColor = QtWidgets.QColorDialog.getColor(currentColor, self)\n\n if qColor.isValid():\n self.setData(qColor)\n self.commitAndClose()", "response": "Opens a QColorDialog for the user\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget data from the editor widget.", "response": "def getData(self):\n \"\"\" Gets data from the editor widget.\n \"\"\"\n text = self.lineEditor.text()\n if not text.startswith('#'):\n text = '#' + text\n\n validator = self.lineEditor.validator()\n if validator is not None:\n state, text, _ = validator.validate(text, 0)\n if state != QtGui.QValidator.Acceptable:\n raise InvalidInputError(\"Invalid input: 
{!r}\".format(text))\n\n return QtGui.QColor(text)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef data(self, data):\n self._data = self._enforceDataType(data) # Enforce self._data to be a QFont\n self.familyCti.data = fontFamilyIndex(self.data, list(self.familyCti.iterConfigValues))\n self.pointSizeCti.data = self.data.pointSize()\n self.weightCti.data = fontWeightIndex(self.data, list(self.weightCti.iterConfigValues))\n self.italicCti.data = self.data.italic()", "response": "Sets the font data of this item."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef defaultData(self, defaultData):\n self._defaultData = self._enforceDataType(defaultData) # Enforce to be a QFont\n self.familyCti.defaultData = fontFamilyIndex(self.defaultData,\n list(self.familyCti.iterConfigValues))\n self.pointSizeCti.defaultData = self.defaultData.pointSize()\n self.weightCti.defaultData = self.defaultData.weight()\n self.italicCti.defaultData = self.defaultData.italic()", "response": "Sets the data of this item."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nupdates the target widget s font based on the config values.", "response": "def _updateTargetFromNode(self):\n \"\"\" Applies the font config settings to the target widget's font.\n\n That is the targetWidget.setFont() is called with a font create from the config values.\n \"\"\"\n font = self.data\n if self.familyCti.configValue:\n font.setFamily(self.familyCti.configValue)\n else:\n font.setFamily(QtGui.QFont().family()) # default family\n font.setPointSize(self.pointSizeCti.configValue)\n font.setWeight(self.weightCti.configValue)\n font.setItalic(self.italicCti.configValue)\n self._targetWidget.setFont(font)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _nodeGetNonDefaultsDict(self):\n dct = {}\n if 
self.data != self.defaultData:\n dct['data'] = self.data.toString() # calls QFont.toString()\n return dct", "response": "Returns this nodes values as a dictionary to be used for persistence. getNonDefaultsDict\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _nodeSetValuesFromDict(self, dct):\n if 'data' in dct:\n qFont = QtGui.QFont()\n success = qFont.fromString(dct['data'])\n if not success:\n msg = \"Unable to create QFont from string {!r}\".format(dct['data'])\n logger.warn(msg)\n if DEBUGGING:\n raise ValueError(msg)\n self.data = qFont", "response": "Sets values from a dictionary in the current node."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef createEditor(self, delegate, parent, option):\n return FontCtiEditor(self, delegate, parent=parent)", "response": "Creates a FontCtiEditor object."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndisconnect signals and closes the editor.", "response": "def finalize(self):\n \"\"\" Is called when the editor is closed. 
Disconnect signals.\n \"\"\"\n self.pickButton.clicked.disconnect(self.execFontDialog)\n super(FontCtiEditor, self).finalize()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nopen a QFontDialog for the user and sets the font to the new font.", "response": "def execFontDialog(self):\n \"\"\" Opens a QFontDialog for the user\n \"\"\"\n currentFont = self.getData()\n newFont, ok = QtGui.QFontDialog.getFont(currentFont, self)\n if ok:\n self.setData(newFont)\n else:\n self.setData(currentFont)\n self.commitAndClose()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a FontChoiceCtiEditor object.", "response": "def createEditor(self, delegate, parent, option):\n \"\"\" Creates a FontChoiceCtiEditor.\n For the parameters see the AbstractCti constructor documentation.\n \"\"\"\n return FontChoiceCtiEditor(self, delegate, parent=parent)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef finalize(self):\n self.comboBox.activated.disconnect(self.comboBoxActivated)\n super(FontChoiceCtiEditor, self).finalize()", "response": "Is called when the editor is closed.
Disconnect signals."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a QPen made of the children s config values.", "response": "def configValue(self):\n \"\"\" Creates a QPen made of the children's config values.\n \"\"\"\n if not self.data:\n return None\n else:\n pen = QtGui.QPen()\n pen.setCosmetic(True)\n pen.setColor(self.colorCti.configValue)\n style = self.styleCti.configValue\n if style is not None:\n pen.setStyle(style)\n pen.setWidthF(self.widthCti.configValue)\n return pen"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef createPen(self, altStyle=None, altWidth=None):\n pen = self.configValue\n if pen is not None:\n\n style = self.findByNodePath('style').configValue\n if style is None and altStyle is not None:\n pen.setStyle(altStyle)\n\n width = self.findByNodePath('width').configValue\n if width == 0.0 and altWidth is not None:\n #logger.debug(\"Setting altWidth = {!r}\".format(altWidth))\n pen.setWidthF(altWidth)\n\n return pen", "response": "Creates a pen from the config values with the style overridden by altStyle and the width overridden by altWidth."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreplace values where the mask is True with the replacement value.", "response": "def replaceMaskedValue(data, mask, replacementValue, copyOnReplace=True):\n \"\"\" Replaces values where the mask is True with the replacement value.\n\n :copyOnReplace makeCopy: If True (the default) it makes a copy if data is replaced.\n \"\"\"\n if mask is False:\n result = data\n elif mask is True:\n result = np.copy(data) if copyOnReplace else data\n result[:] = replacementValue\n else:\n #logger.debug(\"############ count_nonzero: {}\".format(np.count_nonzero(mask)))\n if copyOnReplace and np.any(mask):\n #logger.debug(\"Making copy\")\n result = np.copy(data)\n else:\n result = data\n\n result[mask] = 
replacementValue\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef replaceMaskedValueWithFloat(data, mask, replacementValue, copyOnReplace=True):\n kind = data.dtype.kind\n if kind == 'i' or kind == 'u': # signed/unsigned int\n data = data.astype(np.float, casting='safe')\n\n if data.dtype.kind != 'f':\n return # only replace for floats\n else:\n return replaceMaskedValue(data, mask, replacementValue, copyOnReplace=copyOnReplace)", "response": "Replaces values where the mask is True with the replacement value."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef maskedNanPercentile(maskedArray, percentiles, *args, **kwargs):\n\n #https://docs.scipy.org/doc/numpy/reference/maskedarray.generic.html#accessing-the-data\n awm = ArrayWithMask.createFromMaskedArray(maskedArray)\n\n maskIdx = awm.maskIndex()\n validData = awm.data[~maskIdx]\n\n if len(validData) >= 1:\n result = np.nanpercentile(validData, percentiles, *args, **kwargs)\n else:\n # If np.nanpercentile on an empty list only returns a single Nan. We correct this here.\n result = len(percentiles) * [np.nan]\n\n assert len(result) == len(percentiles), \\\n \"shape mismatch: {} != {}\".format(len(result), len(percentiles))\n\n return result", "response": "Calculates np. nanpercentile on the masked array"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmask an array where equal to a given value.", "response": "def maskedEqual(array, missingValue):\n \"\"\" Mask an array where equal to a given (missing)value.\n\n Unfortunately ma.masked_equal does not work with structured arrays. See:\n https://mail.scipy.org/pipermail/numpy-discussion/2011-July/057669.html\n\n If the data is a structured array the mask is applied for every field (i.e. forming a\n logical-and). 
Otherwise ma.masked_equal is called.\n \"\"\"\n if array_is_structured(array):\n # Enforce the array to be masked\n if not isinstance(array, ma.MaskedArray):\n array = ma.MaskedArray(array)\n\n # Set the mask separately per field\n for nr, field in enumerate(array.dtype.names):\n if hasattr(missingValue, '__len__'):\n fieldMissingValue = missingValue[nr]\n else:\n fieldMissingValue = missingValue\n\n array[field] = ma.masked_equal(array[field], fieldMissingValue)\n\n check_class(array, ma.MaskedArray) # post-condition check\n return array\n else:\n # masked_equal works with missing is None\n result = ma.masked_equal(array, missingValue, copy=False)\n check_class(result, ma.MaskedArray) # post-condition check\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef checkIsConsistent(self):\n if is_an_array(self.mask) and self.mask.shape != self.data.shape:\n raise ConsistencyError(\"Shape mismatch mask={}, data={}\"\n .format(self.mask.shape != self.data.shape))", "response": "Raises a ConsistencyError if the mask has an incorrect shape."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating an ArrayWithMask object from a masked array.", "response": "def createFromMaskedArray(cls, masked_arr):\n \"\"\" Creates an ArrayWithMak\n\n :param masked_arr: a numpy MaskedArray or numpy array\n :return: ArrayWithMask\n \"\"\"\n if isinstance(masked_arr, ArrayWithMask):\n return masked_arr\n\n check_class(masked_arr, (np.ndarray, ma.MaskedArray))\n\n # A MaskedConstant (i.e. masked) is a special case of MaskedArray. 
It does not seem to have\n # a fill_value so we use None to use the default.\n # https://docs.scipy.org/doc/numpy/reference/maskedarray.baseclass.html#numpy.ma.masked\n fill_value = getattr(masked_arr, 'fill_value', None)\n\n return cls(masked_arr.data, masked_arr.mask, fill_value)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates converts to a masked array", "response": "def asMaskedArray(self):\n \"\"\" Creates converts to a masked array\n \"\"\"\n return ma.masked_array(data=self.data, mask=self.mask, fill_value=self.fill_value)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the mask at the index.", "response": "def maskAt(self, index):\n \"\"\" Returns the mask at the index.\n\n It the mask is a boolean it is returned since this boolean representes the mask for\n all array elements.\n \"\"\"\n if isinstance(self.mask, bool):\n return self.mask\n else:\n return self.mask[index]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef maskIndex(self):\n if isinstance(self.mask, bool):\n return np.full(self.data.shape, self.mask, dtype=np.bool)\n else:\n return self.mask", "response": "Returns a boolean index with True if the value is masked."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntranspose the array and mask separately", "response": "def transpose(self, *args, **kwargs):\n \"\"\" Transposes the array and mask separately\n\n :param awm: ArrayWithMask\n :return: copy/view with transposed\n \"\"\"\n tdata = np.transpose(self.data, *args, **kwargs)\n tmask = np.transpose(self.mask, *args, **kwargs) if is_an_array(self.mask) else self.mask\n return ArrayWithMask(tdata, tmask, self.fill_value)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef replaceMaskedValue(self, replacementValue):\n if 
self.mask is False:\n pass\n elif self.mask is True:\n self.data[:] = replacementValue\n else:\n self.data[self.mask] = replacementValue", "response": "Replaces the value in the data array with the replacement value."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreplacing values where the mask is True with the replacement value.", "response": "def replaceMaskedValueWithNan(self):\n \"\"\" Replaces values where the mask is True with the replacement value.\n\n Will change the data type to float if the data is an integer.\n If the data is not a float (or int) the function does nothing.\n \"\"\"\n kind = self.data.dtype.kind\n if kind == 'i' or kind == 'u': # signed/unsigned int\n self.data = self.data.astype(np.float, casting='safe')\n\n if self.data.dtype.kind != 'f':\n return # only replace for floats\n\n if self.mask is False:\n pass\n elif self.mask is True:\n self.data[:] = np.NaN\n else:\n self.data[self.mask] = np.NaN"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nimporting the regItem into the statusLabel", "response": "def importRegItem(self, regItem):\n \"\"\" Imports the regItem\n Writes this in the statusLabel while the import is in progress.\n \"\"\"\n self.statusLabel.setText(\"Importing {}...\".format(regItem.fullName))\n QtWidgets.qApp.processEvents()\n regItem.tryImportClass()\n self.tableView.model().emitDataChanged(regItem)\n self.statusLabel.setText(\"\")\n QtWidgets.qApp.processEvents()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ntry to import all underlying plugin classes", "response": "def tryImportAllPlugins(self):\n \"\"\" Tries to import all underlying plugin classes\n \"\"\"\n for regItem in self.registeredItems:\n if not regItem.triedImport:\n self.importRegItem(regItem)\n\n logger.debug(\"Importing finished.\")"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsetting the current item to the regItem", "response": "def 
setCurrentRegItem(self, regItem):\n \"\"\" Sets the current item to the regItem\n \"\"\"\n check_class(regItem, ClassRegItem, allow_none=True)\n self.tableView.setCurrentRegItem(regItem)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating the description text widget when the user clicks on a selector in the table.", "response": "def currentItemChanged(self, _currentIndex=None, _previousIndex=None):\n \"\"\" Updates the description text widget when the user clicks on a selector in the table.\n The _currentIndex and _previousIndex parameters are ignored.\n \"\"\"\n self.editor.clear()\n self.editor.setTextColor(QCOLOR_REGULAR)\n\n regItem = self.getCurrentRegItem()\n\n if regItem is None:\n return\n\n if self._importOnSelect and regItem.successfullyImported is None:\n self.importRegItem(regItem)\n\n if regItem.successfullyImported is None:\n self.editor.setTextColor(QCOLOR_NOT_IMPORTED)\n self.editor.setPlainText('')\n elif regItem.successfullyImported is False:\n self.editor.setTextColor(QCOLOR_ERROR)\n self.editor.setPlainText(str(regItem.exception))\n elif regItem.descriptionHtml:\n self.editor.setHtml(regItem.descriptionHtml)\n else:\n self.editor.setPlainText(regItem.docString)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nrefreshes the tables of all tables by importing the underlying classes", "response": "def tryImportAllPlugins(self):\n \"\"\" Refreshes the tables of all tables by importing the underlying classes\n \"\"\"\n logger.debug(\"Importing plugins: {}\".format(self))\n for tabNr in range(self.tabWidget.count()):\n tab = self.tabWidget.widget(tabNr)\n tab.tryImportAllPlugins()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconstruct the dimension names given a h5py dataset.", "response": "def dimNamesFromDataset(h5Dataset):\n \"\"\" Constructs the dimension names given a h5py dataset.\n\n First looks in the dataset's dimension scales to see if it 
refers to another\n dataset. In that case the referred dataset's name is used. If not, the label of the\n dimension scale is used. Finally, if this is empty, the dimension is numbered.\n \"\"\"\n dimNames = [] # TODO: cache?\n for dimNr, dimScales in enumerate(h5Dataset.dims):\n if len(dimScales) == 0:\n dimNames.append('Dim{}'.format(dimNr))\n elif len(dimScales) == 1:\n dimScaleLabel, dimScaleDataset = dimScales.items()[0]\n path = dimScaleDataset.name\n if path:\n dimNames.append(os.path.basename(path))\n elif dimScaleLabel: # This could potentially be long so it's our second choice\n dimNames.append(dimScaleLabel)\n else:\n dimNames.append('Dim{}'.format(dimNr))\n else:\n # TODO: multiple scales for this dimension. What to do?\n logger.warn(\"More than one dimension scale found: {!r}\".format(dimScales))\n dimNames.append('Dim{}'.format(dimNr)) # For now, just number them\n\n return dimNames"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a string describing the element type of the dataset", "response": "def dataSetElementType(h5Dataset):\n \"\"\" Returns a string describing the element type of the dataset\n \"\"\"\n dtype = h5Dataset.dtype\n\n if dtype.names:\n return ''\n else:\n if dtype.metadata and 'vlen' in dtype.metadata:\n vlen_type = dtype.metadata['vlen']\n try:\n return \"\".format(vlen_type.__name__) # when vlen_type is a type\n except AttributeError: #\n return \"\".format(vlen_type.name) # when vlen_type is a dtype\n\n return str(dtype)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef dataSetUnit(h5Dataset):\n attributes = h5Dataset.attrs\n if not attributes:\n return '' # a premature optimization :-)\n\n for key in ('unit', 'units', 'Unit', 'Units', 'UNIT', 'UNITS'):\n if key in attributes:\n # In Python3 the attributes are byte strings so we must decode them\n # This a bug in h5py, see https://github.com/h5py/h5py/issues/379\n return 
to_string(attributes[key])\n # Not found\n return ''", "response": "Returns the unit of the dataset"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef dataSetMissingValue(h5Dataset):\n attributes = h5Dataset.attrs\n if not attributes:\n return None # a premature optimization :-)\n\n for key in ('missing_value', 'MissingValue', 'missingValue', 'FillValue', '_FillValue'):\n if key in attributes:\n missingDataValue = attributes[key]\n if is_an_array(missingDataValue) and len(missingDataValue) == 1:\n return missingDataValue[0] # In case of HDF-EOS and NetCDF files\n else:\n return missingDataValue\n return None", "response": "Returns the missingData given a HDF - 5 dataset"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the shape of the sub - array.", "response": "def _subArrayShape(self):\n \"\"\" Returns the shape of the sub-array\n An empty tuple is returned for regular fields, which have no sub array.\n \"\"\"\n if self._h5Dataset.dtype.fields is None:\n return tuple() # regular field\n else:\n fieldName = self.nodeName\n fieldDtype = self._h5Dataset.dtype.fields[fieldName][0]\n return fieldDtype.shape"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef elementTypeName(self):\n fieldName = self.nodeName\n return str(self._h5Dataset.dtype.fields[fieldName][0])", "response": "String representation of the element type."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a list with the dimension names of the underlying NCDF variable", "response": "def dimensionNames(self):\n \"\"\" Returns a list with the dimension names of the underlying NCDF variable\n \"\"\"\n nSubDims = len(self._subArrayShape)\n subArrayDims = ['SubDim{}'.format(dimNr) for dimNr in range(nSubDims)]\n return dimNamesFromDataset(self._h5Dataset) + subArrayDims"} {"SOURCE": 
"codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef unit(self):\n unit = dataSetUnit(self._h5Dataset)\n fieldNames = self._h5Dataset.dtype.names\n\n # If the missing value attribute is a list with the same length as the number of fields,\n # return the missing value for field that equals the self.nodeName.\n if hasattr(unit, '__len__') and len(unit) == len(fieldNames):\n idx = fieldNames.index(self.nodeName)\n return unit[idx]\n else:\n return unit", "response": "Returns the unit of the RTI by calling dataSetUnit on the underlying dataset\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef missingDataValue(self):\n value = dataSetMissingValue(self._h5Dataset)\n fieldNames = self._h5Dataset.dtype.names\n\n # If the missing value attribute is a list with the same length as the number of fields,\n # return the missing value for field that equals the self.nodeName.\n if hasattr(value, '__len__') and len(value) == len(fieldNames):\n idx = fieldNames.index(self.nodeName)\n return value[idx]\n else:\n return value", "response": "Returns the value to indicate missing data. 
None is returned."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nshow an Array icon for regular datasets but a dimension icon for dimension scales", "response": "def iconGlyph(self):\n \"\"\" Shows an Array icon for regular datasets but a dimension icon for dimension scales\n \"\"\"\n if self._h5Dataset.attrs.get('CLASS', None) == b'DIMENSION_SCALE':\n return RtiIconFactory.DIMENSION\n else:\n return RtiIconFactory.ARRAY"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfetching all the children of this variable.", "response": "def _fetchAllChildren(self):\n \"\"\" Fetches all fields that this variable contains.\n Only variables with a structured data type can have fields.\n \"\"\"\n assert self.canFetchChildren(), \"canFetchChildren must be True\"\n\n childItems = []\n\n # Add fields\n if self._isStructured:\n for fieldName in self._h5Dataset.dtype.names:\n childItems.append(H5pyFieldRti(self._h5Dataset, nodeName=fieldName,\n fileName=self.fileName))\n\n return childItems"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _fetchAllChildren(self):\n assert self._h5Group is not None, \"dataset undefined (file not opened?)\"\n assert self.canFetchChildren(), \"canFetchChildren must be True\"\n\n childItems = []\n\n for childName, h5Child in self._h5Group.items():\n if isinstance(h5Child, h5py.Group):\n childItems.append(H5pyGroupRti(h5Child, nodeName=childName,\n fileName=self.fileName))\n elif isinstance(h5Child, h5py.Dataset):\n if len(h5Child.shape) == 0:\n childItems.append(H5pyScalarRti(h5Child, nodeName=childName,\n fileName=self.fileName))\n else:\n childItems.append(H5pyDatasetRti(h5Child, nodeName=childName,\n fileName=self.fileName))\n\n elif isinstance(h5Child, h5py.Datatype):\n #logger.debug(\"Ignored DataType item: {}\".format(childName))\n pass\n else:\n logger.warn(\"Ignored {}. 
It has an unexpected HDF-5 type: {}\"\n .format(childName, type(h5Child)))\n\n return childItems", "response": "Fetches all sub - groups and variables that this group contains."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _openResources(self):\n logger.info(\"Opening: {}\".format(self._fileName))\n self._h5Group = h5py.File(self._fileName, 'r')", "response": "Opens the root dataset."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _closeResources(self):\n logger.info(\"Closing: {}\".format(self._fileName))\n self._h5Group.close()\n self._h5Group = None", "response": "Closes the root Dataset."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncalculating the range from the sliced array of the specified percentage and cross - hairs.", "response": "def calcPgImagePlot2dDataRange(pgImagePlot2d, percentage, crossPlot):\n \"\"\" Calculates the range from the inspectors' sliced array. 
Discards percentage of the minimum\n and percentage of the maximum values of the inspector.slicedArray\n\n :param pgImagePlot2d: the range methods will work on (the sliced array) of this inspector.\n :param percentage: percentage that will be discarded.\n :param crossPlot: if None, the range will be calculated from the entire sliced array,\n if \"horizontal\" or \"vertical\" the range will be calculated from the data under the\n horizontal or vertical cross hairs.\n If the cursor is outside the image, there is no valid data under the cross-hair and\n the range will be determined from the sliced array as a fall back.\n \"\"\"\n check_class(pgImagePlot2d.slicedArray, ArrayWithMask) # sanity check\n\n if crossPlot is None:\n array = pgImagePlot2d.slicedArray # the whole image\n\n elif crossPlot == 'horizontal':\n if pgImagePlot2d.crossPlotRow is not None:\n array = pgImagePlot2d.slicedArray.asMaskedArray()[pgImagePlot2d.crossPlotRow, :]\n else:\n array = pgImagePlot2d.slicedArray # fall back on complete sliced array\n\n elif crossPlot == 'vertical':\n if pgImagePlot2d.crossPlotCol is not None:\n array = pgImagePlot2d.slicedArray.asMaskedArray()[:, pgImagePlot2d.crossPlotCol]\n else:\n array = pgImagePlot2d.slicedArray # fall back on complete sliced array\n else:\n raise ValueError(\"crossPlot must be: None, 'horizontal' or 'vertical', got: {}\"\n .format(crossPlot))\n\n return maskedNanPercentile(array, (percentage, 100-percentage) )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef crossPlotAutoRangeMethods(pgImagePlot2d, crossPlot, intialItems=None):\n rangeFunctions = OrderedDict({} if intialItems is None else intialItems)\n\n # If crossPlot is \"horizontal\" or \"vertical\" make functions that determine the range from the\n # data at the cross hair.\n if crossPlot:\n rangeFunctions['cross all data'] = partial(calcPgImagePlot2dDataRange, pgImagePlot2d,\n 0.0, crossPlot)\n for percentage in 
[0.1, 0.2, 0.5, 1, 2, 5, 10, 20]:\n label = \"cross discard {}%\".format(percentage)\n rangeFunctions[label] = partial(calcPgImagePlot2dDataRange, pgImagePlot2d,\n percentage, crossPlot)\n\n # Always add functions that determine the data from the complete sliced array.\n for percentage in [0.1, 0.2, 0.5, 1, 2, 5, 10, 20]:\n rangeFunctions['image all data'] = partial(calcPgImagePlot2dDataRange, pgImagePlot2d,\n 0.0, None)\n\n label = \"image discard {}%\".format(percentage)\n rangeFunctions[label] = partial(calcPgImagePlot2dDataRange, pgImagePlot2d,\n percentage, None)\n return rangeFunctions", "response": "Returns an ordered dict with autorange methods for an object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _closeResources(self):\n verCrossViewBox = self.pgImagePlot2d.verCrossPlotItem.getViewBox()\n verCrossViewBox.sigRangeChangedManually.disconnect(self.yAxisRangeCti.setAutoRangeOff)\n horCrossViewBox = self.pgImagePlot2d.horCrossPlotItem.getViewBox()\n horCrossViewBox.sigRangeChangedManually.disconnect(self.xAxisRangeCti.setAutoRangeOff)\n\n self.pgImagePlot2d.verCrossPlotItem.sigAxisReset.disconnect(self.setVerCrossPlotAutoRangeOn)\n self.pgImagePlot2d.horCrossPlotItem.sigAxisReset.disconnect(self.setHorCrossPlotAutoRangeOn)\n self.pgImagePlot2d.imagePlotItem.sigAxisReset.disconnect(self.setImagePlotAutoRangeOn)", "response": "Disconnects signals.\n Is called by self.finalize when the cti is deleted."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef setImagePlotAutoRangeOn(self, axisNumber):\n setXYAxesAutoRangeOn(self, self.xAxisRangeCti, self.yAxisRangeCti, axisNumber)", "response": "Sets the image plot s auto - range on for the axis with number axisNumber."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets the horizontal cross - hair plot s auto - range on for the axis 
with number axisNumber.", "response": "def setHorCrossPlotAutoRangeOn(self, axisNumber):\n \"\"\" Sets the horizontal cross-hair plot's auto-range on for the axis with number axisNumber.\n\n :param axisNumber: 0 (X-axis), 1 (Y-axis), 2, (Both X and Y axes).\n \"\"\"\n setXYAxesAutoRangeOn(self, self.xAxisRangeCti, self.horCrossPlotRangeCti, axisNumber)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the vertical cross-hair plot's auto-range on for the axis with number axisNumber.", "response": "def setVerCrossPlotAutoRangeOn(self, axisNumber):\n \"\"\" Sets the vertical cross-hair plot's auto-range on for the axis with number axisNumber.\n\n :param axisNumber: 0 (X-axis), 1 (Y-axis), 2, (Both X and Y axes).\n \"\"\"\n setXYAxesAutoRangeOn(self, self.verCrossPlotRangeCti, self.yAxisRangeCti, axisNumber)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nclear the contents when no valid data is available.", "response": "def _clearContents(self):\n \"\"\" Clears the contents when no valid data is available\n \"\"\"\n logger.debug(\"Clearing inspector contents\")\n self.titleLabel.setText('')\n\n # Don't clear the imagePlotItem, the imageItem is only added in the constructor.\n self.imageItem.clear()\n self.imagePlotItem.setLabel('left', '')\n self.imagePlotItem.setLabel('bottom', '')\n\n # Set the histogram range and levels to finite values to prevent further errors if this\n # function was called after an exception in self.drawContents\n self.histLutItem.setHistogramRange(0, 100)\n self.histLutItem.setLevels(0, 100)\n\n self.crossPlotRow, self.crossPlotCol = None, None\n\n self.probeLabel.setText('')\n self.crossLineHorizontal.setVisible(False)\n self.crossLineVertical.setVisible(False)\n self.crossLineHorShadow.setVisible(False)\n self.crossLineVerShadow.setVisible(False)\n\n self.horCrossPlotItem.clear()\n self.verCrossPlotItem.clear()"} {"SOURCE":
"codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _drawContents(self, reason=None, initiator=None):\n self.crossPlotRow = None # reset because the sliced array shape may change\n self.crossPlotCol = None # idem dito\n\n gridLayout = self.graphicsLayoutWidget.ci.layout # A QGraphicsGridLayout\n\n if self.config.horCrossPlotCti.configValue:\n gridLayout.setRowStretchFactor(ROW_HOR_LINE, 1)\n if not self.horPlotAdded:\n self.graphicsLayoutWidget.addItem(self.horCrossPlotItem, ROW_HOR_LINE, COL_HOR_LINE)\n self.horPlotAdded = True\n gridLayout.activate()\n else:\n gridLayout.setRowStretchFactor(ROW_HOR_LINE, 0)\n if self.horPlotAdded:\n self.graphicsLayoutWidget.removeItem(self.horCrossPlotItem)\n self.horPlotAdded = False\n gridLayout.activate()\n\n if self.config.verCrossPlotCti.configValue:\n gridLayout.setColumnStretchFactor(COL_VER_LINE, 1)\n if not self.verPlotAdded:\n self.graphicsLayoutWidget.addItem(self.verCrossPlotItem, ROW_VER_LINE, COL_VER_LINE)\n self.verPlotAdded = True\n gridLayout.activate()\n else:\n gridLayout.setColumnStretchFactor(COL_VER_LINE, 0)\n if self.verPlotAdded:\n self.graphicsLayoutWidget.removeItem(self.verCrossPlotItem)\n self.verPlotAdded = False\n gridLayout.activate()\n\n self.slicedArray = self.collector.getSlicedArray()\n\n if not self._hasValidData():\n self._clearContents()\n raise InvalidDataError(\"No data available or it does not contain real numbers\")\n\n # -- Valid plot data from here on --\n\n # PyQtGraph doesn't handle masked array so we convert the masked values to Nans. Missing\n # data values are replaced by NaNs. The PyQtGraph image plot shows this as the color at the\n # lowest end of the color scale. 
Unfortunately we cannot choose a missing-value color, but\n # at least the Nans do not influence the histogram and color range.\n # We don't update self.slicedArray here because the data probe should still be able to\n # print the actual value.\n imageArray = replaceMaskedValueWithFloat(self.slicedArray.data, self.slicedArray.mask,\n np.nan, copyOnReplace=True)\n\n # Replace infinite value with Nans because PyQtGraph fails on them. Note that the CTIs of\n # the cross plots (e.g. horCrossPlotRangeCti) are still connected to self.slicedArray, so\n # if the cross section consists of only infs, they may not be able to update the autorange.\n # A warning is issued in that case.\n # We don't update self.slicedArray here because the data probe should still be able to\n # print the actual value.\n imageArray = replaceMaskedValueWithFloat(imageArray, np.isinf(self.slicedArray.data),\n np.nan, copyOnReplace=True)\n\n # Reset the axes ranges (via the config)\n if (reason == UpdateReason.RTI_CHANGED or\n reason == UpdateReason.COLLECTOR_COMBO_BOX):\n self.config.xAxisRangeCti.autoRangeCti.data = True\n self.config.yAxisRangeCti.autoRangeCti.data = True\n self.config.histColorRangeCti.autoRangeCti.data = True\n self.config.histRangeCti.autoRangeCti.data = True\n self.config.horCrossPlotRangeCti.autoRangeCti.data = True\n self.config.verCrossPlotRangeCti.autoRangeCti.data = True\n\n # PyQtGraph uses the following dimension order: T, X, Y, Color.\n # We need to transpose the slicedArray ourselves because axes = {'x':1, 'y':0}\n # doesn't seem to do anything.\n imageArray = imageArray.transpose()\n self.imageItem.setImage(imageArray, autoLevels=False)\n\n self.horCrossPlotItem.invertX(self.config.xFlippedCti.configValue)\n self.verCrossPlotItem.invertY(self.config.yFlippedCti.configValue)\n\n self.probeLabel.setVisible(self.config.probeCti.configValue)\n\n self.titleLabel.setText(self.configValue('title').format(**self.collector.rtiInfo))\n\n # Update the config tree from the
(possibly) new state of the PgImagePlot2d inspector,\n # e.g. the axis range or color range may have changed while drawing.\n self.config.updateTarget()", "response": "Draw the contents of the collected repo tree item."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the probe text with the values under the cursor.", "response": "def mouseMoved(self, viewPos):\n \"\"\" Updates the probe text with the values under the cursor.\n Draws a vertical line and a symbol at the position of the probe.\n \"\"\"\n try:\n check_class(viewPos, QtCore.QPointF)\n show_data_point = False # shows the data point as a circle in the cross hair plots\n self.crossPlotRow, self.crossPlotCol = None, None\n\n self.probeLabel.setText(\"no data at cursor\")\n self.crossLineHorizontal.setVisible(False)\n self.crossLineVertical.setVisible(False)\n self.crossLineHorShadow.setVisible(False)\n self.crossLineVerShadow.setVisible(False)\n\n self.horCrossPlotItem.clear()\n self.verCrossPlotItem.clear()\n\n if (self._hasValidData() and self.slicedArray is not None\n and self.viewBox.sceneBoundingRect().contains(viewPos)):\n\n # Calculate the row and column at the cursor. 
We use math.floor because the pixel\n # corners of the image lie at integer values (and not the centers of the pixels).\n scenePos = self.viewBox.mapSceneToView(viewPos)\n row, col = math.floor(scenePos.y()), math.floor(scenePos.x())\n row, col = int(row), int(col) # Needed in Python 2\n nRows, nCols = self.slicedArray.shape\n\n if (0 <= row < nRows) and (0 <= col < nCols):\n self.viewBox.setCursor(Qt.CrossCursor)\n\n self.crossPlotRow, self.crossPlotCol = row, col\n index = tuple([row, col])\n valueStr = to_string(self.slicedArray[index],\n masked=self.slicedArray.maskAt(index),\n maskFormat='<masked>')\n txt = \"pos = ({:d}, {:d}), value = {}\".format(row, col, valueStr)\n self.probeLabel.setText(txt)\n\n # Show cross section at the cursor pos in the line plots\n if self.config.horCrossPlotCti.configValue:\n self.crossLineHorShadow.setVisible(True)\n self.crossLineHorizontal.setVisible(True)\n self.crossLineHorShadow.setPos(row)\n self.crossLineHorizontal.setPos(row)\n\n # Line plot of cross section row.\n # First determine which points are connected or separated by masks/nans.\n rowData = self.slicedArray.data[row, :]\n connected = np.isfinite(rowData)\n if is_an_array(self.slicedArray.mask):\n connected = np.logical_and(connected, ~self.slicedArray.mask[row, :])\n else:\n connected = (np.zeros_like(rowData)\n if self.slicedArray.mask else connected)\n\n # Replace infinite value with nans because PyQtGraph can't handle them\n rowData = replaceMaskedValueWithFloat(rowData, np.isinf(rowData),\n np.nan, copyOnReplace=True)\n\n horPlotDataItem = self.config.crossPenCti.createPlotDataItem()\n horPlotDataItem.setData(rowData, connect=connected)\n self.horCrossPlotItem.addItem(horPlotDataItem)\n\n # Vertical line in hor-cross plot\n crossLineShadow90 = pg.InfiniteLine(angle=90, movable=False,\n pen=self.crossShadowPen)\n crossLineShadow90.setPos(col)\n self.horCrossPlotItem.addItem(crossLineShadow90, ignoreBounds=True)\n crossLine90 = pg.InfiniteLine(angle=90, 
movable=False, pen=self.crossPen)\n crossLine90.setPos(col)\n self.horCrossPlotItem.addItem(crossLine90, ignoreBounds=True)\n\n if show_data_point:\n crossPoint90 = pg.PlotDataItem(symbolPen=self.crossPen)\n crossPoint90.setSymbolBrush(QtGui.QBrush(self.config.crossPenCti.penColor))\n crossPoint90.setSymbolSize(10)\n crossPoint90.setData((col,), (rowData[col],))\n self.horCrossPlotItem.addItem(crossPoint90, ignoreBounds=True)\n\n self.config.horCrossPlotRangeCti.updateTarget() # update auto range\n del rowData # defensive programming\n\n if self.config.verCrossPlotCti.configValue:\n self.crossLineVerShadow.setVisible(True)\n self.crossLineVertical.setVisible(True)\n self.crossLineVerShadow.setPos(col)\n self.crossLineVertical.setPos(col)\n\n # Line plot of cross section row.\n # First determine which points are connected or separated by masks/nans.\n colData = self.slicedArray.data[:, col]\n connected = np.isfinite(colData)\n if is_an_array(self.slicedArray.mask):\n connected = np.logical_and(connected, ~self.slicedArray.mask[:, col])\n else:\n connected = (np.zeros_like(colData)\n if self.slicedArray.mask else connected)\n\n # Replace infinite value with nans because PyQtGraph can't handle them\n colData = replaceMaskedValueWithFloat(colData, np.isinf(colData),\n np.nan, copyOnReplace=True)\n\n verPlotDataItem = self.config.crossPenCti.createPlotDataItem()\n verPlotDataItem.setData(colData, np.arange(nRows), connect=connected)\n self.verCrossPlotItem.addItem(verPlotDataItem)\n\n # Horizontal line in ver-cross plot\n crossLineShadow0 = pg.InfiniteLine(angle=0, movable=False,\n pen=self.crossShadowPen)\n crossLineShadow0.setPos(row)\n self.verCrossPlotItem.addItem(crossLineShadow0, ignoreBounds=True)\n crossLine0 = pg.InfiniteLine(angle=0, movable=False, pen=self.crossPen)\n crossLine0.setPos(row)\n self.verCrossPlotItem.addItem(crossLine0, ignoreBounds=True)\n\n if show_data_point:\n crossPoint0 = pg.PlotDataItem(symbolPen=self.crossPen)\n 
crossPoint0.setSymbolBrush(QtGui.QBrush(self.config.crossPenCti.penColor))\n crossPoint0.setSymbolSize(10)\n crossPoint0.setData((colData[row],), (row,))\n self.verCrossPlotItem.addItem(crossPoint0, ignoreBounds=True)\n\n self.config.verCrossPlotRangeCti.updateTarget() # update auto range\n del colData # defensive programming\n\n except Exception as ex:\n # In contrast to _drawContents, this function is a slot and thus must not throw\n # exceptions. The exception is logged. Perhaps we should clear the cross plots, but\n # this could, in turn, raise exceptions.\n if DEBUGGING:\n raise\n else:\n logger.exception(ex)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a string that can be used to construct file dialogs filters for example", "response": "def getFileDialogFilter(self):\n \"\"\" Returns a filters that can be used to construct file dialogs filters,\n for example: 'Text File (*.txt;*.text)'\n \"\"\"\n extStr = ';'.join(['*' + ext for ext in self.extensions])\n return '{} ({})'.format(self.name, extStr)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a dictionary for serialization.", "response": "def asDict(self):\n \"\"\" Returns a dictionary for serialization.\n \"\"\"\n dct = super(RtiRegItem, self).asDict()\n dct['extensions'] = self.extensions\n return dct"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _registerExtension(self, extension, rtiRegItem):\n check_is_a_string(extension)\n check_class(rtiRegItem, RtiRegItem)\n\n logger.debug(\" Registering extension {!r} for {}\".format(extension, rtiRegItem))\n\n # TODO: type checking\n if extension in self._extensionMap:\n logger.info(\"Overriding extension {!r}: old={}, new={}\"\n .format(extension, self._extensionMap[extension], rtiRegItem))\n self._extensionMap[extension] = rtiRegItem", "response": "Adds an extension to the repository tree item."} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 script for\nadding a ClassRegItem object to the registry.", "response": "def registerItem(self, regItem):\n \"\"\" Adds a ClassRegItem object to the registry.\n \"\"\"\n super(RtiRegistry, self).registerItem(regItem)\n\n for ext in regItem.extensions:\n self._registerExtension(ext, regItem)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef registerRti(self, fullName, fullClassName, extensions=None, pythonPath=''):\n check_is_a_sequence(extensions)\n extensions = extensions if extensions is not None else []\n extensions = [prepend_point_to_extension(ext) for ext in extensions]\n\n regRti = RtiRegItem(fullName, fullClassName, extensions, pythonPath=pythonPath)\n self.registerItem(regRti)", "response": "Register a new RTI with the registry."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getFileDialogFilter(self):\n filters = []\n for regRti in self.items:\n filters.append(regRti.getFileDialogFilter())\n return ';;'.join(filters)", "response": "Returns a filter that can be used in open file dialogs for example"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a list with the default plugins in the repo tree item registry.", "response": "def getDefaultItems(self):\n \"\"\" Returns a list with the default plugins in the repo tree item registry.\n \"\"\"\n return [\n RtiRegItem('HDF-5 file',\n 'argos.repo.rtiplugins.hdf5.H5pyFileRti',\n extensions=['hdf5', 'h5', 'h5e', 'he5', 'nc']), # hdf extension is for HDF-4\n\n RtiRegItem('MATLAB file',\n 'argos.repo.rtiplugins.scipyio.MatlabFileRti',\n extensions=['mat']),\n\n RtiRegItem('NetCDF file',\n 'argos.repo.rtiplugins.ncdf.NcdfFileRti',\n #extensions=['nc', 'nc3', 'nc4']),\n extensions=['nc', 'nc4']),\n #extensions=[]),\n\n RtiRegItem('NumPy binary file',\n 
'argos.repo.rtiplugins.numpyio.NumpyBinaryFileRti',\n extensions=['npy']),\n\n RtiRegItem('NumPy compressed file',\n 'argos.repo.rtiplugins.numpyio.NumpyCompressedFileRti',\n extensions=['npz']),\n\n RtiRegItem('NumPy text file',\n 'argos.repo.rtiplugins.numpyio.NumpyTextFileRti',\n #extensions=['txt', 'text']),\n extensions=['dat']),\n\n RtiRegItem('IDL save file',\n 'argos.repo.rtiplugins.scipyio.IdlSaveFileRti',\n extensions=['sav']),\n\n RtiRegItem('Pandas CSV file',\n 'argos.repo.rtiplugins.pandasio.PandasCsvFileRti',\n extensions=['csv']),\n\n RtiRegItem('Pillow image',\n 'argos.repo.rtiplugins.pillowio.PillowFileRti',\n extensions=['bmp', 'eps', 'im', 'gif', 'jpg', 'jpeg', 'msp', 'pcx',\n 'png', 'ppm', 'spi', 'tif', 'tiff', 'xbm', 'xv']),\n\n RtiRegItem('Wav file',\n 'argos.repo.rtiplugins.scipyio.WavFileRti',\n extensions=['wav'])]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _addModuleInfo(self, moduleInfo):\n if is_a_string(moduleInfo):\n moduleInfo = mi.ImportedModuleInfo(moduleInfo)\n\n line = \"{:15s}: {}\".format(moduleInfo.name, moduleInfo.verboseVersion)\n self.editor.appendPlainText(line)\n QtWidgets.QApplication.instance().processEvents()", "response": "Adds a line with module info to the editor"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef addDependencyInfo(self):\n logger.debug(\"Adding dependency info to the AboutDialog\")\n self.progressLabel.setText(\"Retrieving package info...\")\n self.editor.clear()\n\n self._addModuleInfo(mi.PythonModuleInfo())\n self._addModuleInfo(mi.QtModuleInfo())\n\n modules = ['numpy', 'scipy', 'pandas', 'pyqtgraph']\n for module in modules:\n self._addModuleInfo(module)\n\n self._addModuleInfo(mi.PillowInfo())\n self._addModuleInfo(mi.H5pyModuleInfo())\n self._addModuleInfo(mi.NetCDF4ModuleInfo())\n\n self.progressLabel.setText(\"\")\n logger.debug(\"Finished adding dependency info to the 
AboutDialog\")", "response": "Adds version info about the installed dependencies to the AboutDialog"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _enforceDataType(self, data):\n value = float(data)\n if math.isnan(value):\n raise ValueError(\"FloatCti can't store NaNs\")\n\n if math.isinf(value):\n if value > 0:\n logger.warn(\"Replacing inf by the largest representable float\")\n value = sys.float_info.max\n else:\n logger.warn(\"Replacing -inf by the smallest representable float\")\n value = -sys.float_info.max\n\n return value", "response": "Converts the CTI data to a float so that this CTI always stores that type."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting the data to the display value.", "response": "def _dataToString(self, data):\n \"\"\" Conversion function used to convert the (default)data to the display value.\n \"\"\"\n if self.specialValueText is not None and data == self.minValue:\n return self.specialValueText\n else:\n return \"{}{:.{}f}{}\".format(self.prefix, data, self.decimals, self.suffix)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the string with debugging information about the current object.", "response": "def debugInfo(self):\n \"\"\" Returns the string with debugging information\n \"\"\"\n return (\"min = {}, max = {}, step = {}, decimals = {}, specVal = {}\"\n .format(self.minValue, self.maxValue, self.stepSize,\n self.decimals, self.specialValueText))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a FloatCtiEditor object.", "response": "def createEditor(self, delegate, parent, option):\n \"\"\" Creates a FloatCtiEditor.\n For the parameters see the AbstractCti constructor documentation.\n \"\"\"\n return FloatCtiEditor(self, delegate, parent=parent)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in 
Python 3, explain what it does\ndef finalize(self):\n self.spinBox.valueChanged.disconnect(self.commitChangedValue)\n super(FloatCtiEditor, self).finalize()", "response": "Called at clean up."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef debugInfo(self):\n return (\"enabled = {}, min = {}, max = {}, precision = {}, specVal = {}\"\n .format(self.enabled, self.minValue, self.maxValue,\n self.precision, self.specialValueText))", "response": "Returns a string with debugging information."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate an SnFloatCtiEditor object.", "response": "def createEditor(self, delegate, parent, option):\n \"\"\" Creates an SnFloatCtiEditor.\n For the parameters see the AbstractCti constructor documentation.\n \"\"\"\n return SnFloatCtiEditor(self, delegate, self.precision, parent=parent)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef finalize(self):\n self.spinBox.valueChanged.disconnect(self.commitChangedValue)\n super(SnFloatCtiEditor, self).finalize()", "response": "Called at clean up."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetecting the type of RepoTreeItem to use given a file name.", "response": "def detectRtiFromFileName(fileName):\n \"\"\" Determines the type of RepoTreeItem to use given a file name.\n Uses a DirectoryRti for directories and an UnknownFileRti if the file\n extension doesn't match one of the registered RTI extensions.\n\n Returns a (cls, regItem) tuple. Both the cls and the regItem can be None.\n If the file is a directory, (DirectoryRti, None) is returned.\n If the file extension is not in the registry, (UnknownFileRti, None) is returned.\n If the cls cannot be imported, (None, regItem) is returned.
regItem.exception will be set.\n Otherwise (cls, regItem) will be returned.\n \"\"\"\n _, extension = os.path.splitext(fileName)\n if os.path.isdir(fileName):\n rtiRegItem = None\n cls = DirectoryRti\n else:\n try:\n rtiRegItem = globalRtiRegistry().getRtiRegItemByExtension(extension)\n except (KeyError):\n logger.debug(\"No file RTI registered for extension: {}\".format(extension))\n rtiRegItem = None\n cls = UnknownFileRti\n else:\n cls = rtiRegItem.getClass(tryImport=True) # cls can be None\n\n return cls, rtiRegItem"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef createRtiFromFileName(fileName):\n cls, rtiRegItem = detectRtiFromFileName(fileName)\n if cls is None:\n logger.warn(\"Unable to import plugin {}: {}\"\n .format(rtiRegItem.fullName, rtiRegItem.exception))\n rti = UnknownFileRti.createFromFileName(fileName)\n rti.setException(rtiRegItem.exception)\n else:\n rti = cls.createFromFileName(fileName)\n\n assert rti, \"Sanity check failed (createRtiFromFileName). 
Please report this bug.\"\n return rti", "response": "Determines the type of RepoTreeItem to use given a file name and creates it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting all sub directories and files within the current directory.", "response": "def _fetchAllChildren(self):\n \"\"\" Gets all sub directories and files within the current directory.\n Does not fetch hidden files.\n \"\"\"\n childItems = []\n fileNames = os.listdir(self._fileName)\n absFileNames = [os.path.join(self._fileName, fn) for fn in fileNames]\n\n # Add subdirectories\n for fileName, absFileName in zip(fileNames, absFileNames):\n if os.path.isdir(absFileName) and not fileName.startswith('.'):\n childItems.append(DirectoryRti(fileName=absFileName, nodeName=fileName))\n\n # Add regular files\n for fileName, absFileName in zip(fileNames, absFileNames):\n if os.path.isfile(absFileName) and not fileName.startswith('.'):\n childItem = createRtiFromFileName(absFileName)\n childItems.append(childItem)\n\n return childItems"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef resizeColumnsToContents(self, startCol=None, stopCol=None):\n numCols = self.model().columnCount()\n startCol = 0 if startCol is None else max(startCol, 0)\n stopCol = numCols if stopCol is None else min(stopCol, numCols)\n\n row = 0\n for col in range(startCol, stopCol):\n indexWidget = self.indexWidget(self.model().index(row, col))\n\n if indexWidget:\n contentsWidth = indexWidget.sizeHint().width()\n else:\n contentsWidth = self.header().sectionSizeHint(col)\n\n self.header().resizeSection(col, contentsWidth)", "response": "Resizes all columns to the contents"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the size of the collector containing the current entry.", "response": "def sizeHint(self):\n \"\"\" Reimplemented from the C++ Qt source of QAbstractSpinBox.sizeHint, but without\n 
truncating to a maximum of 18 characters.\n \"\"\"\n # The cache is invalid after the prefix, postfix and other properties\n # have been set. I disabled it because sizeHint isn't called that often.\n #if self._cachedSizeHint is not None:\n # return self._cachedSizeHint\n\n orgSizeHint = super(CollectorSpinBox, self).sizeHint()\n\n self.ensurePolished()\n d = self\n fm = QtGui.QFontMetrics(self.fontMetrics())\n\n # This was h = d.edit.sizeHint().height(), but that didn't work. In the end we set the\n # height to the height calculated from the parent.\n h = orgSizeHint.height()\n w = 0\n\n # QLatin1Char seems not to be implemented.\n # Using regular string literals and hope for the best\n s = d.prefix() + d.textFromValue(d.minimum()) + d.suffix() + ' '\n\n # We disabled truncating the string here!!\n #s = s[:18]\n w = max(w, fm.width(s))\n\n s = d.prefix() + d.textFromValue(d.maximum()) + d.suffix() + ' '\n\n # We disabled truncating the string here!!\n #s = s[:18]\n w = max(w, fm.width(s))\n if len(d.specialValueText()):\n s = d.specialValueText()\n w = max(w, fm.width(s))\n\n w += 2 # cursor blinking space\n\n opt = QtWidgets.QStyleOptionSpinBox()\n self.initStyleOption(opt)\n hint = QtCore.QSize(w, h)\n extra = QtCore.QSize(35, 6)\n\n opt.rect.setSize(hint + extra)\n extra += hint - self.style().subControlRect(QtWidgets.QStyle.CC_SpinBox, opt,\n QtWidgets.QStyle.SC_SpinBoxEditField, self).size()\n\n # get closer to final result by repeating the calculation\n opt.rect.setSize(hint + extra)\n extra += hint - self.style().subControlRect(QtWidgets.QStyle.CC_SpinBox, opt,\n QtWidgets.QStyle.SC_SpinBoxEditField, self).size()\n hint += extra\n\n opt.rect = self.rect()\n result = (self.style().sizeFromContents(QtWidgets.QStyle.CT_SpinBox, opt, hint, self)\n .expandedTo(QtWidgets.QApplication.globalStrut()))\n self._cachedSizeHint = result\n\n # Use the height ancestor's sizeHint\n result.setHeight(orgSizeHint.height())\n\n return result"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 script for\ncreating an editor for the item at the given index.", "response": "def createEditor(self, parent, option, index):\n \"\"\" Returns the widget used to change data from the model and can be reimplemented to\n customize editing behavior.\n\n Reimplemented from QStyledItemDelegate.\n \"\"\"\n logger.debug(\"ConfigItemDelegate.createEditor, parent: {!r}\".format(parent.objectName()))\n assert index.isValid(), \"sanity check failed: invalid index\"\n\n cti = index.model().getItem(index)\n editor = cti.createEditor(self, parent, option)\n return editor"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nprovide the widget with data to manipulate.", "response": "def setEditorData(self, editor, index):\n \"\"\" Provides the widget with data to manipulate.\n Calls the setEditorValue of the config tree item at the index.\n\n :type editor: QWidget\n :type index: QModelIndex\n\n Reimplemented from QStyledItemDelegate.\n \"\"\"\n # We take the config value via the model to be consistent with setModelData\n data = index.model().data(index, Qt.EditRole)\n editor.setData(data)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef setModelData(self, editor, model, index):\n try:\n data = editor.getData()\n except InvalidInputError as ex:\n logger.warn(ex)\n else:\n # The value is set via the model so that signals are emitted\n logger.debug(\"ConfigItemDelegate.setModelData: {}\".format(data))\n model.setData(index, data, Qt.EditRole)", "response": "Sets the data in the specified model at the given index."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the editor geometry with the current option.", "response": "def updateEditorGeometry(self, editor, option, index):\n \"\"\" Ensures that the editor is displayed correctly with respect to the item view.\n \"\"\"\n cti = index.model().getItem(index)\n if cti.checkState is None:\n 
displayRect = option.rect\n else:\n checkBoxRect = widgetSubCheckBoxRect(editor, option)\n offset = checkBoxRect.x() + checkBoxRect.width()\n displayRect = option.rect\n displayRect.adjust(offset, 0, 0, 0)\n\n editor.setGeometry(displayRect)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfinalizes closes and releases the given editor.", "response": "def closeEditor(self, editor, hint):\n \"\"\" Finalizes, closes and releases the given editor.\n \"\"\"\n # It would be nicer if this method was part of ConfigItemDelegate since createEditor also\n # lives there. However, QAbstractItemView.closeEditor is sometimes called directly,\n # without the QAbstractItemDelegate.closeEditor signal begin emitted, e.g when the\n # currentItem changes. Therefore we cannot connect the QAbstractItemDelegate.closeEditor\n # signal to a slot in the ConfigItemDelegate.\n configItemDelegate = self.itemDelegate()\n configItemDelegate.finalizeEditor(editor)\n\n super(ConfigTreeView, self).closeEditor(editor, hint)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef expandBranch(self, index=None, expanded=None):\n configModel = self.model()\n if index is None:\n #index = configTreeModel.createIndex()\n index = QtCore.QModelIndex()\n\n if index.isValid():\n if expanded is None:\n item = configModel.getItem(index)\n self.setExpanded(index, item.expanded)\n else:\n self.setExpanded(index, expanded)\n\n for rowNr in range(configModel.rowCount(index)):\n childIndex = configModel.index(rowNr, configModel.COL_NODE_NAME, parentIndex=index)\n self.expandBranch(index=childIndex, expanded=expanded)", "response": "Expands or collapses the node at the index and all its descendants."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef ncVarAttributes(ncVar):\n try:\n return ncVar.__dict__\n except Exception as ex:\n # Due to some 
internal error netCDF4 may raise an AttributeError or KeyError,\n # depending on its version.\n logger.warn(\"Unable to read the attributes from {}. Reason: {}\"\n .format(ncVar.name, ex))\n return {}", "response": "Returns the attributes of the ncdf variable\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef ncVarUnit(ncVar):\n attributes = ncVarAttributes(ncVar)\n if not attributes:\n return '' # a premature optimization :-)\n\n for key in ('unit', 'units', 'Unit', 'Units', 'UNIT', 'UNITS'):\n if key in attributes:\n # In Python3 the attributes are byte strings so we must decode them\n # This is a bug in h5py, see https://github.com/h5py/h5py/issues/379\n return attributes[key]\n else:\n return ''", "response": "Returns the unit of the ncVar."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the missingData given a NetCDF variable", "response": "def variableMissingValue(ncVar):\n \"\"\" Returns the missingData given a NetCDF variable\n\n Looks for one of the following attributes: _FillValue, missing_value, MissingValue,\n missingValue.
Returns None if these attributes are not found.\n \"\"\"\n attributes = ncVarAttributes(ncVar)\n if not attributes:\n return None # a premature optimization :-)\n\n for key in ('missing_value', 'MissingValue', 'missingValue', 'FillValue', '_FillValue'):\n if key in attributes:\n missingDataValue = attributes[key]\n return missingDataValue\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef elementTypeName(self):\n fieldName = self.nodeName\n return str(self._ncVar.dtype.fields[fieldName][0])", "response": "String representation of the element type."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the unit attribute of the underlying ncdf variable.", "response": "def unit(self):\n \"\"\" Returns the unit attribute of the underlying ncdf variable.\n\n If the unit has a length (e.g. is a list) and has precisely one element per field,\n the unit for this field is returned.\n \"\"\"\n unit = ncVarUnit(self._ncVar)\n fieldNames = self._ncVar.dtype.names\n\n # If the unit attribute is a list with the same length as the number of fields,\n # return the unit for the field that equals self.nodeName.\n if hasattr(unit, '__len__') and len(unit) == len(fieldNames):\n idx = fieldNames.index(self.nodeName)\n return unit[idx]\n else:\n return unit"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a list with the dimension names of the underlying NCDF variable", "response": "def dimensionNames(self):\n \"\"\" Returns a list with the dimension names of the underlying NCDF variable\n \"\"\"\n nSubDims = len(self._subArrayShape)\n subArrayDims = ['SubDim{}'.format(dimNr) for dimNr in range(nSubDims)]\n return list(self._ncVar.dimensions + tuple(subArrayDims))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the value to indicate missing data.
None is returned.", "response": "def missingDataValue(self):\n \"\"\" Returns the value to indicate missing data. None if no missing-data value is specified.\n \"\"\"\n value = variableMissingValue(self._ncVar)\n fieldNames = self._ncVar.dtype.names\n\n # If the missing value attribute is a list with the same length as the number of fields,\n # return the missing value for the field that equals self.nodeName.\n if hasattr(value, '__len__') and len(value) == len(fieldNames):\n idx = fieldNames.index(self.nodeName)\n return value[idx]\n else:\n return value"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef elementTypeName(self):\n dtype = self._ncVar.dtype\n if type(dtype) == type:\n # Handle the unexpected case that dtype is a regular Python type\n # (happens e.g. in the /PROCESSOR/processing_configuration of the Trop LX files)\n return dtype.__name__\n\n return '' if dtype.names else str(dtype)", "response": "String representation of the element type."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _fetchAllChildren(self):\n assert self.canFetchChildren(), \"canFetchChildren must be True\"\n\n childItems = []\n\n # Add fields\n if self._isStructured:\n for fieldName in self._ncVar.dtype.names:\n childItems.append(NcdfFieldRti(self._ncVar, nodeName=fieldName, fileName=self.fileName))\n\n return childItems", "response": "Fetches all the children of this variable."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nfetch all sub groups and variables that this group contains.", "response": "def _fetchAllChildren(self):\n \"\"\" Fetches all sub groups and variables that this group contains.\n \"\"\"\n assert self._ncGroup is not None, \"dataset undefined (file not opened?)\"\n assert self.canFetchChildren(), \"canFetchChildren must be True\"\n\n childItems = []\n\n # Add dimensions\n for dimName, ncDim in self._ncGroup.dimensions.items():\n
childItems.append(NcdfDimensionRti(ncDim, nodeName=dimName, fileName=self.fileName))\n\n # Add groups\n for groupName, ncGroup in self._ncGroup.groups.items():\n childItems.append(NcdfGroupRti(ncGroup, nodeName=groupName, fileName=self.fileName))\n\n # Add variables\n for varName, ncVar in self._ncGroup.variables.items():\n childItems.append(NcdfVariableRti(ncVar, nodeName=varName, fileName=self.fileName))\n\n return childItems"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nopening the root Dataset.", "response": "def _openResources(self):\n \"\"\" Opens the root Dataset.\n \"\"\"\n logger.info(\"Opening: {}\".format(self._fileName))\n self._ncGroup = Dataset(self._fileName)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nclose the root Dataset.", "response": "def _closeResources(self):\n \"\"\" Closes the root Dataset.\n \"\"\"\n logger.info(\"Closing: {}\".format(self._fileName))\n self._ncGroup.close()\n self._ncGroup = None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nopening the underlying file and setting the rate attribute.", "response": "def _openResources(self):\n \"\"\" Uses scipy.io.wavfile to open the underlying file.\n \"\"\"\n try:\n rate, data = scipy.io.wavfile.read(self._fileName, mmap=True)\n except Exception as ex:\n logger.warning(ex)\n logger.warning(\"Unable to read wav with memory mapping.
Trying without now.\")\n rate, data = scipy.io.wavfile.read(self._fileName, mmap=False)\n\n self._array = data\n self.attributes['rate'] = rate"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _fetchAllChildren(self):\n childItems = []\n if self._array.ndim == 2:\n _nRows, nCols = self._array.shape if self._array is not None else (0, 0)\n for col in range(nCols):\n colItem = SliceRti(self._array[:, col], nodeName=\"channel-{}\".format(col),\n fileName=self.fileName, iconColor=self.iconColor,\n attributes=self.attributes)\n childItems.append(colItem)\n return childItems", "response": "Adds an ArrayRti per column as children so that they can be inspected easily."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _drawContents(self, reason=None, initiator=None):\n logger.debug(\"TextInspector._drawContents: {}\".format(self))\n self._clearContents()\n\n slicedArray = self.collector.getSlicedArray()\n\n if slicedArray is None:\n return\n\n # Sanity check, the slicedArray should be zero-dimensional. It can be used as a scalar.\n # In fact, using an index (e.g. slicedArray[0]) will raise an exception.\n assert slicedArray.data.ndim == 0, \\\n \"Expected zero-dimensional array. 
Got: {}\".format(slicedArray.ndim)\n\n # Valid data from here...\n maskedArr = slicedArray.asMaskedArray() # So that we call mask[()] for boolean masks\n slicedScalar = maskedArr[()] # Convert to Numpy scalar\n isMasked = maskedArr.mask[()]\n\n text = to_string(slicedScalar, masked=isMasked, maskFormat='--',\n decode_bytes=self.config.encodingCti.configValue)\n self.editor.setPlainText(text)\n self.editor.setWordWrapMode(self.config.wordWrapCti.configValue)\n\n # Update the editor font from the font config item (will call self.editor.setFont)\n self.config.updateTarget()", "response": "Draws the contents of the current item in the text editor."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nemits sigAxisReset when the middle mouse button is clicked on an axis of the plot item.", "response": "def middleMouseClickEvent(argosPgPlotItem, axisNumber, mouseClickEvent):\n \"\"\" Emits sigAxisReset when the middle mouse button is clicked on an axis of the plot item.\n \"\"\"\n if mouseClickEvent.button() == QtCore.Qt.MiddleButton:\n mouseClickEvent.accept()\n argosPgPlotItem.emitResetAxisSignal(axisNumber)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nclose the resources of this object.", "response": "def close(self):\n \"\"\" Is called before destruction.
Can be used to clean up resources.\n Could be called 'finalize' but PlotItem already has a close so we reuse that.\n \"\"\"\n logger.debug(\"Finalizing: {}\".format(self))\n super(ArgosPgPlotItem, self).close()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nshow the context menu at the cursor position", "response": "def contextMenuEvent(self, event):\n \"\"\" Shows the context menu at the cursor position\n\n We need to take the event-based approach because ArgosPgPlotItem derives from\n QGraphicsWidget, and not from QWidget, and therefore doesn't have the\n customContextMenuRequested signal.\n \"\"\"\n contextMenu = QtWidgets.QMenu()\n for action in self.actions():\n contextMenu.addAction(action)\n contextMenu.exec_(event.screenPos())"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef emitResetAxisSignal(self, axisNumber):\n assert axisNumber in (VALID_AXES_NUMBERS), \\\n \"Axis Nr should be one of {}, got {}\".format(VALID_AXES_NUMBERS, axisNumber)\n\n # Hide 'auto-scale (A)' button\n logger.debug(\"ArgosPgPlotItem.autoBtnClicked, mode:{}\".format(self.autoBtn.mode))\n if self.autoBtn.mode == 'auto':\n self.autoBtn.hide()\n else:\n # Does this occur?\n msg = \"Unexpected autobutton mode: {}\".format(self.autoBtn.mode)\n if DEBUGGING:\n raise ValueError(msg)\n else:\n logger.warn(msg)\n\n logger.debug(\"Emitting sigAxisReset({}) for {!r}\".format(axisNumber, self))\n self.sigAxisReset.emit(axisNumber)", "response": "Emits the sigAxisReset signal with the axisNumber as parameter."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef merge(local, base, remote, check_modified=False):\n base_local = diff.diff(base, local, check_modified=check_modified)\n base_remote = diff.diff(base, remote, check_modified=check_modified)\n merge = diff.diff(base_local, base_remote)\n return merge", "response":
"Generates a merged version from the diffs of the local and remote branches against the base."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef notebook_merge(local, base, remote, check_modified=False):\n\n local_cells = get_cells(local)\n base_cells = get_cells(base)\n remote_cells = get_cells(remote)\n\n rows = []\n current_row = []\n empty_cell = lambda: {\n 'cell_type': 'code',\n 'language': 'python',\n 'outputs': [],\n 'prompt_number': 1,\n 'text': ['Placeholder'],\n 'metadata': {'state': 'empty'}\n }\n\n diff_of_diffs = merge(local_cells, base_cells, remote_cells)\n\n # For each item in the higher-order diff, create a \"row\" that\n # corresponds to a row in the NBDiff interface. A row contains:\n # | LOCAL | BASE | REMOTE |\n\n for item in diff_of_diffs:\n state = item['state']\n cell = copy.deepcopy(diff_result_to_cell(item['value']))\n if state == 'deleted':\n # This change is between base and local branches.\n # It can be an addition or a deletion.\n if cell['metadata']['state'] == 'unchanged':\n # This side doesn't have the change; wait\n # until we encounter the change to create the row.\n continue\n cell['metadata']['side'] = 'local'\n remote_cell = empty_cell()\n remote_cell['metadata']['side'] = 'remote'\n if cell['metadata']['state'] == 'deleted' \\\n or cell['metadata']['state'] == 'unchanged':\n base_cell = copy.deepcopy(cell)\n else:\n base_cell = empty_cell()\n base_cell['metadata']['side'] = 'base'\n # This change is on the right.\n current_row = [\n cell,\n base_cell,\n remote_cell,\n ]\n elif state == 'added':\n # This change is between base and remote branches.\n # It can be an addition or a deletion.\n cell['metadata']['side'] = 'remote'\n if cell['metadata']['state'] == 'unchanged':\n # This side doesn't have the change; wait\n # until we encounter the change to create the row.\n continue\n if cell['metadata']['state'] == 'deleted':\n base_cell = copy.deepcopy(cell)\n
base_cell['metadata']['state'] = 'unchanged'\n local_cell = copy.deepcopy(cell)\n local_cell['metadata']['state'] = 'unchanged'\n else:\n base_cell = empty_cell()\n local_cell = empty_cell()\n base_cell['metadata']['side'] = 'base'\n local_cell['metadata']['side'] = 'local'\n current_row = [\n local_cell,\n base_cell,\n cell,\n ]\n elif state == 'unchanged':\n # The same item occurs between base-local and base-remote.\n # This happens if both branches made the same change, whether\n # that is an addition or deletion. If neither branch\n # changed a given cell, that cell shows up here too.\n cell1 = copy.deepcopy(cell)\n cell3 = copy.deepcopy(cell)\n if cell['metadata']['state'] == 'deleted' \\\n or cell['metadata']['state'] == 'unchanged':\n # If the change is a deletion, the cell-to-be-deleted\n # should show in the base as 'unchanged'. The user will\n # choose to make it deleted.\n cell2 = copy.deepcopy(cell)\n cell2['metadata']['state'] = 'unchanged'\n else:\n # If the change is an addition, it should not\n # show in the base; the user must add it to the merged version.\n cell2 = empty_cell()\n cell1['metadata']['side'] = 'local'\n cell2['metadata']['side'] = 'base'\n cell3['metadata']['side'] = 'remote'\n current_row = [\n cell1,\n cell2,\n cell3,\n ]\n\n rows.append(current_row)\n\n # Chain all rows together; create a flat array from the nested array.\n # Use the base notebook's notebook-level metadata (title, version, etc.)\n\n result_notebook = local\n if len(result_notebook['worksheets']) == 0:\n result_notebook['worksheets'] = [nbformat.new_worksheet()]\n\n new_cell_array = list(it.chain.from_iterable(rows))\n result_notebook['worksheets'][0]['cells'] = new_cell_array\n\n result_notebook['metadata']['nbdiff-type'] = 'merge'\n\n return result_notebook", "response": "This function takes the local, base and remote notebooks and returns a single notebook containing merge metadata."} {"SOURCE": "codesearchnet", "instruction": "How would you code a
function in Python 3 to\nparse a notebook .ipynb file.", "response": "def parse(self, json_data):\n \"\"\"Parse a notebook .ipynb file.\n\n Parameters\n ----------\n json_data : file\n A file handle for an .ipynb file.\n\n Returns\n -------\n nb : An IPython Notebook data structure.\n \"\"\"\n data = current.read(json_data, 'ipynb')\n json_data.close()\n return data"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nunify two notebooks into a single notebook with diff metadata.", "response": "def notebook_diff(nb1, nb2, check_modified=True):\n \"\"\"Unify two notebooks into a single notebook with diff metadata.\n\n The result of this function is a valid notebook that can be loaded\n by the IPython Notebook front-end. This function adds additional\n cell metadata that the front-end Javascript uses to render the diffs.\n\n Parameters\n ----------\n nb1 : dict\n An IPython Notebook to use as the baseline version.\n nb2 : dict\n An IPython Notebook to compare against the baseline.\n check_modified : bool\n Whether or not to detect cell modification.\n\n Returns\n -------\n nb : A valid notebook containing diff metadata.\n \"\"\"\n nb1_cells = nb1['worksheets'][0]['cells']\n nb2_cells = nb2['worksheets'][0]['cells']\n\n diffed_nb = cells_diff(nb1_cells, nb2_cells, check_modified=check_modified)\n line_diffs = diff_modified_items(diffed_nb)\n\n cell_list = list()\n for i, item in enumerate(diffed_nb):\n cell = diff_result_to_cell(item)\n if i in line_diffs:\n cell['metadata']['extra-diff-data'] = line_diffs[i]\n cell_list.append(cell)\n\n nb1['worksheets'][0]['cells'] = cell_list\n nb1['metadata']['nbdiff-type'] = 'diff'\n\n return nb1"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef words_diff(before_words, after_words):\n '''Diff the words in two strings.\n\n This is intended for use in diffing prose and other forms of text\n where line breaks have little semantic
value.\n\n Parameters\n ----------\n before_words : str\n A string to be used as the baseline version.\n after_words : str\n A string to be compared against the baseline.\n\n Returns\n -------\n diff_result : A list of dictionaries containing diff information.\n '''\n before_comps = before_words.split()\n after_comps = after_words.split()\n\n diff_result = diff(\n before_comps,\n after_comps\n )\n return diff_result", "response": "Diff the words in two strings."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef equal(self, line1, line2):\n '''\n return true if exactly equal or if equal but modified,\n otherwise return false\n return type: BooleanPlus\n '''\n\n eqLine = line1 == line2\n if eqLine:\n return BooleanPlus(True, False)\n else:\n unchanged_count = self.count_similar_words(line1, line2)\n similarity_percent = (\n (2.0 * unchanged_count) /\n (len(line1.split()) + len(line2.split()))\n )\n if similarity_percent >= 0.50:\n return BooleanPlus(True, True)\n return BooleanPlus(False, False)", "response": "Return True if line1 == line2 False otherwise"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef compare_cells(self, cell1, cell2):\n '''\n return true if exactly equal or if equal but modified,\n otherwise return false\n return type: BooleanPlus\n '''\n eqlanguage = cell1[\"language\"] == cell2[\"language\"]\n eqinput = cell1[\"input\"] == cell2[\"input\"]\n eqoutputs = self.equaloutputs(cell1[\"outputs\"], cell2[\"outputs\"])\n\n if eqlanguage and eqinput and eqoutputs:\n return BooleanPlus(True, False)\n elif not self.check_modified:\n return BooleanPlus(False, False)\n\n input1 = u\"\".join(cell1['input'])\n input2 = u\"\".join(cell2['input'])\n similarity_percent = Levenshtein.ratio(input1, input2)\n if similarity_percent >= 0.65:\n return BooleanPlus(True, True)\n return BooleanPlus(False, False)", "response": 
"return true if exactly equal or if equal but modified,\n otherwise return false\n return type: BooleanPlus"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run_application(component: Union[Component, Dict[str, Any]], *, event_loop_policy: str = None,\n max_threads: int = None, logging: Union[Dict[str, Any], int, None] = INFO,\n start_timeout: Union[int, float, None] = 10):\n \"\"\"\n Configure logging and start the given root component in the default asyncio event loop.\n\n Assuming the root component was started successfully, the event loop will continue running\n until the process is terminated.\n\n Initializes the logging system first based on the value of ``logging``:\n * If the value is a dictionary, it is passed to :func:`logging.config.dictConfig` as\n argument.\n * If the value is an integer, it is passed to :func:`logging.basicConfig` as the logging\n level.\n * If the value is ``None``, logging setup is skipped entirely.\n\n By default, the logging system is initialized using :func:`~logging.basicConfig` using the\n ``INFO`` logging level.\n\n The default executor in the event loop is replaced with a new\n :class:`~concurrent.futures.ThreadPoolExecutor` where the maximum number of threads is set to\n the value of ``max_threads`` or, if omitted, the default value of\n :class:`~concurrent.futures.ThreadPoolExecutor`.\n\n :param component: the root component (either a component instance or a configuration dictionary\n where the special ``type`` key is either a component class or a ``module:varname``\n reference to one)\n :param event_loop_policy: entry point name (from the ``asphalt.core.event_loop_policies``\n namespace) of an alternate event loop policy (or a module:varname reference to one)\n :param max_threads: the maximum number of worker threads in the default thread pool executor\n (the default value depends on the event loop implementation)\n :param logging: a logging configuration 
dictionary, :ref:`logging level ` or\n ``None``\n :param start_timeout: seconds to wait for the root component (and its subcomponents) to start\n up before giving up (``None`` = wait forever)\n\n \"\"\"\n assert check_argument_types()\n\n # Configure the logging system\n if isinstance(logging, dict):\n dictConfig(logging)\n elif isinstance(logging, int):\n basicConfig(level=logging)\n\n # Inform the user whether -O or PYTHONOPTIMIZE was set when Python was launched\n logger = getLogger(__name__)\n logger.info('Running in %s mode', 'development' if __debug__ else 'production')\n\n # Switch to an alternate event loop policy if one was provided\n if event_loop_policy:\n create_policy = policies.resolve(event_loop_policy)\n policy = create_policy()\n asyncio.set_event_loop_policy(policy)\n logger.info('Switched event loop policy to %s', qualified_name(policy))\n\n # Assign a new default executor with the given max worker thread limit if one was provided\n event_loop = asyncio.get_event_loop()\n if max_threads is not None:\n event_loop.set_default_executor(ThreadPoolExecutor(max_threads))\n logger.info('Installed a new thread pool executor with max_workers=%d', max_threads)\n\n # Instantiate the root component if a dict was given\n if isinstance(component, dict):\n component = cast(Component, component_types.create_object(**component))\n\n logger.info('Starting application')\n context = Context()\n exception = None # type: Optional[BaseException]\n exit_code = 0\n\n # Start the root component\n try:\n coro = asyncio.wait_for(component.start(context), start_timeout, loop=event_loop)\n event_loop.run_until_complete(coro)\n except asyncio.TimeoutError as e:\n exception = e\n logger.error('Timeout waiting for the root component to start')\n exit_code = 1\n except Exception as e:\n exception = e\n logger.exception('Error during application startup')\n exit_code = 1\n else:\n logger.info('Application started')\n\n # Add a signal handler to gracefully deal with SIGTERM\n 
try:\n event_loop.add_signal_handler(signal.SIGTERM, sigterm_handler, logger, event_loop)\n except NotImplementedError:\n pass # Windows does not support signals very well\n\n # Finally, run the event loop until the process is terminated or Ctrl+C is pressed\n try:\n event_loop.run_forever()\n except KeyboardInterrupt:\n pass\n except SystemExit as e:\n exit_code = e.code\n\n # Close the root context\n logger.info('Stopping application')\n event_loop.run_until_complete(context.close(exception))\n\n # Shut down leftover async generators (requires Python 3.6+)\n try:\n event_loop.run_until_complete(event_loop.shutdown_asyncgens())\n except (AttributeError, NotImplementedError):\n pass\n\n # Finally, close the event loop itself\n event_loop.close()\n logger.info('Application stopped')\n\n # Shut down the logging system\n shutdown()\n\n if exit_code:\n sys.exit(exit_code)", "response": "Runs the given root component in the event loop."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef executor(func_or_executor: Union[Executor, str, Callable[..., T_Retval]]) -> \\\n Union[WrappedCallable, Callable[..., WrappedCallable]]:\n \"\"\"\n Decorate a function to run in an executor.\n\n If no executor (or ``None``) is given, the current event loop's default executor is used.\n Otherwise, the argument must be a PEP 3148 compliant thread pool executor or the name of an\n :class:`~concurrent.futures.Executor` instance.\n\n If a decorated callable is called in a worker thread, the executor argument is ignored and the\n wrapped function is called directly.\n\n Callables wrapped with this decorator must be used with ``await`` when called in the event loop\n thread.\n\n Example use with the default executor (``None``)::\n\n @executor\n def this_runs_in_threadpool(ctx):\n return do_something_cpu_intensive()\n\n async def request_handler(ctx):\n result = await this_runs_in_threadpool(ctx)\n\n With a named 
:class:`~concurrent.futures.Executor` resource::\n\n @executor('special_ops')\n def this_runs_in_threadpool(ctx):\n return do_something_cpu_intensive()\n\n async def request_handler(ctx):\n result = await this_runs_in_threadpool(ctx)\n\n :param func_or_executor: either a callable (when used as a decorator), an executor instance or\n the name of an :class:`~concurrent.futures.Executor` resource\n\n \"\"\"\n def outer(func: Callable[..., T_Retval],\n executor: Union[Executor, str] = None) -> Callable[..., Awaitable[T_Retval]]:\n def wrapper(*args, **kwargs):\n try:\n loop = get_event_loop()\n except RuntimeError:\n # Event loop not available -- we're in a worker thread\n return func(*args, **kwargs)\n\n # Resolve the executor resource name to an Executor instance\n if isinstance(executor, str):\n try:\n ctx = next(obj for obj in args[:2] if isinstance(obj, Context))\n except StopIteration:\n raise RuntimeError('the callable needs to be called with a Context as the '\n 'first or second positional argument')\n\n _executor = ctx.require_resource(Executor, executor)\n else:\n _executor = executor\n\n callback = partial(func, *args, **kwargs)\n return loop.run_in_executor(_executor, callback)\n\n assert check_argument_types()\n assert not inspect.iscoroutinefunction(func), \\\n 'Cannot wrap coroutine functions to be run in an executor'\n return wraps(func)(wrapper)\n\n if isinstance(func_or_executor, (str, Executor)):\n return partial(outer, executor=func_or_executor)\n else:\n return outer(func_or_executor)", "response": "Decorator to wrap a function to run in an executor."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the qualified name for the given object.", "response": "def qualified_name(obj) -> str:\n \"\"\"\n Return the qualified name (e.g. 
package.module.Type) for the given object.\n\n If ``obj`` is not a class, the returned name will match its type instead.\n\n \"\"\"\n if not isclass(obj):\n obj = type(obj)\n\n if obj.__module__ == 'builtins':\n return obj.__name__\n else:\n return '{}.{}'.format(obj.__module__, obj.__qualname__)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the qualified name for the given callable.", "response": "def callable_name(func: Callable) -> str:\n \"\"\"Return the qualified name (e.g. package.module.func) for the given callable.\"\"\"\n if func.__module__ == 'builtins':\n return func.__name__\n else:\n return '{}.{}'.format(func.__module__, func.__qualname__)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef merge_config(original: Optional[Dict[str, Any]],\n overrides: Optional[Dict[str, Any]]) -> Dict[str, Any]:\n \"\"\"\n Return a copy of the ``original`` configuration dictionary, with overrides from ``overrides``\n applied.\n\n This is similar to what :meth:`dict.update` does, but when a dictionary is about to be\n replaced with another dictionary, it instead merges the contents.\n\n If a key in ``overrides`` is a dotted path (i.e. ``foo.bar.baz: value``), it is assumed to be a\n shorthand for ``foo: {bar: {baz: value}}``.\n\n :param original: a configuration dictionary (or ``None``)\n :param overrides: a dictionary containing overriding values to the configuration (or ``None``)\n :return: the merge result\n\n \"\"\"\n assert check_argument_types()\n copied = original.copy() if original else {}\n if overrides:\n for key, value in overrides.items():\n if '.'
in key:\n key, rest = key.split('.', 1)\n value = {rest: value}\n\n orig_value = copied.get(key)\n if isinstance(orig_value, dict) and isinstance(value, dict):\n copied[key] = merge_config(orig_value, value)\n else:\n copied[key] = value\n\n return copied", "response": "Returns a copy of original with overrides applied."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nresolving a reference to an entry point or a variable in a module.", "response": "def resolve(self, obj):\n \"\"\"\n Resolve a reference to an entry point or a variable in a module.\n\n If ``obj`` is a ``module:varname`` reference to an object, :func:`resolve_reference` is\n used to resolve it. If it is a string of any other kind, the named entry point is loaded\n from this container's namespace. Otherwise, ``obj`` is returned as is.\n\n :param obj: an entry point identifier, an object reference or an arbitrary object\n :return: the loaded entry point, resolved object or the unchanged input value\n :raises LookupError: if ``obj`` was a string but the named entry point was not found\n\n \"\"\"\n if not isinstance(obj, str):\n return obj\n if ':' in obj:\n return resolve_reference(obj)\n\n value = self._entrypoints.get(obj)\n if value is None:\n raise LookupError('no such entry point in {}: {}'.format(self.namespace, obj))\n\n if isinstance(value, EntryPoint):\n value = self._entrypoints[obj] = value.load()\n\n return value"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_object(self, type: Union[type, str], **constructor_kwargs):\n assert check_argument_types()\n assert self.base_class, 'base class has not been defined'\n plugin_class = self.resolve(type)\n if not issubclass(plugin_class, self.base_class):\n raise TypeError('{} is not a subclass of {}'.format(\n qualified_name(plugin_class), qualified_name(self.base_class)))\n\n return plugin_class(**constructor_kwargs)", "response": "Instantiate a 
plugin object of the specified type."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads all entry points in this namespace and return the resulting objects as a list.", "response": "def all(self) -> List[Any]:\n \"\"\"\n Load all entry points (if not already loaded) in this namespace and return the resulting\n objects as a list.\n\n \"\"\"\n values = []\n for name, value in self._entrypoints.items():\n if isinstance(value, EntryPoint):\n value = self._entrypoints[name] = value.load()\n\n values.append(value)\n\n return values"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds a child component to the entry point.", "response": "def add_component(self, alias: str, type: Union[str, type] = None, **config):\n \"\"\"\n Add a child component.\n\n This will instantiate a component class, as specified by the ``type`` argument.\n\n If the second argument is omitted, the value of ``alias`` is used as its value.\n\n The locally given configuration can be overridden by component configuration parameters\n supplied to the constructor (via the ``components`` argument).\n\n When configuration values are provided both as keyword arguments to this method and\n component configuration through the ``components`` constructor argument, the configurations\n are merged together using :func:`~asphalt.core.util.merge_config` in a way that the\n configuration values from the ``components`` argument override the keyword arguments to\n this method.\n\n :param alias: a name for the component instance, unique within this container\n :param type: entry point name or :class:`Component` subclass or a ``module:varname``\n reference to one\n :param config: keyword arguments passed to the component's constructor\n\n \"\"\"\n assert check_argument_types()\n if not isinstance(alias, str) or not alias:\n raise TypeError('component_alias must be a nonempty string')\n if alias in self.child_components:\n raise 
ValueError('there is already a child component named \"{}\"'.format(alias))\n\n config['type'] = type or alias\n\n # Allow the external configuration to override the constructor arguments\n override_config = self.component_configs.get(alias) or {}\n config = merge_config(config, override_config)\n\n component = component_types.create_object(**config)\n self.child_components[alias] = component"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstarts the child components.", "response": "async def start(self, ctx: Context):\n \"\"\"\n Create child components that have been configured but not yet created and then calls their\n :meth:`~Component.start` methods in separate tasks and waits until they have completed.\n\n \"\"\"\n for alias in self.component_configs:\n if alias not in self.child_components:\n self.add_component(alias)\n\n tasks = [component.start(ctx) for component in self.child_components.values()]\n if tasks:\n await asyncio.gather(*tasks)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef executor(arg: Union[Executor, str, Callable] = None):\n def outer_wrapper(func: Callable):\n @wraps(func)\n def inner_wrapper(*args, **kwargs):\n try:\n ctx = next(arg for arg in args[:2] if isinstance(arg, Context))\n except StopIteration:\n raise RuntimeError('the first positional argument to {}() has to be a Context '\n 'instance'.format(callable_name(func))) from None\n\n executor = ctx.require_resource(Executor, resource_name)\n return asyncio_extras.call_in_executor(func, *args, executor=executor, **kwargs)\n\n return inner_wrapper\n\n if isinstance(arg, str):\n resource_name = arg\n return outer_wrapper\n\n return asyncio_extras.threadpool(arg)", "response": "Decorator that returns a function that runs in an executor instance."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef context_teardown(func: Callable):\n 
@wraps(func)\n async def wrapper(*args, **kwargs) -> None:\n async def teardown_callback(exception: Optional[Exception]):\n try:\n await generator.asend(exception)\n except StopAsyncIteration:\n pass\n finally:\n await generator.aclose()\n\n try:\n ctx = next(arg for arg in args[:2] if isinstance(arg, Context))\n except StopIteration:\n raise RuntimeError('the first positional argument to {}() has to be a Context '\n 'instance'.format(callable_name(func))) from None\n\n generator = func(*args, **kwargs)\n try:\n await generator.asend(None)\n except StopAsyncIteration:\n pass\n except BaseException:\n await generator.aclose()\n raise\n else:\n ctx.add_teardown_callback(teardown_callback, True)\n\n if iscoroutinefunction(func):\n func = async_generator(func)\n elif not isasyncgenfunction(func):\n raise TypeError('{} must be an async generator function'.format(callable_name(func)))\n\n return wrapper", "response": "A decorator that executes the rest of the function at context teardown."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef context_chain(self) -> List['Context']:\n contexts = []\n ctx = self # type: Optional[Context]\n while ctx is not None:\n contexts.append(ctx)\n ctx = ctx.parent\n\n return contexts", "response": "Return a list of contexts starting from this one its parent and so on."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a callback to be called when this context closes.", "response": "def add_teardown_callback(self, callback: Callable, pass_exception: bool = False) -> None:\n \"\"\"\n Add a callback to be called when this context closes.\n\n This is intended for cleanup of resources, and the list of callbacks is processed in the\n reverse order in which they were added, so the last added callback will be called first.\n\n The callback may return an awaitable. 
If it does, the awaitable is awaited on before\n calling any further callbacks.\n\n :param callback: a callable that is called with either no arguments or with the exception\n that ended this context, based on the value of ``pass_exception``\n :param pass_exception: ``True`` to pass the callback the exception that ended this context\n (or ``None`` if the context ended cleanly)\n\n \"\"\"\n assert check_argument_types()\n self._check_closed()\n self._teardown_callbacks.append((callback, pass_exception))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\nasync def close(self, exception: BaseException = None) -> None:\n self._check_closed()\n self._closed = True\n\n exceptions = []\n for callback, pass_exception in reversed(self._teardown_callbacks):\n try:\n retval = callback(exception) if pass_exception else callback()\n if isawaitable(retval):\n await retval\n except Exception as e:\n exceptions.append(e)\n\n del self._teardown_callbacks\n if exceptions:\n raise TeardownError(exceptions)", "response": "Closes this context and calls any necessary resource teardown callbacks."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_resource(self, value, name: str = 'default', context_attr: str = None,\n types: Union[type, Sequence[type]] = ()) -> None:\n \"\"\"\n Add a resource to this context.\n\n This will cause a ``resource_added`` event to be dispatched.\n\n :param value: the actual resource value\n :param name: name of this resource (unique among all its registered types within a single\n context)\n :param context_attr: name of the context attribute this resource will be accessible as\n :param types: type(s) to register the resource as (omit to use the type of ``value``)\n :raises asphalt.core.context.ResourceConflict: if the resource conflicts with an existing\n one in any way\n\n \"\"\"\n assert check_argument_types()\n self._check_closed()\n if isinstance(types, type):\n types = (types,)\n elif not types:\n types = (type(value),)\n\n if value is None:\n raise ValueError('\"value\" must not be None')\n if not resource_name_re.fullmatch(name):\n raise ValueError('\"name\" must be a nonempty string consisting only of alphanumeric '\n 'characters and underscores')\n if context_attr and getattr_static(self, context_attr, None) is not None:\n raise ResourceConflict('this context already has an attribute {!r}'.format(\n context_attr))\n for resource_type in types:\n if (resource_type, name) in self._resources:\n raise ResourceConflict(\n 'this context already contains a resource of type {} using the name {!r}'.\n format(qualified_name(resource_type), name))\n\n resource = ResourceContainer(value, tuple(types), name, context_attr, False)\n for type_ in resource.types:\n self._resources[(type_, name)] = resource\n\n if context_attr:\n setattr(self, context_attr, value)\n\n # Notify listeners that a new resource has been made available\n self.resource_added.dispatch(types, name, False)", "response": "Add a resource to the current context."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a resource factory to the current context.", "response": "def add_resource_factory(self, factory_callback: factory_callback_type,\n types: Union[type, Sequence[Type]], name: str = 'default',\n context_attr: str = None) -> None:\n \"\"\"\n Add a resource factory to this context.\n\n This will cause a ``resource_added`` event to be dispatched.\n\n A resource factory is a callable that generates a \"contextual\" resource when it is\n requested by either using any of the methods :meth:`get_resource`, :meth:`require_resource`\n or :meth:`request_resource` or its context attribute is accessed.\n\n When a new resource is created in this manner, it is always bound to the context through\n which it was requested, regardless of where in the chain the factory itself was added.\n\n :param factory_callback: a 
(non-coroutine) callable that takes a context instance as\n argument and returns the created resource object\n :param types: one or more types to register the generated resource as on the target context\n :param name: name of the resource that will be created in the target context\n :param context_attr: name of the context attribute the created resource will be accessible\n as\n :raises asphalt.core.context.ResourceConflict: if there is an existing resource factory for\n the given type/name combinations or the given context variable\n\n \"\"\"\n assert check_argument_types()\n self._check_closed()\n if not resource_name_re.fullmatch(name):\n raise ValueError('\"name\" must be a nonempty string consisting only of alphanumeric '\n 'characters and underscores')\n if iscoroutinefunction(factory_callback):\n raise TypeError('\"factory_callback\" must not be a coroutine function')\n if not types:\n raise ValueError('\"types\" must not be empty')\n\n if isinstance(types, type):\n resource_types = (types,) # type: Tuple[type, ...]\n else:\n resource_types = tuple(types)\n\n # Check for a conflicting context attribute\n if context_attr in self._resource_factories_by_context_attr:\n raise ResourceConflict(\n 'this context already contains a resource factory for the context attribute {!r}'.\n format(context_attr))\n\n # Check for conflicts with existing resource factories\n for type_ in resource_types:\n if (type_, name) in self._resource_factories:\n raise ResourceConflict('this context already contains a resource factory for the '\n 'type {}'.format(qualified_name(type_)))\n\n # Add the resource factory to the appropriate lookup tables\n resource = ResourceContainer(factory_callback, resource_types, name, context_attr, True)\n for type_ in resource_types:\n self._resource_factories[(type_, name)] = resource\n\n if context_attr:\n self._resource_factories_by_context_attr[context_attr] = resource\n\n # Notify listeners that a new resource has been made available\n 
self.resource_added.dispatch(resource_types, name, True)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nlooking up a resource in the chain of contexts and return the value or None if none was available.", "response": "def get_resource(self, type: Type[T_Resource], name: str = 'default') -> Optional[T_Resource]:\n \"\"\"\n Look up a resource in the chain of contexts.\n\n :param type: type of the requested resource\n :param name: name of the requested resource\n :return: the requested resource, or ``None`` if none was available\n\n \"\"\"\n assert check_argument_types()\n self._check_closed()\n key = (type, name)\n\n # First check if there's already a matching resource in this context\n resource = self._resources.get(key)\n if resource is not None:\n return resource.value_or_factory\n\n # Next, check if there's a resource factory available on the context chain\n resource = next((ctx._resource_factories[key] for ctx in self.context_chain\n if key in ctx._resource_factories), None)\n if resource is not None:\n return resource.generate_value(self)\n\n # Finally, check parents for a matching resource\n return next((ctx._resources[key].value_or_factory for ctx in self.context_chain\n if key in ctx._resources), None)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_resources(self, type: Type[T_Resource]) -> Set[T_Resource]:\n assert check_argument_types()\n\n # Collect all the matching resources from this context\n resources = {container.name: container.value_or_factory\n for container in self._resources.values()\n if not container.is_factory and type in container.types\n } # type: Dict[str, T_Resource]\n\n # Next, find all matching resource factories in the context chain and generate resources\n resources.update({container.name: container.generate_value(self)\n for ctx in self.context_chain\n for container in ctx._resources.values()\n if container.is_factory and type in container.types\n and 
container.name not in resources})\n\n # Finally, add the resource values from the parent contexts\n resources.update({container.name: container.value_or_factory\n for ctx in self.context_chain[1:]\n for container in ctx._resources.values()\n if not container.is_factory and type in container.types\n and container.name not in resources})\n\n return set(resources.values())", "response": "Retrieve all the resources of the given type in this context and its parents."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the requested resource in the chain of contexts.", "response": "def require_resource(self, type: Type[T_Resource], name: str = 'default') -> T_Resource:\n \"\"\"\n Look up a resource in the chain of contexts and raise an exception if it is not found.\n\n This is like :meth:`get_resource` except that instead of returning ``None`` when a resource\n is not found, it will raise :exc:`~asphalt.core.context.ResourceNotFound`.\n\n :param type: type of the requested resource\n :param name: name of the requested resource\n :return: the requested resource\n :raises asphalt.core.context.ResourceNotFound: if a resource of the given type and name was\n not found\n\n \"\"\"\n resource = self.get_resource(type, name)\n if resource is None:\n raise ResourceNotFound(type, name)\n\n return resource"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nrequests a resource from the chain of contexts.", "response": "async def request_resource(self, type: Type[T_Resource], name: str = 'default') -> T_Resource:\n \"\"\"\n Look up a resource in the chain of contexts.\n\n This is like :meth:`get_resource` except that if the resource is not already available, it\n will wait for one to become available.\n\n :param type: type of the requested resource\n :param name: name of the requested resource\n :return: the requested resource\n\n \"\"\"\n # First try to locate an existing resource in this context and its 
parents\n value = self.get_resource(type, name)\n if value is not None:\n return value\n\n # Wait until a matching resource or resource factory is available\n signals = [ctx.resource_added for ctx in self.context_chain]\n await wait_event(\n signals, lambda event: event.resource_name == name and type in event.resource_types)\n return self.require_resource(type, name)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef call_async(self, func: Callable, *args, **kwargs):\n return asyncio_extras.call_async(self.loop, func, *args, **kwargs)", "response": "Call the given callable in the event loop thread."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncalling the given callable in an executor.", "response": "def call_in_executor(self, func: Callable, *args, executor: Union[Executor, str] = None,\n **kwargs) -> Awaitable:\n \"\"\"\n Call the given callable in an executor.\n\n :param func: the callable to call\n :param args: positional arguments to call the callable with\n :param executor: either an :class:`~concurrent.futures.Executor` instance, the resource\n name of one or ``None`` to use the event loop's default executor\n :param kwargs: keyword arguments to call the callable with\n :return: an awaitable that resolves to the return value of the call\n\n \"\"\"\n assert check_argument_types()\n if isinstance(executor, str):\n executor = self.require_resource(Executor, executor)\n\n return asyncio_extras.call_in_executor(func, *args, executor=executor, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef threadpool(self, executor: Union[Executor, str] = None):\n assert check_argument_types()\n if isinstance(executor, str):\n executor = self.require_resource(Executor, executor)\n\n return asyncio_extras.threadpool(executor)", "response": "Returns an asynchronous context manager that runs the block in a thread pool."} {"SOURCE": 
"codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn an async generator that yields events from the given signals.", "response": "def stream_events(signals: Sequence[Signal], filter: Callable[[T_Event], bool] = None, *,\n max_queue_size: int = 0) -> AsyncIterator[T_Event]:\n \"\"\"\n Return an async generator that yields events from the given signals.\n\n Only events that pass the filter callable (if one has been given) are returned.\n If no filter function was given, all events are yielded from the generator.\n\n :param signals: the signals to get events from\n :param filter: a callable that takes an event object as an argument and returns ``True`` if\n the event should pass, ``False`` if not\n :param max_queue_size: maximum size of the queue, after which it will start to drop events\n\n \"\"\"\n @async_generator\n async def streamer():\n try:\n while True:\n event = await queue.get()\n if filter is None or filter(event):\n await yield_(event)\n finally:\n cleanup()\n\n def cleanup():\n nonlocal queue\n if queue is not None:\n for signal in signals:\n signal.disconnect(queue.put_nowait)\n\n queue = None\n\n assert check_argument_types()\n queue = Queue(max_queue_size) # type: Queue[T_Event]\n for signal in signals:\n signal.connect(queue.put_nowait)\n\n gen = [streamer()] # this is to allow the reference count to drop to 0\n weakref.finalize(gen[0], cleanup)\n return gen.pop()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\nasync def wait_event(signals: Sequence['Signal[T_Event]'],\n filter: Callable[[T_Event], bool] = None) -> T_Event:\n \"\"\"\n Wait until any of the given signals dispatches an event that satisfies the filter (if any).\n\n If no filter has been given, the first event dispatched from the signal is returned.\n\n :param signals: the signals to get events from\n :param filter: a callable that takes an event object as an argument and returns ``True`` 
if\n the event should pass, ``False`` if not\n :return: the event that was dispatched\n\n \"\"\"\n if sys.version_info >= (3, 5, 3):\n assert check_argument_types()\n\n async with aclosing(stream_events(signals, filter)) as events:\n return await events.asend(None)", "response": "Wait until any of the given signals dispatches an event."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconnecting a callback to this signal.", "response": "def connect(self, callback: Callable[[T_Event], Any]) -> Callable[[T_Event], Any]:\n \"\"\"\n Connect a callback to this signal.\n\n Each callable can only be connected once. Duplicate registrations are ignored.\n\n If you need to pass extra arguments to the callback, you can use :func:`functools.partial`\n to wrap the callable.\n\n :param callback: a callable that will receive an event object as its only argument.\n :return: the value of ``callback`` argument\n\n \"\"\"\n assert check_argument_types()\n if self.listeners is None:\n self.listeners = []\n if callback not in self.listeners:\n self.listeners.append(callback)\n\n return callback"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef disconnect(self, callback: Callable) -> None:\n assert check_argument_types()\n try:\n if self.listeners is not None:\n self.listeners.remove(callback)\n except ValueError:\n pass", "response": "Disconnects the given callback from the current object."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndispatches the given event object to all listeners and returns a Future that resolves with True if all the callbacks have been called.", "response": "def dispatch_raw(self, event: Event) -> Awaitable[bool]:\n \"\"\"\n Dispatch the given event object to all listeners.\n\n Creates a new task in which all listener callbacks are called with the given event as\n the only argument. 
Coroutine callbacks are converted to their own respective tasks and\n waited for concurrently.\n\n Before the dispatching is done, a snapshot of the listeners is taken and the event is only\n dispatched to those listeners, so adding a listener between the call to this method and the\n actual dispatching will only affect future calls to this method.\n\n :param event: the event object to dispatch\n :returns: an awaitable that completes when all the callbacks have been called (and any\n awaitables waited on) and resolves to ``True`` if there were no exceptions raised by\n the callbacks, ``False`` otherwise\n\n \"\"\"\n async def do_dispatch() -> None:\n awaitables = []\n all_successful = True\n for callback in listeners:\n try:\n retval = callback(event)\n except Exception:\n logger.exception('Uncaught exception in event listener')\n all_successful = False\n else:\n if isawaitable(retval):\n awaitables.append(retval)\n\n # For any callbacks that returned awaitables, wait for their completion and log any\n # exceptions they raised\n if awaitables:\n done, _ = await wait(awaitables, loop=loop)\n for f in done:\n exc = f.exception()\n if exc is not None:\n all_successful = False\n logger.error('Uncaught exception in event listener', exc_info=exc)\n\n if not future.cancelled():\n future.set_result(all_successful)\n\n if not isinstance(event, self.event_class):\n raise TypeError('event must be of type {}'.format(qualified_name(self.event_class)))\n\n loop = get_event_loop()\n future = loop.create_future()\n if self.listeners:\n listeners = list(self.listeners)\n loop.create_task(do_dispatch())\n else:\n future.set_result(True)\n\n return future"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef dispatch(self, *args, **kwargs) -> Awaitable[bool]:\n event = self.event_class(self.source(), cast(str, self.topic), *args, **kwargs)\n return self.dispatch_raw(event)", "response": "Create and dispatch an event."} 
{"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef wait_event(self, filter: Callable[[T_Event], bool] = None) -> Awaitable[T_Event]:\n return wait_event([self], filter)", "response": "Shortcut for calling wait_event with this signal in the first argument."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nhandles characters seen when in an escape sequence.", "response": "def _escape_sequence(self, char):\n \"\"\"\n Handle characters seen when in an escape sequence. Most non-vt52\n commands start with a left-bracket after the escape and then a \n stream of parameters and a command.\n \"\"\"\n\n num = ord(char)\n if char == \"[\":\n self.state = \"escape-lb\"\n elif char == \"(\":\n self.state = \"charset-g0\"\n elif char == \")\":\n self.state = \"charset-g1\"\n elif num in self.escape:\n self.dispatch(self.escape[num])\n self.state = \"stream\"\n elif self.fail_on_unknown_esc:\n raise StreamProcessError(\"Unexpected character '%c' == '0x%02x'\" % (char, ord(char)))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nhandle the end of an escape sequence.", "response": "def _end_escape_sequence(self, char):\n \"\"\"\n Handle the end of an escape sequence. 
The final character in an escape\n sequence is the command to execute, which corresponds to the event that\n is dispatched here.\n \"\"\"\n\n num = ord(char)\n if num in self.sequence:\n self.dispatch(self.sequence[num], *self.params)\n self.state = \"stream\"\n self.current_param = \"\"\n self.params = []"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _escape_parameters(self, char):\n\n if char == \";\":\n self.params.append(int(self.current_param))\n self.current_param = \"\"\n elif char == \"?\":\n self.state = \"mode\"\n elif not char.isdigit():\n if len(self.current_param) > 0:\n self.params.append(int(self.current_param))\n\n # If we're in parameter parsing mode, but we see a non-numeric \n # value, it must be the end of the control sequence.\n self._end_escape_sequence(char)\n else:\n self.current_param += char", "response": "Parse the parameters in an escape sequence."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprocess a character when in the stream state.", "response": "def _stream(self, char):\n \"\"\"\n Process a character when in the\n default 'stream' state.\n \"\"\"\n\n num = ord(char)\n if num in self.basic:\n self.dispatch(self.basic[num])\n elif num == ctrl.ESC:\n self.state = \"escape\"\n elif num == 0x00:\n # nulls are just ignored.\n pass\n else: \n self.dispatch(\"print\", char)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconsumes a single character and advance the state as necessary.", "response": "def consume(self, char):\n \"\"\"\n Consume a single character and advance the state as necessary.\n \"\"\"\n\n if self.state == \"stream\":\n self._stream(char)\n elif self.state == \"escape\":\n self._escape_sequence(char)\n elif self.state == \"escape-lb\":\n self._escape_parameters(char)\n elif self.state == \"mode\":\n self._mode(char)\n elif self.state == \"charset-g0\":\n self._charset_g0(char)\n elif 
self.state == \"charset-g1\":\n self._charset_g1(char)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconsumes a string of characters and advances the state as necessary.", "response": "def process(self, chars):\n \"\"\"\n Consume a string of characters and advance the state as necessary.\n \"\"\"\n\n while len(chars) > 0:\n self.consume(chars[0])\n chars = chars[1:]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_event_listener(self, event, function):\n\n if event not in self.listeners:\n self.listeners[event] = []\n\n self.listeners[event].append(function)", "response": "Add an event listener function to be invoked when a particular event occurs."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndispatch an event to the related objects.", "response": "def dispatch(self, event, *args):\n \"\"\"\n Dispatch an event where `args` is a tuple of the arguments to send to \n any callbacks. If any callback throws an exception, the subsequent \n callbacks will be aborted.\n \"\"\"\n\n for callback in self.listeners.get(event, []):\n if len(args) > 0:\n callback(*args)\n else:\n callback()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nattach this screen to an events object that processes commands and dispatches events.", "response": "def attach(self, events):\n \"\"\"\n Attach this screen to an events object that processes commands and dispatches \n events. 
Sets up the appropriate event handlers so that the screen will\n update itself automatically as the events processes data.\n \"\"\"\n\n if events is not None:\n events.add_event_listener(\"print\", self._print)\n events.add_event_listener(\"backspace\", self._backspace)\n events.add_event_listener(\"tab\", self._tab)\n events.add_event_listener(\"linefeed\", self._linefeed)\n events.add_event_listener(\"reverse-linefeed\", \n self._reverse_linefeed)\n events.add_event_listener(\"carriage-return\", self._carriage_return)\n events.add_event_listener(\"index\", self._index)\n events.add_event_listener(\"reverse-index\", self._reverse_index)\n events.add_event_listener(\"store-cursor\", self._save_cursor)\n events.add_event_listener(\"restore-cursor\", self._restore_cursor)\n events.add_event_listener(\"cursor-up\", self._cursor_up)\n events.add_event_listener(\"cursor-down\", self._cursor_down)\n events.add_event_listener(\"cursor-right\", self._cursor_forward)\n events.add_event_listener(\"cursor-left\", self._cursor_back)\n events.add_event_listener(\"cursor-move\", self._cursor_position)\n events.add_event_listener(\"erase-in-line\", self._erase_in_line)\n events.add_event_listener(\"erase-in-display\", \n self._erase_in_display)\n events.add_event_listener(\"delete-characters\", \n self._delete_character)\n events.add_event_listener(\"insert-lines\", self._insert_line)\n events.add_event_listener(\"delete-lines\", self._delete_line)\n events.add_event_listener(\"select-graphic-rendition\",\n self._select_graphic_rendition)\n events.add_event_listener(\"charset-g0\", self._charset_g0)\n events.add_event_listener(\"charset-g1\", self._charset_g1)\n events.add_event_listener(\"shift-in\", self._shift_in)\n events.add_event_listener(\"shift-out\", self._shift_out)\n events.add_event_listener(\"bell\", self._bell)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef resize(self, shape):\n rows, cols = 
shape\n\n # Honestly though, you can't trust anyone these days...\n assert(rows > 0 and cols > 0)\n\n # First resize the rows\n if self.size[0] < rows:\n # If the current display size is shorter than the requested screen\n # size, then add rows to the bottom. Note that the old column size\n # is used here so these new rows will get expanded/contracted as\n # necessary by the column resize when it happens next.\n self.display += [u\" \" * self.size[1]] * (rows - self.size[0])\n self.attributes += [[self.default_attributes] * self.size[1]] * \\\n (rows - self.size[0])\n elif self.size[0] > rows:\n # If the current display size is taller than the requested display,\n # then take rows off the top.\n self.display = self.display[self.size[0]-rows:]\n self.attributes = self.attributes[self.size[0]-rows:]\n\n # Next, of course, resize the columns.\n if self.size[1] < cols:\n # If the current display size is thinner than the requested size,\n # expand each row to be the new size.\n self.display = \\\n [row + (u\" \" * (cols - self.size[1])) for row in self.display]\n self.attributes = \\\n [row + ([self.default_attributes] * (cols - self.size[1])) for row in self.attributes]\n elif self.size[1] > cols:\n # If the current display size is fatter than the requested size,\n # then trim each row from the right to be the new size.\n self.display = [row[:cols-self.size[1]] for row in self.display]\n self.attributes = [row[:cols-self.size[1]] for row in self.attributes]\n\n self.size = (rows, cols)\n return self.size", "response": "Resize the screen. If the requested screen size has more rows than the\n existing screen, rows will be added at the bottom. 
If the requested\n size has less rows than the existing screen rows will be clipped at the\n top of the screen.\n\n Similarly if the existing screen has less columns than the requested \n size, columns will be added at the right, and it it has more, columns \n will be clipped at the right."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _print(self, char):\n\n # Don't make bugs where we try to print a screen. \n assert len(char) == 1\n\n try:\n try:\n # Python 3\n char = self.decoder(bytes(char, self.encoding))[0]\n except TypeError:\n # Python 2.x\n char = self.decoder(char)[0]\n\n except UnicodeDecodeError:\n char = \"?\"\n\n if self.current_charset == \"g0\" and self.g0 is not None:\n char = char.translate(self.g0)\n elif self.current_charset == \"g1\" and self.g1 is not None:\n char = char.translate(self.g1)\n\n row = self.display[self.y]\n self.display[self.y] = row[:self.x] + char + row[self.x+1:]\n\n attrs = self.attributes[self.y]\n self.attributes[self.y] = attrs[:self.x] + [self.cursor_attributes] + \\\n attrs[self.x+1:]\n\n self.x += 1\n\n if self.x >= self.size[1]:\n # If this was the last column in a row, move the cursor to the\n # next row.\n self._linefeed()", "response": "Print a character at the current cursor position and advance the cursor."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _index(self):\n\n if self.y + 1 >= self.size[0]:\n # If the cursor is currently on the last row, then spawn another\n # and scroll down (removing the top row).\n self.display = self.display[1:] + [u\" \" * self.size[1]]\n else:\n # If the cursor is anywhere else, then just move it to the \n # next line.\n self.y += 1", "response": "Move the cursor down one row in the same column."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmoving the cursor up one row in the same column.", "response": "def 
_reverse_index(self):\n \"\"\"\n Move the cursor up one row in the same column. If the cursor is at the\n first row, create a new row at the top.\n \"\"\"\n if self.y == 0:\n # If the cursor is currently at the first row, then scroll the\n # screen up.\n self.display = [u\" \" * self.size[1]] + self.display[:-1]\n else:\n # If the cursor is anywhere other than the first row, then just move\n # it up by one row.\n self.y -= 1"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the x value of the next available tabstop.", "response": "def _next_tab_stop(self):\n \"\"\"\n Return the x value of the next available tabstop or the x value of the \n margin if there are no more tabstops.\n \"\"\"\n\n for stop in sorted(self.tabstops):\n if self.x < stop:\n return stop\n return self.size[1] - 1"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _restore_cursor(self):\n\n if len(self.cursor_save_stack):\n self.x, self.y = self.cursor_save_stack.pop()", "response": "Restore the most recently saved cursor position, if any."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _insert_line(self, count=1):\n trimmed = self.display[:self.y+1] + \\\n [u\" \" * self.size[1]] * count + \\\n self.display[self.y+1:]\n self.display = trimmed[:self.size[0]]", "response": "Insert count lines at cursor."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndelete count lines starting at the cursor; lines below the cursor move up.", "response": "def _delete_line(self, count=1):\n \"\"\"\n Deletes count lines, starting at line with cursor. As lines are \n deleted, lines displayed below cursor move up. 
Lines added to bottom of\n screen have spaces with same character attributes as last line moved \n up.\n \"\"\"\n self.display = self.display[:self.y] + \\\n self.display[self.y+count:]\n self.display += [u\" \" * self.size[1]] * count\n self.attributes = self.attributes[:self.y] + \\\n self.attributes[self.y+count:]\n last_attributes = self.attributes[-1]\n for _ in xrange(count):\n self.attributes.append(copy(last_attributes))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndelete count characters starting with the character at cursor position.", "response": "def _delete_character(self, count=1):\n \"\"\"\n Deletes count characters, starting with the character at cursor\n position. When a character is deleted, all characters to the right \n of cursor move left.\n \"\"\"\n\n # First resize the text display\n row = self.display[self.y]\n count = min(count, self.size[1] - self.x)\n row = row[:self.x] + row[self.x+count:] + u\" \" * count\n self.display[self.y] = row\n\n # Then resize the attribute array too\n attrs = self.attributes[self.y]\n attrs = attrs[:self.x] + attrs[self.x+count:] + [self.default_attributes] * count\n self.attributes[self.y] = attrs"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _erase_in_line(self, type_of=0):\n\n row = self.display[self.y]\n attrs = self.attributes[self.y]\n if type_of == 0:\n # Erase from the cursor to the end of line, including the cursor\n row = row[:self.x] + u\" \" * (self.size[1] - self.x)\n attrs = attrs[:self.x] + [self.default_attributes] * (self.size[1] - self.x)\n elif type_of == 1:\n # Erase from the beginning of the line to the cursor, including it\n row = u\" \" * (self.x+1) + row[self.x+1:]\n attrs = [self.default_attributes] * (self.x+1) + attrs[self.x+1:]\n elif type_of == 2:\n # Erase the entire line.\n row = u\" \" * self.size[1]\n attrs = [self.default_attributes] * self.size[1]\n\n 
self.display[self.y] = row\n self.attributes[self.y] = attrs", "response": "Erases the line in a specific way."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmove cursor right count columns. Cursor stops at right margin.", "response": "def _cursor_forward(self, count=1):\n \"\"\"\n Moves cursor right count columns. Cursor stops at right margin.\n \"\"\"\n self.x = min(self.size[1] - 1, self.x + count)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the cursor to a specific row and column.", "response": "def _cursor_position(self, row=0, column=0):\n \"\"\"\n Set the cursor to a specific row and column. \n\n Obnoxiously row/column is 1 based, instead of zero based, so we need \n to compensate. I know I've created bugs in here somehow.\n Confoundingly, inputs of 0 are still acceptable, and should move to\n the beginning of the row/column as if they were 1. *sigh*\n \"\"\"\n\n if row == 0: \n row = 1\n if column == 0: \n column = 1\n \n self.y = min(row - 1, self.size[0] - 1)\n self.x = min(column - 1, self.size[1] - 1)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _text_attr(self, attr):\n attr = text[attr]\n if attr == \"reset\":\n self.cursor_attributes = self.default_attributes\n elif attr == \"underline-off\":\n self.cursor_attributes = self._remove_text_attr(\"underline\")\n elif attr == \"blink-off\":\n self.cursor_attributes = self._remove_text_attr(\"blink\")\n elif attr == \"reverse-off\":\n self.cursor_attributes = self._remove_text_attr(\"reverse\")\n else:\n self.cursor_attributes = self._add_text_attr(attr)", "response": "Given a text attribute set the current cursor appropriately."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _color_attr(self, ground, attr):\n attr = colors[ground][attr] \n attrs = self.cursor_attributes\n if ground == \"foreground\":\n 
self.cursor_attributes = (attrs[0], attr, attrs[2])\n elif ground == \"background\":\n self.cursor_attributes = (attrs[0], attrs[1], attr)", "response": "Given a color attribute set the current cursor appropriately."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the current cursor attributes appropriately.", "response": "def _set_attr(self, attr):\n \"\"\"\n Given some text attribute, set the current cursor attributes \n appropriately.\n \"\"\"\n if attr in text:\n self._text_attr(attr)\n elif attr in colors[\"foreground\"]:\n self._color_attr(\"foreground\", attr)\n elif attr in colors[\"background\"]:\n self._color_attr(\"background\", attr)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _select_graphic_rendition(self, *attrs):\n\n if len(attrs) == 0:\n # No arguments means that we're really trying to do a reset.\n attrs = [0]\n\n for attr in attrs:\n self._set_attr(attr)", "response": "Select the current text attribute."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the : class : ~flatdict. 
FlatDict as a dict", "response": "def as_dict(self):\n \"\"\"Return the :class:`~flatdict.FlatDict` as a :class:`dict`\n\n :rtype: dict\n\n \"\"\"\n out = dict({})\n for key in self.keys():\n if self._has_delimiter(key):\n pk, ck = key.split(self._delimiter, 1)\n if self._has_delimiter(ck):\n ck = ck.split(self._delimiter, 1)[0]\n if isinstance(self._values[pk], FlatDict) and pk not in out:\n out[pk] = dict()\n if isinstance(self._values[pk][ck], FlatDict):\n out[pk][ck] = self._values[pk][ck].as_dict()\n else:\n out[pk][ck] = self._values[pk][ck]\n else:\n out[key] = self._values[key]\n return out"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef keys(self):\n keys = []\n for key, value in self._values.items():\n if isinstance(value, (FlatDict, dict)):\n nested = [self._delimiter.join([key, k]) for k in value.keys()]\n keys += nested if nested else [key]\n else:\n keys.append(key)\n return sorted(keys)", "response": "Return a copy of the flat dictionary's list of keys."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef pop(self, key, default=NO_DEFAULT):\n if key not in self and default != NO_DEFAULT:\n return default\n value = self[key]\n self.__delitem__(key)\n return value", "response": "Removes the key from the flat dictionary and returns its value. If the key is not present and default is given, default is returned."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set_delimiter(self, delimiter):\n for key in self.keys():\n if delimiter in key:\n raise ValueError('Key {!r} collides with delimiter {!r}'.format(\n key, delimiter))\n self._delimiter = delimiter\n for key in self._values.keys():\n if isinstance(self._values[key], FlatDict):\n self._values[key].set_delimiter(delimiter)", "response": "Override the default or passed in delimiter with a new value. 
If the delimiter already exists in a key an exception will be raised."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef as_dict(self):\n out = dict({})\n for key in self.keys():\n if self._has_delimiter(key):\n pk, ck = key.split(self._delimiter, 1)\n if self._has_delimiter(ck):\n ck = ck.split(self._delimiter, 1)[0]\n if isinstance(self._values[pk], FlatterDict) and pk not in out:\n out[pk] = dict()\n if isinstance(self._values[pk][ck], FlatterDict):\n if self._values[pk][ck].original_type == tuple:\n out[pk][ck] = tuple(self._child_as_list(pk, ck))\n elif self._values[pk][ck].original_type == list:\n out[pk][ck] = self._child_as_list(pk, ck)\n elif self._values[pk][ck].original_type == set:\n out[pk][ck] = set(self._child_as_list(pk, ck))\n elif self._values[pk][ck].original_type == dict:\n out[pk][ck] = self._values[pk][ck].as_dict()\n else:\n out[pk][ck] = self._values[pk][ck]\n else:\n out[key] = self._values[key]\n return out", "response": "Return the FlatterDict as a nested dict."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a list of values from the child FlatterDict instance with string based integer keys.", "response": "def _child_as_list(self, pk, ck):\n \"\"\"Returns a list of values from the child FlatterDict instance\n with string based integer keys.\n\n :param str pk: The parent key\n :param str ck: The child key\n :rtype: list\n\n \"\"\"\n return [self._values[pk][ck][k]\n for k in sorted(self._values[pk][ck].keys(),\n key=lambda x: int(x))]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ngenerate an SQL query for the edge object.", "response": "def gen_query(self):\n \"\"\"\n Generate an SQL query for the edge object.\n \"\"\"\n return (\n SQL.forwards_relation(self.src, self.rel) if self.dst is None else\n SQL.inverse_relation(self.dst, self.rel)\n )"} {"SOURCE": "codesearchnet", "instruction": 
"Make a summary of the following Python 3 code\ndef derived(self, statement, params=(), replace=False):\n return Query(\n db=self.db,\n sql=(statement,) if replace else self.sql + (statement,),\n params=self.params + params,\n )", "response": "Returns a new Query object with the given statement and params\n appended to this query's internal statement and params (the statement\n replaces the existing SQL when replace is True)."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntraverse the graph and select the destination nodes for a particular relation.", "response": "def traverse(self, edge):\n \"\"\"\n Traverse the graph, selecting the destination\n nodes for a particular relation that the selected\n nodes are a source of, i.e. select the friends of\n my friends. You can traverse indefinitely.\n\n :param edge: The edge query. If the edge's\n destination node is specified then the source\n nodes will be selected.\n \"\"\"\n query = self.statement\n rel, dst = edge.rel, edge.dst\n statement, params = (\n SQL.compound_fwd_query(query, rel) if dst is None else\n SQL.compound_inv_query(query, rel, dst)\n )\n return self.derived(statement, params, replace=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef setup_sql(self, graphs):\n with closing(self.db.cursor()) as cursor:\n for table in graphs:\n cursor.execute(SQL.CREATE_TABLE % (table))\n for index in SQL.INDEXES:\n cursor.execute(index % (table))\n self.db.commit()", "response": "Sets up the SQL tables for the object and creates indexes as well."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef remove(src, rel, dst):\n smt = 'DELETE FROM %s' % rel\n queries = []\n params = []\n\n if src is not None:\n queries.append('src = ?')\n params.append(src)\n\n if dst is not None:\n queries.append('dst = ?')\n params.append(dst)\n\n if not queries:\n return smt, params\n smt = '%s WHERE %s' % (smt, 
' AND '.join(queries))\n return smt, params", "response": "Returns a SQL statement that removes edges from the SQL backing store. Either src or dst, or both, may be specified."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef limit(lower, upper):\n offset = lower or 0\n lim = (upper - offset) if upper else -1\n smt = 'LIMIT %d OFFSET %d' % (lim, offset)\n return smt, ()", "response": "Returns a SQLite-compliant LIMIT statement that takes the lower and upper bounds into account."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef perform_ops(self):\n with self.db:\n with closing(self.db.cursor()) as cursor:\n cursor.execute('BEGIN TRANSACTION')\n self._perform_ops(cursor)", "response": "Perform the operations on the database."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a client from a url.", "response": "def from_url(cls, url, **kwargs):\n \"\"\"Create a client from a url.\"\"\"\n url = urllib3.util.parse_url(url)\n if url.host:\n kwargs.setdefault('host', url.host)\n\n if url.port:\n kwargs.setdefault('port', url.port)\n\n if url.scheme == 'https':\n kwargs.setdefault('connection_class', urllib3.HTTPSConnectionPool)\n\n return cls(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ninitialize connection to the nsqd.", "response": "def connect(self):\n \"\"\"Initialize connection to the nsqd.\"\"\"\n if self.state == DISCONNECTED:\n raise errors.NSQException('connection already closed')\n\n if self.is_connected:\n return\n\n stream = Stream(self.address, self.port, self.timeout)\n stream.connect()\n\n self.stream = stream\n self.state = CONNECTED\n self.send(nsq.MAGIC_V2)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nclosing the underlying socket.", "response": "def close_stream(self):\n \"\"\"Close the 
underlying socket.\"\"\"\n if not self.is_connected:\n return\n\n self.stream.close()\n self.state = DISCONNECTED\n self.on_close.send(self)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread an individual response from nsqd.", "response": "def read_response(self):\n \"\"\"Read an individual response from nsqd.\n\n :returns: tuple of the frame type and the processed data.\n \"\"\"\n response = self._read_response()\n frame, data = nsq.unpack_response(response)\n self.last_response = time.time()\n\n if frame not in self._frame_handlers:\n raise errors.NSQFrameError('unknown frame {}'.format(frame))\n\n frame_handler = self._frame_handlers[frame]\n processed_data = frame_handler(data)\n\n return frame, processed_data"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupdate the client metadata on the server and negotiate features.", "response": "def identify(self):\n \"\"\"Update client metadata on the server and negotiate features.\n\n :returns: nsqd response data if there was feature negotiation,\n otherwise ``None``\n \"\"\"\n self.send(nsq.identify({\n # nsqd 0.2.28+\n 'client_id': self.client_id,\n 'hostname': self.hostname,\n\n # nsqd 0.2.19+\n 'feature_negotiation': True,\n 'heartbeat_interval': self.heartbeat_interval,\n\n # nsqd 0.2.21+\n 'output_buffer_size': self.output_buffer_size,\n 'output_buffer_timeout': self.output_buffer_timeout,\n\n # nsqd 0.2.22+\n 'tls_v1': self.tls_v1,\n\n # nsqd 0.2.23+\n 'snappy': self.snappy,\n 'deflate': self.deflate,\n 'deflate_level': self.deflate_level,\n\n # nsqd nsqd 0.2.25+\n 'sample_rate': self.sample_rate,\n 'user_agent': self.user_agent,\n }))\n\n frame, data = self.read_response()\n\n if frame == nsq.FRAME_TYPE_ERROR:\n raise data\n\n if data == nsq.OK:\n return\n\n try:\n data = json.loads(data.decode('utf-8'))\n\n except ValueError:\n self.close_stream()\n raise errors.NSQException(\n 'failed to parse IDENTIFY response JSON from nsqd: '\n 
'{!r}'.format(data))\n\n self.max_ready_count = data.get('max_rdy_count', self.max_ready_count)\n\n if self.tls_v1 and data.get('tls_v1'):\n self.upgrade_to_tls()\n\n if self.snappy and data.get('snappy'):\n self.upgrade_to_snappy()\n\n elif self.deflate and data.get('deflate'):\n self.deflate_level = data.get('deflate_level', self.deflate_level)\n self.upgrade_to_defalte()\n\n if self.auth_secret and data.get('auth_required'):\n self.auth()\n\n return data"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef auth(self):\n self.send(nsq.auth(self.auth_secret))\n frame, data = self.read_response()\n\n if frame == nsq.FRAME_TYPE_ERROR:\n raise data\n\n try:\n response = json.loads(data.decode('utf-8'))\n except ValueError:\n self.close_stream()\n raise errors.NSQException(\n 'failed to parse AUTH response JSON from nsqd: '\n '{!r}'.format(data))\n\n self.on_auth.send(self, response=response)\n return response", "response": "Send an auth secret to nsqd and return the response."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef subscribe(self, topic, channel):\n self.send(nsq.subscribe(topic, channel))", "response": "Subscribe to a nsq topic and channel."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef publish(self, topic, data, defer=None):\n if defer is None:\n self.send(nsq.publish(topic, data))\n else:\n self.send(nsq.deferpublish(topic, data, defer))", "response": "Publish a message to the given topic over tcp."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npublishing an iterable of messages to the given topic over http.", "response": "def multipublish(self, topic, messages):\n \"\"\"Publish an iterable of messages to the given topic over http.\n\n :param topic: the topic to publish to\n\n :param messages: iterable of bytestrings to publish\n \"\"\"\n 
self.send(nsq.multipublish(topic, messages))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ready(self, count):\n self.ready_count = count\n self.send(nsq.ready(count))", "response": "Indicate you are ready to receive count messages."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef finish(self, message_id):\n self.send(nsq.finish(message_id))\n self.finish_inflight()\n self.on_finish.send(self, message_id=message_id)", "response": "Finish a message (indicate successful processing)."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\npublishes a message to the given topic over http.", "response": "def publish(self, topic, data, defer=None):\n \"\"\"Publish a message to the given topic over http.\n\n :param topic: the topic to publish to\n\n :param data: bytestring data to publish\n\n :param defer: duration in millisconds to defer before publishing\n (requires nsq 0.3.6)\n \"\"\"\n nsq.assert_valid_topic_name(topic)\n fields = {'topic': topic}\n\n if defer is not None:\n fields['defer'] = '{}'.format(defer)\n\n return self._request('POST', '/pub', fields=fields, body=data)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npublishing an iterable of messages to the given topic over http.", "response": "def multipublish(self, topic, messages, binary=False):\n \"\"\"Publish an iterable of messages to the given topic over http.\n\n :param topic: the topic to publish to\n\n :param messages: iterable of bytestrings to publish\n\n :param binary: enable binary mode. 
defaults to False\n (requires nsq 1.0.0)\n\n By default multipublish expects messages to be delimited by ``\"\\\\n\"``,\n use the binary flag to enable binary mode where the POST body is\n expected to be in the following wire protocol format.\n \"\"\"\n nsq.assert_valid_topic_name(topic)\n fields = {'topic': topic}\n\n if binary:\n fields['binary'] = 'true'\n body = nsq.multipublish_body(messages)\n else:\n body = b'\\n'.join(self._validate_mpub_message(m) for m in messages)\n\n return self._request('POST', '/mpub', fields=fields, body=body)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nempty all the queued messages for an existing topic.", "response": "def empty_topic(self, topic):\n \"\"\"Empty all the queued messages for an existing topic.\"\"\"\n nsq.assert_valid_topic_name(topic)\n return self._request('POST', '/topic/empty', fields={'topic': topic})"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nemptying all the queued messages for an existing channel.", "response": "def empty_channel(self, topic, channel):\n \"\"\"Empty all the queued messages for an existing channel.\"\"\"\n nsq.assert_valid_topic_name(topic)\n nsq.assert_valid_channel_name(channel)\n return self._request('POST', '/channel/empty',\n fields={'topic': topic, 'channel': channel})"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\npauses message flow to all channels on an existing topic.", "response": "def pause_topic(self, topic):\n \"\"\"Pause message flow to all channels on an existing topic.\n\n Messages will queue at topic.\n \"\"\"\n nsq.assert_valid_topic_name(topic)\n return self._request('POST', '/topic/pause', fields={'topic': topic})"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nresumes message flow to channels of an existing paused topic.", "response": "def unpause_topic(self, topic):\n \"\"\"Resume message flow to channels of 
an existing, paused, topic.\"\"\"\n nsq.assert_valid_topic_name(topic)\n return self._request('POST', '/topic/unpause', fields={'topic': topic})"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns internal instrumented statistics.", "response": "def stats(self, topic=None, channel=None, text=False):\n \"\"\"Return internal instrumented statistics.\n\n :param topic: (optional) filter to topic\n\n :param channel: (optional) filter to channel\n\n :param text: return the stats as a string (default: ``False``)\n \"\"\"\n if text:\n fields = {'format': 'text'}\n else:\n fields = {'format': 'json'}\n\n if topic:\n nsq.assert_valid_topic_name(topic)\n fields['topic'] = topic\n\n if channel:\n nsq.assert_valid_channel_name(channel)\n fields['channel'] = channel\n\n return self._request('GET', '/stats', fields=fields)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\npublishes data to a topic.", "response": "def publish_tcp(self, topic, data, **kwargs):\n \"\"\"Use :meth:`NsqdTCPClient.publish` instead.\n\n .. deprecated:: 1.0.0\n \"\"\"\n return self.__tcp_client.publish(topic, data, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef publish_http(self, topic, data, **kwargs):\n self.__http_client.publish(topic, data, **kwargs)", "response": "Deprecated use nsqd. HTTPClient. publish instead."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nuse this method to publish TCP messages to a topic.", "response": "def multipublish_tcp(self, topic, messages, **kwargs):\n \"\"\"Use :meth:`NsqdTCPClient.multipublish` instead.\n\n .. 
deprecated:: 1.0.0\n \"\"\"\n return self.__tcp_client.multipublish(topic, messages, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nuse the HTTP client to publish a set of messages.", "response": "def multipublish_http(self, topic, messages, **kwargs):\n \"\"\"Use :meth:`NsqdHTTPClient.multipublish` instead.\n\n .. deprecated:: 1.0.0\n \"\"\"\n return self.__http_client.multipublish(topic, messages, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef start(self):\n if self._state == CLOSED:\n raise NSQException('producer already closed')\n\n if self.is_running:\n self.logger.warn('producer already started')\n return\n\n self.logger.debug('starting producer...')\n self._state = RUNNING\n\n for address in self.nsqd_tcp_addresses:\n address, port = address.split(':')\n self.connect_to_nsqd(address, int(port))", "response": "Start discovering and listing to connections."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nblock until all connections have closed and workers are stopped.", "response": "def join(self, timeout=None, raise_error=False):\n \"\"\"Block until all connections have closed and workers stopped.\"\"\"\n self._workers.join(timeout, raise_error)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npublishing a message to the given topic.", "response": "def publish(self, topic, data, defer=None, block=True, timeout=None,\n raise_error=True):\n \"\"\"Publish a message to the given topic.\n\n :param topic: the topic to publish to\n\n :param data: bytestring data to publish\n\n :param defer: duration in milliseconds to defer before publishing\n (requires nsq 0.3.6)\n\n :param block: wait for a connection to become available before\n publishing the message. 
If block is `False` and no connections\n are available, :class:`~gnsq.errors.NSQNoConnections` is raised\n\n :param timeout: if timeout is a positive number, it blocks at most\n ``timeout`` seconds before raising\n :class:`~gnsq.errors.NSQNoConnections`\n\n :param raise_error: if ``True``, it blocks until a response is received\n from the nsqd server, and any error response is raised. Otherwise\n an :class:`~gevent.event.AsyncResult` is returned\n \"\"\"\n result = AsyncResult()\n conn = self._get_connection(block=block, timeout=timeout)\n\n try:\n self._response_queues[conn].append(result)\n conn.publish(topic, data, defer=defer)\n finally:\n self._put_connection(conn)\n\n if raise_error:\n return result.get()\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npublishing an iterable of messages to the given topic.", "response": "def multipublish(self, topic, messages, block=True, timeout=None,\n raise_error=True):\n \"\"\"Publish an iterable of messages to the given topic.\n\n :param topic: the topic to publish to\n\n :param messages: iterable of bytestrings to publish\n\n :param block: wait for a connection to become available before\n publishing the message. If block is `False` and no connections\n are available, :class:`~gnsq.errors.NSQNoConnections` is raised\n\n :param timeout: if timeout is a positive number, it blocks at most\n ``timeout`` seconds before raising\n :class:`~gnsq.errors.NSQNoConnections`\n\n :param raise_error: if ``True``, it blocks until a response is received\n from the nsqd server, and any error response is raised. 
Otherwise\n an :class:`~gevent.event.AsyncResult` is returned\n \"\"\"\n result = AsyncResult()\n conn = self._get_connection(block=block, timeout=timeout)\n\n try:\n self._response_queues[conn].append(result)\n conn.multipublish(topic, messages)\n finally:\n self._put_connection(conn)\n\n if raise_error:\n return result.get()\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrespond to nsqd that this message was successfully processed.", "response": "def finish(self):\n \"\"\"\n Respond to nsqd that you\u2019ve processed this message successfully\n (or would like to silently discard it).\n \"\"\"\n if self._has_responded:\n raise NSQException('already responded')\n self._has_responded = True\n self.on_finish.send(self)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nresponding to nsqd that this message was successfully processed.", "response": "def requeue(self, time_ms=0, backoff=True):\n \"\"\"\n Respond to nsqd that you\u2019ve failed to process this message successfully\n (and would like it to be requeued).\n \"\"\"\n if self._has_responded:\n raise NSQException('already responded')\n self._has_responded = True\n self.on_requeue.send(self, timeout=time_ms, backoff=backoff)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns producers for a topic.", "response": "def lookup(self, topic):\n \"\"\"Returns producers for a topic.\"\"\"\n nsq.assert_valid_topic_name(topic)\n return self._request('GET', '/lookup', fields={'topic': topic})"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef channels(self, topic):\n nsq.assert_valid_topic_name(topic)\n return self._request('GET', '/channels', fields={'topic': topic})", "response": "Returns all known channels of a topic."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef tombstone_topic(self, 
topic, node):\n nsq.assert_valid_topic_name(topic)\n return self._request('POST', '/topic/tombstone',\n fields={'topic': topic, 'node': node})", "response": "Tombstones a specific producer of an existing topic."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef start(self, block=True):\n if self._state == INIT:\n if not any(self.on_message.receivers_for(blinker.ANY)):\n raise RuntimeError('no receivers connected to on_message')\n\n self.logger.debug('starting %s...', self.name)\n self._state = RUNNING\n self.query_nsqd()\n\n if self.lookupds:\n self.query_lookupd()\n self._killables.add(self._workers.spawn(self._poll_lookupd))\n\n self._killables.add(self._workers.spawn(self._poll_ready))\n\n else:\n self.logger.warn('%s already started', self.name)\n\n if block:\n self.join()", "response": "Start discovering and listing to connections."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef publish(self, topic, message):\n if not self.connections:\n raise NSQNoConnections()\n conn = random.choice(list(self.connections))\n conn.publish(topic, message)", "response": "Use : class : ~gnsq. 
Producer instead."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef deprecated(fn):\n @functools.wraps(fn)\n def wrapper(*args, **kwargs):\n warnings.warn(fn.__doc__.split('\\n')[0],\n category=DeprecationWarning, stacklevel=2)\n return fn(*args, **kwargs)\n return wrapper", "response": "Mark a function as deprecated and warn the user on use."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _new_lnotab(instrs, lnotab):\n return {\n lno: _a_if_not_none(instr._stolen_by, instr)\n for lno, instr in lnotab.items()\n }", "response": "Returns a new lnotab after the instructions have been transformed."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef transform(self, code, *, name=None, filename=None):\n # reverse lookups from for constants and names.\n reversed_consts = {}\n reversed_names = {}\n reversed_varnames = {}\n for instr in code:\n if isinstance(instr, LOAD_CONST):\n reversed_consts[instr] = instr.arg\n if instr.uses_name:\n reversed_names[instr] = instr.arg\n if isinstance(instr, (STORE_FAST, LOAD_FAST)):\n reversed_varnames[instr] = instr.arg\n\n instrs, consts = tuple(zip(*reversed_consts.items())) or ((), ())\n for instr, const in zip(instrs, self.transform_consts(consts)):\n instr.arg = const\n\n instrs, names = tuple(zip(*reversed_names.items())) or ((), ())\n for instr, name_ in zip(instrs, self.transform_names(names)):\n instr.arg = name_\n\n instrs, varnames = tuple(zip(*reversed_varnames.items())) or ((), ())\n for instr, varname in zip(instrs, self.transform_varnames(varnames)):\n instr.arg = varname\n\n with self._new_context(code):\n post_transform = self.patterndispatcher(code)\n\n return Code(\n post_transform,\n code.argnames,\n cellvars=self.transform_cellvars(code.cellvars),\n freevars=self.transform_freevars(code.freevars),\n name=name if name is not None else code.name,\n 
filename=filename if filename is not None else code.filename,\n firstlineno=code.firstlineno,\n lnotab=_new_lnotab(post_transform, code.lnotab),\n flags=code.flags,\n )", "response": "Transform a codetransformer. Code object applying the transforms."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pformat_ast(node,\n include_attributes=INCLUDE_ATTRIBUTES_DEFAULT,\n indent=INDENT_DEFAULT):\n \"\"\"\n Pretty-format an AST tree element\n\n Parameters\n ----------\n node : ast.AST\n Top-level node to render.\n include_attributes : bool, optional\n Whether to include node attributes. Default False.\n indent : str, optional.\n Indentation string for nested expressions. Default is two spaces.\n \"\"\"\n def _fmt(node, prefix, level):\n\n def with_indent(*strs):\n return ''.join(((indent * level,) + strs))\n\n with_prefix = partial(with_indent, prefix)\n\n if isinstance(node, Name):\n # Special Case:\n # Render Name nodes on a single line.\n yield with_prefix(\n type(node).__name__,\n '(id=',\n repr(node.id),\n ', ctx=',\n type(node.ctx).__name__,\n '()),',\n )\n\n elif isinstance(node, Num):\n # Special Case:\n # Render Num nodes on a single line without names.\n yield with_prefix(\n type(node).__name__,\n '(%r),' % node.n,\n )\n\n elif isinstance(node, AST):\n fields_attrs = list(\n chain(\n iter_fields(node),\n iter_attributes(node) if include_attributes else (),\n )\n )\n if not fields_attrs:\n # Special Case:\n # Render the whole expression on one line if there are no\n # attributes.\n yield with_prefix(type(node).__name__, '(),')\n return\n\n yield with_prefix(type(node).__name__, '(')\n for name, value in fields_attrs:\n yield from _fmt(value, name + '=', level + 1)\n # Put a trailing comma if we're not at the top level.\n yield with_indent(')', ',' if level > 0 else '')\n\n elif isinstance(node, list):\n if not node:\n # Special Case:\n # Render empty lists on one line.\n yield with_prefix('[],')\n 
return\n\n yield with_prefix('[')\n yield from chain.from_iterable(\n map(partial(_fmt, prefix='', level=level + 1), node)\n )\n yield with_indent('],')\n else:\n yield with_prefix(repr(node), ',')\n\n return '\\n'.join(_fmt(node, prefix='', level=0))", "response": "Pretty - format an AST tree element."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef walk_code(co, _prefix=''):\n name = _prefix + co.co_name\n yield name, co\n yield from chain.from_iterable(\n walk_code(c, _prefix=_extend_name(name, co))\n for c in co.co_consts\n if isinstance(c, CodeType)\n )", "response": "Walk a code object and yield all the names and code objects that are also code objects."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef d(obj, mode='exec', file=None):\n if file is None:\n file = sys.stdout\n\n for name, co in walk_code(extract_code(obj, compile_mode=mode)):\n print(name, file=file)\n print('-' * len(name), file=file)\n dis.dis(co, file=file)\n print('', file=file)", "response": "Interactive convenience for displaying the disassembly of a function module or code string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef display(text, mode='exec', file=None):\n\n if file is None:\n file = sys.stdout\n\n ast_section = StringIO()\n a(text, mode=mode, file=ast_section)\n\n code_section = StringIO()\n d(text, mode=mode, file=code_section)\n\n rendered = _DISPLAY_TEMPLATE.format(\n text=text,\n ast=ast_section.getvalue(),\n code=code_section.getvalue(),\n )\n print(rendered, file=file)", "response": "Display text as AST and Bytecode."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef match(match_expr, exc_type, exc_value, exc_traceback):\n # Emulate standard behavior when match_expr is an exception subclass.\n if isinstance(match_expr, type) and 
issubclass(match_expr, BaseException):\n return issubclass(exc_type, match_expr)\n\n # Match on type and args when match_expr is an exception instance.\n return (\n issubclass(exc_type, type(match_expr))\n and\n match_expr.args == exc_value.args\n )", "response": "Match an exception expression."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pycode_to_body(co, context):\n code = Code.from_pycode(co)\n\n # On each instruction, temporarily store all the jumps to the **next**\n # instruction. This is used in _make_expr to determine when an expression\n # is part of a short-circuiting expression.\n for a, b in sliding_window(2, code.instrs):\n a._next_target_of = b._target_of\n b._next_target_of = set()\n\n try:\n body = instrs_to_body(deque(code.instrs), context)\n if context.in_function_block:\n return make_global_and_nonlocal_decls(code.instrs) + body\n return body\n finally:\n # Clean up jump target data.\n for i in code.instrs:\n del i._next_target_of", "response": "Convert a Python code object to a list of AST body elements."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts a list of Instruction objects to a list of AST body nodes.", "response": "def instrs_to_body(instrs, context):\n \"\"\"\n Convert a list of Instruction objects to a list of AST body nodes.\n \"\"\"\n stack = []\n body = []\n process_instrs(instrs, stack, body, context)\n\n if stack:\n raise DecompilationError(\n \"Non-empty stack at the end of instrs_to_body(): %s.\" % stack\n )\n return body"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprocessing all the instructions in the instruction queue.", "response": "def process_instrs(queue, stack, body, context):\n \"\"\"\n Process instructions from the instruction queue.\n \"\"\"\n next_instr = queue.popleft\n while queue:\n newcontext = _process_instr(next_instr(), queue, stack, body, context)\n 
if newcontext is not None:\n context = newcontext"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef make_if_statement(instr, queue, stack, context):\n test_expr = make_expr(stack)\n if isinstance(instr, instrs.POP_JUMP_IF_TRUE):\n test_expr = ast.UnaryOp(op=ast.Not(), operand=test_expr)\n\n first_block = popwhile(op.is_not(instr.arg), queue, side='left')\n if isinstance(first_block[-1], instrs.RETURN_VALUE):\n body = instrs_to_body(first_block, context)\n return ast.If(test=test_expr, body=body, orelse=[])\n\n jump_to_end = expect(\n first_block.pop(), instrs.JUMP_FORWARD, \"at end of if-block\"\n )\n\n body = instrs_to_body(first_block, context)\n\n # First instruction after the whole if-block.\n end = jump_to_end.arg\n if instr.arg is jump_to_end.arg:\n orelse = []\n else:\n orelse = instrs_to_body(\n popwhile(op.is_not(end), queue, side='left'),\n context,\n )\n\n return ast.If(test=test_expr, body=body, orelse=orelse)", "response": "Make an ast. If block from a POP_JUMP_IF_TRUE or POP_JUMP_IF_FALSE instruction."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _process_instr_import_name(instr, queue, stack, body, context):\n # If this is \"import module\", fromlist is None.\n # If this is \"from module import a, b\", fromlist will be ('a', 'b').\n fromlist = stack.pop().arg\n\n # level argument to __import__. 
Should be 0, 1, or 2.\n level = stack.pop().arg\n\n module = instr.arg\n if fromlist is None: # Regular import.\n attr_loads = _pop_import_LOAD_ATTRs(module, queue)\n store = queue.popleft()\n # There are two cases where we should emit an alias:\n # import a as \n # import a.b.c as \n if attr_loads or module.split('.')[0] != store.arg:\n asname = store.arg\n else:\n asname = None\n body.append(\n ast.Import(\n names=[\n ast.alias(\n name=module,\n asname=(asname),\n ),\n ],\n level=level,\n ),\n )\n return\n elif fromlist == ('*',): # From module import *.\n expect(queue.popleft(), instrs.IMPORT_STAR, \"after IMPORT_NAME\")\n body.append(\n ast.ImportFrom(\n module=module,\n names=[ast.alias(name='*', asname=None)],\n level=level,\n ),\n )\n return\n\n # Consume a pair of IMPORT_FROM, STORE_NAME instructions for each entry in\n # fromlist.\n names = list(map(make_importfrom_alias(queue, body, context), fromlist))\n body.append(ast.ImportFrom(module=module, names=names, level=level))\n\n # Remove the final POP_TOP of the imported module.\n expect(queue.popleft(), instrs.POP_TOP, \"after 'from import'\")", "response": "Process an IMPORT_NAME instruction."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _pop_import_LOAD_ATTRs(module_name, queue):\n popped = popwhile(is_a(instrs.LOAD_ATTR), queue, side='left')\n if popped:\n expected = module_name.split('.', maxsplit=1)[1]\n actual = '.'.join(map(op.attrgetter('arg'), popped))\n if expected != actual:\n raise DecompilationError(\n \"Decompiling import of module %s, but LOAD_ATTRS imply %s\" % (\n expected, actual,\n )\n )\n return popped", "response": "Pop LOAD_ATTR instructions for an import of the form a. b. c. d."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmaking an ast. alias node for the names list of an ast. 
ImportFrom node.", "response": "def make_importfrom_alias(queue, body, context, name):\n \"\"\"\n Make an ast.alias node for the names list of an ast.ImportFrom.\n\n Parameters\n ----------\n queue : deque\n Instruction Queue\n body : list\n Current body.\n context : DecompilationContext\n name : str\n Expected name of the IMPORT_FROM node to be popped.\n\n Returns\n -------\n alias : ast.alias\n\n Side Effects\n ------------\n Consumes IMPORT_FROM and STORE_NAME instructions from queue.\n \"\"\"\n import_from, store = queue.popleft(), queue.popleft()\n expect(import_from, instrs.IMPORT_FROM, \"after IMPORT_NAME\")\n\n if not import_from.arg == name:\n raise DecompilationError(\n \"IMPORT_FROM name mismatch. Expected %r, but got %s.\" % (\n name, import_from,\n )\n )\n return ast.alias(\n name=name,\n asname=store.arg if store.arg != name else None,\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nset a make_function_context, then push onto the stack.", "response": "def _make_function(instr, queue, stack, body, context):\n \"\"\"\n Set a make_function_context, then push onto the stack.\n \"\"\"\n assert stack, \"Empty stack before MAKE_FUNCTION.\"\n prev = stack[-1]\n expect(prev, instrs.LOAD_CONST, \"before MAKE_FUNCTION\")\n\n stack.append(instr)\n\n if is_lambda_name(prev.arg):\n return\n\n return context.update(\n make_function_context=MakeFunctionContext(\n closure=isinstance(instr, instrs.MAKE_CLOSURE),\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmaking an ast. Assign node.", "response": "def make_assignment(instr, queue, stack):\n \"\"\"\n Make an ast.Assign node.\n \"\"\"\n value = make_expr(stack)\n\n # Make assignment targets.\n # If there are multiple assignments (e.g. 
'a = b = c'),\n # each LHS expression except the last is preceded by a DUP_TOP instruction.\n # Thus, we make targets until we don't see a DUP_TOP, and then make one\n # more.\n targets = []\n while isinstance(instr, instrs.DUP_TOP):\n targets.append(make_assign_target(queue.popleft(), queue, stack))\n instr = queue.popleft()\n\n targets.append(make_assign_target(instr, queue, stack))\n\n return ast.Assign(targets=targets, value=value)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\npop instructions from queue that form the body of a with - block.", "response": "def pop_with_body_instrs(setup_with_instr, queue):\n \"\"\"\n Pop instructions from `queue` that form the body of a with block.\n \"\"\"\n body_instrs = popwhile(op.is_not(setup_with_instr.arg), queue, side='left')\n\n # Last two instructions should always be POP_BLOCK, LOAD_CONST(None).\n # These don't correspond to anything in the AST, so remove them here.\n load_none = body_instrs.pop()\n expect(load_none, instrs.LOAD_CONST, \"at end of with-block\")\n pop_block = body_instrs.pop()\n expect(pop_block, instrs.POP_BLOCK, \"at end of with-block\")\n if load_none.arg is not None:\n raise DecompilationError(\n \"Expected LOAD_CONST(None), but got \"\n \"%r instead\" % (load_none)\n )\n\n # Target of the setup_with should be a WITH_CLEANUP instruction followed by\n # an END_FINALLY. Neither of these correspond to anything in the AST.\n with_cleanup = queue.popleft()\n expect(with_cleanup, instrs.WITH_CLEANUP, \"at end of with-block\")\n end_finally = queue.popleft()\n expect(end_finally, instrs.END_FINALLY, \"at end of with-block\")\n\n return body_instrs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmaking an ast. 
withitem node.", "response": "def make_withitem(queue, stack):\n \"\"\"\n Make an ast.withitem node.\n \"\"\"\n context_expr = make_expr(stack)\n # This is a POP_TOP for just \"with <expr>:\".\n # This is a STORE_NAME(name) for \"with <expr> as <name>:\".\n as_instr = queue.popleft()\n if isinstance(as_instr, (instrs.STORE_FAST,\n instrs.STORE_NAME,\n instrs.STORE_DEREF,\n instrs.STORE_GLOBAL)):\n return ast.withitem(\n context_expr=context_expr,\n optional_vars=make_assign_target(as_instr, queue, stack),\n )\n elif isinstance(as_instr, instrs.POP_TOP):\n return ast.withitem(context_expr=context_expr, optional_vars=None)\n else:\n raise DecompilationError(\n \"Don't know how to make withitem from %s\" % as_instr,\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef make_for_loop(loop_body_instrs, else_body_instrs, context):\n # Instructions from start until GET_ITER are the builders for the iterator\n # expression.\n iterator_expr = make_expr(\n popwhile(not_a(instrs.GET_ITER), loop_body_instrs, side='left')\n )\n\n # Next is the GET_ITER instruction, which we don't need.\n loop_body_instrs.popleft()\n\n # Next is FOR_ITER, which is the jump target for Continue nodes.\n top_of_loop = loop_body_instrs.popleft()\n\n # This can be a STORE_* or an UNPACK_SEQUENCE followed by some number of\n # stores.\n target = make_assign_target(\n loop_body_instrs.popleft(),\n loop_body_instrs,\n stack=[],\n )\n\n body, orelse_body = make_loop_body_and_orelse(\n top_of_loop, loop_body_instrs, else_body_instrs, context\n )\n\n return ast.For(\n target=target,\n iter=iterator_expr,\n body=body,\n orelse=orelse_body,\n )", "response": "Make an ast. 
For node."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmake the body and orelse lists for a for loop.", "response": "def make_loop_body_and_orelse(top_of_loop, body_instrs, else_instrs, context):\n \"\"\"\n Make body and orelse lists for a for/while loop whose first instruction is\n `top_of_loop`.\n\n Parameters\n ----------\n top_of_loop : Instruction\n The first body of the loop. For a for-loop, this should always be a\n FOR_ITER. For a while loop, it's the first instruction of the stack\n builders for the loop test expression\n body_instrs : deque\n Queue of Instructions that form the body of the loop. The last two\n elements of body_instrs should be a JUMP_ABSOLUTE to `top_of_loop` and\n a POP_BLOCK.\n else_instrs : deque\n Queue of Instructions that form the else block of the loop. Should be\n an empty deque if there is no else block.\n context : DecompilationContext\n\n Returns\n -------\n body : list[ast.AST]\n List of ast nodes forming the loop body.\n orelse_body : list[ast.AST]\n List of ast nodes forming the else-block body.\n \"\"\"\n # Remove the JUMP_ABSOLUTE and POP_BLOCK instructions at the bottom of the\n # loop.\n body_instrs.pop()\n body_instrs.pop()\n body = instrs_to_body(body_instrs, context.update(top_of_loop=top_of_loop))\n\n if else_instrs:\n else_body = instrs_to_body(else_instrs, context)\n else:\n else_body = []\n\n return body, else_body"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_while_loop(test_and_body_instrs, else_body_instrs, context):\n top_of_loop = test_and_body_instrs[0]\n\n # The popped elements are the stack_builders for the loop test expression.\n # The top of the loop_body_instrs is either a POP_JUMP_IF_TRUE or a\n # POP_JUMP_IF_FALSE.\n test, body_instrs = make_while_loop_test_expr(test_and_body_instrs)\n body, orelse_body = make_loop_body_and_orelse(\n top_of_loop, body_instrs, else_body_instrs, context,\n 
)\n\n # while-else blocks are not yet supported or handled.\n return ast.While(test=test, body=body, orelse=orelse_body)", "response": "Make an ast. While node."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef pop_loop_instrs(setup_loop_instr, queue):\n # Grab everything from left side of the queue until the jump target of\n # SETUP_LOOP.\n body = popwhile(op.is_not(setup_loop_instr.arg), queue, side='left')\n\n # Anything after the last POP_BLOCK instruction is the else-block.\n else_body = popwhile(not_a(instrs.POP_BLOCK), body, side='right')\n\n jump_to_top, pop_block = body[-2], body[-1]\n if not isinstance(jump_to_top, instrs.JUMP_ABSOLUTE):\n raise DecompilationError(\n \"Penultimate instruction of loop body is \"\n \"%s, not JUMP_ABSOLUTE.\" % jump_to_top,\n )\n\n if not isinstance(pop_block, instrs.POP_BLOCK):\n raise DecompilationError(\n \"Last instruction of loop body is \"\n \"%s, not pop_block.\" % pop_block,\n )\n\n loop_expr = jump_to_top.arg\n if isinstance(loop_expr, instrs.FOR_ITER):\n return 'for', body, else_body\n return 'while', body, else_body", "response": "Given a SETUP_LOOP instruction and a queue of instructions from the queue pop the instructions from the queue."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _make_expr(toplevel, stack_builders):\n base_expr = _make_expr_internal(toplevel, stack_builders)\n if not toplevel._next_target_of:\n return base_expr\n\n subexprs = deque([base_expr])\n ops = deque([])\n while stack_builders and stack_builders[-1] in toplevel._next_target_of:\n jump = stack_builders.pop()\n if not isinstance(jump, _BOOLOP_JUMP_TYPES):\n raise DecompilationError(\n \"Don't know how to decompile %s inside expression.\" % jump,\n )\n subexprs.appendleft(make_expr(stack_builders))\n ops.appendleft(_BOOLOP_JUMP_TO_AST_OP[type(jump)]())\n\n if len(subexprs) <= 1:\n raise DecompilationError(\n \"Expected at 
least one JUMP instruction before expression.\"\n )\n\n return normalize_boolop(make_boolop(subexprs, ops))", "response": "Override the single - dispatched make_expr with wrapper logic for handling\n short - circuiting expressions."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a boolop expression from a list of expressions and a list of op_types.", "response": "def make_boolop(exprs, op_types):\n \"\"\"\n Parameters\n ----------\n exprs : deque\n op_types : deque[{ast.And, ast.Or}]\n \"\"\"\n if len(op_types) > 1:\n return ast.BoolOp(\n op=op_types.popleft(),\n values=[exprs.popleft(), make_boolop(exprs, op_types)],\n )\n\n assert len(exprs) == 2\n return ast.BoolOp(op=op_types.popleft(), values=list(exprs))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nnormalize a boolop by folding together nested And or exprs.", "response": "def normalize_boolop(expr):\n \"\"\"\n Normalize a boolop by folding together nested And/Or exprs.\n \"\"\"\n optype = expr.op\n newvalues = []\n for subexpr in expr.values:\n if not isinstance(subexpr, ast.BoolOp):\n newvalues.append(subexpr)\n elif type(subexpr.op) != type(optype):\n newvalues.append(normalize_boolop(subexpr))\n else:\n # Normalize subexpression, then inline its values into the\n # top-level subexpr.\n newvalues.extend(normalize_boolop(subexpr).values)\n return ast.BoolOp(op=optype, values=newvalues)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmakes the keywords entry for an ast. 
Call node.", "response": "def make_call_keywords(stack_builders, count):\n \"\"\"\n Make the keywords entry for an ast.Call node.\n \"\"\"\n out = []\n for _ in range(count):\n value = make_expr(stack_builders)\n load_kwname = stack_builders.pop()\n if not isinstance(load_kwname, instrs.LOAD_CONST):\n raise DecompilationError(\n \"Expected a LOAD_CONST, but got %r\" % load_kwname\n )\n if not isinstance(load_kwname.arg, str):\n raise DecompilationError(\n \"Expected LOAD_CONST of a str, but got %r.\" % load_kwname,\n )\n out.append(ast.keyword(arg=load_kwname.arg, value=value))\n out.reverse()\n return out"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmake the args entry for an ast. Call node.", "response": "def make_call_positionals(stack_builders, count):\n \"\"\"\n Make the args entry for an ast.Call node.\n \"\"\"\n out = [make_expr(stack_builders) for _ in range(count)]\n out.reverse()\n return out"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_exprs(stack_builders, count):\n exprs = [make_expr(stack_builders) for _ in range(count)]\n # Elements are on the stack from right to left, but we want them from left\n # to right.\n exprs.reverse()\n return exprs", "response": "Make elements of set and list of tuple literal."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _make_expr_empty_dict(toplevel, stack_builders):\n if toplevel.arg:\n raise DecompilationError(\n \"make_expr() called with nonzero BUILD_MAP arg %d\" % toplevel.arg\n )\n\n if stack_builders:\n raise DecompilationError(\n \"Unexpected stack_builders for BUILD_MAP(0): %s\" % stack_builders\n )\n return ast.Dict(keys=[], values=[])", "response": "Make an empty dict."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfinds the BUILD_MAP instruction for which the last element of stack_builders is a store.", "response": 
"def find_build_map(stack_builders):\n \"\"\"\n Find the BUILD_MAP instruction for which the last element of\n ``stack_builders`` is a store.\n \"\"\"\n assert isinstance(stack_builders[-1], instrs.STORE_MAP)\n\n to_consume = 0\n for instr in reversed(stack_builders):\n if isinstance(instr, instrs.STORE_MAP):\n # NOTE: This branch should always be hit on the first iteration.\n to_consume += 1\n elif isinstance(instr, instrs.BUILD_MAP):\n to_consume -= instr.arg\n if to_consume <= 0:\n return instr\n else:\n raise DecompilationError(\n \"Couldn't find BUILD_MAP for last element of %s.\" % stack_builders\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _make_dict_elems(build_instr, builders):\n keys = []\n values = []\n for _ in range(build_instr.arg):\n popped = builders.pop()\n if not isinstance(popped, instrs.STORE_MAP):\n raise DecompilationError(\n \"Expected a STORE_MAP but got %s\" % popped\n )\n\n keys.append(make_expr(builders))\n values.append(make_expr(builders))\n\n # Keys and values are emitted in reverse order of how they appear in the\n # AST.\n keys.reverse()\n values.reverse()\n return keys, values", "response": "Return a list of keys and values for the dictionary literal\n generated by build_instr."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef normalize_tuple_slice(node):\n if not any(isinstance(elt, ast.Slice) for elt in node.elts):\n return ast.Index(value=node)\n\n return ast.ExtSlice(\n [\n # Wrap non-Slice nodes in Index nodes.\n elt if isinstance(elt, ast.Slice) else ast.Index(value=elt)\n for elt in node.elts\n ]\n )", "response": "Normalizes an ast. Tuple node representing the internals of a slice. Returns an ast. 
ExtSlice node if there are any any\n slices."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a handler function for binary operator nodes.", "response": "def _binop_handler(nodetype):\n \"\"\"\n Factory function for binary operator handlers.\n \"\"\"\n def _handler(toplevel, stack_builders):\n right = make_expr(stack_builders)\n left = make_expr(stack_builders)\n return ast.BinOp(left=left, op=nodetype(), right=right)\n return _handler"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconstructs a function from a sequence of builders and a closure.", "response": "def make_function(function_builders, *, closure):\n \"\"\"\n Construct a FunctionDef AST node from a sequence of the form:\n\n LOAD_CLOSURE, N times (when handling MAKE_CLOSURE)\n BUILD_TUPLE(N) (when handling MAKE_CLOSURE)\n (optional)\n , (optional)\n (optional)\n LOAD_CONST() (optional)\n LOAD_CONST(code),\n LOAD_CONST(name),\n MAKE_FUNCTION | MAKE_CLOSURE\n (optional)\n \"\"\"\n decorator_calls = deque()\n while isinstance(function_builders[-1], instrs.CALL_FUNCTION):\n decorator_calls.appendleft(function_builders.pop())\n\n *builders, load_code_instr, load_name_instr, make_function_instr = (\n function_builders\n )\n\n _check_make_function_instrs(\n load_code_instr, load_name_instr, make_function_instr,\n )\n\n co = load_code_instr.arg\n name = load_name_instr.arg\n args, kwonly, varargs, varkwargs = paramnames(co)\n\n # Convert default and annotation builders to AST nodes.\n defaults, kw_defaults, annotations = make_defaults_and_annotations(\n make_function_instr,\n builders,\n )\n\n # Convert decorator function builders. 
The stack is in reverse order.\n decorators = [make_expr(builders) for _ in decorator_calls]\n decorators.reverse()\n\n if closure:\n # There should be a tuple of closure cells still on the stack here.\n # These don't appear in the AST, but we need to consume them to ensure\n # correctness down the line.\n closure_cells = make_closure_cells(builders) # noqa\n\n # We should have consumed all our builders by this point.\n if builders:\n raise DecompilationError(\n \"Unexpected leftover builders for %s: %s.\" % (\n make_function_instr, builders\n )\n )\n\n return ast.FunctionDef(\n body_code=co,\n name=name.split('.')[-1],\n args=make_function_arguments(\n args,\n kwonly,\n varargs,\n varkwargs,\n defaults,\n kw_defaults,\n annotations,\n ),\n body=pycode_to_body(co, DecompilationContext(in_function_block=True)),\n decorator_list=decorators,\n returns=annotations.get('return'),\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef make_function_arguments(args,\n kwonly,\n varargs,\n varkwargs,\n defaults,\n kw_defaults,\n annotations):\n \"\"\"\n Make an ast.arguments from the args parsed out of a code object.\n \"\"\"\n return ast.arguments(\n args=[ast.arg(arg=a, annotation=annotations.get(a)) for a in args],\n kwonlyargs=[\n ast.arg(arg=a, annotation=annotations.get(a)) for a in kwonly\n ],\n defaults=defaults,\n kw_defaults=list(map(kw_defaults.get, kwonly)),\n vararg=None if varargs is None else ast.arg(\n arg=varargs, annotation=annotations.get(varargs),\n ),\n kwarg=None if varkwargs is None else ast.arg(\n arg=varkwargs, annotation=annotations.get(varkwargs)\n ),\n )", "response": "Make an ast. arguments from the args parsed out of a code object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngiving a list of instructions in instrs and a list of ast. Global and ast. Nonlocal declarations return a canonical list of ast. Global and ast. 
Nonlocal declarations.", "response": "def make_global_and_nonlocal_decls(code_instrs):\n \"\"\"\n Find all STORE_GLOBAL and STORE_DEREF instructions in `instrs` and convert\n them into a canonical list of `ast.Global` and `ast.Nonlocal` declarations.\n \"\"\"\n globals_ = sorted(set(\n i.arg for i in code_instrs if isinstance(i, instrs.STORE_GLOBAL)\n ))\n nonlocals = sorted(set(\n i.arg for i in code_instrs\n if isinstance(i, instrs.STORE_DEREF) and i.vartype == 'free'\n ))\n\n out = []\n if globals_:\n out.append(ast.Global(names=globals_))\n if nonlocals:\n out.append(ast.Nonlocal(names=nonlocals))\n return out"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the AST expressions corresponding to the defaults kwonly defaults and annotations for a function created by make_function_instr.", "response": "def make_defaults_and_annotations(make_function_instr, builders):\n \"\"\"\n Get the AST expressions corresponding to the defaults, kwonly defaults, and\n annotations for a function created by `make_function_instr`.\n \"\"\"\n # Integer counts.\n n_defaults, n_kwonlydefaults, n_annotations = unpack_make_function_arg(\n make_function_instr.arg\n )\n if n_annotations:\n # TOS should be a tuple of annotation names.\n load_annotation_names = builders.pop()\n annotations = dict(zip(\n reversed(load_annotation_names.arg),\n (make_expr(builders) for _ in range(n_annotations - 1))\n ))\n else:\n annotations = {}\n\n kwonlys = {}\n while n_kwonlydefaults:\n default_expr = make_expr(builders)\n key_instr = builders.pop()\n if not isinstance(key_instr, instrs.LOAD_CONST):\n raise DecompilationError(\n \"kwonlydefault key is not a LOAD_CONST: %s\" % key_instr\n )\n if not isinstance(key_instr.arg, str):\n raise DecompilationError(\n \"kwonlydefault key builder is not a \"\n \"'LOAD_CONST of a string: %s\" % key_instr\n )\n\n kwonlys[key_instr.arg] = default_expr\n n_kwonlydefaults -= 1\n\n defaults = make_exprs(builders, n_defaults)\n 
return defaults, kwonlys, annotations"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _check_make_function_instrs(load_code_instr,\n load_name_instr,\n make_function_instr,\n *,\n expect_lambda=False):\n \"\"\"\n Validate the instructions passed to a make_function call.\n \"\"\"\n\n # Validate load_code_instr.\n if not isinstance(load_code_instr, instrs.LOAD_CONST):\n raise TypeError(\n \"make_function expected 'load_code_instr` to be a \"\n \"LOAD_CONST, but got %s\" % load_code_instr,\n )\n if not isinstance(load_code_instr.arg, types.CodeType):\n raise TypeError(\n \"make_function expected load_code_instr \"\n \"to load a code object, but got %s\" % load_code_instr.arg,\n )\n\n # Validate load_name_instr\n if not isinstance(load_name_instr, instrs.LOAD_CONST):\n raise TypeError(\n \"make_function expected 'load_name_instr` to be a \"\n \"LOAD_CONST, but got %s\" % load_code_instr,\n )\n\n if not isinstance(load_name_instr.arg, str):\n raise TypeError(\n \"make_function expected load_name_instr \"\n \"to load a string, but got %r instead\" % load_name_instr.arg\n )\n\n # This is an endswith rather than '==' because the arg is the\n # fully-qualified name.\n is_lambda = is_lambda_name(load_name_instr.arg)\n if expect_lambda and not is_lambda:\n raise ValueError(\n \"Expected to make a function named , but \"\n \"got %r instead.\" % load_name_instr.arg\n )\n if not expect_lambda and is_lambda:\n raise ValueError(\"Unexpectedly received lambda function.\")\n\n # Validate make_function_instr\n if not isinstance(make_function_instr, (instrs.MAKE_FUNCTION,\n instrs.MAKE_CLOSURE)):\n raise TypeError(\n \"make_function expected a MAKE_FUNCTION or MAKE_CLOSURE\"\n \"instruction, but got %s instead.\" % make_function_instr\n )", "response": "Validate the instructions passed to a make_function call."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
pop_arguments(instr, stack):\n needed = instr.stack_effect\n if needed >= 0:\n raise DecompilationError(\n \"%s does not have a negative stack effect\" % instr\n )\n\n for popcount, to_pop in enumerate(reversed(stack), start=1):\n needed += to_pop.stack_effect\n if not needed:\n break\n else:\n raise DecompilationError(\n \"Reached end of stack without finding inputs to %s\" % instr,\n )\n\n popped = stack[-popcount:]\n stack[:] = stack[:-popcount]\n\n return popped", "response": "Pop instructions off stack until we pop all instructions that will\n produce values popped by instr."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _check_stack_for_module_return(stack):\n fail = (\n len(stack) != 1\n or not isinstance(stack[0], instrs.LOAD_CONST)\n or stack[0].arg is not None\n )\n\n if fail:\n raise DecompilationError(\n \"Reached end of non-function code \"\n \"block with unexpected stack: %s.\" % stack\n )", "response": "Verify that the stack is in the expected state before the dummy\n RETURN_VALUE instruction."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking that an instruction is of the expected type.", "response": "def expect(instr, expected, context):\n \"\"\"\n Check that an instruction is of the expected type.\n \"\"\"\n if not isinstance(instr, expected):\n raise DecompilationError(\n \"Expected a {expected} instruction {context}. 
Got {instr}.\".format(\n instr=instr, expected=expected, context=context,\n )\n )\n return instr"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\npop elements off a queue while cond is True.", "response": "def popwhile(cond, queue, *, side):\n \"\"\"\n Pop elements off a queue while `cond(nextelem)` is True.\n\n Parameters\n ----------\n cond : predicate\n queue : deque\n side : {'left', 'right'}\n\n Returns\n -------\n popped : deque\n\n Examples\n --------\n >>> from collections import deque\n >>> d = deque([1, 2, 3, 2, 1])\n >>> popwhile(lambda x: x < 3, d, side='left')\n deque([1, 2])\n >>> d\n deque([3, 2, 1])\n >>> popwhile(lambda x: x < 3, d, side='right')\n deque([2, 1])\n >>> d\n deque([3])\n \"\"\"\n if side not in ('left', 'right'):\n raise ValueError(\"`side` must be one of 'left' or 'right'\")\n\n out = deque()\n\n if side == 'left':\n popnext = queue.popleft\n pushnext = out.append\n nextidx = 0\n else:\n popnext = queue.pop\n pushnext = out.appendleft\n nextidx = -1\n\n while queue:\n if not cond(queue[nextidx]):\n break\n pushnext(popnext())\n return out"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef paramnames(co):\n flags = co.co_flags\n varnames = co.co_varnames\n\n argcount, kwonlyargcount = co.co_argcount, co.co_kwonlyargcount\n total = argcount + kwonlyargcount\n\n args = varnames[:argcount]\n kwonlyargs = varnames[argcount:total]\n varargs, varkwargs = None, None\n if flags & Flag.CO_VARARGS:\n varargs = varnames[total]\n total += 1\n if flags & Flag.CO_VARKEYWORDS:\n varkwargs = varnames[total]\n\n return args, kwonlyargs, varargs, varkwargs", "response": "Get the parameter names from a pycode object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the name of the variable manipulated by a uses_free instruction.", "response": "def _freevar_argname(arg, cellvars, freevars):\n \"\"\"\n Get the name of the variable 
manipulated by a 'uses_free' instruction.\n\n Parameters\n ----------\n arg : int\n The raw argument to a uses_free instruction that we want to resolve to\n a name.\n cellvars : list[str]\n The co_cellvars of the function for which we want to resolve `arg`.\n freevars : list[str]\n The co_freevars of the function for which we want to resolve `arg`.\n\n Notes\n -----\n From https://docs.python.org/3.5/library/dis.html#opcode-LOAD_CLOSURE:\n\n The name of the variable is co_cellvars[i] if i is less than the length\n of co_cellvars. Otherwise it is co_freevars[i - len(co_cellvars)]\n \"\"\"\n len_cellvars = len(cellvars)\n if arg < len_cellvars:\n return cellvars[arg]\n return freevars[arg - len_cellvars]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef pycode(argcount,\n kwonlyargcount,\n nlocals,\n stacksize,\n flags,\n codestring,\n constants,\n names,\n varnames,\n filename,\n name,\n firstlineno,\n lnotab,\n freevars=(),\n cellvars=()):\n \"\"\"types.CodeType constructor that accepts keyword arguments.\n\n See Also\n --------\n types.CodeType\n \"\"\"\n return CodeType(\n argcount,\n kwonlyargcount,\n nlocals,\n stacksize,\n flags,\n codestring,\n constants,\n names,\n varnames,\n filename,\n name,\n firstlineno,\n lnotab,\n freevars,\n cellvars,\n )", "response": "Returns a new CodeType instance that can be used to create a code object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef from_pycode(cls, co):\n # Make it sparse to instrs[n] is the instruction at bytecode[n]\n sparse_instrs = tuple(\n _sparse_args(\n Instruction.from_opcode(\n b.opcode,\n Instruction._no_arg if b.arg is None else _RawArg(b.arg),\n ) for b in Bytecode(co)\n ),\n )\n for idx, instr in enumerate(sparse_instrs):\n if instr is None:\n # The sparse value\n continue\n if instr.absjmp:\n instr.arg = sparse_instrs[instr.arg]\n elif instr.reljmp:\n instr.arg = sparse_instrs[instr.arg + 
idx + argsize + 1]\n elif isinstance(instr, LOAD_CONST):\n instr.arg = co.co_consts[instr.arg]\n elif instr.uses_name:\n instr.arg = co.co_names[instr.arg]\n elif instr.uses_varname:\n instr.arg = co.co_varnames[instr.arg]\n elif instr.uses_free:\n instr.arg = _freevar_argname(\n instr.arg,\n co.co_freevars,\n co.co_cellvars,\n )\n elif instr.have_arg and isinstance(instr.arg, _RawArg):\n instr.arg = int(instr.arg)\n\n flags = Flag.unpack(co.co_flags)\n has_vargs = flags['CO_VARARGS']\n has_kwargs = flags['CO_VARKEYWORDS']\n\n # Here we convert the varnames format into our argnames format.\n paramnames = co.co_varnames[\n :(co.co_argcount +\n co.co_kwonlyargcount +\n has_vargs +\n has_kwargs)\n ]\n # We start with the positional arguments.\n new_paramnames = list(paramnames[:co.co_argcount])\n # Add *args next.\n if has_vargs:\n new_paramnames.append('*' + paramnames[-1 - has_kwargs])\n # Add positional only arguments next.\n new_paramnames.extend(paramnames[\n co.co_argcount:co.co_argcount + co.co_kwonlyargcount\n ])\n # Add **kwargs last.\n if has_kwargs:\n new_paramnames.append('**' + paramnames[-1])\n\n return cls(\n filter(bool, sparse_instrs),\n argnames=new_paramnames,\n cellvars=co.co_cellvars,\n freevars=co.co_freevars,\n name=co.co_name,\n filename=co.co_filename,\n firstlineno=co.co_firstlineno,\n lnotab={\n lno: sparse_instrs[off] for off, lno in findlinestarts(co)\n },\n flags=flags,\n )", "response": "Create a Code object from a python code object."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a python code object from the more abstract object.", "response": "def to_pycode(self):\n \"\"\"Create a python code object from the more abstract\n codetransfomer.Code object.\n\n Returns\n -------\n co : CodeType\n The python code object.\n \"\"\"\n consts = self.consts\n names = self.names\n varnames = self.varnames\n freevars = self.freevars\n cellvars = self.cellvars\n bc = bytearray()\n for instr in self.instrs:\n 
bc.append(instr.opcode)  # Write the opcode byte.\n if isinstance(instr, LOAD_CONST):\n # Resolve the constant index.\n bc.extend(consts.index(instr.arg).to_bytes(argsize, 'little'))\n elif instr.uses_name:\n # Resolve the name index.\n bc.extend(names.index(instr.arg).to_bytes(argsize, 'little'))\n elif instr.uses_varname:\n # Resolve the local variable index.\n bc.extend(\n varnames.index(instr.arg).to_bytes(argsize, 'little'),\n )\n elif instr.uses_free:\n # uses_free is really \"uses freevars **or** cellvars\".\n try:\n # look for the name in cellvars\n bc.extend(\n cellvars.index(instr.arg).to_bytes(argsize, 'little'),\n )\n except ValueError:\n # fall back to freevars, incrementing the length of\n # cellvars.\n bc.extend(\n (freevars.index(instr.arg) + len(cellvars)).to_bytes(\n argsize,\n 'little',\n )\n )\n elif instr.absjmp:\n # Resolve the absolute jump target.\n bc.extend(\n self.bytecode_offset(instr.arg).to_bytes(\n argsize,\n 'little',\n ),\n )\n elif instr.reljmp:\n # Resolve the relative jump target.\n # We do this by subtracting the current instruction's\n # sparse index from the sparse index of the argument.\n # We then subtract argsize - 1 to account for the bytes the\n # current instruction takes up.\n bytecode_offset = self.bytecode_offset\n bc.extend((\n bytecode_offset(instr.arg) -\n bytecode_offset(instr) -\n argsize -\n 1\n ).to_bytes(argsize, 'little',))\n elif instr.have_arg:\n # Write any other arg here.\n bc.extend(instr.arg.to_bytes(argsize, 'little'))\n elif WORDCODE:\n # with wordcode, all instructions are padded to 2 bytes\n bc.append(0)\n\n return CodeType(\n self.argcount,\n self.kwonlyargcount,\n len(varnames),\n self.stacksize,\n self.py_flags,\n bytes(bc),\n consts,\n names,\n varnames,\n self.filename,\n self.name,\n self.firstlineno,\n self.py_lnotab,\n freevars,\n cellvars,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef varnames(self):\n # We must sort to 
preserve the order between calls.\n # The set comprehension is to drop the duplicates.\n return self._argnames + tuple(sorted({\n instr.arg\n for instr in self.instrs\n if instr.uses_varname and instr.arg not in self._argnames\n }))", "response": "The names of all of the local variables in this code object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef py_lnotab(self):\n reverse_lnotab = reverse_dict(self.lnotab)\n py_lnotab = []\n prev_instr = 0\n prev_lno = self.firstlineno\n for addr, instr in enumerate(_sparse_args(self.instrs)):\n lno = reverse_lnotab.get(instr)\n if lno is None:\n continue\n\n delta = lno - prev_lno\n py_lnotab.append(addr - prev_instr)\n py_lnotab.append(min(delta, max_lnotab_increment))\n delta -= max_lnotab_increment\n while delta > 0:\n py_lnotab.append(0)\n py_lnotab.append(min(delta, max_lnotab_increment))\n delta -= max_lnotab_increment\n\n prev_lno = lno\n prev_instr = addr\n\n return bytes(py_lnotab)", "response": "The encoded lnotab that python uses to compute when lines start."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef stacksize(self):\n return max(scanl(\n op.add,\n 0,\n map(op.attrgetter('stack_effect'), self.instrs),\n ))", "response": "The maximum amount of stack space used by this code object."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ninitializing an uninitialized slot to a value.", "response": "def initialize_slot(obj, name, value):\n \"\"\"Initialize an uninitialized slot to a value.\n\n If there is already a value for this slot, this is a nop.\n\n Parameters\n ----------\n obj : immutable\n An immutable object.\n name : str\n The name of the slot to initialize.\n value : any\n The value to initialize the slot to.\n \"\"\"\n if not hasattr(obj, name):\n object_setattr(obj, name, value)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate the init 
function for an immutable object.", "response": "def _create_init(name, slots, defaults):\n \"\"\"Create the __init__ function for an immutable object.\n\n Parameters\n ----------\n name : str\n The name of the immutable class.\n slots : iterable of str\n The __slots__ field from the class.\n defaults : dict or None\n The default values for the arguments to __init__.\n\n Returns\n -------\n init : callable\n The __init__ function for the new immutable class.\n \"\"\"\n if any(s.startswith('__') for s in slots):\n raise TypeError(\n \"immutable classes may not have slots that start with '__'\",\n )\n\n # If we have no defaults, ignore all of this.\n kwdefaults = None\n if defaults is not None:\n hit_default = False\n _defaults = [] # positional defaults\n kwdefaults = {} # kwonly defaults\n kwdefs = False\n for s in slots:\n if s not in defaults and hit_default:\n raise SyntaxError(\n 'non-default argument follows default argument'\n )\n\n if not kwdefs:\n try:\n # Try to grab the next default.\n # Pop so that we know they were all consumed when we\n # are done.\n _defaults.append(defaults.pop(s))\n except KeyError:\n # Not in the dict, we haven't hit any defaults yet.\n pass\n else:\n # We are now consuming default arguments.\n hit_default = True\n if s.startswith('*'):\n if s in defaults:\n raise TypeError(\n 'cannot set default for var args or var kwargs',\n )\n if not s.startswith('**'):\n kwdefs = True\n else:\n kwdefaults[s] = defaults.pop(s)\n\n if defaults:\n # We didn't consume all of the defaults.\n raise TypeError(\n 'default value for non-existent argument%s: %s' % (\n 's' if len(defaults) > 1 else '',\n ', '.join(starmap('{0}={1!r}'.format, defaults.items())),\n )\n )\n\n # cast back to tuples\n defaults = tuple(_defaults)\n\n if not slots:\n return _no_arg_init, ()\n\n ns = {'__initialize_slot': initialize_slot}\n # filter out lone star\n slotnames = tuple(filter(None, (s.strip('*') for s in slots)))\n # We are using exec here so that we can later 
inspect the call signature\n # of the __init__. This makes the positional vs keywords work as intended.\n # This is totally reasonable, no h8 m8!\n exec(\n 'def __init__(_{name}__self, {args}): \\n {assign}'.format(\n name=name,\n args=', '.join(slots),\n assign='\\n '.join(\n map(\n '__initialize_slot(_{1}__self, \"{0}\", {0})'.format,\n slotnames,\n repeat(name),\n ),\n ),\n ),\n ns,\n )\n init = ns['__init__']\n init.__defaults__ = defaults\n init.__kwdefaults__ = kwdefaults\n return init, slotnames"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _wrapinit(init):\n try:\n spec = getfullargspec(init)\n except TypeError:\n # we cannot preserve the type signature.\n def __init__(*args, **kwargs):\n self = args[0]\n __setattr__._initializing.add(self)\n init(*args, **kwargs)\n __setattr__._initializing.remove(self)\n _check_missing_slots(self)\n\n return __init__\n\n args = spec.args\n varargs = spec.varargs\n if not (args or varargs):\n raise TypeError(\n \"%r must accept at least one positional argument for 'self'\" %\n getattr(init, '__qualname__', getattr(init, '__name__', init)),\n )\n\n if not args:\n self = '%s[0]' % varargs\n forward = argspec = '*' + varargs\n else:\n self = args[0]\n forward = argspec = ', '.join(args)\n\n if args and varargs:\n forward = '%s, *%s' % (forward, spec.varargs)\n argspec = '%s, *%s' % (argspec, spec.varargs)\n if spec.kwonlyargs:\n forward = '%s, %s' % (\n forward,\n ', '.join(map('{0}={0}'.format, spec.kwonlyargs))\n )\n argspec = '%s,%s%s' % (\n argspec,\n '*, ' if not spec.varargs else '',\n ', '.join(spec.kwonlyargs),\n )\n if spec.varkw:\n forward = '%s, **%s' % (forward, spec.varkw)\n argspec = '%s, **%s' % (argspec, spec.varkw)\n\n ns = {\n '__init': init,\n '__initializing': __setattr__._initializing,\n '__check_missing_slots': _check_missing_slots,\n }\n exec(\n dedent(\n \"\"\"\\\n def __init__({argspec}):\n __initializing.add({self})\n __init({forward})\n 
__initializing.remove({self})\n __check_missing_slots({self})\n \"\"\".format(\n argspec=argspec,\n self=self,\n forward=forward,\n ),\n ),\n ns,\n )\n __init__ = ns['__init__']\n __init__.__defaults__ = spec.defaults\n __init__.__kwdefaults__ = spec.kwonlydefaults\n __init__.__annotations__ = spec.annotations\n return __init__", "response": "Wrap an existing initialize function by thawing self for the duration of the init."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks that all the available slots are set on an immutable instance.", "response": "def _check_missing_slots(ob):\n \"\"\"Check that all slots have been initialized when a custom __init__ method\n is provided.\n\n Parameters\n ----------\n ob : immutable\n The instance that was just initialized.\n\n Raises\n ------\n TypeError\n Raised when the instance has not set values that are named in the\n __slots__.\n \"\"\"\n missing_slots = tuple(\n filter(lambda s: not hasattr(ob, s), ob.__slots__),\n )\n if missing_slots:\n raise TypeError(\n 'not all slots initialized in __init__, missing: {0}'.format(\n missing_slots,\n ),\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreduce ns by f starting with n yielding each intermediate value.", "response": "def scanl(f, n, ns):\n \"\"\"Reduce ns by f starting with n yielding each intermediate value.\n\n tuple(scanl(f, n, ns))[-1] == reduce(f, ns, n)\n\n Parameters\n ----------\n f : callable\n A binary function.\n n : any\n The starting value.\n ns : iterable of any\n The iterable to scan over.\n\n Yields\n ------\n p : any\n The value of reduce(f, ns[:idx]) where idx is the current index.\n\n Examples\n --------\n >>> import operator as op\n >>> tuple(scanl(op.add, 0, (1, 2, 3, 4)))\n (0, 1, 3, 6, 10)\n \"\"\"\n yield n\n for m in ns:\n n = f(n, m)\n yield n"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
ffill(iterable):\n it = iter(iterable)\n previous = next(it)\n yield previous\n for e in it:\n if e is None:\n yield previous\n else:\n previous = e\n yield e", "response": "Forward fill non None values in some iterable."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert a ( possibly nested ) iterable into a flattened iterator.", "response": "def flatten(seq, *, recurse_types=(tuple, list, set, frozenset)):\n \"\"\"\n Convert a (possibly nested) iterator into a flattened iterator.\n\n Parameters\n ----------\n seq : iterable\n The sequence to flatten.\n recurse_types, optional\n Types to recursively flatten.\n Defaults to (tuple, list, set, frozenset).\n\n >>> list(flatten((1, (2, 3), ((4,), 5))))\n [1, 2, 3, 4, 5]\n\n >>> list(flatten([\"abc\", \"def\"], recurse_types=(str,)))\n ['a', 'b', 'c', 'd', 'e', 'f']\n \"\"\"\n for elem in seq:\n if isinstance(elem, recurse_types):\n yield from flatten(elem)\n else:\n yield elem"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef types(self):\n out = []\n if self._transform_bytes:\n out.append(bytes)\n if self._transform_str:\n out.append(str)\n return tuple(out)", "response": "Returns a tuple containing the types transformed by this transformer."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _transform_constant_sequence(self, seq):\n should_transform = is_a(self.types)\n\n if not any(filter(should_transform, flatten(seq))):\n # Tuple doesn't contain any transformable strings. 
Ignore.\n yield LOAD_CONST(seq)\n return\n\n for const in seq:\n if should_transform(const):\n yield from self.transform_stringlike(const)\n elif isinstance(const, (tuple, frozenset)):\n yield from self._transform_constant_sequence(const)\n else:\n yield LOAD_CONST(const)\n\n if isinstance(seq, tuple):\n yield BUILD_TUPLE(len(seq))\n else:\n assert isinstance(seq, frozenset)\n yield BUILD_TUPLE(len(seq))\n yield LOAD_CONST(frozenset)\n yield ROT_TWO()\n yield CALL_FUNCTION(1)", "response": "Transform a frozenset or tuple."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef transform_stringlike(self, const):\n yield LOAD_CONST(const)\n if isinstance(const, bytes):\n yield from self.bytes_instrs\n elif isinstance(const, str):\n yield from self.str_instrs", "response": "Yield instructions to process a str or bytes constant."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating an __init__ method for a call type instruction.", "response": "def _mk_call_init(class_):\n \"\"\"Create an __init__ function for a call type instruction.\n\n Parameters\n ----------\n class_ : type\n The type to bind the function to.\n\n Returns\n -------\n __init__ : callable\n The __init__ method for the class.\n \"\"\"\n def __init__(self, packed=no_default, *, positional=0, keyword=0):\n if packed is no_default:\n arg = int.from_bytes(bytes((positional, keyword)), 'little')\n elif not positional and not keyword:\n arg = packed\n else:\n raise TypeError('cannot specify packed and unpacked arguments')\n self.positional, self.keyword = arg.to_bytes(2, 'little')\n super(class_, self).__init__(arg)\n\n return __init__"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstealing the jump index off of instr. 
This makes anything that would have jumped to this instruction.", "response": "def steal(self, instr):\n \"\"\"Steal the jump index off of `instr`.\n\n This makes anything that would have jumped to `instr` jump to\n this Instruction instead.\n\n Parameters\n ----------\n instr : Instruction\n The instruction to steal the jump sources from.\n\n Returns\n -------\n self : Instruction\n The instruction that owns this method.\n\n Notes\n -----\n This mutates self and ``instr`` inplace.\n \"\"\"\n instr._stolen_by = self\n for jmp in instr._target_of:\n jmp.arg = self\n self._target_of = instr._target_of\n instr._target_of = set()\n return self"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate an instruction from an opcode and raw argument.", "response": "def from_opcode(cls, opcode, arg=_no_arg):\n \"\"\"\n Create an instruction from an opcode and raw argument.\n\n Parameters\n ----------\n opcode : int\n Opcode for the instruction to create.\n arg : int, optional\n The argument for the instruction.\n\n Returns\n -------\n intsr : Instruction\n An instance of the instruction named by ``opcode``.\n \"\"\"\n return type(cls)(opname[opcode], (cls,), {}, opcode=opcode)(arg)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef stack_effect(self):\n if self.opcode == NOP.opcode: # noqa\n # dis.stack_effect is broken here\n return 0\n\n return stack_effect(\n self.opcode,\n *((self.arg if isinstance(self.arg, int) else 0,)\n if self.have_arg else ())\n )", "response": "The net effect of executing this instruction on the interpreter stack."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef equiv(self, instr):\n return type(self) == type(instr) and self.arg == instr.arg", "response": "Checks if this instruction is equivalent to another instruction."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a 
brief explanation for the following Python 3 code\ndef create_environment(home_dir, site_packages=False, clear=False,\n unzip_setuptools=False,\n prompt=None, search_dirs=None, download=False,\n no_setuptools=False, no_pip=False, no_wheel=False,\n symlink=True):\n \"\"\"\n Creates a new environment in ``home_dir``.\n\n If ``site_packages`` is true, then the global ``site-packages/``\n directory will be on the path.\n\n If ``clear`` is true (default False) then the environment will\n first be cleared.\n \"\"\"\n home_dir, lib_dir, inc_dir, bin_dir = path_locations(home_dir)\n\n py_executable = os.path.abspath(install_python(\n home_dir, lib_dir, inc_dir, bin_dir,\n site_packages=site_packages, clear=clear, symlink=symlink))\n\n install_distutils(home_dir)\n\n to_install = []\n\n if not no_setuptools:\n to_install.append('setuptools')\n\n if not no_pip:\n to_install.append('pip')\n\n if not no_wheel:\n to_install.append('wheel')\n\n if to_install:\n install_wheel(\n to_install,\n py_executable,\n search_dirs,\n download=download,\n )\n\n install_activate(home_dir, bin_dir, prompt)\n\n install_python_config(home_dir, bin_dir, prompt)", "response": "Creates a new environment in the given home_dir."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the path locations for the environment.", "response": "def path_locations(home_dir):\n \"\"\"Return the path locations for the environment (where libraries are,\n where scripts go, etc)\"\"\"\n home_dir = os.path.abspath(home_dir)\n # XXX: We'd use distutils.sysconfig.get_python_inc/lib but its\n # prefix arg is broken: http://bugs.python.org/issue3386\n if is_win:\n # Windows has lots of problems with executables with spaces in\n # the name; this function will remove them (using the ~1\n # format):\n mkdir(home_dir)\n if ' ' in home_dir:\n import ctypes\n GetShortPathName = ctypes.windll.kernel32.GetShortPathNameW\n size = max(len(home_dir)+1, 256)\n buf = 
ctypes.create_unicode_buffer(size)\n try:\n u = unicode\n except NameError:\n u = str\n ret = GetShortPathName(u(home_dir), buf, size)\n if not ret:\n print('Error: the path \"%s\" has a space in it' % home_dir)\n print('We could not determine the short pathname for it.')\n print('Exiting.')\n sys.exit(3)\n home_dir = str(buf.value)\n lib_dir = join(home_dir, 'Lib')\n inc_dir = join(home_dir, 'Include')\n bin_dir = join(home_dir, 'Scripts')\n if is_jython:\n lib_dir = join(home_dir, 'Lib')\n inc_dir = join(home_dir, 'Include')\n bin_dir = join(home_dir, 'bin')\n elif is_pypy:\n lib_dir = home_dir\n inc_dir = join(home_dir, 'include')\n bin_dir = join(home_dir, 'bin')\n elif not is_win:\n lib_dir = join(home_dir, 'lib', py_version)\n inc_dir = join(home_dir, 'include', py_version + abiflags)\n bin_dir = join(home_dir, 'bin')\n return home_dir, lib_dir, inc_dir, bin_dir"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncopying tcl and tk libraries on Windows", "response": "def copy_tcltk(src, dest, symlink):\n \"\"\" copy tcl/tk libraries on Windows (issue #93) \"\"\"\n if majver == 2:\n libver = '8.5'\n else:\n libver = '8.6'\n for name in ['tcl', 'tk']:\n srcdir = src + '/tcl/' + name + libver\n dstdir = dest + '/tcl/' + name + libver\n copyfileordir(srcdir, dstdir, symlink)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ninstall the python modules and returns a new version of the base environment.", "response": "def install_python(home_dir, lib_dir, inc_dir, bin_dir, site_packages, clear, symlink=True):\n \"\"\"Install just the base environment, no distutils patches etc\"\"\"\n if sys.executable.startswith(bin_dir):\n print('Please use the *system* python to run this script')\n return\n\n if clear:\n rmtree(lib_dir)\n ## FIXME: why not delete it?\n ## Maybe it should delete everything with #!/path/to/venv/python in it\n logger.notify('Not deleting %s', bin_dir)\n\n if hasattr(sys, 'real_prefix'):\n 
logger.notify('Using real prefix %r' % sys.real_prefix)\n prefix = sys.real_prefix\n elif hasattr(sys, 'base_prefix'):\n logger.notify('Using base prefix %r' % sys.base_prefix)\n prefix = sys.base_prefix\n else:\n prefix = sys.prefix\n mkdir(lib_dir)\n fix_lib64(lib_dir, symlink)\n stdlib_dirs = [os.path.dirname(os.__file__)]\n if is_win:\n stdlib_dirs.append(join(os.path.dirname(stdlib_dirs[0]), 'DLLs'))\n elif is_darwin:\n stdlib_dirs.append(join(stdlib_dirs[0], 'site-packages'))\n if hasattr(os, 'symlink'):\n logger.info('Symlinking Python bootstrap modules')\n else:\n logger.info('Copying Python bootstrap modules')\n logger.indent += 2\n try:\n # copy required files...\n for stdlib_dir in stdlib_dirs:\n if not os.path.isdir(stdlib_dir):\n continue\n for fn in os.listdir(stdlib_dir):\n bn = os.path.splitext(fn)[0]\n if fn != 'site-packages' and bn in REQUIRED_FILES:\n copyfile(join(stdlib_dir, fn), join(lib_dir, fn), symlink)\n # ...and modules\n copy_required_modules(home_dir, symlink)\n finally:\n logger.indent -= 2\n # ...copy tcl/tk\n if is_win:\n copy_tcltk(prefix, home_dir, symlink)\n mkdir(join(lib_dir, 'site-packages'))\n import site\n site_filename = site.__file__\n if site_filename.endswith('.pyc') or site_filename.endswith('.pyo'):\n site_filename = site_filename[:-1]\n elif site_filename.endswith('$py.class'):\n site_filename = site_filename.replace('$py.class', '.py')\n site_filename_dst = change_prefix(site_filename, home_dir)\n site_dir = os.path.dirname(site_filename_dst)\n writefile(site_filename_dst, SITE_PY)\n writefile(join(site_dir, 'orig-prefix.txt'), prefix)\n site_packages_filename = join(site_dir, 'no-global-site-packages.txt')\n if not site_packages:\n writefile(site_packages_filename, '')\n\n if is_pypy or is_win:\n stdinc_dir = join(prefix, 'include')\n else:\n stdinc_dir = join(prefix, 'include', py_version + abiflags)\n if os.path.exists(stdinc_dir):\n copyfile(stdinc_dir, inc_dir, symlink)\n else:\n logger.debug('No include dir %s' 
% stdinc_dir)\n\n platinc_dir = distutils.sysconfig.get_python_inc(plat_specific=1)\n if platinc_dir != stdinc_dir:\n platinc_dest = distutils.sysconfig.get_python_inc(\n plat_specific=1, prefix=home_dir)\n if platinc_dir == platinc_dest:\n # Do platinc_dest manually due to a CPython bug;\n # not http://bugs.python.org/issue3386 but a close cousin\n platinc_dest = subst_path(platinc_dir, prefix, home_dir)\n if platinc_dest:\n # PyPy's stdinc_dir and prefix are relative to the original binary\n # (traversing virtualenvs), whereas the platinc_dir is relative to\n # the inner virtualenv and ignores the prefix argument.\n # This seems more evolved than designed.\n copyfile(platinc_dir, platinc_dest, symlink)\n\n # pypy never uses exec_prefix, just ignore it\n if sys.exec_prefix != prefix and not is_pypy:\n if is_win:\n exec_dir = join(sys.exec_prefix, 'lib')\n elif is_jython:\n exec_dir = join(sys.exec_prefix, 'Lib')\n else:\n exec_dir = join(sys.exec_prefix, 'lib', py_version)\n for fn in os.listdir(exec_dir):\n copyfile(join(exec_dir, fn), join(lib_dir, fn), symlink)\n\n if is_jython:\n # Jython has either jython-dev.jar and javalib/ dir, or just\n # jython.jar\n for name in 'jython-dev.jar', 'javalib', 'jython.jar':\n src = join(prefix, name)\n if os.path.exists(src):\n copyfile(src, join(home_dir, name), symlink)\n # XXX: registry should always exist after Jython 2.5rc1\n src = join(prefix, 'registry')\n if os.path.exists(src):\n copyfile(src, join(home_dir, 'registry'), symlink=False)\n copyfile(join(prefix, 'cachedir'), join(home_dir, 'cachedir'),\n symlink=False)\n\n mkdir(bin_dir)\n py_executable = join(bin_dir, os.path.basename(sys.executable))\n if 'Python.framework' in prefix:\n # OS X framework builds cause validation to break\n # https://github.com/pypa/virtualenv/issues/322\n if os.environ.get('__PYVENV_LAUNCHER__'):\n del os.environ[\"__PYVENV_LAUNCHER__\"]\n if re.search(r'/Python(?:-32|-64)*$', py_executable):\n # The name of the python executable is 
not quite what\n # we want, rename it.\n py_executable = os.path.join(\n os.path.dirname(py_executable), 'python')\n\n logger.notify('New %s executable in %s', expected_exe, py_executable)\n pcbuild_dir = os.path.dirname(sys.executable)\n pyd_pth = os.path.join(lib_dir, 'site-packages', 'virtualenv_builddir_pyd.pth')\n if is_win and os.path.exists(os.path.join(pcbuild_dir, 'build.bat')):\n logger.notify('Detected python running from build directory %s', pcbuild_dir)\n logger.notify('Writing .pth file linking to build directory for *.pyd files')\n writefile(pyd_pth, pcbuild_dir)\n else:\n pcbuild_dir = None\n if os.path.exists(pyd_pth):\n logger.info('Deleting %s (not Windows env or not build directory python)' % pyd_pth)\n os.unlink(pyd_pth)\n\n if sys.executable != py_executable:\n ## FIXME: could I just hard link?\n executable = sys.executable\n shutil.copyfile(executable, py_executable)\n make_exe(py_executable)\n if is_win or is_cygwin:\n pythonw = os.path.join(os.path.dirname(sys.executable), 'pythonw.exe')\n if os.path.exists(pythonw):\n logger.info('Also created pythonw.exe')\n shutil.copyfile(pythonw, os.path.join(os.path.dirname(py_executable), 'pythonw.exe'))\n python_d = os.path.join(os.path.dirname(sys.executable), 'python_d.exe')\n python_d_dest = os.path.join(os.path.dirname(py_executable), 'python_d.exe')\n if os.path.exists(python_d):\n logger.info('Also created python_d.exe')\n shutil.copyfile(python_d, python_d_dest)\n elif os.path.exists(python_d_dest):\n logger.info('Removed python_d.exe as it is no longer at the source')\n os.unlink(python_d_dest)\n # we need to copy the DLL to enforce that windows will load the correct one.\n # may not exist if we are cygwin.\n py_executable_dll = 'python%s%s.dll' % (\n sys.version_info[0], sys.version_info[1])\n py_executable_dll_d = 'python%s%s_d.dll' % (\n sys.version_info[0], sys.version_info[1])\n pythondll = os.path.join(os.path.dirname(sys.executable), py_executable_dll)\n pythondll_d = 
os.path.join(os.path.dirname(sys.executable), py_executable_dll_d)\n pythondll_d_dest = os.path.join(os.path.dirname(py_executable), py_executable_dll_d)\n if os.path.exists(pythondll):\n logger.info('Also created %s' % py_executable_dll)\n shutil.copyfile(pythondll, os.path.join(os.path.dirname(py_executable), py_executable_dll))\n if os.path.exists(pythondll_d):\n logger.info('Also created %s' % py_executable_dll_d)\n shutil.copyfile(pythondll_d, pythondll_d_dest)\n elif os.path.exists(pythondll_d_dest):\n logger.info('Removed %s as the source does not exist' % pythondll_d_dest)\n os.unlink(pythondll_d_dest)\n if is_pypy:\n # make a symlink python --> pypy-c\n python_executable = os.path.join(os.path.dirname(py_executable), 'python')\n if sys.platform in ('win32', 'cygwin'):\n python_executable += '.exe'\n logger.info('Also created executable %s' % python_executable)\n copyfile(py_executable, python_executable, symlink)\n\n if is_win:\n for name in ['libexpat.dll', 'libpypy.dll', 'libpypy-c.dll',\n 'libeay32.dll', 'ssleay32.dll', 'sqlite3.dll',\n 'tcl85.dll', 'tk85.dll']:\n src = join(prefix, name)\n if os.path.exists(src):\n copyfile(src, join(bin_dir, name), symlink)\n\n for d in sys.path:\n if d.endswith('lib_pypy'):\n break\n else:\n logger.fatal('Could not find lib_pypy in sys.path')\n raise SystemExit(3)\n logger.info('Copying lib_pypy')\n copyfile(d, os.path.join(home_dir, 'lib_pypy'), symlink)\n\n if os.path.splitext(os.path.basename(py_executable))[0] != expected_exe:\n secondary_exe = os.path.join(os.path.dirname(py_executable),\n expected_exe)\n py_executable_ext = os.path.splitext(py_executable)[1]\n if py_executable_ext.lower() == '.exe':\n # python2.4 gives an extension of '.4' :P\n secondary_exe += py_executable_ext\n if os.path.exists(secondary_exe):\n logger.warn('Not overwriting existing %s script %s (you must use %s)'\n % (expected_exe, secondary_exe, py_executable))\n else:\n logger.notify('Also creating executable in %s' % secondary_exe)\n 
shutil.copyfile(sys.executable, secondary_exe)\n make_exe(secondary_exe)\n\n if '.framework' in prefix:\n if 'Python.framework' in prefix:\n logger.debug('MacOSX Python framework detected')\n # Make sure we use the embedded interpreter inside\n # the framework, even if sys.executable points to\n # the stub executable in ${sys.prefix}/bin\n # See http://groups.google.com/group/python-virtualenv/\n # browse_thread/thread/17cab2f85da75951\n original_python = os.path.join(\n prefix, 'Resources/Python.app/Contents/MacOS/Python')\n if 'EPD' in prefix:\n logger.debug('EPD framework detected')\n original_python = os.path.join(prefix, 'bin/python')\n shutil.copy(original_python, py_executable)\n\n # Copy the framework's dylib into the virtual\n # environment\n virtual_lib = os.path.join(home_dir, '.Python')\n\n if os.path.exists(virtual_lib):\n os.unlink(virtual_lib)\n copyfile(\n os.path.join(prefix, 'Python'),\n virtual_lib,\n symlink)\n\n # And then change the install_name of the copied python executable\n try:\n mach_o_change(py_executable,\n os.path.join(prefix, 'Python'),\n '@executable_path/../.Python')\n except:\n e = sys.exc_info()[1]\n logger.warn(\"Could not call mach_o_change: %s. 
\"\n \"Trying to call install_name_tool instead.\" % e)\n try:\n call_subprocess(\n [\"install_name_tool\", \"-change\",\n os.path.join(prefix, 'Python'),\n '@executable_path/../.Python',\n py_executable])\n except:\n logger.fatal(\"Could not call install_name_tool -- you must \"\n \"have Apple's development tools installed\")\n raise\n\n if not is_win:\n # Ensure that 'python', 'pythonX' and 'pythonX.Y' all exist\n py_exe_version_major = 'python%s' % sys.version_info[0]\n py_exe_version_major_minor = 'python%s.%s' % (\n sys.version_info[0], sys.version_info[1])\n py_exe_no_version = 'python'\n required_symlinks = [ py_exe_no_version, py_exe_version_major,\n py_exe_version_major_minor ]\n\n py_executable_base = os.path.basename(py_executable)\n\n if py_executable_base in required_symlinks:\n # Don't try to symlink to yourself.\n required_symlinks.remove(py_executable_base)\n\n for pth in required_symlinks:\n full_pth = join(bin_dir, pth)\n if os.path.exists(full_pth):\n os.unlink(full_pth)\n if symlink:\n os.symlink(py_executable_base, full_pth)\n else:\n copyfile(py_executable, full_pth, symlink)\n\n if is_win and ' ' in py_executable:\n # There's a bug with subprocess on Windows when using a first\n # argument that has a space in it. 
Instead we have to quote\n # the value:\n py_executable = '\"%s\"' % py_executable\n # NOTE: keep this check as one line, cmd.exe doesn't cope with line breaks\n cmd = [py_executable, '-c', 'import sys;out=sys.stdout;'\n 'getattr(out, \"buffer\", out).write(sys.prefix.encode(\"utf-8\"))']\n logger.info('Testing executable with %s %s \"%s\"' % tuple(cmd))\n try:\n proc = subprocess.Popen(cmd,\n stdout=subprocess.PIPE)\n proc_stdout, proc_stderr = proc.communicate()\n except OSError:\n e = sys.exc_info()[1]\n if e.errno == errno.EACCES:\n logger.fatal('ERROR: The executable %s could not be run: %s' % (py_executable, e))\n sys.exit(100)\n else:\n raise e\n\n proc_stdout = proc_stdout.strip().decode(\"utf-8\")\n proc_stdout = os.path.normcase(os.path.abspath(proc_stdout))\n norm_home_dir = os.path.normcase(os.path.abspath(home_dir))\n if hasattr(norm_home_dir, 'decode'):\n norm_home_dir = norm_home_dir.decode(sys.getfilesystemencoding())\n if proc_stdout != norm_home_dir:\n logger.fatal(\n 'ERROR: The executable %s is not functioning' % py_executable)\n logger.fatal(\n 'ERROR: It thinks sys.prefix is %r (should be %r)'\n % (proc_stdout, norm_home_dir))\n logger.fatal(\n 'ERROR: virtualenv is not compatible with this system or executable')\n if is_win:\n logger.fatal(\n 'Note: some Windows users have reported this error when they '\n 'installed Python for \"Only this user\" or have multiple '\n 'versions of Python installed. Copying the appropriate '\n 'PythonXX.dll to the virtualenv Scripts/ directory may fix '\n 'this problem.')\n sys.exit(100)\n else:\n logger.info('Got sys.prefix result: %r' % proc_stdout)\n\n pydistutils = os.path.expanduser('~/.pydistutils.cfg')\n if os.path.exists(pydistutils):\n logger.notify('Please make sure you remove any previous custom paths from '\n 'your %s file.' 
% pydistutils)\n ## FIXME: really this should be calculated earlier\n\n fix_local_scheme(home_dir, symlink)\n\n if site_packages:\n if os.path.exists(site_packages_filename):\n logger.info('Deleting %s' % site_packages_filename)\n os.unlink(site_packages_filename)\n\n return py_executable"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfixing lib64 in the current directory.", "response": "def fix_lib64(lib_dir, symlink=True):\n \"\"\"\n Some platforms (particularly Gentoo on x64) put things in lib64/pythonX.Y\n instead of lib/pythonX.Y. If this is such a platform we'll just create a\n symlink so lib64 points to lib\n \"\"\"\n # PyPy's library path scheme is not affected by this.\n # Return early or we will die on the following assert.\n if is_pypy:\n logger.debug('PyPy detected, skipping lib64 symlinking')\n return\n # Check we have a lib64 library path\n if not [p for p in distutils.sysconfig.get_config_vars().values()\n if isinstance(p, basestring) and 'lib64' in p]:\n return\n\n logger.debug('This system uses lib64; symlinking lib64 to lib')\n\n assert os.path.basename(lib_dir) == 'python%s' % sys.version[:3], (\n \"Unexpected python lib dir: %r\" % lib_dir)\n lib_parent = os.path.dirname(lib_dir)\n top_level = os.path.dirname(lib_parent)\n lib_dir = os.path.join(top_level, 'lib')\n lib64_link = os.path.join(top_level, 'lib64')\n assert os.path.basename(lib_parent) == 'lib', (\n \"Unexpected parent dir: %r\" % lib_parent)\n if os.path.lexists(lib64_link):\n return\n if symlink:\n os.symlink('lib', lib64_link)\n else:\n copyfile('lib', lib64_link)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef mach_o_change(path, what, value):\n\n def do_macho(file, bits, endian):\n # Read Mach-O header (the magic number is assumed read by the caller)\n cputype, cpusubtype, filetype, ncmds, sizeofcmds, flags = read_data(file, endian, 6)\n # 64-bits header has one more field.\n if bits == 64:\n 
read_data(file, endian)\n # The header is followed by ncmds commands\n for n in range(ncmds):\n where = file.tell()\n # Read command header\n cmd, cmdsize = read_data(file, endian, 2)\n if cmd == LC_LOAD_DYLIB:\n # The first data field in LC_LOAD_DYLIB commands is the\n # offset of the name, starting from the beginning of the\n # command.\n name_offset = read_data(file, endian)\n file.seek(where + name_offset, os.SEEK_SET)\n # Read the NUL terminated string\n load = file.read(cmdsize - name_offset).decode()\n load = load[:load.index('\\0')]\n # If the string is what is being replaced, overwrite it.\n if load == what:\n file.seek(where + name_offset, os.SEEK_SET)\n file.write(value.encode() + '\\0'.encode())\n # Seek to the next command\n file.seek(where + cmdsize, os.SEEK_SET)\n\n def do_file(file, offset=0, size=maxint):\n file = fileview(file, offset, size)\n # Read magic number\n magic = read_data(file, BIG_ENDIAN)\n if magic == FAT_MAGIC:\n # Fat binaries contain nfat_arch Mach-O binaries\n nfat_arch = read_data(file, BIG_ENDIAN)\n for n in range(nfat_arch):\n # Read arch header\n cputype, cpusubtype, offset, size, align = read_data(file, BIG_ENDIAN, 5)\n do_file(file, offset, size)\n elif magic == MH_MAGIC:\n do_macho(file, 32, BIG_ENDIAN)\n elif magic == MH_CIGAM:\n do_macho(file, 32, LITTLE_ENDIAN)\n elif magic == MH_MAGIC_64:\n do_macho(file, 64, BIG_ENDIAN)\n elif magic == MH_CIGAM_64:\n do_macho(file, 64, LITTLE_ENDIAN)\n\n assert(len(what) >= len(value))\n\n with open(path, 'r+b') as f:\n do_file(f)", "response": "Replace a given name in any Mach - O command found in the given binary with a new name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_bin_dir(self):\n for subdir in (\"bin\", \"Scripts\"):\n bin_dir = os.path.join(self.abs_home_dir, subdir)\n if os.path.isdir(bin_dir):\n print(\"bin dir: %r\" % bin_dir)\n return bin_dir\n raise RuntimeError(\"Can't find 
'bin/Scripts' dir in: %r\" % self.abs_home_dir)", "response": "Get the bin dir of the current virtual environment."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the python executable in the virtualenv.", "response": "def _get_python_cmd(self):\n \"\"\"\n return the python executable in the virtualenv.\n Try first sys.executable but use fallbacks.\n \"\"\"\n file_names = [\"pypy.exe\", \"python.exe\", \"python\"]\n executable = sys.executable\n if executable is not None:\n executable = os.path.split(executable)[1]\n file_names.insert(0, executable)\n\n return self._get_bin_file(*file_names)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a string representation of a dictionary.", "response": "def hex_repr(d):\n \"\"\"\n >>> hex_repr({\"A\":0x1,\"B\":0xabc})\n 'A=$01 B=$0abc'\n \"\"\"\n txt = []\n for k, v in sorted(d.items()):\n if isinstance(v, int):\n txt.append(\"%s=%s\" % (k, nice_hex(v)))\n else:\n txt.append(\"%s=%s\" % (k, v))\n return \" \".join(txt)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _run_dragonpy_cli(self, *args):\n verbosity = self.frame_settings.var_verbosity.get()\n verbosity_no = VERBOSITY_DICT2[verbosity]\n log.debug(\"Verbosity: %i (%s)\" % (verbosity_no, verbosity))\n\n args = (\n \"--verbosity\", \"%s\" % verbosity_no\n # \"--log_list\",\n # \"--log\",\n # \"dragonpy.components.cpu6809,40\",\n # \"dragonpy.Dragon32.MC6821_PIA,50\",\n ) + args\n click.echo(\"\\n\")\n run_dragonpy(*args, verbose=True)", "response": "Run the dragonpy CLI with given args"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _run_command(self, command):\n machine_name = self.frame_run_buttons.var_machine.get()\n self._run_dragonpy_cli(\"--machine\", machine_name, command)", "response": "Run DragonPy CLI with given command like run editor or editor"} 
{"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts an integer to a 0 - padded hex byte string", "response": "def int_to_padded_hex_byte(integer):\n \"\"\"\n Convert an int to a 0-padded hex byte string\n example: 65 == 41, 10 == 0A\n Returns: The hex byte as string (ex: \"0C\")\n \"\"\"\n to_hex = hex(integer)\n xpos = to_hex.find('x')\n hex_byte = to_hex[xpos+1 : len(to_hex)].upper()\n\n if len(hex_byte) == 1:\n hex_byte = ''.join(['0', hex_byte])\n\n return hex_byte"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncomputes the checksum byte of a given S - Record object", "response": "def compute_srec_checksum(srec):\n \"\"\"\n Compute the checksum byte of a given S-Record\n Returns: The checksum as a string hex byte (ex: \"0C\")\n \"\"\"\n # Get the summable data from srec\n # start at 2 to remove the S* record entry\n data = srec[2:len(srec)]\n\n sum = 0\n # For each byte, convert to int and add.\n # (step each two character to form a byte)\n for position in range(0, len(data), 2):\n current_byte = data[position : position+2]\n int_value = int(current_byte, 16)\n sum += int_value\n\n # Extract the Least significant byte from the hex form\n hex_sum = hex(sum)\n least_significant_byte = hex_sum[len(hex_sum)-2:]\n least_significant_byte = least_significant_byte.replace('x', '0')\n\n # turn back to int and find the 8-bit one's complement\n int_lsb = int(least_significant_byte, 16)\n computed_checksum = (~int_lsb) & 0xff\n\n return computed_checksum"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate_srec_checksum(srec):\n checksum = srec[len(srec)-2:]\n\n # Strip the original checksum and compare with the computed one\n if compute_srec_checksum(srec[:len(srec) - 2]) == int(checksum, 16):\n return True\n else:\n return False", "response": "Validate if the checksum of the supplied s - record is 
valid."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\noffsetting a given byte in the provided data payload.", "response": "def offset_byte_in_data(target_data, offset, target_byte_pos, readable = False, wraparound = False):\n \"\"\"\n Offset a given byte in the provided data payload (kind of rot(x))\n readable will return a human-readable representation of the byte+offset\n wraparound will wrap around 255 to 0 (ex: 257 = 2)\n Returns: the data with the offset applied to the target byte\n \"\"\"\n byte_pos = target_byte_pos * 2\n prefix = target_data[:byte_pos]\n suffix = target_data[byte_pos+2:]\n target_byte = target_data[byte_pos:byte_pos+2]\n int_value = int(target_byte, 16)\n int_value += offset\n\n # Wraparound\n if int_value > 255 and wraparound:\n int_value -= 256\n\n # Extract readable char for analysis\n if readable:\n if int_value < 256 and int_value > 0:\n offset_byte = get_readable_string(int_value)\n else:\n offset_byte = int_to_padded_hex_byte(int_value)\n else:\n offset_byte = int_to_padded_hex_byte(int_value)\n\n return ''.join([prefix, offset_byte, suffix])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef offset_data(data_section, offset, readable = False, wraparound = False):\n for pos in range(0, len(data_section)//2):\n data_section = offset_byte_in_data(data_section, offset, pos, readable, wraparound)\n\n return data_section", "response": "Offset the entire data section by the given offset."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse_srec(srec):\n record_type = srec[0:2]\n data_len = srec[2:4]\n addr_len = __ADDR_LEN.get(record_type) * 2\n addr = srec[4:4 + addr_len]\n data = srec[4 + addr_len:len(srec)-2]\n checksum = srec[len(srec) - 2:]\n return record_type, data_len, addr, data, checksum", "response": "Parse a given S-Record into a tuple of (record_type, data_len, addr, data, checksum)"} {"SOURCE": 
"codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts a wav file to another file.", "response": "def convert(source_file, destination_file, cfg):\n \"\"\"\n convert in every way.\n \"\"\"\n source_ext = os.path.splitext(source_file)[1]\n source_ext = source_ext.lower()\n\n dest_ext = os.path.splitext(destination_file)[1]\n dest_ext = dest_ext.lower()\n\n if source_ext not in (\".wav\", \".cas\", \".bas\"):\n raise AssertionError(\n \"Source file type %r not supported.\" % source_ext\n )\n if dest_ext not in (\".wav\", \".cas\", \".bas\"):\n raise AssertionError(\n \"Destination file type %r not supported.\" % dest_ext\n )\n\n print(\"Convert %s -> %s\" % (source_ext, dest_ext))\n\n c = Cassette(cfg)\n\n if source_ext == \".wav\":\n c.add_from_wav(source_file)\n elif source_ext == \".cas\":\n c.add_from_cas(source_file)\n elif source_ext == \".bas\":\n c.add_from_bas(source_file)\n else:\n raise RuntimeError # Should never happen\n\n c.print_debug_info()\n\n if dest_ext == \".wav\":\n c.write_wave(destination_file)\n elif dest_ext == \".cas\":\n c.write_cas(destination_file)\n elif dest_ext == \".bas\":\n c.write_bas(destination_file)\n else:\n raise RuntimeError"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the initial RAM contents", "response": "def get_initial_RAM(self):\n \"\"\"\n init the Dragon RAM\n See: http://archive.worldofdragon.org/phpBB3/viewtopic.php?f=5&t=4444\n \"\"\"\n mem_FF = [0xff for _ in range(4)]\n mem_00 = [0x00 for _ in range(4)]\n\n mem = []\n for _ in range(self.RAM_SIZE // 8):\n mem += mem_FF\n mem += mem_00\n\n return mem"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef save_position(self):\n # save text cursor position:\n self.old_text_pos = self.index(tkinter.INSERT)\n # save scroll position:\n self.old_first, self.old_last = self.yview()", "response": "save cursor and scroll position"} 
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef restore_position(self):\n # restore text cursor position:\n self.mark_set(tkinter.INSERT, self.old_text_pos)\n # restore scroll position:\n self.yview_moveto(self.old_first)", "response": "restore cursor and scroll position"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nhighlights the current line", "response": "def __update_interval(self):\n \"\"\" highlight the current line \"\"\"\n self.update()\n self.after_id = self.text.after(250, self.__update_interval)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates the line number", "response": "def update(self, event=None, force=False):\n \"\"\" highlight the current line \"\"\"\n line_no = self.text.index(tkinter.INSERT).split(\".\")[0]\n\n # if not force:\n # if line_no == self.current_line:\n# log.critical(\"no highlight line needed.\")\n# return\n\n# log.critical(\"highlight line: %s\" % line_no)\n# self.current_line = line_no\n\n self.text.tag_remove(self.tag_current_line.id, \"1.0\", \"end\")\n self.text.tag_add(self.tag_current_line.id, \"%s.0\" % line_no, \"%s.0+1lines\" % line_no)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef download(self):\n if os.path.isfile(self.archive_path):\n print(\"Use %r\" % self.archive_path)\n with open(self.archive_path, \"rb\") as f:\n content = f.read()\n else:\n print(\"Request: %r...\" % self.URL)\n # Warning: HTTPS requests do not do any verification of the server's certificate.\n f = urlopen(self.URL)\n content = f.read()\n with open(self.archive_path, \"wb\") as out_file:\n out_file.write(content)\n\n # Check SHA hash:\n current_sha1 = hashlib.sha1(content).hexdigest()\n assert current_sha1 == self.DOWNLOAD_SHA1, \"Download sha1 value is wrong! 
SHA1 is: %r\" % current_sha1\n print(\"Download SHA1: %r, ok.\" % current_sha1)", "response": "Download the content of the current object into the temp directory."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef paste_clipboard(self, event):\n log.critical(\"paste clipboard\")\n clipboard = self.root.clipboard_get()\n for line in clipboard.splitlines():\n log.critical(\"paste line: %s\", repr(line))\n self.add_user_input(line + \"\\r\")", "response": "Pastes the contents of the clipboard into the CPU."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef display_callback(self, cpu_cycles, op_address, address, value):\n self.display.write_byte(cpu_cycles, op_address, address, value)\n return value", "response": "Called via memory write_byte_middleware"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fix_virtualenv_tkinter():\n if \"TCL_LIBRARY\" in os.environ:\n # Fix not needed (e.g. virtualenv issues #93 fixed?)\n return\n\n if not hasattr(sys, \"real_prefix\"):\n # we are not in a activated virtualenv\n return\n\n if sys.version_info[0] == 2:\n # Python v2\n virtualprefix = sys.prefix\n sys.prefix = sys.real_prefix\n\n import FixTk\n\n if \"TCL_LIBRARY\" not in os.environ:\n reload(FixTk)\n\n sys.prefix = virtualprefix\n else:\n # Python v3\n virtualprefix = sys.base_prefix\n sys.base_prefix = sys.real_prefix\n\n from tkinter import _fix\n\n if \"TCL_LIBRARY\" not in os.environ:\n from imp import reload\n reload(_fix)\n\n sys.base_prefix = virtualprefix", "response": "This function is used to fix the tkinter init. 
tcl file in a virtualenv."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef internal_reset(self):\n log.critical(\"PIA internal_reset()\")\n self.empty_key_toggle = True\n self.current_input_char = None\n self.input_repead = 0", "response": "internal state reset.\n used e.g. in unittests"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read_PIA0_A_data(self, cpu_cycles, op_address, address):\n pia0b = self.pia_0_B_data.value # $ff02\n\n # FIXME: Find a way to handle CoCo and Dragon in the same way!\n if self.cfg.CONFIG_NAME == COCO2B:\n# log.critical(\"\\t count: %i\", self.input_repead)\n if self.input_repead == 7:\n try:\n self.current_input_char = self.user_input_queue.get_nowait()\n except queue.Empty:\n self.current_input_char = None\n else:\n log.critical(\n \"\\tget new key from queue: %s\", repr(self.current_input_char))\n elif self.input_repead == 18:\n# log.critical(\"\\tForce send 'no key pressed'\")\n self.current_input_char = None\n elif self.input_repead > 20:\n self.input_repead = 0\n\n self.input_repead += 1\n else: # Dragon\n if pia0b == self.cfg.PIA0B_KEYBOARD_START: # FIXME\n if self.empty_key_toggle:\n # Work-a-round for \"poor\" dragon keyboard scan routine:\n # The scan routine in ROM ignores key pressed directly behind\n # one another if they are in the same row!\n # See \"Inside the Dragon\" book, page 203 ;)\n #\n # Here with the empty_key_toggle, we always send a \"no key pressed\"\n # after every key press back and then we send the next key from\n # the self.user_input_queue\n #\n # TODO: We can check the row of the previous key press and only\n # force a 'no key pressed' if the row is the same\n self.empty_key_toggle = False\n self.current_input_char = None\n# log.critical(\"\\tForce send 'no key pressed'\")\n else:\n try:\n self.current_input_char = self.user_input_queue.get_nowait()\n except queue.Empty:\n# 
log.critical(\"\\tinput_queue is empty\"))\n self.current_input_char = None\n else:\n# log.critical(\"\\tget new key from queue: %s\", repr(self.current_input_char))\n self.empty_key_toggle = True\n\n if self.current_input_char is None:\n# log.critical(\"\\tno key pressed\")\n result = 0xff\n self.empty_key_toggle = False\n else:\n# log.critical(\"\\tsend %s\", repr(self.current_input_char))\n result = self.cfg.pia_keymatrix_result(\n self.current_input_char, pia0b)\n\n# if not is_bit_set(pia0b, bit=7):\n# bit 7 | PA7 | joystick comparison input\n# result = clear_bit(result, bit=7)\n\n# if self.current_input_char is not None:\n# log.critical(\n# \"%04x| read $%04x ($ff02 is $%02x %s) send $%02x %s back\\t|%s\",\n# op_address, address,\n# pia0b, '{0:08b}'.format(pia0b),\n# result, '{0:08b}'.format(result),\n# self.cfg.mem_info.get_shortest(op_address)\n# )\n return result", "response": "Read the PIA 0 A side data from the system."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwriting to 0xff00 -> PIA 0 A side Data reg.", "response": "def write_PIA0_A_data(self, cpu_cycles, op_address, address, value):\n \"\"\" write to 0xff00 -> PIA 0 A side Data reg. \"\"\"\n log.error(\"%04x| write $%02x (%s) to $%04x -> PIA 0 A side Data reg.\\t|%s\",\n op_address, value, byte2bit_string(value), address,\n self.cfg.mem_info.get_shortest(op_address)\n )\n self.pia_0_A_register.set(value)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef read_PIA0_A_control(self, cpu_cycles, op_address, address):\n value = 0xb3\n log.error(\n \"%04x| read $%04x (PIA 0 A side Control reg.) 
send $%02x (%s) back.\\t|%s\",\n op_address, address, value, byte2bit_string(value),\n self.cfg.mem_info.get_shortest(op_address)\n )\n return value", "response": "read from 0xff01 -> PIA 0 A side control register"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nwriting to 0xff01 -> PIA 0 A side control register.", "response": "def write_PIA0_A_control(self, cpu_cycles, op_address, address, value):\n \"\"\"\n write to 0xff01 -> PIA 0 A side control register\n\n TODO: Handle IRQ\n\n bit 7 | IRQ 1 (HSYNC) flag\n bit 6 | IRQ 2 flag(not used)\n bit 5 | Control line 2 (CA2) is an output = 1\n bit 4 | Control line 2 (CA2) set by bit 3 = 1\n bit 3 | select line LSB of analog multiplexor (MUX): 0 = control line 2 LO / 1 = control line 2 HI\n bit 2 | set data direction: 0 = $FF00 is DDR / 1 = $FF00 is normal data lines\n bit 1 | control line 1 (CA1): IRQ polarity 0 = IRQ on HI to LO / 1 = IRQ on LO to HI\n bit 0 | HSYNC IRQ: 0 = disabled IRQ / 1 = enabled IRQ\n \"\"\"\n log.error(\n \"%04x| write $%02x (%s) to $%04x -> PIA 0 A side Control reg.\\t|%s\",\n op_address, value, byte2bit_string(value), address,\n self.cfg.mem_info.get_shortest(op_address)\n )\n if not is_bit_set(value, bit=2):\n self.pia_0_A_register.select_pdr()\n else:\n self.pia_0_A_register.deselect_pdr()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread from 0xff02 - > PIA 0 B side Data reg.", "response": "def read_PIA0_B_data(self, cpu_cycles, op_address, address):\n \"\"\"\n read from 0xff02 -> PIA 0 B side Data reg.\n\n bit 7 | PB7 | keyboard matrix column 8\n bit 6 | PB6 | keyboard matrix column 7 / ram size output\n bit 5 | PB5 | keyboard matrix column 6\n bit 4 | PB4 | keyboard matrix column 5\n bit 3 | PB3 | keyboard matrix column 4\n bit 2 | PB2 | keyboard matrix column 3\n bit 1 | PB1 | keyboard matrix column 2\n bit 0 | PB0 | keyboard matrix column 1\n\n bits 0-7 also printer data lines\n \"\"\"\n value = 
self.pia_0_B_data.value # $ff02\n log.debug(\n \"%04x| read $%04x (PIA 0 B side Data reg.) send $%02x (%s) back.\\t|%s\",\n op_address, address, value, byte2bit_string(value),\n self.cfg.mem_info.get_shortest(op_address)\n )\n return value"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef read_PIA0_B_control(self, cpu_cycles, op_address, address):\n value = self.pia_0_B_control.value\n log.error(\n \"%04x| read $%04x (PIA 0 B side Control reg.) send $%02x (%s) back.\\t|%s\",\n op_address, address, value, byte2bit_string(value),\n self.cfg.mem_info.get_shortest(op_address)\n )\n return value", "response": "read from 0xff03 -> PIA 0 B side Control reg."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write_PIA0_B_control(self, cpu_cycles, op_address, address, value):\n log.critical(\n \"%04x| write $%02x (%s) to $%04x -> PIA 0 B side Control reg.\\t|%s\",\n op_address, value, byte2bit_string(value),\n address, self.cfg.mem_info.get_shortest(op_address)\n )\n\n if is_bit_set(value, bit=0):\n log.critical(\n \"%04x| write $%02x (%s) to $%04x -> VSYNC IRQ: enable\\t|%s\",\n op_address, value, byte2bit_string(value),\n address, self.cfg.mem_info.get_shortest(op_address)\n )\n self.cpu.irq_enabled = True\n value = set_bit(value, bit=7)\n else:\n log.critical(\n \"%04x| write $%02x (%s) to $%04x -> VSYNC IRQ: disable\\t|%s\",\n op_address, value, byte2bit_string(value),\n address, self.cfg.mem_info.get_shortest(op_address)\n )\n self.cpu.irq_enabled = False\n\n if not is_bit_set(value, bit=2):\n self.pia_0_B_control.select_pdr()\n else:\n self.pia_0_B_control.deselect_pdr()\n\n self.pia_0_B_control.set(value)", "response": "write to 0xff03 - > PIA 0 B side Control reg."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef basic_addresses_write(self, cycles, last_op_address, address, word):\n 
log.critical(\"%04x| write $%04x to $%04x\", last_op_address, word, address)\n return word", "response": "write a word to an address in the basic tables of the current process"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting all lowercase letters to uppercase and vice versa.", "response": "def event_text_key(self, event):\n \"\"\"\n So a \"invert shift\" for user inputs:\n Convert all lowercase letters to uppercase and vice versa.\n \"\"\"\n char = event.char\n if not char or char not in string.ascii_letters:\n # ignore all non letter inputs\n return\n\n converted_char = invert_shift(char)\n log.debug(\"convert keycode %s - char %s to %s\", event.keycode, repr(char), converted_char)\n# self.text.delete(Tkinter.INSERT + \"-1c\") # Delete last input char\n self.text.insert(tkinter.INSERT, converted_char) # Insert converted char\n return \"break\""} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a list of bytes from a given list of bits.", "response": "def pop_bytes_from_bit_list(bit_list, count):\r\n \"\"\"\r\n >>> bit_str = (\r\n ... \"00110011\"\r\n ... \"00001111\"\r\n ... \"01010101\"\r\n ... 
\"11001100\")\r\n >>> bit_list = [int(i) for i in bit_str]\r\n >>> bit_list, bytes = pop_bytes_from_bit_list(bit_list, 1)\r\n >>> bytes\r\n [[0, 0, 1, 1, 0, 0, 1, 1]]\r\n >>> bit_list, bytes = pop_bytes_from_bit_list(bit_list, 2)\r\n >>> bytes\r\n [[0, 0, 0, 0, 1, 1, 1, 1], [0, 1, 0, 1, 0, 1, 0, 1]]\r\n >>> bit_list, bytes = pop_bytes_from_bit_list(bit_list, 1)\r\n >>> bytes\r\n [[1, 1, 0, 0, 1, 1, 0, 0]]\r\n \"\"\"\r\n data_bit_count = count * 8\r\n\r\n data_bit_list = bit_list[:data_bit_count]\r\n data = list(iter_steps(data_bit_list, steps=8))\r\n\r\n bit_list = bit_list[data_bit_count:]\r\n return bit_list, data"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nprints the bit list stats.", "response": "def print_bit_list_stats(bit_list):\r\n \"\"\"\r\n >>> print_bit_list_stats([1,1,1,1,0,0,0,0])\r\n 8 Bits: 4 positive bits and 4 negative bits\r\n \"\"\"\r\n print(\"%i Bits:\" % len(bit_list), end=\" \")\r\n positive_count = 0\r\n negative_count = 0\r\n for bit in bit_list:\r\n if bit == 1:\r\n positive_count += 1\r\n elif bit == 0:\r\n negative_count += 1\r\n else:\r\n raise TypeError(\"Not a bit: %s\" % repr(bit))\r\n print(\"%i positive bits and %i negative bits\" % (positive_count, negative_count))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef inject_basic_program(self, ascii_listing):\n program_start = self.cpu.memory.read_word(\n self.machine_api.PROGRAM_START_ADDR\n )\n tokens = self.machine_api.ascii_listing2program_dump(ascii_listing)\n self.cpu.memory.load(program_start, tokens)\n log.critical(\"BASIC program injected into Memory.\")\n\n # Update the BASIC addresses:\n program_end = program_start + len(tokens)\n self.cpu.memory.write_word(self.machine_api.VARIABLES_START_ADDR, program_end)\n self.cpu.memory.write_word(self.machine_api.ARRAY_START_ADDR, program_end)\n 
self.cpu.memory.write_word(self.machine_api.FREE_SPACE_START_ADDR, program_end)\n log.critical(\"BASIC addresses updated.\")", "response": "Injects the given ASCII BASIC program listing into the emulator RAM."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the foreground and background color of a node.", "response": "def get_rgb_color(color):\n \"\"\"\n >>> get_rgb_color(BLUE)\n ((0, 0, 0), (33, 16, 181))\n\n >>> get_rgb_color(NORMAL)\n ((0, 65, 0), (0, 255, 0))\n \"\"\"\n if color == INVERTED:\n foreground = (0, 255, 0)\n background = (0, 65, 0)\n elif color == NORMAL:\n foreground = (0, 65, 0)\n background = (0, 255, 0)\n else:\n foreground = (0, 0, 0)\n background = COLOR_INFO[color]\n return (foreground, background)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate the wiki page for the character map.", "response": "def create_wiki_page():\n \"\"\"\n for http://archive.worldofdragon.org/index.php?title=CharMap\n \"\"\"\n print (\n '{| class=\"wikitable\"'\n ' style=\"font-family: monospace;'\n ' background-color:#ffffcc;\"'\n ' cellpadding=\"10\"'\n )\n print(\"|-\")\n print(\"! POKE\")\n print(\"value\")\n print(\"! \")\n print(\"! unicode\")\n print(\"codepoint\")\n print(\"! 
type\")\n print(\"|-\")\n for no, data in enumerate(DRAGON_CHARS_MAP):\n item, item_type = data\n\n codepoint = ord(item)\n print(\"|%i\" % no)\n\n foreground, background = get_rgb_color(item_type)\n foreground = \"#%02x%02x%02x\" % foreground\n background = \"#%02x%02x%02x\" % background\n\n style = \"color: #%s;\"\n print('| style=\"color:%s; background-color:%s;\" | &#x%x;' % (\n foreground, background, codepoint\n ))\n print(\"|%i\" % codepoint)\n print(\"|%s\" % item_type)\n print(\"|-\")\n print(\"|}\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef bytes2codeline(raw_bytes):\n code_line = \"\"\n func_token = False\n for byte_no in raw_bytes:\n if byte_no == 0xff: # Next byte is a function token\n func_token = True\n continue\n elif func_token == True:\n func_token = False\n try:\n character = FUNCTION_TOKEN[byte_no]\n except KeyError, err:\n print \"Error: BASIC function torken for '%s' not found!\" % hex(byte_no)\n character = chr(byte_no)\n elif byte_no in BASIC_TOKENS:\n try:\n character = BASIC_TOKENS[byte_no]\n except KeyError, err:\n print \"Error: BASIC torken for '%s' not found!\" % hex(byte_no)\n character = chr(byte_no)\n else:\n character = chr(byte_no)\n# print byte_no, repr(character)\n code_line += character\n return code_line", "response": "Convert a byte array to a Codeline code string."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef analyze(self):\n log.debug(\"enable half sinus scan\")\n self.half_sinus = True\n statistics = self._get_statistics()\n\n width = 50\n max_count = max(statistics.values())\n\n print\n print \"Found this zeror crossing timings in the wave file:\"\n print\n\n for duration, count in sorted(statistics.items(), reverse=True):\n hz = duration2hz(duration, self.framerate / 2)\n w = int(round(float(width) / max_count * count))\n stars = \"*\"*w\n print \"%5sHz (%5s Samples) exist: %4s %s\" % 
(hz, duration, count, stars)\n\n print\n print \"Notes:\"\n print \" - Hz values are converted to full sinus cycle duration.\"\n print \" - Sample count is from half sinus cycle.\"", "response": "Analyze the wave file: collect the zero crossing timings via _get_statistics() and print them as a histogram."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef sync(self, length):\n\n # go in wave stream to the first bit\n try:\n self.next()\n except StopIteration:\n print \"Error: no bits identified!\"\n sys.exit(-1)\n\n log.info(\"First bit is at: %s\" % self.pformat_pos())\n log.debug(\"enable half sinus scan\")\n self.half_sinus = True\n\n # Toggle sync test by consuming one half sinus sample\n# self.iter_trigger_generator.next() # Test sync\n\n # get \"half sinus cycle\" test data\n test_durations = itertools.islice(self.iter_duration_generator, length)\n # It's a tuple like: [(frame_no, duration)...]\n\n test_durations = list(test_durations)\n\n diff1, diff2 = diff_info(test_durations)\n log.debug(\"sync diff info: %i vs. 
%i\" % (diff1, diff2))\n\n if diff1 > diff2:\n log.info(\"\\nbit-sync one step.\")\n self.iter_trigger_generator.next()\n log.debug(\"Synced.\")\n else:\n log.info(\"\\nNo bit-sync needed.\")\n\n self.half_sinus = False\n log.debug(\"disable half sinus scan\")", "response": "Synchronize weave with another bit."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef iter_bitstream(self, iter_duration_generator):\n assert self.half_sinus == False # Allways trigger full sinus cycle\n\n # build min/max Hz values\n bit_nul_min_hz = self.cfg.BIT_NUL_HZ - self.cfg.HZ_VARIATION\n bit_nul_max_hz = self.cfg.BIT_NUL_HZ + self.cfg.HZ_VARIATION\n\n bit_one_min_hz = self.cfg.BIT_ONE_HZ - self.cfg.HZ_VARIATION\n bit_one_max_hz = self.cfg.BIT_ONE_HZ + self.cfg.HZ_VARIATION\n\n bit_nul_max_duration = self._hz2duration(bit_nul_min_hz)\n bit_nul_min_duration = self._hz2duration(bit_nul_max_hz)\n\n bit_one_max_duration = self._hz2duration(bit_one_min_hz)\n bit_one_min_duration = self._hz2duration(bit_one_max_hz)\n\n log.info(\"bit-0 in %sHz - %sHz (duration: %s-%s) | bit-1 in %sHz - %sHz (duration: %s-%s)\" % (\n bit_nul_min_hz, bit_nul_max_hz, bit_nul_min_duration, bit_nul_max_duration,\n bit_one_min_hz, bit_one_max_hz, bit_one_min_duration, bit_one_max_duration,\n ))\n assert bit_nul_max_hz < bit_one_min_hz, \"HZ_VARIATION value is %sHz too high!\" % (\n ((bit_nul_max_hz - bit_one_min_hz) / 2) + 1\n )\n assert bit_one_max_duration < bit_nul_min_duration, \"HZ_VARIATION value is too high!\"\n\n # for end statistics\n bit_one_count = 0\n one_hz_min = sys.maxint\n one_hz_avg = None\n one_hz_max = 0\n bit_nul_count = 0\n nul_hz_min = sys.maxint\n nul_hz_avg = None\n nul_hz_max = 0\n\n for duration in iter_duration_generator:\n\n if bit_one_min_duration < duration < bit_one_max_duration:\n hz = self._duration2hz(duration)\n log.log(5,\n \"bit 1 at %s in %sSamples = %sHz\" % (\n self.pformat_pos(), duration, hz\n )\n )\n yield 1\n bit_one_count += 
1\n if hz < one_hz_min:\n one_hz_min = hz\n if hz > one_hz_max:\n one_hz_max = hz\n one_hz_avg = average(one_hz_avg, hz, bit_one_count)\n elif bit_nul_min_duration < duration < bit_nul_max_duration:\n hz = self._duration2hz(duration)\n log.log(5,\n \"bit 0 at %s in %sSamples = %sHz\" % (\n self.pformat_pos(), duration, hz\n )\n )\n yield 0\n bit_nul_count += 1\n if hz < nul_hz_min:\n nul_hz_min = hz\n if hz > nul_hz_max:\n nul_hz_max = hz\n nul_hz_avg = average(nul_hz_avg, hz, bit_nul_count)\n else:\n hz = self._duration2hz(duration)\n log.log(7,\n \"Skip signal at %s with %sHz (%sSamples) out of frequency range.\" % (\n self.pformat_pos(), hz, duration\n )\n )\n continue\n\n bit_count = bit_one_count + bit_nul_count\n\n if bit_count == 0:\n print \"ERROR: No information from wave to generate the bits\"\n print \"trigger volume to high?\"\n sys.exit(-1)\n\n log.info(\"\\n%i Bits: %i positive bits and %i negative bits\" % (\n bit_count, bit_one_count, bit_nul_count\n ))\n if bit_one_count > 0:\n log.info(\"Bit 1: %sHz - %sHz avg: %.1fHz variation: %sHz\" % (\n one_hz_min, one_hz_max, one_hz_avg, one_hz_max - one_hz_min\n ))\n if bit_nul_count > 0:\n log.info(\"Bit 0: %sHz - %sHz avg: %.1fHz variation: %sHz\" % (\n nul_hz_min, nul_hz_max, nul_hz_avg, nul_hz_max - nul_hz_min\n ))", "response": "iterate over the bitstream and yield the bits"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef iter_duration(self, iter_trigger):\n print\n process_info = ProcessInfo(self.frame_count, use_last_rates=4)\n start_time = time.time()\n next_status = start_time + 0.25\n\n old_pos = next(iter_trigger)\n for pos in iter_trigger:\n duration = pos - old_pos\n# log.log(5, \"Duration: %s\" % duration)\n yield duration\n old_pos = pos\n\n if time.time() > next_status:\n next_status = time.time() + 1\n self._print_status(process_info)\n\n self._print_status(process_info)\n print", "response": "Yields the duration of two 
frames in a row."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\niterate over the wave values and yield the mid crossing of the wave.", "response": "def iter_trigger(self, iter_wave_values):\n \"\"\"\n trigger middle crossing of the wave sinus curve\n \"\"\"\n window_size = (2 * self.cfg.END_COUNT) + self.cfg.MID_COUNT\n\n # sinus curve goes from negative into positive:\n pos_null_transit = [(0, self.cfg.END_COUNT), (self.cfg.END_COUNT, 0)]\n\n # sinus curve goes from positive into negative:\n neg_null_transit = [(self.cfg.END_COUNT, 0), (0, self.cfg.END_COUNT)]\n\n if self.cfg.MID_COUNT > 3:\n mid_index = int(round(self.cfg.MID_COUNT / 2.0))\n else:\n mid_index = 0\n\n in_pos = False\n for values in iter_window(iter_wave_values, window_size):\n\n # Split the window\n previous_values = values[:self.cfg.END_COUNT] # e.g.: 123-----\n mid_values = values[self.cfg.END_COUNT:self.cfg.END_COUNT + self.cfg.MID_COUNT] # e.g.: ---45---\n next_values = values[-self.cfg.END_COUNT:] # e.g.: -----678\n\n # get only the value and strip the frame_no\n # e.g.: (frame_no, value) tuple -> value list\n previous_values = [i[1] for i in previous_values]\n next_values = [i[1] for i in next_values]\n\n # Count sign from previous and next values\n sign_info = [\n count_sign(previous_values, 0),\n count_sign(next_values, 0)\n ]\n# log.log(5, \"sign info: %s\" % repr(sign_info))\n # yield the mid crossing\n if in_pos == False and sign_info == pos_null_transit:\n log.log(5, \"sinus curve goes from negative into positive\")\n# log.debug(\" %s | %s | %s\" % (previous_values, mid_values, next_values))\n yield mid_values[mid_index][0]\n in_pos = True\n elif in_pos == True and sign_info == neg_null_transit:\n if self.half_sinus:\n log.log(5, \"sinus curve goes from positive into negative\")\n# log.debug(\" %s | %s | %s\" % (previous_values, mid_values, next_values))\n yield mid_values[mid_index][0]\n in_pos = False"} {"SOURCE": "codesearchnet", "instruction": 
"Can you create a Python 3 function that\niterates over the wave file entries.", "response": "def iter_wave_values(self):\n \"\"\"\n yield frame numer + volume value from the WAVE file\n \"\"\"\n typecode = self.get_typecode(self.samplewidth)\n\n if log.level >= 5:\n if self.cfg.AVG_COUNT > 1:\n # merge samples -> log output in iter_avg_wave_values\n tlm = None\n else:\n tlm = TextLevelMeter(self.max_value, 79)\n\n # Use only a read size which is a quare divider of the samplewidth\n # Otherwise array.array will raise: ValueError: string length not a multiple of item size\n divider = int(round(float(WAVE_READ_SIZE) / self.samplewidth))\n read_size = self.samplewidth * divider\n if read_size != WAVE_READ_SIZE:\n log.info(\"Real use wave read size: %i Bytes\" % read_size)\n\n get_wave_block_func = functools.partial(self.wavefile.readframes, read_size)\n skip_count = 0\n\n manually_audioop_bias = self.samplewidth == 1 and audioop is None\n\n for frames in iter(get_wave_block_func, \"\"):\n\n if self.samplewidth == 1:\n if audioop is None:\n log.warning(\"use audioop.bias() work-a-round for missing audioop.\")\n else:\n # 8 bit samples are unsigned, see:\n # http://docs.python.org/2/library/audioop.html#audioop.lin2lin\n frames = audioop.bias(frames, 1, 128)\n\n try:\n values = array.array(typecode, frames)\n except ValueError, err:\n # e.g.:\n # ValueError: string length not a multiple of item size\n # Work-a-round: Skip the last frames of this block\n frame_count = len(frames)\n divider = int(math.floor(float(frame_count) / self.samplewidth))\n new_count = self.samplewidth * divider\n frames = frames[:new_count] # skip frames\n log.error(\n \"Can't make array from %s frames: Value error: %s (Skip %i and use %i frames)\" % (\n frame_count, err, frame_count - new_count, len(frames)\n ))\n values = array.array(typecode, frames)\n\n for value in values:\n self.wave_pos += 1 # Absolute position in the frame stream\n\n if manually_audioop_bias:\n # audioop.bias can't be 
used.\n # See: http://hg.python.org/cpython/file/482590320549/Modules/audioop.c#l957\n value = value % 0xff - 128\n\n# if abs(value) < self.min_volume:\n# # log.log(5, \"Ignore to lower amplitude\")\n# skip_count += 1\n# continue\n\n yield (self.wave_pos, value)\n\n log.info(\"Skip %i samples that are lower than %i\" % (\n skip_count, self.min_volume\n ))\n log.info(\"Last readed Frame is: %s\" % self.pformat_pos())"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef iter_steps(g, steps):\n values = []\n for value in g:\n values.append(value)\n if len(values) == steps:\n yield list(values)\n values = []\n if values:\n yield list(values)", "response": "iterate over the given number of steps in the given sequence."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get(self, start, end):\n return [self.read_byte(addr) for addr in xrange(start, end)]", "response": "Get a list of bytes from the cache."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _new_output_char(self, char):\n self.text.config(state=tkinter.NORMAL)\n self.text.insert(\"end\", char)\n self.text.see(\"end\")\n self.text.config(state=tkinter.DISABLED)", "response": "insert in text field"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef check_cpu_interval(self, cpu_process):\n try:\n# log.critical(\"check_cpu_interval()\")\n if not cpu_process.is_alive():\n log.critical(\"raise SystemExit, because CPU is not alive.\")\n _thread.interrupt_main()\n raise SystemExit(\"Kill pager.getch()\")\n except KeyboardInterrupt:\n _thread.interrupt_main()\n else:\n t = threading.Timer(1.0, self.check_cpu_interval, args=[cpu_process])\n t.start()", "response": "check the CPU interval"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nmanaging the 
dragon - py emulator.", "response": "def cli(ctx, **kwargs):\n \"\"\"\n DragonPy is a Open source (GPL v3 or later) emulator\n for the 30 years old homecomputer Dragon 32\n and Tandy TRS-80 Color Computer (CoCo)...\n\n Homepage: https://github.com/jedie/DragonPy\n \"\"\"\n log.critical(\"cli kwargs: %s\", repr(kwargs))\n ctx.obj = CliConfig(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _windows_get_window_size():\n sbi = CONSOLE_SCREEN_BUFFER_INFO()\n ret = windll.kernel32.GetConsoleScreenBufferInfo(console_handle, byref(sbi))\n if ret == 0:\n return (0, 0)\n return (sbi.srWindow.Right - sbi.srWindow.Left + 1,\n sbi.srWindow.Bottom - sbi.srWindow.Top + 1)", "response": "Get the width and height of the available window area on Windows."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _posix_get_window_size():\n # see README.txt for reference information\n # http://www.kernel.org/doc/man-pages/online/pages/man4/tty_ioctl.4.html\n\n from fcntl import ioctl\n from termios import TIOCGWINSZ\n from array import array\n\n \"\"\"\n struct winsize {\n unsigned short ws_row;\n unsigned short ws_col;\n unsigned short ws_xpixel; /* unused */\n unsigned short ws_ypixel; /* unused */\n };\n \"\"\"\n winsize = array(\"H\", [0] * 4)\n try:\n ioctl(sys.stdout.fileno(), TIOCGWINSZ, winsize)\n except IOError:\n # for example IOError: [Errno 25] Inappropriate ioctl for device\n # when output is redirected\n # [ ] TODO: check fd with os.isatty\n pass\n return (winsize[1], winsize[0])", "response": "Get the width and height of the terminal on POSIX system."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef dumpkey(key):\n def hex3fy(key):\n \"\"\"Helper to convert string into hex string (Python 3 compatible)\"\"\"\n from binascii import hexlify\n # Python 3 strings are no longer 
binary, encode them for hexlify()\n if PY3K:\n key = key.encode('utf-8')\n keyhex = hexlify(key).upper()\n if PY3K:\n keyhex = keyhex.decode('utf-8')\n return keyhex\n if type(key) == str:\n return hex3fy(key)\n else:\n return ' '.join( [hex3fy(s) for s in key] )", "response": "Dump a key into a string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _getch_unix(_getall=False):\n import sys, termios\n\n fd = sys.stdin.fileno()\n # save old terminal settings\n old_settings = termios.tcgetattr(fd)\n\n chars = []\n try:\n # change terminal settings - turn off canonical mode and echo.\n # in canonical mode read from stdin returns one line at a time\n # and we need one char at a time (see DESIGN.rst for more info)\n newattr = list(old_settings)\n newattr[3] &= ~termios.ICANON\n newattr[3] &= ~termios.ECHO\n newattr[6][termios.VMIN] = 1 # block until one char received\n newattr[6][termios.VTIME] = 0\n # TCSANOW below means apply settings immediately\n termios.tcsetattr(fd, termios.TCSANOW, newattr)\n\n # [ ] this fails when stdin is redirected, like\n # ls -la | pager.py\n # [ ] also check on Windows\n ch = sys.stdin.read(1)\n chars = [ch]\n\n if _getall:\n # move rest of chars (if any) from input buffer\n # change terminal settings - enable non-blocking read\n newattr = termios.tcgetattr(fd)\n newattr[6][termios.VMIN] = 0 # CC structure\n newattr[6][termios.VTIME] = 0\n termios.tcsetattr(fd, termios.TCSANOW, newattr)\n\n while True:\n ch = sys.stdin.read(1)\n if ch != '':\n chars.append(ch)\n else:\n break\n finally:\n # restore terminal settings. 
Do this when all output is\n # finished - TCSADRAIN flag\n termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)\n\n if _getall:\n return chars\n else:\n return chars[0]", "response": "getch_unix - get a single character from the terminal"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef prompt(pagenum):\n prompt = \"Page -%s-. Press any key to continue . . . \" % pagenum\n echo(prompt)\n if getch() in [ESC_, CTRL_C_, 'q', 'Q']:\n return False\n echo('\\r' + ' '*(len(prompt)-1) + '\\r')", "response": "Show default prompt to continue and process keypress."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\noutputs content and calls pagecallback after every page, with the page number as a parameter. pagecallback may return False to terminate pagination; the default callback aborts on 'q', ESC or Ctrl-C.", "response": "def page(content, pagecallback=prompt):\n \"\"\"\n Output `content`, call `pagecallback` after every page with page\n number as a parameter. 
`pagecallback` may return False to terminate\n pagination.\n\n Default callback shows prompt, waits for keypress and aborts on\n 'q', ESC or Ctrl-C.\n \"\"\"\n width = getwidth()\n height = getheight()\n pagenum = 1\n\n try:\n try:\n line = content.next().rstrip(\"\\r\\n\")\n except AttributeError:\n # Python 3 compatibility\n line = content.__next__().rstrip(\"\\r\\n\")\n except StopIteration:\n pagecallback(pagenum)\n return\n\n while True: # page cycle\n linesleft = height-1 # leave the last line for the prompt callback\n while linesleft:\n linelist = [line[i:i+width] for i in range(0, len(line), width)]\n if not linelist:\n linelist = ['']\n lines2print = min(len(linelist), linesleft)\n for i in range(lines2print):\n if WINDOWS and len(line) == width:\n # avoid extra blank line by skipping linefeed print\n echo(linelist[i])\n else:\n print((linelist[i]))\n linesleft -= lines2print\n linelist = linelist[lines2print:]\n\n if linelist: # prepare symbols left on the line for the next iteration\n line = ''.join(linelist)\n continue\n else:\n try:\n try:\n line = content.next().rstrip(\"\\r\\n\")\n except AttributeError:\n # Python 3 compatibility\n line = content.__next__().rstrip(\"\\r\\n\")\n except StopIteration:\n pagecallback(pagenum)\n return\n if pagecallback(pagenum) == False:\n return\n pagenum += 1"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef reformat_v09_trace(raw_trace, max_lines=None):\n print()\n print(\"Reformat v09 trace...\")\n mem_info = SBC09MemInfo(sys.stderr)\n\n result = []\n next_update = time.time() + 1\n old_line = None\n for line_no, line in enumerate(raw_trace.splitlines()):\n if max_lines is not None and line_no >= max_lines:\n msg = \"max lines %i arraived -> Abort.\" % max_lines\n print(msg)\n result.append(msg)\n break\n\n if time.time() > next_update:\n print(\"reformat %i trace lines...\" % line_no)\n next_update = time.time() + 1\n\n try:\n pc = int(line[3:7], 16)\n op_code = 
int(line[10:15].strip().replace(\" \", \"\"), 16)\n cc = int(line[57:59], 16)\n a = int(line[46:48], 16)\n b = int(line[51:53], 16)\n x = int(line[18:22], 16)\n y = int(line[25:29], 16)\n u = int(line[32:36], 16)\n s = int(line[39:43], 16)\n except ValueError as err:\n print(\"Error in line %i: %s\" % (line_no, err))\n print(\"Content on line %i:\" % line_no)\n print(\"-\"*79)\n print(repr(line))\n print(\"-\"*79)\n continue\n\n op_data = MC6809OP_DATA_DICT[op_code]\n mnemonic = op_data[\"mnemonic\"]\n\n cc_txt = cc_value2txt(cc)\n mem = mem_info.get_shortest(pc)\n# print op_data\n\n register_line = \"cc=%02x a=%02x b=%02x dp=?? x=%04x y=%04x u=%04x s=%04x| %s\" % (\n cc, a, b, x, y, u, s, cc_txt\n )\n if old_line is None:\n line = \"(init with: %s)\" % register_line\n else:\n line = old_line % register_line\n\n old_line = \"%04x| %-11s %-27s %%s | %s\" % (\n pc, \"%x\" % op_code, mnemonic,\n mem\n )\n\n result.append(line)\n\n print(\"Done, %i trace lines converted.\" % line_no)\n# print raw_trace[:700]\n return result", "response": "Reformat a v09 trace similar to the XRoar one\n and add CC and memory information.\n\n Note:\n v09 traces contain the register info one trace line later!\n We reorder it as XRoar does: addr+opcode with the resulting registers"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef human_duration(t):\n if not isinstance(t, (int, float)):\n raise TypeError(\"human_duration() argument must be integer or float\")\n\n chunks = (\n (60 * 60 * 24 * 365, u'years'),\n (60 * 60 * 24 * 30, u'months'),\n (60 * 60 * 24 * 7, u'weeks'),\n (60 * 60 * 24, u'days'),\n (60 * 60, u'hours'),\n )\n\n if t < 1:\n return u\"%.1f ms\" % round(t * 1000, 1)\n if t < 60:\n return u\"%.1f sec\" % round(t, 1)\n if t < 60 * 60:\n return u\"%.1f min\" % round(t / 60, 1)\n\n for seconds, name in chunks:\n count = t / seconds\n if count >= 1:\n count = round(count, 1)\n break\n return u\"%(number).1f %(type)s\" % {'number': count, 
'type': name}", "response": "Return a human readable text representation of a time duration."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalculates a running average; the count must start at 0.", "response": "def average(old_avg, current_value, count):\n \"\"\"\n Calculate the average. Count must start with 0\n\n >>> average(None, 3.23, 0)\n 3.23\n >>> average(0, 1, 0)\n 1.0\n >>> average(2.5, 5, 4)\n 3.0\n \"\"\"\n if old_avg is None:\n return current_value\n return (float(old_avg) * count + current_value) / (count + 1)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef iter_window(g, window_size):\n values = collections.deque(maxlen=window_size)\n for value in g:\n values.append(value)\n if len(values) == window_size:\n yield list(values)", "response": "Iterate over a generator with a sliding window of the given size."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef count_continuous_pattern(bitstream, pattern):\n assert isinstance(bitstream, (collections.Iterable, types.GeneratorType))\n assert isinstance(pattern, (list, tuple))\n\n window_size = len(pattern)\n count = -1\n for count, data in enumerate(iter_steps(bitstream, window_size), 1):\n# print count, data, pattern\n if data != pattern:\n count -= 1\n break\n\n return count", "response": "Count how often the given pattern repeats back-to-back at the start of the bitstream."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfind the position of a bit pattern in a bitstream.", "response": "def find_iter_window(bitstream, pattern, max_pos=None):\n \"\"\"\n >>> pattern = list(bytes2bit_strings(\"B\"))\n >>> bitstream = bytes2bit_strings(\"AAABCCC\")\n >>> find_iter_window(bitstream, pattern)\n 24\n >>> \"\".join(list(bitstream2string(bitstream)))\n 'CCC'\n\n >>> find_iter_window(bytes2bit_strings(\"HELLO!\"), 
list(bytes2bit_strings(\"LO\")))\n 24\n\n >>> find_iter_window(bytes2bit_strings(\"HELLO!\"), list(bytes2bit_strings(\"LO\")), max_pos=16)\n Traceback (most recent call last):\n ...\n MaxPosArraived: 17\n\n >>> find_iter_window(bytes2bit_strings(\"HELLO!\"), list(bytes2bit_strings(\"X\")))\n Traceback (most recent call last):\n ...\n PatternNotFound: 40\n \"\"\"\n assert isinstance(bitstream, (collections.Iterable, types.GeneratorType))\n assert isinstance(pattern, (list, tuple))\n\n window_size = len(pattern)\n pos = -1\n for pos, data in enumerate(iter_window(bitstream, window_size)):\n# print pos, data, pattern\n if data == pattern:\n return pos\n if max_pos is not None and pos > max_pos:\n raise MaxPosArraived(pos)\n raise PatternNotFound(pos)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncount the leading elements of the iterable that equal the sentinel.", "response": "def count_the_same(iterable, sentinel):\n \"\"\"\n >>> count_the_same([0x55,0x55,0x55,0x55,0x3C,\"foo\",\"bar\"],0x55)\n (4, 60)\n >>> 0x3C == 60\n True\n \"\"\"\n count = 0\n x = None\n for count, x in enumerate(iterable):\n if x != sentinel:\n break\n return count, x"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef diff_info(data):\n def get_diff(l):\n diff = 0\n for no1, no2 in iter_steps(l, steps=2):\n diff += abs(no1 - no2)\n return diff\n\n data1 = data[2:]\n diff1 = get_diff(data1)\n\n data2 = data[1:-1]\n diff2 = get_diff(data2)\n\n return diff1, diff2", "response": "Return two sums of pairwise differences over the data, offset by one element."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef count_sign(values, min_value):\n positive_count = 0\n negative_count = 0\n for value in values:\n if value > min_value:\n positive_count += 1\n elif value < -min_value:\n negative_count += 1\n return positive_count, negative_count", "response": "Count the number of times a given value is in a given 
range."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts a sequence of bits into a single codepoint.", "response": "def bits2codepoint(bits):\n \"\"\"\n >>> c = bits2codepoint([0, 0, 0, 1, 0, 0, 1, 0])\n >>> c\n 72\n >>> chr(c)\n 'H'\n\n >>> bits2codepoint(\"00010010\")\n 72\n\n >>> bits2codepoint([0, 0, 1, 1, 0, 0, 1, 0])\n 76\n \"\"\"\n bit_string = \"\".join([str(c) for c in reversed(bits)])\n return int(bit_string, 2)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef byte2bit_string(data):\n if isinstance(data, basestring):\n assert len(data) == 1\n data = ord(data)\n\n bits = '{0:08b}'.format(data)\n bits = bits[::-1]\n return bits", "response": "Convert a byte into a reversed (LSB-first) bit string."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef codepoints2bitstream(codepoints):\n if isinstance(codepoints, int):\n codepoints = [codepoints]\n for codepoint in codepoints:\n bit_string = byte2bit_string(codepoint)\n for bit in bit_string:\n yield int(bit)", "response": "Convert a list of codepoints into a stream of single bits."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef print_codepoint_stream(codepoint_stream, display_block_count=8, no_repr=False):\n in_line_count = 0\n\n line = []\n for no, codepoint in enumerate(codepoint_stream, 1):\n r = repr(chr(codepoint))\n if \"\\\\x\" in r: # FIXME\n txt = \"%s %i\" % (hex(codepoint), codepoint)\n else:\n txt = \"%s %s\" % (hex(codepoint), r)\n\n line.append(txt.center(8))\n\n in_line_count += 1\n if in_line_count >= display_block_count:\n in_line_count = 0\n print \"%4s | %s |\" % (no, \" | \".join(line))\n line = []\n if line:\n print \"%4s | %s |\" % (no, \" | \".join(line))\n\n if in_line_count > 0:\n print", "response": "Print a stream of codepoints."} {"SOURCE": "codesearchnet", 
"instruction": "Here you have a function in Python 3, explain what it does\ndef pformat_codepoints(codepoints):\n printable = string.printable.replace(\"\\n\", \"\").replace(\"\\r\", \"\")\n line = []\n strings = \"\"\n for codepoint in codepoints:\n char = chr(codepoint)\n if char in printable:\n strings += char\n else:\n if strings != \"\":\n line.append(strings)\n strings = \"\"\n line.append(char)\n return line", "response": "Return a list of strings for a list of codepoints."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprint a list of blocks of size display_block_count", "response": "def print_block_bit_list(block_bit_list, display_block_count=8, no_repr=False):\n \"\"\"\n >>> bit_list = (\n ... [0,0,1,1,0,0,1,0], # L\n ... [1,0,0,1,0,0,1,0], # I\n ... )\n >>> print_block_bit_list(bit_list)\n ... # doctest: +NORMALIZE_WHITESPACE\n 2 - 00110010 10010010\n 0x4c 'L' 0x49 'I'\n \"\"\"\n def print_line(no, line, line_info):\n print \"%4s - %s\" % (no, line)\n if no_repr:\n return\n\n line = []\n for codepoint in line_info:\n r = repr(chr(codepoint))\n if \"\\\\x\" in r: # FIXME\n txt = \"%s\" % hex(codepoint)\n else:\n txt = \"%s %s\" % (hex(codepoint), r)\n txt = txt.center(8)\n line.append(txt)\n\n print \" %s\" % \" \".join(line)\n\n\n in_line_count = 0\n\n line = \"\"\n line_info = []\n for no, bits in enumerate(block_bit_list, 1):\n line += \"%s \" % \"\".join([str(c) for c in bits])\n\n codepoint = bits2codepoint(bits)\n line_info.append(codepoint)\n\n in_line_count += 1\n if in_line_count >= display_block_count:\n in_line_count = 0\n print_line(no, line, line_info)\n line_info = []\n line = \"\"\n if line:\n print_line(no, line, line_info)\n\n if in_line_count > 0:\n print"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef print_bitlist(bitstream, no_repr=False):\n block_bit_list = iter_steps(bitstream, steps=8)\n print_block_bit_list(block_bit_list, no_repr=no_repr)", "response": 
"Print a bitstream as 8-bit blocks."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a word from a byte iterator.", "response": "def get_word(byte_iterator):\n \"\"\"\n return a uint16 value\n\n >>> g=iter([0x1e, 0x12])\n >>> v=get_word(g)\n >>> v\n 7698\n >>> hex(v)\n '0x1e12'\n \"\"\"\n byte_values = list(itertools.islice(byte_iterator, 2))\n try:\n word = (byte_values[0] << 8) | byte_values[1]\n except TypeError, err:\n raise TypeError(\"Can't build word from %s: %s\" % (repr(byte_values), err))\n return word"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates sinus values for a given framerate and Hz.", "response": "def sinus_values_by_hz(framerate, hz, max_value):\n \"\"\"\n Create sinus values with the given framerate and Hz.\n Note:\n We skip the first zero-crossing, so the values can be used directly in a loop.\n\n >>> values = sinus_values_by_hz(22050, 1200, 255)\n >>> len(values) # 22050 / 1200Hz = 18,375\n 18\n >>> values\n (87, 164, 221, 251, 251, 221, 164, 87, 0, -87, -164, -221, -251, -251, -221, -164, -87, 0)\n\n >>> tl = TextLevelMeter(255, width=40)\n >>> for v in values:\n ... 
tl.feed(v)\n '| | * |'\n '| | * |'\n '| | * |'\n '| | *|'\n '| | *|'\n '| | * |'\n '| | * |'\n '| | * |'\n '| * |'\n '| * | |'\n '| * | |'\n '| * | |'\n '|* | |'\n '|* | |'\n '| * | |'\n '| * | |'\n '| * | |'\n '| * |'\n\n >>> values = sinus_values_by_hz(44100, 1200, 255)\n >>> len(values) # 44100 / 1200Hz = 36,75\n 37\n \"\"\"\n count = int(round(float(framerate) / float(hz)))\n count += 1\n values = tuple(sinus_values(count, max_value))\n values = values[1:]\n return values"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the module name of the given package.", "response": "def get_module_name(package):\n \"\"\"\n package must have these attributes:\n e.g.:\n package.DISTRIBUTION_NAME = \"DragonPyEmulator\"\n package.DIST_GROUP = \"console_scripts\"\n package.ENTRY_POINT = \"DragonPy\"\n\n :return: a string like: \"dragonpy.core.cli\"\n \"\"\"\n distribution = get_distribution(package.DISTRIBUTION_NAME)\n entry_info = distribution.get_entry_info(package.DIST_GROUP, package.ENTRY_POINT)\n if not entry_info:\n raise RuntimeError(\n \"Can't find entry info for distribution: %r (group: %r, entry point: %r)\" % (\n package.DISTRIBUTION_NAME, package.DIST_GROUP, package.ENTRY_POINT\n )\n )\n return entry_info.module_name"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _run(*args, **kwargs):\n verbose = kwargs.pop(\"verbose\", False)\n if verbose:\n click.secho(\" \".join([repr(i) for i in args]), bg='blue', fg='white')\n\n executable = args[0]\n if not os.path.isfile(executable):\n raise RuntimeError(\"First argument %r is not a existing file!\" % executable)\n if not os.access(executable, os.X_OK):\n raise RuntimeError(\"First argument %r exist, but is not executeable!\" % executable)\n\n return subprocess.Popen(args, **kwargs)", "response": "Run current executable via subprocess and given args\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation 
for the following Python 3 code\ndef after_install(options, home_dir):\n # --- CUT here ---\n \"\"\"\n called after virtualenv was created and pip/setuptools installed.\n Now we installed requirement libs/packages.\n \"\"\"\n if options.install_type==INST_PYPI:\n requirements=NORMAL_INSTALLATION\n elif options.install_type==INST_GIT:\n requirements=GIT_READONLY_INSTALLATION\n elif options.install_type==INST_DEV:\n requirements=DEVELOPER_INSTALLATION\n else:\n # Should never happen\n raise RuntimeError(\"Install type %r unknown?!?\" % options.install_type)\n\n env_subprocess = EnvSubprocess(home_dir) # from bootstrap_env.bootstrap_install_pip\n\n logfile = os.path.join(env_subprocess.abs_home_dir, \"install.log\")\n\n for requirement in requirements:\n sys.stdout.write(\"\\n\\nInstall %r:\\n\" % requirement)\n env_subprocess.call_env_pip([\"install\", \"--log=%s\" % logfile, requirement])\n sys.stdout.write(\"\\n\")", "response": "Called after virtualenv is created and pip is installed."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nprint the usual traceback information followed by a listing of all the available local variables in each frame.", "response": "def print_exc_plus():\n \"\"\"\n Print the usual traceback information, followed by a listing of all the\n local variables in each frame.\n \"\"\"\n sys.stderr.flush() # for eclipse\n sys.stdout.flush() # for eclipse\n\n tb = sys.exc_info()[2]\n while True:\n if not tb.tb_next:\n break\n tb = tb.tb_next\n stack = []\n f = tb.tb_frame\n while f:\n stack.append(f)\n f = f.f_back\n\n txt = traceback.format_exc()\n txt_lines = txt.splitlines()\n first_line = txt_lines.pop(0)\n last_line = txt_lines.pop(-1)\n click.secho(first_line, fg=\"red\")\n for line in txt_lines:\n if line.strip().startswith(\"File\"):\n click.echo(line)\n else:\n click.secho(line, fg=\"white\", bold=True)\n click.secho(last_line, fg=\"red\")\n\n click.echo()\n click.secho(\n \"Locals by frame, most 
recent call first:\",\n fg=\"blue\", bold=True\n )\n for frame in stack:\n msg = 'File \"%s\", line %i, in %s' % (\n frame.f_code.co_filename,\n frame.f_lineno,\n frame.f_code.co_name,\n )\n msg = click.style(msg, fg=\"white\", bold=True, underline=True)\n click.echo(\"\\n *** %s\" % msg)\n\n for key, value in list(frame.f_locals.items()):\n click.echo(\"%30s = \" % click.style(key, bold=True), nl=False)\n # We have to be careful not to cause a new error in our error\n # printer! Calling str() on an unknown object could cause an\n # error we don't want.\n if isinstance(value, int):\n value = \"$%x (decimal: %i)\" % (value, value)\n else:\n value = repr(value)\n\n if len(value) > MAX_CHARS:\n value = \"%s...\" % value[:MAX_CHARS]\n\n try:\n click.echo(value)\n except:\n click.echo(\"\")"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a block of tokenized BASIC source code lines to the current file", "response": "def add_block_data(self, block_length, data):\n \"\"\"\n add a block of tokenized BASIC source code lines.\n\n >>> cfg = Dragon32Config\n >>> fc = FileContent(cfg)\n\n >>> block = [\n ... 0x1e,0x12,0x0,0xa,0x80,0x20,0x49,0x20,0xcb,0x20,0x31,0x20,0xbc,0x20,0x31,0x30,0x0,\n ... 0x0,0x0]\n >>> len(block)\n 19\n >>> fc.add_block_data(19,iter(block))\n 19 Bytes parsed\n >>> fc.print_code_lines()\n 10 FOR I = 1 TO 10\n\n >>> block = iter([\n ... 0x1e,0x29,0x0,0x14,0x87,0x20,0x49,0x3b,0x22,0x48,0x45,0x4c,0x4c,0x4f,0x20,0x57,0x4f,0x52,0x4c,0x44,0x21,0x22,0x0,\n ... 0x0,0x0])\n >>> fc.add_block_data(999,block)\n 25 Bytes parsed\n ERROR: Block length value 999 is not equal to parsed bytes!\n >>> fc.print_code_lines()\n 10 FOR I = 1 TO 10\n 20 PRINT I;\"HELLO WORLD!\"\n\n >>> block = iter([\n ... 0x1e,0x31,0x0,0x1e,0x8b,0x20,0x49,0x0,\n ... 
0x0,0x0])\n >>> fc.add_block_data(10,block)\n 10 Bytes parsed\n >>> fc.print_code_lines()\n 10 FOR I = 1 TO 10\n 20 PRINT I;\"HELLO WORLD!\"\n 30 NEXT I\n\n\n Test function tokens in code\n\n >>> fc = FileContent(cfg)\n >>> data = iter([\n ... 0x1e,0x4a,0x0,0x1e,0x58,0xcb,0x58,0xc3,0x4c,0xc5,0xff,0x88,0x28,0x52,0x29,0x3a,0x59,0xcb,0x59,0xc3,0x4c,0xc5,0xff,0x89,0x28,0x52,0x29,0x0,\n ... 0x0,0x0\n ... ])\n >>> fc.add_block_data(30, data)\n 30 Bytes parsed\n >>> fc.print_code_lines()\n 30 X=X+L*SIN(R):Y=Y+L*COS(R)\n\n\n Test high line numbers\n\n >>> fc = FileContent(cfg)\n >>> data = [\n ... 0x1e,0x1a,0x0,0x1,0x87,0x20,0x22,0x4c,0x49,0x4e,0x45,0x20,0x4e,0x55,0x4d,0x42,0x45,0x52,0x20,0x54,0x45,0x53,0x54,0x22,0x0,\n ... 0x1e,0x23,0x0,0xa,0x87,0x20,0x31,0x30,0x0,\n ... 0x1e,0x2d,0x0,0x64,0x87,0x20,0x31,0x30,0x30,0x0,\n ... 0x1e,0x38,0x3,0xe8,0x87,0x20,0x31,0x30,0x30,0x30,0x0,\n ... 0x1e,0x44,0x27,0x10,0x87,0x20,0x31,0x30,0x30,0x30,0x30,0x0,\n ... 0x1e,0x50,0x80,0x0,0x87,0x20,0x33,0x32,0x37,0x36,0x38,0x0,\n ... 0x1e,0x62,0xf9,0xff,0x87,0x20,0x22,0x45,0x4e,0x44,0x22,0x3b,0x36,0x33,0x39,0x39,0x39,0x0,0x0,0x0\n ... ]\n >>> len(data)\n 99\n >>> fc.add_block_data(99, iter(data))\n 99 Bytes parsed\n >>> fc.print_code_lines()\n 1 PRINT \"LINE NUMBER TEST\"\n 10 PRINT 10\n 100 PRINT 100\n 1000 PRINT 1000\n 10000 PRINT 10000\n 32768 PRINT 32768\n 63999 PRINT \"END\";63999\n \"\"\"\n\n# data = list(data)\n# # print repr(data)\n# print_as_hex_list(data)\n# print_codepoint_stream(data)\n# sys.exit()\n\n # create from codepoint list a iterator\n data = iter(data)\n\n byte_count = 0\n while True:\n try:\n line_pointer = get_word(data)\n except (StopIteration, IndexError), err:\n log.error(\"No line pointer information in code line data. 
(%s)\" % err)\n break\n# print \"line_pointer:\", repr(line_pointer)\n byte_count += 2\n if not line_pointer:\n # arrived [0x00, 0x00] -> end of block\n break\n\n try:\n line_number = get_word(data)\n except (StopIteration, IndexError), err:\n log.error(\"No line number information in code line data. (%s)\" % err)\n break\n# print \"line_number:\", repr(line_number)\n byte_count += 2\n\n# data = list(data)\n# print_as_hex_list(data)\n# print_codepoint_stream(data)\n# data = iter(data)\n\n # get the code line:\n # new iterator to get all characters until 0x00 arraived\n code = iter(data.next, 0x00)\n\n code = list(code) # for len()\n byte_count += len(code) + 1 # from 0x00 consumed in iter()\n\n# print_as_hex_list(code)\n# print_codepoint_stream(code)\n\n # convert to a plain ASCII string\n code = bytes2codeline(code)\n\n self.code_lines.append(\n CodeLine(line_pointer, line_number, code)\n )\n\n print \"%i Bytes parsed\" % byte_count\n if block_length != byte_count:\n print \"ERROR: Block length value %i is not equal to parsed bytes!\" % block_length"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a block of ASCII BASIC source code lines.", "response": "def add_ascii_block(self, block_length, data):\n \"\"\"\n add a block of ASCII BASIC source code lines.\n\n >>> data = [\n ... 0xd,\n ... 0x31,0x30,0x20,0x50,0x52,0x49,0x4e,0x54,0x20,0x22,0x54,0x45,0x53,0x54,0x22,\n ... 0xd,\n ... 0x32,0x30,0x20,0x50,0x52,0x49,0x4e,0x54,0x20,0x22,0x48,0x45,0x4c,0x4c,0x4f,0x20,0x57,0x4f,0x52,0x4c,0x44,0x21,0x22,\n ... 0xd\n ... ]\n >>> len(data)\n 41\n >>> fc = FileContent(Dragon32Config)\n >>> fc.add_ascii_block(41, iter(data))\n 41 Bytes parsed\n >>> fc.print_code_lines()\n 10 PRINT \"TEST\"\n 20 PRINT \"HELLO WORLD!\"\n \"\"\"\n data = iter(data)\n\n data.next() # Skip first \\r\n byte_count = 1 # incl. 
first \\r\n while True:\n code = iter(data.next, 0xd) # until \\r\n code = \"\".join([chr(c) for c in code])\n\n if not code:\n log.warning(\"code ended.\")\n break\n\n byte_count += len(code) + 1 # and \\r consumed in iter()\n\n try:\n line_number, code = code.split(\" \", 1)\n except ValueError, err:\n print \"\\nERROR: Splitting linenumber in %s: %s\" % (repr(code), err)\n break\n\n try:\n line_number = int(line_number)\n except ValueError, err:\n print \"\\nERROR: Part '%s' is not a line number!\" % repr(line_number)\n continue\n\n self.code_lines.append(\n CodeLine(None, line_number, code)\n )\n\n print \"%i Bytes parsed\" % byte_count\n if block_length != byte_count:\n log.error(\n \"Block length value %i is not equal to parsed bytes!\" % block_length\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the filename block as a list of codepoints.", "response": "def get_filename_block_as_codepoints(self):\n \"\"\"\n TODO: Support tokenized BASIC. Now we only create ASCII BASIC.\n \"\"\"\n codepoints = []\n codepoints += list(string2codepoint(self.filename.ljust(8, \" \")))\n codepoints.append(self.cfg.FTYPE_BASIC) # one byte file type\n codepoints.append(self.cfg.BASIC_ASCII) # one byte ASCII flag\n\n # one byte gap flag (0x00=no gaps, 0xFF=gaps)\n # http://archive.worldofdragon.org/phpBB3/viewtopic.php?f=8&t=4231&p=9110#p9110\n codepoints.append(self.gap_flag)\n\n # machine code starting/loading address\n if self.file_type != self.cfg.FTYPE_BASIC: # BASIC programm (0x00)\n codepoints = iter(codepoints)\n\n self.start_address = get_word(codepoints)\n log.info(\"machine code starting address: %s\" % hex(self.start_address))\n\n self.load_address = get_word(codepoints)\n log.info(\"machine code loading address: %s\" % hex(self.load_address))\n else:\n # not needed in BASIC files\n # http://archive.worldofdragon.org/phpBB3/viewtopic.php?f=8&t=4341&p=9109#p9109\n pass\n\n log.debug(\"filename block: %s\" % 
pformat_codepoints(codepoints))\n return codepoints"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef buffer2file(self):\n if self.current_file is not None and self.buffer:\n self.current_file.add_block_data(self.buffered_block_length, self.buffer)\n self.buffer = []\n self.buffered_block_length = 0", "response": "add the code buffer content to the CassetteFile instance"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef validate(validation, dictionary):\n\n errors = defaultdict(list)\n for key in validation:\n if isinstance(validation[key], (list, tuple)):\n if Required in validation[key]:\n if not Required(key, dictionary):\n errors[key] = [\"must be present\"]\n continue\n _validate_list_helper(validation, dictionary, key, errors)\n else:\n v = validation[key]\n if v == Required:\n if not Required(key, dictionary):\n errors[key] = [\"must be present\"]\n else:\n _validate_and_store_errs(v, dictionary, key, errors)\n if len(errors) > 0:\n # `errors` gets downgraded from defaultdict to dict\n # because it makes for prettier output\n return ValidationResult(valid=False, errors=dict(errors))\n else:\n return ValidationResult(valid=True, errors={})", "response": "Validate that a dictionary passes a set of key - based validators."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ArgSpec(*args, **kwargs):\n\n def argspec_lambda(value):\n argspec = getargspec(value)\n argspec_kw_vals = ()\n if argspec.defaults is not None:\n argspec_kw_vals = argspec.defaults\n kw_vals = {}\n arg_offset = 0\n arg_len = len(argspec.args) - 1\n for val in argspec_kw_vals[::-1]:\n kw_vals[argspec.args[arg_len - arg_offset]] = val\n arg_offset += 1\n if kwargs == kw_vals:\n if len(args) != arg_len - arg_offset + 1:\n return False\n index = 0\n for arg in args:\n if argspec.args[index] != arg:\n return False\n 
index += 1\n return True\n return False\n argspec_lambda.err_message = \"must match argspec ({0}) {{{1}}}\".format(args, kwargs)\n # as little sense as negating this makes, best to just be consistent.\n argspec_lambda.not_message = \"must not match argspec ({0}) {{{1}}}\".format(args, kwargs)\n return argspec_lambda", "response": "Validate a function based on the given argspec."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_first_date_for_group(start_date,group_type,n):\n current_date = start_date\n if group_type == 'monthly':\n current_year = start_date.year\n current_month = start_date.month\n for i in range(n-1):\n current_month-=1\n if current_month == 0:\n current_month = 12\n current_year -= 1\n first_date = datetime.datetime(current_year,current_month,1)\n elif group_type == 'weekly':\n first_date=start_date-datetime.timedelta(days = start_date.weekday()+(n-1)*7)\n elif group_type == 'daily':\n first_date = start_date-datetime.timedelta(days = n-1)\n first_date = datetime.datetime(first_date.year,first_date.month,first_date.day,0,0,0)\n return first_date", "response": "Get the first date for a group"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_snapshots(self,**kwargs):\n commits = self.repository.get_commits(**kwargs)\n snapshots = []\n for commit in commits:\n for key in ('committer_date','author_date'):\n commit[key] = datetime.datetime.fromtimestamp(commit[key+'_ts'])\n snapshot = GitSnapshot(commit)\n hasher = Hasher()\n hasher.add(snapshot.sha)\n snapshot.hash = hasher.digest.hexdigest()\n snapshot.project = self.project\n snapshot.pk = uuid.uuid4().hex\n snapshots.append(snapshot)\n return snapshots", "response": "Returns a list of snapshots in a given repository."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a diff between two lists of objects.", "response": "def 
diff_objects(objects_a,objects_b,key,comparator = None,with_unchanged = False):\n \"\"\"\n Returns a \"diff\" between two lists of objects.\n\n :param key: The key that identifies objects with identical location in each set,\n such as files with the same path or code objects with the same URL.\n :param comparator: Comparison functions that decides if two objects are identical.\n \"\"\"\n\n objects_by_key = {'a' :defaultdict(list),\n 'b' : defaultdict(list)}\n\n for name,objects in ('a',objects_a),('b',objects_b):\n d = objects_by_key[name]\n for obj in objects:\n d[key(obj)].append(obj)\n\n added_objects = [obj for key,objs in objects_by_key['b'].items()\n if key not in objects_by_key['a'] for obj in objs]\n\n deleted_objects = [obj for key,objs in objects_by_key['a'].items()\n if key not in objects_by_key['b'] for obj in objs]\n\n joint_keys = [key for key in objects_by_key['a']\n if key in objects_by_key['b']]\n\n modified_objects = []\n\n #we go through the keys that exist in both object sets\n for key in joint_keys:\n objects_a = objects_by_key['a'][key]\n objects_b = objects_by_key['b'][key]\n\n if len(objects_a) > 1 or len(objects_b) > 1:\n\n #this is an ambiguous situation: we have more than one object for the same\n #key, so we have to decide which ones have been added or not\n #we try to remove identical objects from the set\n\n objects_a_copy = objects_a[:]\n objects_b_copy = objects_b[:]\n\n #for the next step, we need a comparator\n if comparator:\n #we iterate through the list and try to find different objects...\n for obj_a in objects_a:\n for obj_b in objects_b_copy:\n if comparator(obj_a,obj_b) == 0:\n #these objects are identical, we remove them from both sets...\n objects_a_copy.remove(obj_a)\n objects_b_copy.remove(obj_b)\n break\n\n #here we cannot distinguish objects...\n if len(objects_b_copy) > len(objects_a_copy):\n #we arbitrarily mark the last objects in objects_b as added\n added_objects.extend(objects_b_copy[len(objects_a_copy):])\n elif 
len(objects_a_copy) > len(objects_b_copy):\n #we arbitrarily mark the last objects in objects_a as deleted\n deleted_objects.extend(objects_a_copy[len(objects_b_copy):])\n else:\n if comparator and comparator(objects_a[0],objects_b[0]) != 0:\n #these objects are different\n modified_objects.append(objects_a[0])\n\n result = {\n 'added' : added_objects,\n 'deleted' : deleted_objects,\n 'modified' : modified_objects,\n }\n\n if with_unchanged:\n unchanged_objects = [objects_b_by_key[key]\n for key in joint_keys\n if not objects_b_by_key[key] in modified_objects]\n result['unchanged'] = unchanged_objects\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a list of differences between two snapshots.", "response": "def diff_snapshots(self,snapshot_a,snapshot_b,save = True, diff=None):\n\n \"\"\"\n Returns a list of\n \"\"\"\n\n file_revisions_a = snapshot_a.file_revisions\n file_revisions_b = snapshot_b.file_revisions\n\n file_revisions_diff = diff_objects(file_revisions_a,\n file_revisions_b,\n file_revision_key,\n file_revision_comparator)\n\n #We just generate code objects and issues\n #for the modified file revisions, to save time when diffing.\n\n logger.debug(\"Generating list of modified file revisions...\")\n modified_file_revisions_by_path = {}\n for fr_type in ('modified','added','deleted'):\n for fr in file_revisions_diff[fr_type]:\n if not fr.path in modified_file_revisions_by_path:\n modified_file_revisions_by_path[fr.path] = fr\n\n logger.debug(\"Generating list of modified issues...\")\n\n modified_file_revisions_a = [fr for fr in file_revisions_a\n if fr.path in modified_file_revisions_by_path]\n modified_file_revisions_b = [fr for fr in file_revisions_b\n if fr.path in modified_file_revisions_by_path]\n\n if modified_file_revisions_a:\n #to do: check the file revisions chunk-wise to avoid DB query errors\n issue_occurrences_a = self.project.backend.filter(IssueOccurrence,\n {\n 'file_revision' : {'$in' 
: modified_file_revisions_a}\n },\n include = ('file_revision','issue'))\n else:\n issue_occurrences_a = []\n\n if modified_file_revisions_b:\n #to do: check the file revisions chunk-wise to avoid DB query errors\n issue_occurrences_b = self.project.backend.filter(IssueOccurrence,\n {\n 'file_revision' : {'$in' : modified_file_revisions_b}\n },\n include = ('file_revision','issue'))\n else:\n issue_occurrences_b = []\n\n logger.debug(\"Diffing issues (%d in A, %d in B)\" % (len(issue_occurrences_a),\n len(issue_occurrences_b)))\n\n\n issue_occurrences_diff = diff_objects(issue_occurrences_a,\n issue_occurrences_b,\n issue_occurrence_key,\n issue_occurrence_comparator)\n\n logger.debug(\"Diffing summary...\")\n summary_diff = self.diff_summaries(snapshot_a,snapshot_b)\n\n if diff is None:\n diff = Diff({'summary' : summary_diff,\n 'snapshot_a' : snapshot_a,\n 'project' : self.project,\n 'configuration' : self.project.configuration,\n 'snapshot_b' : snapshot_b})\n #we generate the hash value for this diff\n hasher = Hasher()\n hasher.add(diff.snapshot_a.hash)\n hasher.add(diff.snapshot_b.hash)\n diff.hash = hasher.digest.hexdigest()\n elif save:\n with self.project.backend.transaction():\n self.project.backend.filter(DiffFileRevision,{'diff' : diff}).delete()\n self.project.backend.filter(DiffIssueOccurrence,{'diff' : diff}).delete()\n if save:\n with self.project.backend.transaction():\n self.project.backend.save(diff)\n\n diff_file_revisions = []\n\n with self.project.backend.transaction():\n for key,file_revisions in file_revisions_diff.items():\n for file_revision in file_revisions:\n hasher = Hasher()\n hasher.add(file_revision.hash)\n hasher.add(diff.hash)\n hasher.add(key)\n diff_file_revision = DiffFileRevision({\n 'diff' : diff,\n 'file_revision' : file_revision,\n 'hash' : hasher.digest.hexdigest(),\n 'key' : key})\n if save:\n self.project.backend.save(diff_file_revision)\n diff_file_revisions.append(diff_file_revision)\n\n diff_issue_occurrences = []\n 
mapping = {'deleted' : 'fixed','added' : 'added'}\n with self.project.backend.transaction():\n for key,issue_occurrences in issue_occurrences_diff.items():\n if not key in mapping:\n continue\n for issue_occurrence in issue_occurrences:\n hasher = Hasher()\n hasher.add(issue_occurrence.hash)\n hasher.add(diff.hash)\n hasher.add(mapping[key])\n diff_issue_occurrence = DiffIssueOccurrence({\n 'diff' : diff,\n 'hash' : hasher.digest.hexdigest(),\n 'issue_occurrence' : issue_occurrence,\n 'key' : mapping[key]\n })\n if save:\n self.project.backend.save(diff_issue_occurrence)\n diff_issue_occurrences.append(diff_issue_occurrence)\n\n return diff,diff_file_revisions,diff_issue_occurrences"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef analyze(self,file_revisions, save_if_empty = False, snapshot=None):\n\n \"\"\"\n Handling dependencies:\n\n * First, genreate a list of file revisions for this snapshot\n * Then, check which ones of of them already exist\n * For the existing ones, check their dependencies\n * If any of the dependencies are outdated, add the dependent file revision to the analyze list\n\n How to handle hashes? 
Dependencies should be included in the hash sum.\n\n * Just load the files based on their SHA values\n * Check if dependencies match with the current set based on SHA values\n * If not, re-analyze the file revision\n * After analysis, calculate the hash value based on path, SHA and dependencies\n \"\"\"\n\n logger.debug(\"Analyzing code environment...\")\n\n if snapshot is None:\n snapshot = Snapshot()\n snapshot.configuration = self.project.configuration\n\n file_revisions_by_pk = dict([(fr.hash,fr) for fr in file_revisions])\n\n filtered_file_revisions = self.filter_file_revisions(file_revisions)\n filtered_file_revisions_by_pk = dict([(fr.hash,fr) for fr in filtered_file_revisions])\n\n excluded_file_revisions = [file_revisions_by_pk[pk]\n for pk in file_revisions_by_pk.keys()\n if not pk in filtered_file_revisions_by_pk\n ]\n\n logger.info(\"Excluding %d file revisions\" % len(excluded_file_revisions))\n\n file_revisions = filtered_file_revisions\n file_revisions_by_pk = filtered_file_revisions_by_pk\n\n max_file_revisions = 10000\n\n if len(file_revisions) > max_file_revisions:\n\n logger.warning(\"Too many file revisions (%d) in snapshot, truncating at %d\" %\n (len(file_revisions),max_file_revisions))\n file_revisions_by_pk = dict(sorted(file_revisions_by_pk.items(),\n key = lambda x:x[0])[:max_file_revisions])\n file_revisions = file_revisions_by_pk.values()\n\n i = 0\n chunk_size = 50\n existing_file_revisions = []\n file_revisions_by_pk_keys = file_revisions_by_pk.keys()\n\n #we only check 50 keys at a time and then incrementally save them\n while i < len(file_revisions_by_pk_keys):\n file_revisions_by_pk_chunk = file_revisions_by_pk_keys[i:i+chunk_size]\n if not file_revisions_by_pk_chunk:\n break\n existing_file_revisions.extend(list(self.project.backend.filter(FileRevision,{\n 'project' : self.project,\n 'hash' : {'$in' : file_revisions_by_pk_chunk}\n })))\n i+=chunk_size\n\n existing_file_revisions_by_pk = dict([(fr.hash,fr) for fr in 
existing_file_revisions])\n new_file_revisions = [file_revision for file_revision in file_revisions\n if not file_revision.hash in existing_file_revisions_by_pk]\n\n new_file_revisions = []\n\n for file_revision in file_revisions:\n if not file_revision.hash in existing_file_revisions_by_pk:\n file_revision.configuration = self.project.configuration\n new_file_revisions.append(file_revision)\n elif existing_file_revisions_by_pk[file_revision.hash].configuration != self.project.configuration:\n #we replace the pk and configuration values of the new file_revision object, so that\n #it will overwrite the old version...\n file_revision.pk = existing_file_revisions_by_pk[file_revision.hash].pk\n file_revision.configuration = self.project.configuration\n new_file_revisions.append(file_revision)\n\n file_revisions_dict = {}\n\n for file_revision in existing_file_revisions+new_file_revisions:\n file_revisions_dict[file_revision.path] = file_revision\n\n logger.info(\"Analyzing %d new file revisions (%d are already analyzed)\" % (\n len(new_file_revisions),\n len(existing_file_revisions)\n ))\n i = 0\n\n #We set the project information in the snapshot.\n snapshot.project = self.project\n snapshot.file_revisions = file_revisions_dict.values()\n self.env['snapshot'] = snapshot\n\n try:\n while i < len(new_file_revisions):\n j = i+10 if i+10 < len(new_file_revisions) else len(new_file_revisions)\n logger.info(\"Analyzing and saving: %d - %d (%d remaining)\" %\n (i, j, len(new_file_revisions) - i ))\n file_revisions_slice = new_file_revisions[i:j]\n analyzed_file_revisions = self.analyze_file_revisions(file_revisions_slice)\n logger.info(\"Annotating and saving file revisions...\")\n self.save_file_revisions(snapshot,analyzed_file_revisions)\n i+=10\n logger.info(\"Summarizing file revisions...\")\n snapshot.summary = self.summarize(file_revisions_dict.values())\n finally:\n del self.env['snapshot']\n\n snapshot.analyzed = True\n\n logger.info(\"Saving snapshot...\")\n\n with 
self.project.backend.transaction():\n self.project.backend.save(snapshot)\n\n logger.info(\"Done analyzing snapshot %s\" % snapshot.pk)\n\n return snapshot", "response": "Analyzes the code environment and returns a dictionary of file revision names and their values."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef save_file_revisions(self,snapshot,file_revisions):\n\n annotations = defaultdict(list)\n\n for file_revision in file_revisions:\n issues_results = {}\n\n for analyzer_name,results in file_revision.results.items():\n\n if 'issues' in results:\n issues_results[analyzer_name] = results['issues']\n del results['issues']\n if len(issues_results) > 1000:\n issues_results[analyzer_name] = [{\n 'code' : 'TooManyIssues',\n 'analyzer' : analyzer_name,\n }]\n\n with self.project.backend.transaction():\n self.project.backend.save(file_revision)\n\n def location_sorter(issue):\n if issue['location'] and issue['location'][0] and issue['location'][0][0]:\n return issue['location'][0][0][0]\n return 0\n\n with self.project.backend.transaction():\n for analyzer_name,issues in issues_results.items():\n grouped_issues = group_issues_by_fingerprint(issues)\n for issue_dict in grouped_issues:\n\n hasher = Hasher()\n hasher.add(analyzer_name)\n hasher.add(issue_dict['code'])\n hasher.add(issue_dict['fingerprint'])\n issue_dict['hash'] = hasher.digest.hexdigest()\n\n try:\n #we check if the issue already exists\n issue = self.project.backend.get(Issue,{'hash' : issue_dict['hash'],\n 'project' : self.project\n })\n except Issue.DoesNotExist:\n #if not, we create it\n d = issue_dict.copy()\n d['analyzer'] = analyzer_name\n if 'location' in d:\n del d['location']\n if 'occurrences' in d:\n del d['occurrences']\n issue = Issue(d)\n issue.project = self.project\n self.project.backend.save(issue)\n\n for occurrence in issue_dict['occurrences']:\n hasher = Hasher()\n hasher.add(file_revision.hash)\n 
hasher.add(issue.hash)\n hasher.add(occurrence.get('from_row'))\n hasher.add(occurrence.get('from_column'))\n hasher.add(occurrence.get('to_row'))\n hasher.add(occurrence.get('to_column'))\n hasher.add(occurrence.get('sequence'))\n occurrence['hash'] = hasher.digest.hexdigest()\n\n try:\n #we check if the occurrence already exists\n occurrence = self.project.backend.get(IssueOccurrence,{'hash' : occurrence['hash'],\n 'issue' : issue\n })\n except IssueOccurrence.DoesNotExist:\n #if not, we create it\n occurrence = IssueOccurrence(occurrence)\n occurrence.issue = issue\n occurrence.file_revision = file_revision\n self.project.backend.save(occurrence)\n annotations['occurrences'].append(occurrence)\n annotations['issues'].append(issue)\n\n return annotations", "response": "Save the file revisions to the database."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntries to find a directory with a. git repository", "response": "def find_git_repository(self, path):\n \"\"\"\n Tries to find a directory with a .git repository\n \"\"\"\n while path is not None:\n git_path = os.path.join(path,'.git')\n if os.path.exists(git_path) and os.path.isdir(git_path):\n return path\n path = os.path.dirname(path)\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef run_migrations_offline():\n\n project_path = get_project_path()\n project_config = get_project_config(project_path)\n backend = get_backend(project_path,project_config,initialize_db = False)\n url = str(backend.engine.url)\n with backend.transaction():\n context.configure(\n connection=backend.connection,\n url=url, target_metadata=backend.metadata,\n literal_binds=True)\n\n with context.begin_transaction():\n context.run_migrations()", "response": "Run migrations in offline mode."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nruns migrations in online 
mode.", "response": "def run_migrations_online():\n \"\"\"Run migrations in 'online' mode.\n\n In this scenario we need to create an Engine\n and associate a connection with the context.\n\n \"\"\"\n print(\"Running migrations online\")\n\n project_path = get_project_path()\n project_config = get_project_config(project_path)\n\n backend = get_backend(project_path,project_config,initialize_db = False)\n\n context.configure(\n connection=backend.connection,\n target_metadata=backend.metadata,\n )\n\n with context.begin_transaction():\n context.run_migrations()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_hash(node,fields = None,exclude = ['pk','_id'],target = 'pk'):\n\n hasher = Hasher()\n\n def add_to_hash(value):\n\n if isinstance(value,dict):\n if target in value:\n add_to_hash(value[target])\n else:\n attribute_list = []\n for key,v in sorted(value.items(),key = lambda x: x[0]):\n\n if (fields is not None and key not in fields) \\\n or (exclude is not None and key in exclude):\n continue\n add_to_hash(key)\n add_to_hash(v)\n elif isinstance(value,(tuple,list)) and value and isinstance(value[0],(dict,node_class)):\n for i,v in enumerate(value):\n hasher.add(i)\n add_to_hash(v)\n else:\n hasher.add(value)\n\n add_to_hash(node)\n\n return hasher.digest.hexdigest()", "response": "This function generates a unique hash for a given node in the syntax tree."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a message to the list of messages to send", "response": "def add_message(self, msg_id, location, msg):\n \"\"\"Client API to send a message\"\"\"\n\n self._messages.append((msg_id,location,msg))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_fingerprint_from_code(self,file_revision,location, extra_data=None):\n code = file_revision.get_file_content()\n if not isinstance(code,unicode):\n 
code = unicode(code,errors = 'ignore')\n lines = code.split(u\"\\n\")\n s = \"\"\n for l in location:\n ((from_row,from_column),(to_row,to_column)) = l\n if from_column is None:\n continue\n if from_row == to_row:\n s+=lines[from_row-1][from_column:to_column]\n else:\n if to_row < from_row:\n raise ValueError(\"from_row must be smaller than to_row\")\n s+=lines[from_row-1][from_column:]\n current_row = from_row+1\n while current_row < to_row:\n s+=lines[current_row-1]\n current_row+=1\n s+=lines[current_row-1][:to_column]\n\n hasher = Hasher()\n hasher.add(s)\n\n if extra_data is not None:\n hasher.add(extra_data)\n\n return hasher.digest.hexdigest()", "response": "This function generates a fingerprint from a series of code snippets."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ngroup issues by fingerprint.", "response": "def group_issues_by_fingerprint(issues):\n \"\"\"\n Groups issues by fingerprint. Grouping is done by issue code in addition.\n IMPORTANT: It is assumed that all issues come from the SAME analyzer.\n \"\"\"\n issues_by_fingerprint = defaultdict(list)\n for issue in issues:\n if not 'fingerprint' in issue:\n raise AttributeError(\"No fingerprint defined for issue with analyzer %s and code %s!\" %\n (issue.get('analyzer','(undefined)'),issue['code']))\n fp_code = \"%s:%s\" % (issue['fingerprint'],issue['code'])\n if fp_code in issues_by_fingerprint:\n grouped_issue = issues_by_fingerprint[fp_code]\n else:\n grouped_issue = issue.copy()\n grouped_issue['occurrences'] = []\n if 'location' in grouped_issue:\n del grouped_issue['location']\n issues_by_fingerprint[fp_code] = grouped_issue\n\n locations = issue.get('location',[])\n if locations:\n for i,start_stop in enumerate(locations):\n\n occurrence = {\n 'from_row' : None,\n 'to_row' : None,\n 'from_column' : None,\n 'to_column' : None,\n 'sequence' : i\n }\n\n grouped_issue['occurrences'].append(occurrence)\n\n if not isinstance(start_stop,(list,tuple)) 
or not len(start_stop) == 2:\n continue\n\n start,stop = start_stop\n\n if isinstance(start,(list,tuple)) and len(start) == 2:\n occurrence['from_row'] = start[0]\n occurrence['from_column'] = start[1]\n\n if isinstance(stop,(list,tuple)) and len(stop) == 2:\n occurrence['to_row'] = stop[0]\n occurrence['to_column'] = stop[1]\n\n grouped_issue['occurrences'] = sorted(grouped_issue['occurrences'],key = lambda x: (x['from_row'],x['from_column']))\n\n return issues_by_fingerprint.values()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nretrieving the issue classes for a given backend and enabled or disabled.", "response": "def get_issue_classes(self,backend = None,enabled = True,sort = None,**kwargs):\n \"\"\"\n Retrieves the issue classes for a given backend\n\n :param backend: A backend to use. If None, the default backend will be used\n :param enabled: Whether to retrieve enabled or disabled issue classes.\n Passing `None` will retrieve all issue classes.\n \"\"\"\n if backend is None:\n backend = self.backend\n\n query = {'project_issue_classes.project' : self}\n if enabled is not None:\n query['project_issue_classes.enabled'] = enabled\n\n issue_classes = backend.filter(self.IssueClass,query,\n **kwargs)\n\n\n if sort is not None:\n issue_classes = issue_classes.sort(sort)\n\n return issue_classes"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nvalidates that the input text contains a valid symbol.", "response": "def validate_symbol(text):\n \"\"\"\n Verifying whether symbol(s) included in the ``text`` or not.\n\n :param str text: Input text.\n :raises pathvalidate.InvalidCharError:\n If symbol(s) included in the ``text``.\n \"\"\"\n\n match_list = __RE_SYMBOL.findall(preprocess(text))\n if match_list:\n raise InvalidCharError(\"invalid symbols found: {}\".format(match_list))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
replace_symbol(text, replacement_text=\"\", is_replace_consecutive_chars=False, is_strip=False):\n\n try:\n new_text = __RE_SYMBOL.sub(replacement_text, preprocess(text))\n except (TypeError, AttributeError):\n raise TypeError(\"text must be a string\")\n\n if not replacement_text:\n return new_text\n\n if is_replace_consecutive_chars:\n new_text = re.sub(\"{}+\".format(re.escape(replacement_text)), replacement_text, new_text)\n\n if is_strip:\n new_text = new_text.strip(replacement_text)\n\n return new_text", "response": "Replaces all of the symbols in the text."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate_filename(filename, platform=None, min_len=1, max_len=_DEFAULT_MAX_FILENAME_LEN):\n\n FileNameSanitizer(platform=platform, min_len=min_len, max_len=max_len).validate(filename)", "response": "Validates that the filename is a valid file name or not."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nvalidates that the file_path is a valid file path.", "response": "def validate_filepath(file_path, platform=None, min_len=1, max_len=None):\n \"\"\"Verifying whether the ``file_path`` is a valid file path or not.\n\n Args:\n file_path (str):\n File path to validate.\n platform (str, optional):\n .. include:: platform.txt\n min_len (int, optional):\n Minimum length of the ``file_path``. The value must be greater or equal to one.\n Defaults to ``1``.\n max_len (int, optional):\n Maximum length of the ``file_path`` length. 
If the value is |None|,\n in the default, automatically determined by the ``platform``:\n\n - ``Linux``: 4096\n - ``macOS``: 1024\n - ``Windows``: 260\n\n Raises:\n NullNameError:\n If the ``file_path`` is empty.\n InvalidCharError:\n If the ``file_path`` includes invalid char(s):\n |invalid_file_path_chars|.\n The following characters are also invalid for Windows platform:\n |invalid_win_file_path_chars|\n InvalidLengthError:\n If the ``file_path`` is longer than ``max_len`` characters.\n\n Example:\n :ref:`example-validate-file-path`\n\n See Also:\n `Naming Files, Paths, and Namespaces (Windows)\n `__\n \"\"\"\n\n FilePathSanitizer(platform=platform, min_len=min_len, max_len=max_len).validate(file_path)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sanitize_filename(\n filename, replacement_text=\"\", platform=None, max_len=_DEFAULT_MAX_FILENAME_LEN\n):\n \"\"\"Make a valid filename from a string.\n\n To make a valid filename the function does:\n\n - Replace invalid characters as file names included in the ``filename``\n with the ``replacement_text``. Invalid characters are:\n\n - unprintable characters\n - |invalid_filename_chars|\n - for Windows only: |invalid_win_filename_chars|\n\n - Append underscore (``\"_\"``) at the tail of the name if sanitized name\n is one of the reserved names by the operating system.\n\n Args:\n filename (str or PathLike object): Filename to sanitize.\n replacement_text (str, optional):\n Replacement text for invalid characters. Defaults to ``\"\"``.\n platform (str, optional):\n .. include:: platform.txt\n max_len (int, optional):\n The upper limit of the ``filename`` length. 
Truncate the name length if\n the ``filename`` length exceeds this value.\n Defaults to ``255``.\n\n Returns:\n Same type as the ``filename`` (str or PathLike object):\n Sanitized filename.\n\n Raises:\n ValueError:\n If the ``filename`` is an invalid filename.\n\n Example:\n :ref:`example-sanitize-filename`\n \"\"\"\n\n return FileNameSanitizer(platform=platform, max_len=max_len).sanitize(\n filename, replacement_text\n )", "response": "Make a valid filename from a string."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsanitizes a file path.", "response": "def sanitize_filepath(file_path, replacement_text=\"\", platform=None, max_len=None):\n \"\"\"Make a valid file path from a string.\n\n Replace invalid characters for a file path within the ``file_path``\n with the ``replacement_text``.\n Invalid characters are as followings:\n |invalid_file_path_chars|, |invalid_win_file_path_chars| (and non printable characters).\n\n Args:\n file_path (str or PathLike object):\n File path to sanitize.\n replacement_text (str, optional):\n Replacement text for invalid characters.\n Defaults to ``\"\"``.\n platform (str, optional):\n .. include:: platform.txt\n max_len (int, optional):\n The upper limit of the ``file_path`` length. Truncate the name if the ``file_path``\n length exceedd this value. 
If the value is |None|, the default value automatically\n determined by the execution platform:\n\n - ``Linux``: 4096\n - ``macOS``: 1024\n - ``Windows``: 260\n\n Returns:\n Same type as the argument (str or PathLike object):\n Sanitized filepath.\n\n Raises:\n ValueError:\n If the ``file_path`` is an invalid file path.\n\n Example:\n :ref:`example-sanitize-file-path`\n \"\"\"\n\n return FilePathSanitizer(platform=platform, max_len=max_len).sanitize(\n file_path, replacement_text\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nvalidates that label is a valid LTSV label.", "response": "def validate_ltsv_label(label):\n \"\"\"\n Verifying whether ``label`` is a valid\n `Labeled Tab-separated Values (LTSV) `__ label or not.\n\n :param str label: Label to validate.\n :raises pathvalidate.NullNameError: If the ``label`` is empty.\n :raises pathvalidate.InvalidCharError:\n If invalid character(s) found in the ``label`` for a LTSV format label.\n \"\"\"\n\n validate_null_string(label, error_msg=\"label is empty\")\n\n match_list = __RE_INVALID_LTSV_LABEL.findall(preprocess(label))\n if match_list:\n raise InvalidCharError(\n \"invalid character found for a LTSV format label: {}\".format(match_list)\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsanitizing a LTSV label.", "response": "def sanitize_ltsv_label(label, replacement_text=\"\"):\n \"\"\"\n Replace all of the symbols in text.\n\n :param str label: Input text.\n :param str replacement_text: Replacement text.\n :return: A replacement string.\n :rtype: str\n \"\"\"\n\n validate_null_string(label, error_msg=\"label is empty\")\n\n return __RE_INVALID_LTSV_LABEL.sub(replacement_text, preprocess(label))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the domain from the url", "response": "def domain(self):\n \"\"\"\n Return domain from the url\n \"\"\"\n remove_pac = self.cleanup.replace(\n 
\"https://\", \"\").replace(\"http://\", \"\").replace(\"www.\", \"\")\n try:\n return remove_pac.split('/')[0]\n except:\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef to_xml(self):\n root = ET.Element(\"entry\")\n for x in self.xml_tags:\n if getattr(self, x):\n if x in ['episodes', 'scores', 'status', 'dates', 'storage', 'rewatched', 'flags', 'tags']:\n if x == 'episodes':\n if self.episodes.current:\n temp = ET.SubElement(root, 'episode')\n temp.text = str(self.episodes.current)\n elif x == 'scores':\n if self.scores.user:\n temp = ET.SubElement(root, 'score')\n temp.text = str(self.scores.user)\n elif x == 'status':\n if self.status.user:\n temp = ET.SubElement(root, 'status')\n temp.text = str(self.status.user)\n elif x == 'dates':\n if self.dates.user.start:\n start = ET.SubElement(root, 'date_start')\n start.text = format_date(self.dates.user.start)\n if self.dates.user.end:\n end = ET.SubElement(root, 'date_finish')\n end.text = format_date(self.dates.user.end)\n elif x == 'storage':\n if self.storage.type:\n stype = ET.SubElement(root, 'storage_type')\n stype.text = str(self.storage.type)\n if self.storage.value:\n sval = ET.SubElement(root, 'storage_value')\n sval.text = str(self.storage.value)\n elif x == 'rewatched':\n if self.rewatched.times:\n rt = ET.SubElement(root, 'times_rewatched')\n rt.text = str(self.rewatched.times)\n if self.rewatched.value:\n rv = ET.SubElement(root, 'rewatch_value')\n rv.text = str(self.rewatched.value)\n elif x == 'flags':\n if self.flags.discussion:\n df = ET.SubElement(root, 'enable_discussion')\n df.text = '1' if self.flags.discussion else '0'\n if self.flags.rewatching:\n rf = ET.SubElement(root, 'enable_rewatching')\n rf.text = '1' if self.flags.rewatching else '0'\n else:\n if self.tags:\n temp = ET.SubElement(root, 'tags')\n temp.text = ','.join(self.tags)\n else:\n temp = ET.SubElement(root, x)\n temp.text = str(getattr(self, x))\n 
return '{}'.format(ET.tostring(root, encoding=\"unicode\"))", "response": "Convert data to XML String."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef search(self, term):\n r = requests.get(self.apiurl + \"/users\", params={\"filter[name]\": term}, headers=self.header)\n\n if r.status_code != 200:\n raise ServerError\n\n jsd = r.json()\n\n if jsd['meta']['count']:\n return SearchWrapper(jsd['data'], jsd['links']['next'] if 'next' in jsd['links'] else None, self.header)\n else:\n return None", "response": "Search for a user by name."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a user. Please review the attributes required. You need only provide the attributes.", "response": "def create(self, data):\n \"\"\"\n Create a user. Please review the attributes required. You need only provide the attributes.\n\n :param data: A dictionary of the required attributes\n :return: Dictionary returned by server or a ServerError exception\n :rtype: Dictionary or Exception\n \"\"\"\n final_dict = {\"data\": {\"type\": \"users\", \"attributes\": data}}\n r = requests.post(self.apiurl + \"/users\", json=final_dict, headers=self.header)\n\n if r.status_code != 200:\n raise ServerError\n\n return r.json()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get(self, uid):\n r = requests.get(self.apiurl + \"/users/{}\".format(uid), headers=self.header)\n\n if r.status_code != 200:\n raise ServerError\n\n jsd = r.json()\n\n if jsd['data']:\n return jsd['data']\n else:\n return None", "response": "Get a user s information by their id."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update(self, uid, data, token):\n final_dict = {\"data\": {\"id\": uid, \"type\": \"users\", \"attributes\": data}}\n final_headers = self.header\n 
final_headers['Authorization'] = \"Bearer {}\".format(token)\n r = requests.patch(self.apiurl + \"/users/{}\".format(uid), json=final_dict, headers=final_headers)\n\n if r.status_code != 200:\n raise ServerError\n\n return True", "response": "Update a user s data attributes. Requires an auth token."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef character(self, term, page = 1, perpage = 3):\n query_string = \"\"\"\\\n query ($query: String, $page: Int, $perpage: Int) {\n Page (page: $page, perPage: $perpage) {\n pageInfo {\n total\n currentPage\n lastPage\n hasNextPage\n }\n characters (search: $query) {\n id\n name {\n first\n last\n }\n image {\n large\n }\n }\n }\n }\n \"\"\"\n vars = {\"query\": term, \"page\": page, \"perpage\": perpage}\n r = requests.post(self.settings['apiurl'],\n headers=self.settings['header'],\n json={'query': query_string, 'variables': vars})\n jsd = r.text\n\n try:\n jsd = json.loads(jsd)\n except ValueError:\n return None\n else:\n return jsd", "response": "Search for a character by term."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef search(term, lang=None):\n r = requests.get(\n \"http://anisearch.outrance.pl/index.php\",\n params={\n \"task\": \"search\",\n \"query\": term,\n \"langs\": \"ja,x-jat,en\" if lang is None else ','.join(lang)\n }\n )\n\n if r.status_code != 200:\n raise ServerError\n\n tree = ET.fromstring(r.text)\n root = tree.getroot()\n\n for item in root.iter(\"anime\"):\n\n # Parse XML http://wiki.anidb.net/w/User:Eloyard/anititles_dump\n results[aid]={}\n for title in item.iter('title'):\n if title.attrib['type'] in ['official', 'main']:\n results[aid][title.attrib['xml:lang']] = title.text\n \n return results", "response": "This function will download and parse an XML dump of the AID search provided by Eloyard and return the AID search results."} {"SOURCE": "codesearchnet", 
"instruction": "Implement a Python 3 function for\nsending a request to the API to return results related to Visual Novels. :param str stype: What are we searching for? One of: vn, release, producer, character, votelist, vnlist, wishlist :param flags: See the D11 docs. A comma separated list of flags for what data to return. Can be list or str. :param str filters: A string with the one filter to search by (apparently you only get one). This is kind of special. You need to pass them in the form \"\" for strings or for numbers. This is counter intuitive. Also, per the docs, = doesn't do what we think, use >, >= or < and <=. I will attempt to properly format this if not done so when called. :param dict options: A dictionary of options to customize the search by. Optional, defaults to None. :return dict: A dictionary containing a pages and data key. data contains a list of dictionaries with data on your results. If pages is true, you can call this command again with the same parameters and pass a page option to get more data. Otherwise no further results exist for this query. :raises ServerError: Raises a ServerError if an error is returned.", "response": "def get(self, stype, flags, filters, options=None):\n \"\"\"\n Send a request to the API to return results related to Visual Novels.\n\n :param str stype: What are we searching for? One of: vn, release, producer, character, votelist, vnlist, wishlist\n :param flags: See the D11 docs. A comma separated list of flags for what data to return. Can be list or str.\n :param str filters: A string with the one filter to search by (apparently you only get one).\n This is kind of special. You need to pass them in the form \"\"\n for strings or for numbers. This is counter intuitive.\n Also, per the docs, = doesn't do what we think, use >, >= or < and <=.\n I will attempt to properly format this if not done so when called.\n :param dict options: A dictionary of options to customize the search by. 
Optional, defaults to None.\n :return dict: A dictionary containing a pages and data key. data contains a list of dictionaries with data on your results. If pages is true, you can call this command again with the same parameters and pass a page option to get more data. Otherwise no further results exist for this query.\n :raises ServerError: Raises a ServerError if an error is returned.\n \"\"\"\n if not isinstance(flags, str):\n if isinstance(flags, list):\n finflags = \",\".join(flags)\n else:\n raise SyntaxError(\"Flags should be a list or comma separated string\")\n else:\n finflags = flags\n\n if not isinstance(filters, str):\n raise SyntaxError(\"Filters needs to be a string in the format FilterValue. The simplest form is search=\\\"\\\".\")\n\n if stype not in self.stypes:\n raise SyntaxError(\"{} not a valid Search type.\".format(stype))\n\n if '\"' not in filters or \"'\" not in filters:\n newfilters = self.helperpat.split(filters)\n newfilters = [x.strip() for x in newfilters]\n newfilters[1] = '\"' + newfilters[1] + '\"'\n op = self.helperpat.search(filters)\n newfilters = op.group(0).join(newfilters)\n command = '{} {} ({}){}'.format(stype, finflags, newfilters,\n ' ' + ujson.dumps(options) if options is not None else '')\n else:\n command = '{} {} ({}){}'.format(stype, finflags, filters,\n ' ' + ujson.dumps(options) if options is not None else '')\n\n data = self.connection.send_command('get', command)\n\n if 'id' in data:\n raise ServerError(data['msg'], data['id'])\n else:\n return {'pages': data.get('more', default=False), 'data': data['items']}"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsending a request to the API to modify something in the database if logged in.", "response": "def set(self, stype, sid, fields):\n \"\"\"\n Send a request to the API to modify something in the database if logged in.\n\n :param str stype: What are we modifying? 
One of: votelist, vnlist, wishlist\n :param int sid: The ID that we're modifying.\n :param dict fields: A dictionary of the fields and their values\n :raises ServerError: Raises a ServerError if an error is returned\n :return bool: True if successful, error otherwise\n \"\"\"\n if stype not in ['votelist', 'vnlist', 'wishlist']:\n raise SyntaxError(\"{} is not a valid type for set. Should be one of: votelist, vnlist or wishlist.\".format(stype))\n\n command = \"{} {} {}\".format(stype, id, ujson.dumps(fields))\n data = self.connection.send_command('set', command)\n\n if 'id' in data:\n raise ServerError(data['msg'], data['id'])\n else:\n return True"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef review(self, item_id, html = True):\n query_string = \"\"\"\\\n query ($id: Int, $html: Boolean) {\n Review (id: $id) {\n summary\n body(asHtml: $html)\n score\n rating\n ratingAmount\n createdAt\n updatedAt\n private\n media {\n id\n }\n user {\n id\n name\n avatar {\n large\n }\n }\n }\n }\n \"\"\"\n vars = {\"id\": item_id, \"html\": html}\n r = requests.post(self.settings['apiurl'],\n headers=self.settings['header'],\n json={'query': query_string, 'variables': vars})\n jsd = r.text\n\n try:\n jsd = json.loads(jsd)\n except ValueError:\n return None\n else:\n return jsd", "response": "This method returns the review information for the item with the given ID."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get(self, external_site: str, external_id: int):\n r = requests.get(self.apiurl + \"/mappings\", params={\"filter[externalSite]\": external_site, \"filter[externalId]\": external_id}, headers=self.header)\n\n if r.status_code != 200:\n raise ServerError\n\n jsd = r.json()\n\n if len(jsd['data']) < 1:\n return None\n\n r = requests.get(jsd['data'][0]['relationships']['item']['links']['related'], headers=self.header)\n\n if r.status_code != 200:\n return 
jsd\n else:\n return r.json()", "response": "Get a kitsu mapping by external site ID"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef authenticate(self, username, password):\n r = requests.post(self.apiurl + \"/token\",\n params={\"grant_type\": \"password\", \"username\": username, \"password\": password,\n \"client_id\": self.cid, \"client_secret\": self.csecret})\n\n if r.status_code != 200:\n raise ServerError\n\n jsd = r.json()\n\n if self.remember:\n self.token_storage[username] = {'token': jsd['access_token'], 'refresh': jsd['refresh_token'],\n 'expiration': int(jsd['created_at']) + int(jsd['expires_in'])}\n\n return jsd['access_token'], int(jsd['expires_in']) + int(jsd['created_at']), jsd['refresh_token']", "response": "Obtain an oauth token. Pass username and password."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef refresh(self, refresh_token):\n r = requests.post(self.apiurl + \"/token\", params={\"grant_type\": \"refresh_token\", \"client_id\": self.cid,\n \"client_secret\": self.csecret,\n \"refresh_token\": refresh_token})\n\n if r.status_code != 200:\n raise ServerError\n\n jsd = r.json()\n\n return jsd['access_token'], int(jsd['expires_in']) + int(jsd['created_at'])", "response": "Renew an oauth token given an appropriate refresh token."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget a user s token from the KitsuAuth token storage.", "response": "def get(self, username):\n \"\"\"\n If using the remember option and KitsuAuth is storing your tokens, this function will retrieve one.\n\n :param username: The username whose token we are retrieving\n :return: A token, NotFound or NotSaving error\n \"\"\"\n if not self.remember:\n raise NotSaving\n\n if username not in self.token_storage:\n raise UserNotFound\n\n if self.token_storage[username]['expiration'] < time.time():\n new_token = 
self.refresh(self.token_storage[username]['refresh'])\n self.token_storage[username]['token'] = new_token[0]\n self.token_storage[username]['expiration'] = new_token[1]\n return new_token[0]\n else:\n return self.token_storage[username]['token']"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndownloading the data for the given dump.", "response": "def download(which, destination=None):\n \"\"\"\n I realize that the download for the dumps is going to take awhile.\n Given that, I've decided to approach this using threads.\n When you call this method, it will launch a thread to download the data.\n By default, the dump is dropped into the current working directory.\n If the directory given doesn't exist, we'll try to make it.\n Don't use '..' in the path as this confuses makedirs.\n\n :param int which: 0 for dat (txt), 1 for xml\n :param str destination: a file path to save to, defaults to cwd\n \"\"\"\n if destination:\n if not os.path.exists(destination):\n os.makedirs(destination)\n\n pthread = threading.Thread(\n target=save,\n args=(\n self.urls[which],\n os.path.join(destination, self.urls[which])\n )\n )\n pthread.start()\n return pthread"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the manga information by id.", "response": "def get(self, aid):\n \"\"\"\n Get manga information by id.\n\n :param int aid: ID of the manga.\n :return: Dictionary or None (for not found)\n :rtype: Dictionary or None\n :raises: :class:`Pymoe.errors.ServerError`\n \"\"\"\n r = requests.get(self.apiurl + \"/manga/{}\".format(aid), headers=self.header)\n\n if r.status_code != 200:\n if r.status_code == 404:\n return None\n else:\n raise ServerError\n\n return r.json()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef active(self):\n projects = []\n r = requests.get(self.api,\n params={'action': 'query', 'list': 'categorymembers', 'cmpageid': self.active_id,\n 'cmtype': 
'page', 'cmlimit': '500', 'format': 'json'},\n headers=self.header)\n if r.status_code == 200:\n jsd = r.json()\n projects.append([(x['title'], x['pageid']) for x in jsd['query']['categorymembers']])\n if 'query-continue' in jsd:\n while True:\n r = requests.get(self.api,\n params={'action': 'query', 'list': 'categorymembers',\n 'cmpageid': self.active_id, 'cmtype': 'page', 'cmlimit': '500',\n 'cmcontinue': jsd['query-continue']['categorymembers']['cmcontinue'],\n 'format': 'json'},\n headers=self.header)\n if r.status_code == 200:\n jsd = r.json()\n projects.append([(x['title'], x['pageid']) for x in jsd['query']['categorymembers']])\n if 'query-continue' not in jsd:\n break\n else:\n break\n return projects[0]", "response": "Get a list of active projects."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting a list of light novels under a certain language.", "response": "def light_novels(self, language=\"English\"):\n \"\"\"\n Get a list of light novels under a certain language.\n\n :param str language: Defaults to English. 
Replace with whatever language you want to query.\n :return list: A list of tuples containing a title and pageid element in that order.\n \"\"\"\n projects = []\n r = requests.get(self.api,\n params={'action': 'query', 'list': 'categorymembers',\n 'cmtitle': 'Category:Light_novel_({})'.format(language.replace(\" \", \"_\")),\n 'cmtype': 'page', 'cmlimit': '500', 'format': 'json'},\n headers=self.header)\n if r.status_code == 200:\n jsd = r.json()\n projects.append([(x['title'], x['pageid']) for x in jsd['query']['categorymembers']])\n if 'query-continue' in jsd:\n while True:\n r = requests.get(self.api,\n params={'action': 'query', 'list': 'categorymembers',\n 'cmtitle': 'Category:Light_novel_({})'.format(language.replace(\" \", \"_\")),\n 'cmtype': 'page', 'cmlimit': '500',\n 'cmcontinue': jsd['query-continue']['categorymembers']['cmcontinue'],\n 'format': 'json'},\n headers=self.header)\n if r.status_code == 200:\n jsd = r.json()\n projects.append([(x['title'], x['pageid']) for x in jsd['query']['categorymembers']])\n if 'query-continue' not in jsd:\n break\n else:\n break\n return projects[0]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef chapters(self, title):\n r = requests.get(\"https://www.baka-tsuki.org/project/index.php?title={}\".format(title.replace(\" \", \"_\")),\n headers=self.header)\n if r.status_code != 200:\n raise requests.HTTPError(\"Not Found\")\n else:\n parsed = soup(r.text, 'html.parser')\n dd = parsed.find_all(\"a\")\n volumes = []\n for link in dd:\n if 'class' in link.attrs:\n if 'image' in link.get('class'):\n continue\n if 'href' in link.attrs:\n if re.search(self.chapter_regex, link.get('href')) is not None and not link.get('href').startswith('#'):\n volumes.append(link)\n seplist = OrderedDict()\n for item in volumes:\n if 'title' in item.attrs:\n result = re.search(self.separate_regex, item.get('title').lower())\n else:\n result = re.search(self.separate_regex, 
item.text.lower())\n\n if result and result.groups():\n if result.group('chapter').lstrip('0') in seplist:\n seplist[result.group('chapter').lstrip('0')].append([item.get('href'),\n item.get('title') if 'title' in item.attrs else item.text])\n else:\n seplist[result.group('chapter').lstrip('0')] = [[item.get('href'),\n item.get('title') if 'title' in item.attrs else item.text]]\n return seplist", "response": "Get a list of chapters for a visual novel."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a cover image given a page id.", "response": "def cover(self, pageid):\n \"\"\"\n Get a cover image given a page id.\n\n :param str pageid: The pageid for the light novel you want a cover image for\n :return str: the image url\n \"\"\"\n r = requests.get(self.api,\n params={'action': 'query', 'prop': 'pageimages', 'pageids': pageid, 'format': 'json'},\n headers=self.header)\n jsd = r.json()\n image = \"File:\" + jsd['query']['pages'][str(pageid)]['pageimage']\n r = requests.get(self.api,\n params={'action': 'query', 'prop': 'imageinfo', 'iiprop': 'url', 'titles': image,\n 'format': 'json'},\n headers=self.header)\n jsd = r.json()\n return jsd['query']['pages'][list(jsd['query']['pages'].keys())[0]]['imageinfo'][0]['url']"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_text(self, title):\n r = requests.get(self.api,\n params={'action': 'parse', 'page': title, 'format': 'json'},\n headers=self.header)\n jsd = r.json()\n return jsd['parse']['text']['*']", "response": "This will grab the html content of the chapter given by url."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd an anime to a user s list.", "response": "def _anime_add(self, data):\n \"\"\"\n Adds an anime to a user's list.\n\n :param data: A :class:`Pymoe.Mal.Objects.Anime` object with the anime data\n :raises: SyntaxError on invalid data type\n :raises: ServerError on failure 
to add\n :rtype: Bool\n :return: True on success\n \"\"\"\n if isinstance(data, Anime):\n xmlstr = data.to_xml()\n r = requests.get(self.apiurl + \"animelist/add/{}.xml\".format(data.id),\n params={'data': xmlstr},\n auth=HTTPBasicAuth(self._username, self._password),\n headers=self.header)\n if r.status_code != 201:\n raise ServerError(r.text, r.status_code)\n return True\n else:\n raise SyntaxError(\n \"Invalid type: data should be a Pymoe.Mal.Objects.Anime object. Got a {}\".format(type(data)))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef user(self, name):\n anime_data = requests.get(self.apiusers, params={'u': name, 'status': 'all', 'type': 'anime'},\n headers=self.header)\n\n if anime_data.status_code != 200:\n raise ConnectionError(\n \"Anime Data Request failed. Please Open a bug on https://github.com/ccubed/Pymoe and include the following data.\\nStatus Code: {}\\n\\nText:{}\".format(\n anime_data.status_code, anime_data.text))\n\n manga_data = requests.get(self.apiusers, params={'u': name, 'status': 'all', 'type': 'manga'},\n headers=self.header)\n\n if manga_data.status_code != 200:\n raise ConnectionError(\n \"Manga Data Request failed. 
Please Open a bug on https://github.com/ccubed/Pymoe and include the following data.\\nStatus Code: {}\\n\\nText:{}\".format(\n manga_data.status_code, manga_data.text))\n\n root = ET.fromstring(anime_data.text)\n uid = root.find('myinfo').find('user_id').text\n uname = root.find('myinfo').find('user_name').text\n anime_object_list = self.parse_anime_data(anime_data.text)\n manga_object_list = self.parse_manga_data(manga_data.text)\n return User(uid=uid,\n name=uname,\n anime_list=NT_USER_ANIME(\n watching=[x for x in anime_object_list['data'] if x.status.user == \"Currently Watching\"],\n completed=[x for x in anime_object_list['data'] if x.status.user == \"Completed\"],\n held=[x for x in anime_object_list['data'] if x.status.user == \"On Hold\"],\n dropped=[x for x in anime_object_list['data'] if x.status.user == \"Dropped\"],\n planned=[x for x in anime_object_list['data'] if x.status.user == \"Plan to Watch\"]\n ),\n anime_days=anime_object_list['days'],\n manga_list=NT_USER_MANGA(\n reading=[x for x in manga_object_list['data'] if x.status.user == \"Currently Reading\"],\n completed=[x for x in manga_object_list['data'] if x.status.user == \"Completed\"],\n held=[x for x in manga_object_list['data'] if x.status.user == \"On Hold\"],\n dropped=[x for x in manga_object_list['data'] if x.status.user == \"Dropped\"],\n planned=[x for x in manga_object_list['data'] if x.status.user == \"Plan to Read\"]\n ),\n manga_days=manga_object_list['days'])", "response": "Get a user s anime list and details. 
This returns an encapsulated data type."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlogs in as a user.", "response": "def login(self, username, password):\n \"\"\"\n This handles login logic instead of stuffing all that in the __init__.\n\n :param username: The username to log in as or None\n :param password: The password for that user or None\n :return: Nothing\n :raises: :class:`Pymoe.errors.UserLoginFailed` - Didn't respond with Ok\n :raises: :class:`Pymoe.errors.GeneralLoginError` - For some reason, we were already logged in, tried to login again and it failed. This probably isn't bad.\n \"\"\"\n finvars = self.clientvars\n if username and password:\n finvars['username'] = username\n finvars['password'] = password\n self.loggedin = True\n ret = self.send_command('login', ujson.dumps(finvars))\n if not isinstance(ret, str): # should just be 'Ok'\n if self.loggedin:\n self.loggedin = False\n raise UserLoginFailed(ret['msg'])\n else:\n raise GeneralLoginError(ret['msg'])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsending a command to VNDB and then get the result.", "response": "def send_command(self, command, args=None):\n \"\"\"\n Send a command to VNDB and then get the result.\n\n :param command: What command are we sending\n :param args: What are the json args for this command\n :return: Servers Response\n :rtype: Dictionary (See D11 docs on VNDB)\n \"\"\"\n if args:\n if isinstance(args, str):\n final_command = command + ' ' + args + '\\x04'\n else:\n # We just let ujson propogate the error here if it can't parse the arguments\n final_command = command + ' ' + ujson.dumps(args) + '\\x04'\n else:\n final_command = command + '\\x04'\n self.sslwrap.sendall(final_command.encode('utf-8'))\n return self._recv_data()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets a user s list of library entries.", "response": "def get(self, uid, 
filters=None):\n \"\"\"\n Get a user's list of library entries. While individual entries on this\n list don't show what type of entry it is, you can use the filters provided\n by the Kitsu API to only select which ones you want\n\n :param uid: str: User ID to get library entries for\n :param filters: dict: Dictionary of filters for the library\n :return: Results or ServerError\n :rtype: SearchWrapper or Exception\n \"\"\"\n filters = self.__format_filters(filters)\n\n r = requests.get(self.apiurl + \"/users/{}/library-entries\".format(uid), headers=self.header, params=filters)\n\n if r.status_code != 200:\n raise ServerError\n\n jsd = r.json()\n\n if jsd['meta']['count']:\n return SearchWrapper(jsd['data'], jsd['links']['next'] if 'next' in jsd['links'] else None, self.header)\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create(self, user_id, media_id, item_type, token, data):\n final_dict = {\n \"data\": {\n \"type\": \"libraryEntries\",\n \"attributes\": data,\n \"relationships\":{\n \"user\":{\n \"data\":{\n \"id\": user_id,\n \"type\": \"users\"\n }\n },\n \"media\":{\n \"data\":{\n \"id\": media_id,\n \"type\": item_type\n }\n }\n }\n }\n }\n final_headers = self.header\n final_headers['Authorization'] = \"Bearer {}\".format(token)\n\n r = requests.post(self.apiurl + \"/library-entries\", json=final_dict, headers=final_headers)\n\n if r.status_code != 201:\n raise ConnectionError(r.text)\n\n jsd = r.json()\n\n return jsd['data']['id']", "response": "Create a new library entry for a user. 
data should be just the attributes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupdating a given Library Entry.", "response": "def update(self, eid, data, token):\n \"\"\"\n Update a given Library Entry.\n\n :param eid str: Entry ID\n :param data dict: Attributes\n :param token str: OAuth token\n :return: True or ServerError\n :rtype: Bool or Exception\n \"\"\"\n final_dict = {\"data\": {\"id\": eid, \"type\": \"libraryEntries\", \"attributes\": data}}\n final_headers = self.header\n final_headers['Authorization'] = \"Bearer {}\".format(token)\n\n r = requests.patch(self.apiurl + \"/library-entries/{}\".format(eid), json=final_dict, headers=final_headers)\n\n if r.status_code != 200:\n raise ConnectionError(r.text)\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete a library entry.", "response": "def delete(self, eid, token):\n \"\"\"\n Delete a library entry.\n\n :param eid str: Entry ID\n :param token str: OAuth Token\n :return: True or ServerError\n :rtype: Bool or Exception\n \"\"\"\n final_headers = self.header\n final_headers['Authorization'] = \"Bearer {}\".format(token)\n\n r = requests.delete(self.apiurl + \"/library-entries/{}\".format(eid), headers=final_headers)\n\n if r.status_code != 204:\n print(r.status_code)\n raise ConnectionError(r.text)\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nformat the filters for the api query to filter [ filter - name ]", "response": "def __format_filters(filters):\n \"\"\"\n Format filters for the api query (to filter[])\n\n :param filters: dict: can be None, filters for the query\n :return: the formatted filters, or None\n \"\"\"\n if filters is not None:\n for k in filters:\n if 'filter[' not in k:\n filters['filter[{}]'.format(k)] = filters.pop(k)\n return filters"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
convert_to_record(func):\n @functools.wraps(func)\n def wrapper(self, *args, **kwargs):\n result = func(self, *args, **kwargs)\n if result is not None:\n if isinstance(result, dict):\n return _record(result)\n return (_record(i) for i in result)\n\n return result\n\n return wrapper", "response": "Wrap mongodb record to a dict record with default value None\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting single record s values to str", "response": "def to_one_str(cls, value, *args, **kwargs):\n \"\"\"Convert single record's values to str\n \"\"\"\n if kwargs.get('wrapper'):\n return cls._wrapper_to_one_str(value)\n\n return _es.to_dict_str(value)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts many records s values to str", "response": "def to_str(cls, values, callback=None):\n \"\"\"Convert many records's values to str\n \"\"\"\n if callback and callable(callback):\n if isinstance(values, dict):\n return callback(_es.to_str(values))\n\n return [callback(_es.to_str(i)) for i in values]\n return _es.to_str(values)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef instance(cls, name):\n if not cls._instance.get(name):\n model_name = name.split('.')\n ins_name = '.'.join(\n ['models', model_name[0], 'model', model_name[1]])\n cls._instance[name] = cls.import_model(ins_name)()\n\n return cls._instance[name]", "response": "Instantiate a model class according to import path\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nimport model class in models package", "response": "def import_model(cls, ins_name):\n \"\"\"Import model class in models package\n \"\"\"\n try:\n package_space = getattr(cls, 'package_space')\n except AttributeError:\n raise ValueError('package_space not exist')\n else:\n return import_object(ins_name, package_space)"} {"SOURCE": "codesearchnet", "instruction": 
"Can you generate a brief explanation for the following Python 3 code\ndef register_app(app_name, app_setting, web_application_setting, mainfile, package_space):\n from turbo import log\n app_config.app_name = app_name\n app_config.app_setting = app_setting\n app_config.project_name = os.path.basename(get_base_dir(mainfile, 2))\n app_config.web_application_setting.update(web_application_setting)\n if app_setting.get('session_config'):\n app_config.session_config.update(app_setting['session_config'])\n log.getLogger(**app_setting.log)\n _install_app(package_space)", "response": "register app with turbo"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nregisters url into tornado application handlers", "response": "def register_url(url, handler, name=None, kwargs=None):\n \"\"\"insert url into tornado application handlers group\n\n :arg str url: url\n :handler object handler: url mapping handler\n :name reverse url name\n :kwargs dict tornado handler initlize args\n \"\"\"\n if name is None and kwargs is None:\n app_config.urls.append((url, handler))\n return\n\n if name is None:\n app_config.urls.append((url, handler, kwargs))\n return\n\n app_config.urls.append((url, handler, kwargs, name))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\noverrides to implement custom error pages. 
", "response": "def write_error(self, status_code, **kwargs):\n        \"\"\"Override to implement custom error pages.\n        http://tornado.readthedocs.org/en/stable/_modules/tornado/web.html#RequestHandler.write_error\n        \"\"\"\n        super(BaseHandler, self).write_error(status_code, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef parameter(self):\n        '''\n        according to request method config to filter all request paremter\n        if value is invalid then set None\n        '''\n        method = self.request.method.lower()\n        arguments = self.request.arguments\n        files = self.request.files\n\n        rpd = {}  # request parameter dict\n\n        def filter_parameter(key, tp, default=None):\n            if tp not in self._types:\n                raise ValueError(\n                    '%s parameter expected types %s' % (key, self._types))\n\n            if not isinstance(tp, file_types):\n                if key not in arguments:\n                    rpd[key] = default\n                    return\n\n                if tp in [ObjectId, int, float, bool]:\n                    rpd[key] = getattr(self, 'to_%s' % getattr(\n                        tp, '__name__').lower())(self.get_argument(key))\n                    return\n\n                if tp == basestring_type or issubclass(tp, basestring_type):\n                    rpd[key] = self.get_argument(key, strip=False)\n                    return\n\n                if tp == list:\n                    rpd[key] = self.get_arguments(key)\n                    return\n\n            if tp == file:\n                if key not in files:\n                    rpd[key] = []\n                    return\n\n                rpd[key] = self.request.files[key]\n\n        required_params = getattr(self, '_required_params', None)\n        if isinstance(required_params, list):\n            for key, tp, default in required_params:\n                filter_parameter(key, tp, default)\n\n        # extract method required params\n        method_required_params = getattr(\n            self, '_%s_required_params' % method, None)\n        if isinstance(method_required_params, list):\n            for key, tp, default in method_required_params:\n                filter_parameter(key, tp, default)\n\n        params = getattr(self, '_%s_params' % method, None)\n        if params is None:\n            return rpd\n\n        # need arguments\n        try:\n            for key, tp in params.get('need', 
[]):\n if tp == list:\n filter_parameter(key, tp, [])\n else:\n filter_parameter(key, tp)\n except ValueError as e:\n app_log.error(\n '%s request need arguments parse error: %s' % (method, e))\n raise ValueError(e)\n except Exception as e:\n app_log.error(\n '%s request need arguments parse error: %s' % (method, e))\n raise e\n\n # option arguments\n for key, tp, default in params.get('option', []):\n filter_parameter(key, tp, default)\n\n return rpd", "response": "returns a dictionary of all the request parameters that are not None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting the response from a Wo request to a Wo response.", "response": "def wo_resp(self, resp):\n \"\"\"\n can override for other style\n \"\"\"\n if self._data is not None:\n resp['res'] = self.to_str(self._data)\n\n return self.wo_json(resp)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ninserting method for the record.", "response": "def insert(self, doc_or_docs, **kwargs):\n \"\"\"Insert method\n \"\"\"\n check = kwargs.pop('check', True)\n if isinstance(doc_or_docs, dict):\n if check is True:\n doc_or_docs = self._valid_record(doc_or_docs)\n result = self.__collect.insert_one(doc_or_docs, **kwargs)\n return result.inserted_id\n else:\n if check is True:\n for d in doc_or_docs:\n d = self._valid_record(d)\n result = self.__collect.insert_many(doc_or_docs, **kwargs)\n return result.inserted_ids"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef save(self, to_save, **kwargs):\n check = kwargs.pop('check', True)\n if check:\n self._valid_record(to_save)\n if '_id' in to_save:\n self.__collect.replace_one(\n {'_id': to_save['_id']}, to_save, **kwargs)\n return to_save['_id']\n else:\n result = self.__collect.insert_one(to_save, **kwargs)\n return result.inserted_id", "response": "save method saves a record"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in 
Python 3 where it\nupdates method for all the related objects", "response": "def update(self, filter_, document, multi=False, **kwargs):\n \"\"\"update method\n \"\"\"\n self._valide_update_document(document)\n if multi:\n return self.__collect.update_many(filter_, document, **kwargs)\n else:\n return self.__collect.update_one(filter_, document, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef remove(self, filter_=None, **kwargs):\n if isinstance(filter_, dict) and filter_ == {}:\n raise ValueError('not allowed remove all documents')\n\n if filter_ is None:\n raise ValueError('not allowed remove all documents')\n\n if kwargs.pop('multi', False) is True:\n return self.__collect.delete_many(filter_, **kwargs)\n else:\n return self.__collect.delete_one(filter_, **kwargs)", "response": "collection remove method\n warning:\n if you want to remove all documents,\n you must override _remove_all method to make sure\n you understand the result what you do"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef insert_one(self, doc_or_docs, **kwargs):\n check = kwargs.pop('check', True)\n if check is True:\n self._valid_record(doc_or_docs)\n\n return self.__collect.insert_one(doc_or_docs, **kwargs)", "response": "Insert one record into the collection."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef insert_many(self, doc_or_docs, **kwargs):\n check = kwargs.pop('check', True)\n if check is True:\n for i in doc_or_docs:\n i = self._valid_record(i)\n\n return self.__collect.insert_many(doc_or_docs, **kwargs)", "response": "Insert many records into the database."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting record to a dict that has no key error", "response": "def _wrapper_find_one(self, filter_=None, *args, **kwargs):\n \"\"\"Convert record to a dict that has no key error\n 
\"\"\"\n return self.__collect.find_one(filter_, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nupdating one document in the collection", "response": "def update_one(self, filter_, document, **kwargs):\n \"\"\"update method\n \"\"\"\n self._valide_update_document(document)\n return self.__collect.update_one(filter_, document, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef find_by_id(self, _id, projection=None):\n if isinstance(_id, list) or isinstance(_id, tuple):\n return list(self.__collect.find(\n {'_id': {'$in': [self._to_primary_key(i) for i in _id]}}, projection))\n\n document_id = self._to_primary_key(_id)\n\n if document_id is None:\n return None\n\n return self.__collect.find_one({'_id': document_id}, projection)", "response": "find record by _id"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_model(cls, name, field=None):\n if field:\n attrs = {'name': name, 'field': field}\n else:\n attrs = {'name': name, 'field': {'_id': ObjectId()}}\n\n return type(str(name), (cls, ), attrs)()", "response": "dynamic create new model"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef to_list_str(value, encode=None):\n result = []\n for index, v in enumerate(value):\n if isinstance(v, dict):\n result.append(to_dict_str(v, encode))\n continue\n\n if isinstance(v, list):\n result.append(to_list_str(v, encode))\n continue\n\n if encode:\n result.append(encode(v))\n else:\n result.append(default_encode(v))\n\n return result", "response": "recursively convert list content into string"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef default_encode(v):\n if isinstance(v, ObjectId):\n return unicode_type(v)\n\n if isinstance(v, datetime):\n return 
format_time(v)\n\n if isinstance(v, date):\n return format_time(v)\n\n return v", "response": "convert ObjectId datetime date into string\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting any list dict iterable and primitives object to string", "response": "def to_str(v, encode=None):\n \"\"\"convert any list, dict, iterable and primitives object to string\n \"\"\"\n if isinstance(v, basestring_type):\n return v\n\n if isinstance(v, dict):\n return to_dict_str(v, encode)\n\n if isinstance(v, Iterable):\n return to_list_str(v, encode)\n\n if encode:\n return encode(v)\n else:\n return default_encode(v)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting an ObjectId to a base objectid", "response": "def to_objectid(objid):\n \"\"\"\u5b57\u7b26\u5bf9\u8c61\u8f6c\u6362\u6210objectid\n \"\"\"\n if objid is None:\n return objid\n\n try:\n objid = ObjectId(objid)\n except:\n util_log.error('%s is invalid objectid' % objid)\n return None\n\n return objid"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfinds certain path according to currfile", "response": "def get_base_dir(currfile, dir_level_num=3):\n \"\"\"\n find certain path according to currfile\n \"\"\"\n root_path = os.path.abspath(currfile)\n for i in range(0, dir_level_num):\n root_path = os.path.dirname(root_path)\n\n return root_path"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding certain path then load into sys. 
path", "response": "def join_sys_path(currfile, dir_level_num=3):\n \"\"\"\n find certain path then load into sys path\n \"\"\"\n if os.path.isdir(currfile):\n root_path = currfile\n else:\n root_path = get_base_dir(currfile, dir_level_num)\n\n sys.path.append(root_path)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert CamelCase style to under_score_case", "response": "def camel_to_underscore(name):\n \"\"\"\n convert CamelCase style to under_score_case\n \"\"\"\n as_list = []\n length = len(name)\n for index, i in enumerate(name):\n if index != 0 and index != length - 1 and i.isupper():\n as_list.append('_%s' % i.lower())\n else:\n as_list.append(i.lower())\n\n return ''.join(as_list)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert a string argument to a subclass of basestring.", "response": "def to_basestring(value):\n \"\"\"Converts a string argument to a subclass of basestring.\n\n In python2, byte and unicode strings are mostly interchangeable,\n so functions that deal with a user-supplied argument in combination\n with ascii string constants can use either and should return the type\n the user supplied. 
In python3, the two types are not interchangeable,\n so this method is needed to convert byte strings to unicode.\n \"\"\"\n if isinstance(value, _BASESTRING_TYPES):\n return value\n if not isinstance(value, bytes):\n raise TypeError(\n \"Expected bytes, unicode, or None; got %r\" % type(value)\n )\n return value.decode(\"utf-8\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _init_file_logger(logger, level, log_path, log_size, log_count):\n if level not in [logging.NOTSET, logging.DEBUG, logging.INFO, logging.WARNING, logging.ERROR, logging.CRITICAL]:\n level = logging.DEBUG\n\n for h in logger.handlers:\n if isinstance(h, logging.handlers.RotatingFileHandler):\n if h.level == level:\n return\n\n fh = logging.handlers.RotatingFileHandler(\n log_path, maxBytes=log_size, backupCount=log_count)\n fh.setLevel(level)\n fh.setFormatter(_formatter)\n logger.addHandler(fh)", "response": "Initialize a file logger."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _load(self):\n\n self.session_id = self._session_object.get_session_id()\n\n # protection against session_id tampering\n if self.session_id and not self._valid_session_id(self.session_id):\n self.session_id = None\n\n if self.session_id:\n d = self.store[self.session_id]\n if isinstance(d, dict) and d:\n self.update(d)\n\n if not self.session_id:\n self.session_id = self._session_object.generate_session_id()\n\n if not self._data:\n if self._initializer and isinstance(self._initializer, dict):\n self.update(deepcopy(self._initializer))\n\n self._session_object.set_session_id(self.session_id)", "response": "Load the session from the store by the id from the cookie"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates a random id for session", "response": "def generate_session_id(self):\n \"\"\"Generate a random id for session\"\"\"\n secret_key = 
self._config.secret_key\n while True:\n rand = os.urandom(16)\n now = time.time()\n session_id = sha1(utf8(\"%s%s%s%s\" % (\n rand, now, self.handler.request.remote_ip, secret_key)))\n session_id = session_id.hexdigest()\n if session_id not in self.store:\n break\n\n return session_id"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef encode(self, session_data):\n pickled = pickle.dumps(session_data)\n return to_basestring(encodebytes(pickled))", "response": "encodes session dict as a string"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef decode(self, session_data):\n pickled = decodebytes(utf8(session_data))\n return pickle.loads(pickled)", "response": "decodes the data to get back the session dict"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef replace(self):\n\n replacement = self._tw.tasks.get(uuid=self._uuid)\n self.__class__ = replacement.__class__\n self.__dict__ = replacement.__dict__", "response": "Replaces the current object with the corresponding Task object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef replace(self):\n\n replacement = self._tw.tasks.filter(' '.join(self._uuids))\n self.__class__ = replacement.__class__\n self.__dict__ = replacement.__dict__", "response": "Replaces the current object with the corresponding TaskQuerySet object."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsave a task into TaskWarrior database using add or modify call.", "response": "def save_task(self, task):\n \"\"\"Save a task into TaskWarrior database using add/modify call\"\"\"\n\n args = [task['uuid'], 'modify'] if task.saved else ['add']\n args.extend(self._get_modified_task_fields_as_args(task))\n output = self.execute_command(args)\n\n # Parse out the new ID, if the task is 
being added for the first time\n if not task.saved:\n id_lines = [l for l in output if l.startswith('Created task ')]\n\n # Complain loudly if it seems that more tasks were created\n # Should not happen.\n # Expected output: Created task 1.\n # Created task 1 (recurrence template).\n if len(id_lines) != 1 or len(id_lines[0].split(' ')) not in (3, 5):\n raise TaskWarriorException(\n 'Unexpected output when creating '\n 'task: %s' % '\\n'.join(id_lines),\n )\n\n # Circumvent the ID storage, since ID is considered read-only\n identifier = id_lines[0].split(' ')[2].rstrip('.')\n\n # Identifier can be either ID or UUID for completed tasks\n try:\n task._data['id'] = int(identifier)\n except ValueError:\n task._data['uuid'] = identifier\n\n # Refreshing is very important here, as not only modification time\n # is updated, but arbitrary attribute may have changed due hooks\n # altering the data before saving\n task.refresh(after_save=True)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nnormalize the user s entry point.", "response": "def _normalize(self, key, value):\n \"\"\"\n Use normalize_ methods to normalize user input. Any user\n input will be normalized at the moment it is used as filter,\n or entered as a value of Task attribute.\n \"\"\"\n\n # None value should not be converted by normalizer\n if value is None:\n return None\n\n normalize_func = getattr(self, 'normalize_{0}'.format(key),\n lambda x: x)\n\n return normalize_func(value)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nnormalizes date and datetime values to localized datetime values.", "response": "def datetime_normalizer(self, value):\n \"\"\"\n Normalizes date/datetime value (considered to come from user input)\n to localized datetime value. 
Following conversions happen:\n\n naive date -> localized datetime with the same date, and time=midnight\n naive datetime -> localized datetime with the same value\n localized datetime -> localized datetime (no conversion)\n \"\"\"\n\n if (\n isinstance(value, datetime.date)\n and not isinstance(value, datetime.datetime)\n ):\n # Convert to local midnight\n value_full = datetime.datetime.combine(value, datetime.time.min)\n localized = local_zone.localize(value_full)\n elif isinstance(value, datetime.datetime):\n if value.tzinfo is None:\n # Convert to localized datetime object\n localized = local_zone.localize(value)\n else:\n # If the value is already localized, there is no need to change\n # time zone at this point. Also None is a valid value too.\n localized = value\n elif isinstance(value, six.string_types):\n localized = self.backend.convert_datetime_string(value)\n else:\n raise ValueError(\"Provided value could not be converted to \"\n \"datetime, its type is not supported: {}\"\n .format(type(value)))\n\n return localized"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _update_data(self, data, update_original=False, remove_missing=False):\n self._data.update(dict((key, self._deserialize(key, value))\n for key, value in data.items()))\n\n # In certain situations, we want to treat missing keys as removals\n if remove_missing:\n for key in set(self._data.keys()) - set(data.keys()):\n self._data[key] = None\n\n if update_original:\n self._original_data = copy.deepcopy(self._data)", "response": "Internal method to update internal data dictionary."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef export_data(self):\n\n # We need to remove spaces for TW-1504, use custom separators\n data_tuples = ((key, self._serialize(key, value))\n for key, value in six.iteritems(self._data))\n\n # Empty string denotes empty serialized value, we do not want\n # to 
pass that to TaskWarrior.\n data_tuples = filter(lambda t: t[1] is not '', data_tuples)\n data = dict(data_tuples)\n return json.dumps(data, separators=(',', ':'))", "response": "Exports current data contained in the Task as JSON\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_input(cls, input_file=sys.stdin, modify=None, backend=None):\n\n # Detect the hook type if not given directly\n name = os.path.basename(sys.argv[0])\n modify = name.startswith('on-modify') if modify is None else modify\n\n # Create the TaskWarrior instance if none passed\n if backend is None:\n backends = importlib.import_module('tasklib.backends')\n hook_parent_dir = os.path.dirname(os.path.dirname(sys.argv[0]))\n backend = backends.TaskWarrior(data_location=hook_parent_dir)\n\n # TaskWarrior instance is set to None\n task = cls(backend)\n\n # Load the data from the input\n task._load_data(json.loads(input_file.readline().strip()))\n\n # If this is a on-modify event, we are provided with additional\n # line of input, which provides updated data\n if modify:\n task._update_data(json.loads(input_file.readline().strip()),\n remove_missing=True)\n\n return task", "response": "Create a Task object from the given input file."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a new TaskQuerySet with the given filters added.", "response": "def filter(self, *args, **kwargs):\n \"\"\"\n Returns a new TaskQuerySet with the given filters added.\n \"\"\"\n clone = self._clone()\n for f in args:\n clone.filter_obj.add_filter(f)\n for key, value in kwargs.items():\n clone.filter_obj.add_filter_param(key, value)\n return clone"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nperform the query and returns a single object matching the given keyword arguments.", "response": "def get(self, **kwargs):\n \"\"\"\n Performs the query and returns a single 
object matching the given\n keyword arguments.\n \"\"\"\n clone = self.filter(**kwargs)\n num = len(clone)\n if num == 1:\n return clone._result_cache[0]\n if not num:\n raise Task.DoesNotExist(\n 'Task matching query does not exist. '\n 'Lookup parameters were {0}'.format(kwargs),\n )\n raise ValueError(\n 'get() returned more than one Task -- it returned {0}! '\n 'Lookup parameters were {1}'.format(num, kwargs),\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef is_local(filename, pathlist):\n '''\n Returns True if *filename* is a subpath of any of the paths in *pathlist*.\n '''\n\n filename = os.path.abspath(filename)\n for path_name in pathlist:\n path_name = os.path.abspath(path_name)\n if is_subpath(filename, path_name):\n return True\n return False", "response": "Returns True if filename is a subpath of any of the paths in pathlist."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn True if path points to the same or a subpath of parent.", "response": "def is_subpath(path, parent):\n '''\n Returns True if *path* points to the same or a subpath of *parent*.\n '''\n\n try:\n relpath = os.path.relpath(path, parent)\n except ValueError:\n return False # happens on Windows if drive letters don't match\n return relpath == os.curdir or not relpath.startswith(os.pardir)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nevaluates a. pth file and appends the result to the list dest.", "response": "def eval_pth(filename, sitedir, dest=None, imports=None):\n '''\n Evaluates a `.pth` file (including support for `import` statements), and\n appends the result to the list *dest*. If *dest* is #None, it will fall\n back to `sys.path`.\n\n If *imports* is specified, it must be a list. 
`import` statements will not\n executed but instead appended to that list in tuples of\n (*filename*, *line*, *stmt*).\n\n Returns a tuple of (*dest*, *imports*).\n '''\n\n if dest is None:\n dest = sys.path\n if not os.path.isfile(filename):\n return\n with open(filename, 'r') as fp:\n for index, line in enumerate(fp):\n if line.startswith('import'):\n if imports is None:\n exec_pth_import(filename, index+1, line)\n else:\n imports.append((filename, index+1, line))\n else:\n index = line.find('#')\n if index > 0: line = line[:index]\n line = line.strip()\n if not os.path.isabs(line):\n line = os.path.join(os.path.dirname(filename), line)\n line = os.path.normpath(line)\n if line and line not in dest:\n dest.insert(0, line)\n\n return dest"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef extend_path(pth, name):\n '''\n Better implementation of #pkgutil.extend_path() which adds support for\n zipped Python eggs. The original #pkgutil.extend_path() gets mocked by this\n function inside the #localimport context.\n '''\n\n def zip_isfile(z, name):\n name.rstrip('/')\n return name in z.namelist()\n\n pname = os.path.join(*name.split('.'))\n zname = '/'.join(name.split('.'))\n init_py = '__init__' + os.extsep + 'py'\n init_pyc = '__init__' + os.extsep + 'pyc'\n init_pyo = '__init__' + os.extsep + 'pyo'\n\n mod_path = list(pth)\n for path in sys.path:\n if zipfile.is_zipfile(path):\n try:\n egg = zipfile.ZipFile(path, 'r')\n addpath = (\n zip_isfile(egg, zname + '/__init__.py') or\n zip_isfile(egg, zname + '/__init__.pyc') or\n zip_isfile(egg, zname + '/__init__.pyo'))\n fpath = os.path.join(path, path, zname)\n if addpath and fpath not in mod_path:\n mod_path.append(fpath)\n except (zipfile.BadZipfile, zipfile.LargeZipFile):\n pass # xxx: Show a warning at least?\n else:\n path = os.path.join(path, pname)\n if os.path.isdir(path) and path not in mod_path:\n addpath = (\n os.path.isfile(os.path.join(path, 
init_py)) or\n os.path.isfile(os.path.join(path, init_pyc)) or\n os.path.isfile(os.path.join(path, init_pyo)))\n if addpath and path not in mod_path:\n mod_path.append(path)\n\n return [os.path.normpath(x) for x in mod_path]", "response": "A simple path extension that adds support for\n."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _declare_namespace(self, package_name):\n '''\n Mock for #pkg_resources.declare_namespace() which calls\n #pkgutil.extend_path() afterwards as the original implementation doesn't\n seem to properly find all available namespace paths.\n '''\n\n self.state['declare_namespace'](package_name)\n mod = sys.modules[package_name]\n mod.__path__ = pkgutil.extend_path(mod.__path__, package_name)", "response": "Declare a new namespace for the given package."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the fragment of the given packet.", "response": "def ip_frag(packet):\n '''\n Not fragments:\n ip_frag(packet) == 0\n not ip_frag(packet)\n \n First packet of fragments:\n ip_frag(packet) == IP_FRAG_ANY\n \n Not first packet of fragments:\n ip_frag(packet) & IP_FRAG_LATER\n \n All fragments:\n ip_frag(packet) & IP_FRAG_ANY\n\n '''\n return ((packet.frag_off & IP_OFFMASK) and IP_FRAG_LATER) | ((packet.frag_off & (IP_OFFMASK | IP_MF)) and IP_FRAG_ANY)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_extension(namespace, nicira_header, nx_action, nx_stats_request, nx_stats_reply,\n msg_subtype, action_subtype, stats_subtype):\n '''\n /* This command enables or disables an Open vSwitch extension that allows a\n * controller to specify the OpenFlow table to which a flow should be added,\n * instead of having the switch decide which table is most appropriate as\n * required by OpenFlow 1.0. 
Because NXM was designed as an extension to\n * OpenFlow 1.0, the extension applies equally to ofp10_flow_mod and\n * nx_flow_mod. By default, the extension is disabled.\n *\n * When this feature is enabled, Open vSwitch treats struct ofp10_flow_mod's\n * and struct nx_flow_mod's 16-bit 'command' member as two separate fields.\n * The upper 8 bits are used as the table ID, the lower 8 bits specify the\n * command as usual. A table ID of 0xff is treated like a wildcarded table ID.\n *\n * The specific treatment of the table ID depends on the type of flow mod:\n *\n * - OFPFC_ADD: Given a specific table ID, the flow is always placed in that\n * table. If an identical flow already exists in that table only, then it\n * is replaced. If the flow cannot be placed in the specified table,\n * either because the table is full or because the table cannot support\n * flows of the given type, the switch replies with an OFPFMFC_TABLE_FULL\n * error. (A controller can distinguish these cases by comparing the\n * current and maximum number of entries reported in ofp_table_stats.)\n *\n * If the table ID is wildcarded, the switch picks an appropriate table\n * itself. If an identical flow already exist in the selected flow table,\n * then it is replaced. The choice of table might depend on the flows\n * that are already in the switch; for example, if one table fills up then\n * the switch might fall back to another one.\n *\n * - OFPFC_MODIFY, OFPFC_DELETE: Given a specific table ID, only flows\n * within that table are matched and modified or deleted. 
If the table ID\n * is wildcarded, flows within any table may be matched and modified or\n * deleted.\n *\n * - OFPFC_MODIFY_STRICT, OFPFC_DELETE_STRICT: Given a specific table ID,\n * only a flow within that table may be matched and modified or deleted.\n * If the table ID is wildcarded and exactly one flow within any table\n * matches, then it is modified or deleted; if flows in more than one\n * table match, then none is modified or deleted.\n */\n '''\n with _warnings.catch_warnings():\n _warnings.filterwarnings('ignore', '^padding', StructDefWarning)\n nx_flow_mod_table_id = nstruct(\n (uint8, 'set'), # /* Nonzero to enable, zero to disable. */\n (uint8[7],),\n name = 'nx_flow_mod_table_id',\n base = nicira_header,\n criteria = lambda x: getattr(x, msg_subtype) == NXT_FLOW_MOD_TABLE_ID,\n classifyby = (NXT_FLOW_MOD_TABLE_ID,),\n init = packvalue(NXT_FLOW_MOD_TABLE_ID, msg_subtype)\n )\n namespace['nx_flow_mod_table_id'] = nx_flow_mod_table_id\n \n \n '''\n /* NXT_SET_PACKET_IN_FORMAT request. */\n '''\n nx_set_packet_in_format = nstruct(\n (uint32, 'format'), # /* One of NXPIF_*. */\n name = 'nx_set_packet_in_format',\n base = nicira_header,\n criteria = lambda x: getattr(x, msg_subtype) == NXT_SET_PACKET_IN_FORMAT,\n classifyby = (NXT_SET_PACKET_IN_FORMAT,),\n init = packvalue(NXT_SET_PACKET_IN_FORMAT, msg_subtype)\n )\n namespace['nx_set_packet_in_format'] = nx_set_packet_in_format\n '''\n /* NXT_PACKET_IN (analogous to OFPT_PACKET_IN).\n *\n * NXT_PACKET_IN is similar to the OpenFlow 1.2 OFPT_PACKET_IN. The\n * differences are:\n *\n * - NXT_PACKET_IN includes the cookie of the rule that triggered the\n * message. (OpenFlow 1.3 OFPT_PACKET_IN also includes the cookie.)\n *\n * - The metadata fields use NXM (instead of OXM) field numbers.\n *\n * Open vSwitch 1.9.0 and later omits metadata fields that are zero (as allowed\n * by OpenFlow 1.2). 
Earlier versions included all implemented metadata\n * fields.\n *\n * Open vSwitch does not include non-metadata in the nx_match, because by\n * definition that information can be found in the packet itself. The format\n * and the standards allow this, however, so controllers should be prepared to\n * tolerate future changes.\n *\n * The NXM format is convenient for reporting metadata values, but it is\n * important not to interpret the format as matching against a flow, because it\n * does not. Nothing is being matched; arbitrary metadata masks would not be\n * meaningful.\n *\n * Whereas in most cases a controller can expect to only get back NXM fields\n * that it set up itself (e.g. flow dumps will ordinarily report only NXM\n * fields from flows that the controller added), NXT_PACKET_IN messages might\n * contain fields that the controller does not understand, because the switch\n * might support fields (new registers, new protocols, etc.) that the\n * controller does not. The controller must prepared to tolerate these.\n *\n * The 'cookie' field has no meaning when 'reason' is OFPR_NO_MATCH. In this\n * case it should be UINT64_MAX. 
*/\n '''\n \n if 'ofp_oxm' in namespace:\n nx_match = namespace['ofp_oxm']\n namespace['nx_match'] = nx_match\n nx_match_mask = namespace['ofp_oxm_mask']\n namespace['nx_match_mask'] = nx_match_mask\n nx_match_nomask = namespace['ofp_oxm_nomask']\n namespace['nx_match_nomask'] = nx_match_nomask\n create_nxm = namespace['create_oxm']\n namespace['create_nxm'] = create_nxm\n \n nx_match_nomask_ext = nstruct(\n base = nx_match_nomask,\n criteria = lambda x: NXM_VENDOR(x.header) <= 1,\n extend = {'header': nxm_header},\n name = 'nx_match_nomask_ext'\n )\n namespace['nx_match_nomask_ext'] = nx_match_nomask_ext\n nx_match_mask_ext = nstruct(\n base = nx_match_mask,\n criteria = lambda x: NXM_VENDOR(x.header) <= 1,\n extend = {'header': nxm_header},\n name = 'nx_match_mask_ext'\n )\n namespace['nx_match_mask_ext'] = nx_match_mask_ext \n else:\n nx_match = nstruct(\n (nxm_header, 'header'),\n name = 'nx_match',\n padding = 1,\n size = lambda x: NXM_LENGTH(x.header) + 4\n )\n namespace['nx_match'] = nx_match\n nx_match_nomask = nstruct(\n (raw, 'value'),\n base = nx_match,\n criteria = lambda x: not NXM_HASMASK(x.header),\n init = packvalue(NXM_OF_IN_PORT, 'header'),\n name = 'nx_match_nomask'\n )\n namespace['nx_match_nomask'] = nx_match_nomask\n _nxm_mask_value = nstruct(\n (raw, 'value'),\n name = 'nxm_mask_value',\n size = lambda x: NXM_LENGTH(x.header) // 2,\n padding = 1\n )\n nx_match_mask = nstruct(\n (_nxm_mask_value,),\n (raw, 'mask'),\n base = nx_match,\n criteria = lambda x: NXM_HASMASK(x.header),\n init = packvalue(NXM_OF_ETH_SRC_W, 'header'),\n name = 'nx_match_mask',\n )\n namespace['nx_match_mask'] = nx_match_mask \n def create_nxm(header, value = None, mask = None):\n if NXM_HASMASK(header):\n nxm = nx_match_mask.new()\n size = NXM_LENGTH(header) // 2\n else:\n nxm = nx_match_nomask.new()\n size = NXM_LENGTH(header)\n nxm.header = header\n nxm.value = common.create_binary(value, size)\n if NXM_HASMASK(header):\n nxm.mask = common.create_binary(mask, size)\n 
nxm._pack()\n nxm._autosubclass()\n return nxm\n namespace['create_nxm'] = create_nxm\n nx_match_nomask_ext = nx_match_nomask\n nx_match_mask_ext = nx_match_mask\n namespace['nx_match_nomask_ext'] = nx_match_nomask_ext\n namespace['nx_match_mask_ext'] = nx_match_mask_ext\n from namedstruct.namedstruct import rawtype as _rawtype\n import socket as _socket\n if 'ip4_addr_bytes' in namespace:\n ip4_addr_bytes = namespace['ip4_addr_bytes']\n else:\n ip4_addr_bytes = prim('4s', 'ip4_addr_bytes')\n ip4_addr_bytes.formatter = lambda x: _socket.inet_ntoa(x)\n namespace['ip4_addr_bytes'] = ip4_addr_bytes\n \n nxm_mask_ipv4 = nstruct(name = 'nxm_mask_ipv4',\n base = nx_match_mask_ext,\n criteria = lambda x: x.header in (NXM_OF_IP_SRC_W, NXM_OF_IP_DST_W, NXM_OF_ARP_SPA_W, NXM_OF_ARP_TPA_W, NXM_NX_TUN_IPV4_SRC_W, NXM_NX_TUN_IPV4_DST_W),\n init = packvalue(NXM_OF_IP_SRC_W, 'header'),\n extend = {'value' : ip4_addr_bytes, 'mask' : ip4_addr_bytes}\n )\n namespace['nxm_mask_ipv4'] = nxm_mask_ipv4\n nxm_nomask_ipv4 = nstruct(name = 'nxm_nomask_ipv4',\n base = nx_match_nomask_ext,\n criteria = lambda x: x.header in (NXM_OF_IP_SRC, NXM_OF_IP_DST, NXM_OF_ARP_SPA, NXM_OF_ARP_TPA, NXM_NX_TUN_IPV4_SRC, NXM_NX_TUN_IPV4_DST),\n init = packvalue(NXM_OF_IP_SRC, 'header'),\n extend = {'value' : ip4_addr_bytes}\n )\n namespace['nxm_nomask_ipv4'] = nxm_nomask_ipv4\n if 'mac_addr_bytes' in namespace:\n mac_addr_bytes = namespace['mac_addr_bytes']\n else:\n mac_addr_bytes = _rawtype() \n mac_addr_bytes.formatter = lambda x: ':'.join('%02X' % (c,) for c in bytearray(x))\n namespace['mac_addr_bytes'] = mac_addr_bytes\n \n nxm_mask_eth = nstruct(name = 'nxm_mask_eth',\n base = nx_match_mask_ext,\n criteria = lambda x: x.header in (NXM_OF_ETH_SRC_W, NXM_OF_ETH_DST_W),\n init = packvalue(NXM_OF_ETH_SRC_W, 'header'),\n extend = {'value' : mac_addr_bytes, 'mask' : mac_addr_bytes})\n namespace['nxm_mask_eth'] = nxm_mask_eth\n \n nxm_nomask_eth = nstruct(name = 'nxm_nomask_eth',\n base = 
nx_match_nomask_ext,\n criteria = lambda x: x.header in (NXM_OF_ETH_SRC, NXM_OF_ETH_DST, NXM_NX_ND_SLL, NXM_NX_ND_TLL, NXM_NX_ARP_SHA, NXM_NX_ARP_THA),\n init = packvalue(NXM_OF_ETH_SRC, 'header'),\n extend = {'value' : mac_addr_bytes})\n namespace['nxm_nomask_eth'] = nxm_nomask_eth\n \n \n ofp_port_no = namespace['ofp_port_no']\n \n nx_port_no = enum('nx_port_no', None, uint16,\n **dict((k, v & 0xffff) for k,v in ofp_port_no.getDict().items())\n )\n nxm_port_no_raw = _rawtype()\n nxm_port_no_raw.formatter = lambda x: nx_port_no.formatter(nx_port_no.parse(x)[0])\n namespace['nx_port_no'] = nx_port_no\n namespace['nxm_port_no_raw'] = nxm_port_no_raw\n nxm_nomask_port = nstruct(name = 'nxm_nomask_port',\n base = nx_match_nomask_ext,\n criteria = lambda x: x.header == NXM_OF_IN_PORT,\n init = packvalue(NXM_OF_IN_PORT, 'header'),\n extend = {'value': nxm_port_no_raw}\n )\n namespace['nxm_nomask_port'] = nxm_nomask_port\n if 'ethtype_raw' in namespace:\n ethtype_raw = namespace['ethtype_raw']\n else:\n ethtype_raw = _rawtype()\n ethtype_raw.formatter = lambda x: ethertype.formatter(ethertype.parse(x)[0])\n namespace['ethtype_raw'] = ethtype_raw\n \n nxm_nomask_ethertype = nstruct(name = 'nxm_nomask_ethertype',\n base = nx_match_nomask_ext,\n criteria = lambda x: x.header == NXM_OF_ETH_TYPE,\n init = packvalue(NXM_OF_ETH_TYPE, 'header'),\n extend = {'value': ethtype_raw})\n namespace['nxm_nomask_ethertype'] = nxm_nomask_ethertype\n if 'arpop_raw' in namespace:\n arpop_raw = namespace['arpop_raw']\n else:\n arpop_raw = _rawtype()\n arpop_raw.formatter = lambda x: arp_op_code.formatter(arp_op_code.parse(x)[0])\n namespace['arpop_raw'] = arpop_raw\n \n nxm_nomask_arpopcode = nstruct(name = 'nxm_nomask_arpopcode',\n base = nx_match_nomask_ext,\n criteria = lambda x: x.header == NXM_OF_ARP_OP,\n init = packvalue(NXM_OF_ARP_OP, 'header'),\n extend = {'value': arpop_raw})\n namespace['nxm_nomask_arpopcode'] = nxm_nomask_arpopcode\n \n if 'ip_protocol_raw' in namespace:\n 
ip_protocol_raw = namespace['ip_protocol_raw']\n else:\n ip_protocol_raw = _rawtype()\n ip_protocol_raw.formatter = lambda x: ip_protocol.formatter(ip_protocol.parse(x)[0])\n namespace['ip_protocol_raw'] = ip_protocol_raw\n \n nxm_nomask_ip_protocol = nstruct(name = 'nxm_nomask_ip_protocol',\n base = nx_match_nomask_ext,\n criteria = lambda x: x.header == NXM_OF_IP_PROTO,\n init = packvalue(NXM_OF_IP_PROTO, 'header'),\n extend = {'value': ip_protocol_raw})\n namespace['nxm_nomask_ip_protocol'] = nxm_nomask_ip_protocol\n if 'ip6_addr_bytes' in namespace:\n nxm_nomask_ipv6 = nstruct(name = 'nxm_nomask_ipv6',\n base = nx_match_nomask_ext,\n criteria = lambda x: x.header in (NXM_NX_IPV6_SRC, NXM_NX_IPV6_DST, NXM_NX_ND_TARGET),\n init = packvalue(NXM_NX_IPV6_SRC, 'header'),\n extend = {'value': ip6_addr_bytes})\n namespace['nxm_nomask_ipv6'] = nxm_nomask_ipv6\n nxm_mask_ipv6 = nstruct(name = 'nxm_mask_ipv6',\n base = nx_match_mask_ext,\n criteria = lambda x: x.header in (NXM_NX_IPV6_SRC_W, NXM_NX_IPV6_DST_W),\n init = packvalue(NXM_NX_IPV6_SRC_W, 'header'),\n extend = {'value': ip6_addr_bytes, 'mask': ip6_addr_bytes})\n namespace['nxm_mask_ipv6'] = nxm_mask_ipv6\n \n nx_ip_frag_raw = _rawtype()\n nx_ip_frag_raw.formatter = lambda x: nx_ip_frag.formatter(nx_ip_frag.parse(x)[0])\n nxm_nomask_ipfrag = nstruct(name = 'nxm_nomask_ipfrag',\n base = nx_match_nomask_ext,\n criteria = lambda x: x.header == NXM_NX_IP_FRAG,\n init = packvalue(NXM_NX_IP_FRAG, 'header'),\n extend = {'value': nx_ip_frag_raw})\n namespace['nxm_nomask_ipfrag'] = nxm_nomask_ipfrag\n nxm_mask_ipfrag = nstruct(name = 'nxm_mask_ipfrag',\n base = nx_match_mask_ext,\n criteria = lambda x: x.header == NXM_NX_IP_FRAG_W,\n init = packvalue(NXM_NX_IP_FRAG_W, 'header'),\n extend = {'value': nx_ip_frag_raw, 'mask': nx_ip_frag_raw})\n namespace['nxm_mask_ipfrag'] = nxm_mask_ipfrag\n \n nx_matches = nstruct(\n (nx_match[0], 'matches'),\n name = 'nx_matches',\n size = sizefromlen(65536, 'match_len'),\n prepack = 
packrealsize('match_len'),\n padding = 8\n )\n \n namespace['nx_matches'] = nx_matches\n \n nx_packet_in = nstruct(\n (uint32, 'buffer_id'), # /* ID assigned by datapath. */\n (uint16, 'total_len'), # /* Full length of frame. */\n (uint8, 'reason'), # /* Reason packet is sent (one of OFPR_*). */\n (uint8, 'table_id'), # /* ID of the table that was looked up. */\n (uint64, 'cookie'), # /* Cookie of the rule that was looked up. */\n (uint16, 'match_len'), # /* Size of nx_match. */\n (uint8[6],), # /* Align to 64-bits. */\n (nx_matches,),\n (uint8[2],),\n (raw, 'data'),\n name = 'nx_packet_in',\n base = nicira_header,\n classifyby = (NXT_PACKET_IN,),\n criteria = lambda x: getattr(x, msg_subtype) == NXT_PACKET_IN,\n init = packvalue(NXT_PACKET_IN, msg_subtype)\n )\n namespace['nx_packet_in'] = nx_packet_in\n '''\n /* Configures the \"role\" of the sending controller. The default role is:\n *\n * - Other (NX_ROLE_OTHER), which allows the controller access to all\n * OpenFlow features.\n *\n * The other possible roles are a related pair:\n *\n * - Master (NX_ROLE_MASTER) is equivalent to Other, except that there may\n * be at most one Master controller at a time: when a controller\n * configures itself as Master, any existing Master is demoted to the\n * Slave role.\n *\n * - Slave (NX_ROLE_SLAVE) allows the controller read-only access to\n * OpenFlow features. In particular attempts to modify the flow table\n * will be rejected with an OFPBRC_EPERM error.\n *\n * Slave controllers do not receive OFPT_PACKET_IN or OFPT_FLOW_REMOVED\n * messages, but they do receive OFPT_PORT_STATUS messages.\n */\n '''\n nx_role_request = nstruct(\n (nx_role, 'role'), # /* One of NX_ROLE_*. 
*/\n name = 'nx_role_request',\n base = nicira_header,\n classifyby = (NXT_ROLE_REQUEST, NXT_ROLE_REPLY),\n criteria = lambda x: getattr(x, msg_subtype) == NXT_ROLE_REQUEST or getattr(x, msg_subtype) == NXT_ROLE_REPLY,\n init = packvalue(NXT_ROLE_REQUEST, msg_subtype)\n )\n namespace['nx_role_request'] = nx_role_request\n '''\n /* NXT_SET_ASYNC_CONFIG.\n *\n * Sent by a controller, this message configures the asynchronous messages that\n * the controller wants to receive. Element 0 in each array specifies messages\n * of interest when the controller has an \"other\" or \"master\" role; element 1,\n * when the controller has a \"slave\" role.\n *\n * Each array element is a bitmask in which a 0-bit disables receiving a\n * particular message and a 1-bit enables receiving it. Each bit controls the\n * message whose 'reason' corresponds to the bit index. For example, the bit\n * with value 1<<2 == 4 in port_status_mask[1] determines whether the\n * controller will receive OFPT_PORT_STATUS messages with reason OFPPR_MODIFY\n * (value 2) when the controller has a \"slave\" role.\n *\n * As a side effect, for service controllers, this message changes the\n * miss_send_len from default of zero to OFP_DEFAULT_MISS_SEND_LEN (128).\n */\n '''\n ofp_packet_in_reason = namespace['ofp_packet_in_reason']\n if 'ofp_packet_in_reason_bitwise' in namespace:\n ofp_packet_in_reason_bitwise = namespace['ofp_packet_in_reason_bitwise']\n else:\n ofp_packet_in_reason_bitwise = enum('ofp_packet_in_reason_bitwise', None, uint32,\n **dict((k, 1<%s[%d..%d]' % (x['src'], x['src_ofs'], x['src_ofs'] + x['n_bits'] - 1, x['dst'], x['dst_ofs'], x['dst_ofs'] + x['n_bits'] - 1))\n )\n namespace['nx_action_reg_move'] = nx_action_reg_move\n '''\n /* Action structure for NXAST_REG_LOAD.\n *\n * Copies value[0:n_bits] to dst[ofs:ofs+n_bits], where a[b:c] denotes the bits\n * within 'a' numbered 'b' through 'c' (not including bit 'c'). 
Bit numbering\n * starts at 0 for the least-significant bit, 1 for the next most significant\n * bit, and so on.\n *\n * 'dst' is an nxm_header with nxm_hasmask=0. See the documentation for\n * NXAST_REG_MOVE, above, for the permitted fields and for the side effects of\n * loading them.\n *\n * The 'ofs' and 'n_bits' fields are combined into a single 'ofs_nbits' field\n * to avoid enlarging the structure by another 8 bytes. To allow 'n_bits' to\n * take a value between 1 and 64 (inclusive) while taking up only 6 bits, it is\n * also stored as one less than its true value:\n *\n * 15 6 5 0\n * +------------------------------+------------------+\n * | ofs | n_bits - 1 |\n * +------------------------------+------------------+\n *\n * The switch will reject actions for which ofs+n_bits is greater than the\n * width of 'dst', or in which any bits in 'value' with value 2**n_bits or\n * greater are set to 1, with error type OFPET_BAD_ACTION, code\n * OFPBAC_BAD_ARGUMENT.\n */\n '''\n nx_action_reg_load = nstruct(\n (uint16, 'ofs_nbits'), # /* (ofs << 6) | (n_bits - 1). */\n (nxm_header, 'dst'), # /* Destination register. */\n (uint64, 'value'), # /* Immediate value. */\n name = 'nx_action_reg_load',\n base = nx_action,\n classifyby = (NXAST_REG_LOAD,),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_REG_LOAD,\n init = packvalue(NXAST_REG_LOAD, action_subtype),\n formatter = _createdesc(lambda x: 'load:0x%x->%s[%d..%d]' % (x['value'], x['dst'], x['ofs_nbits'] >> 6, (x['ofs_nbits'] >> 6) + (x['ofs_nbits'] & 0x3f)))\n )\n namespace['nx_action_reg_load'] = nx_action_reg_load\n \n '''\n /* Action structure for NXAST_STACK_PUSH and NXAST_STACK_POP.\n *\n * Pushes (or pops) field[offset: offset + n_bits] to (or from)\n * top of the stack.\n */\n '''\n nx_action_stack = nstruct(\n (uint16, 'offset'), # /* Bit offset into the field. */\n (nxm_header, 'field'), # /* The field used for push or pop. */\n (uint16, 'n_bits'), # /* (n_bits + 1) bits of the field. 
*/\n (uint8[6],), # /* Reserved, must be zero. */\n name = 'nx_action_stack',\n base = nx_action,\n classifyby = (NXAST_STACK_PUSH, NXAST_STACK_POP),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_STACK_PUSH or getattr(x, action_subtype) == NXAST_STACK_POP,\n init = packvalue(NXAST_STACK_PUSH, action_subtype),\n formatter = _createdesc(lambda x: '%s:%s[%d..%d]' % ('push' if x[action_subtype] == 'NXAST_STACK_PUSH' else 'pop', x['field'], x['offset'], (x['offset'] + x['n_bits'] - 1)))\n )\n namespace['nx_action_stack'] = nx_action_stack\n \n '''\n /* Action structure for NXAST_NOTE.\n *\n * This action has no effect. It is variable length. The switch does not\n * attempt to interpret the user-defined 'note' data in any way. A controller\n * can use this action to attach arbitrary metadata to a flow.\n *\n * This action might go away in the future.\n */\n '''\n nx_action_note = nstruct(\n (varchr, 'note'),\n name = 'nx_action_note',\n base = nx_action,\n classifyby = (NXAST_NOTE,),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_NOTE,\n init = packvalue(NXAST_NOTE, action_subtype)\n )\n namespace['nx_action_note'] = nx_action_note\n \n '''\n /* Action structure for NXAST_MULTIPATH.\n *\n * This action performs the following steps in sequence:\n *\n * 1. Hashes the fields designated by 'fields', one of NX_HASH_FIELDS_*.\n * Refer to the definition of \"enum nx_mp_fields\" for details.\n *\n * The 'basis' value is used as a universal hash parameter, that is,\n * different values of 'basis' yield different hash functions. The\n * particular universal hash function used is implementation-defined.\n *\n * The hashed fields' values are drawn from the current state of the\n * flow, including all modifications that have been made by actions up to\n * this point.\n *\n * 2. Applies the multipath link choice algorithm specified by 'algorithm',\n * one of NX_MP_ALG_*. 
Refer to the definition of \"enum nx_mp_algorithm\"\n * for details.\n *\n * The output of the algorithm is 'link', an unsigned integer less than\n * or equal to 'max_link'.\n *\n * Some algorithms use 'arg' as an additional argument.\n *\n * 3. Stores 'link' in dst[ofs:ofs+n_bits]. The format and semantics of\n * 'dst' and 'ofs_nbits' are similar to those for the NXAST_REG_LOAD\n * action.\n *\n * The switch will reject actions that have an unknown 'fields', or an unknown\n * 'algorithm', or in which ofs+n_bits is greater than the width of 'dst', or\n * in which 'max_link' is greater than or equal to 2**n_bits, with error type\n * OFPET_BAD_ACTION, code OFPBAC_BAD_ARGUMENT.\n */\n '''\n nx_action_multipath = nstruct(\n \n #/* What fields to hash and how. */\n (nx_hash_fields, 'fields'), # /* One of NX_HASH_FIELDS_*. */\n (uint16, 'basis'), # /* Universal hash parameter. */\n (uint16,),\n \n #/* Multipath link choice algorithm to apply to hash value. */\n (nx_mp_algorithm, 'algorithm'), # /* One of NX_MP_ALG_*. */\n (uint16, 'max_link'), # /* Number of output links, minus 1. */\n (uint32, 'arg'), # /* Algorithm-specific argument. */\n (uint16,),\n \n # /* Where to store the result. */\n (uint16, 'ofs_nbits'), # /* (ofs << 6) | (n_bits - 1). */\n (nxm_header, 'dst'), # /* Destination. */\n name = 'nx_action_multipath',\n base = nx_action,\n classifyby = (NXAST_MULTIPATH,),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_MULTIPATH,\n init = packvalue(NXAST_MULTIPATH, action_subtype),\n formatter = _createdesc(lambda x: 'multipath(%s,%d,%s,%d,%d,%s[%d..%d])' % (x['fields'], x['basis'], x['algorithm'],x['max_link'] + 1, x['arg'], x['dst'], x['ofs_nbits'] >> 6, (x['ofs_nbits'] >> 6) + (x['ofs_nbits'] & 0x3f)))\n )\n namespace['nx_action_multipath'] = nx_action_multipath\n \n '''\n /* Action structure for NXAST_LEARN.\n *\n * This action adds or modifies a flow in an OpenFlow table, similar to\n * OFPT_FLOW_MOD with OFPFC_MODIFY_STRICT as 'command'. 
The new flow has the\n * specified idle timeout, hard timeout, priority, cookie, and flags. The new\n * flow's match criteria and actions are built by applying each of the series\n * of flow_mod_spec elements included as part of the action.\n *\n * A flow_mod_spec starts with a 16-bit header. A header that is all-bits-0 is\n * a no-op used for padding the action as a whole to a multiple of 8 bytes in\n * length. Otherwise, the flow_mod_spec can be thought of as copying 'n_bits'\n * bits from a source to a destination. In this case, the header contains\n * multiple fields:\n *\n * 15 14 13 12 11 10 0\n * +------+---+------+---------------------------------+\n * | 0 |src| dst | n_bits |\n * +------+---+------+---------------------------------+\n *\n * The meaning and format of a flow_mod_spec depends on 'src' and 'dst'. The\n * following table summarizes the meaning of each possible combination.\n * Details follow the table:\n *\n * src dst meaning\n * --- --- ----------------------------------------------------------\n * 0 0 Add match criteria based on value in a field.\n * 1 0 Add match criteria based on an immediate value.\n * 0 1 Add NXAST_REG_LOAD action to copy field into a different field.\n * 1 1 Add NXAST_REG_LOAD action to load immediate value into a field.\n * 0 2 Add OFPAT_OUTPUT action to output to port from specified field.\n * All other combinations are undefined and not allowed.\n *\n * The flow_mod_spec header is followed by a source specification and a\n * destination specification. The format and meaning of the source\n * specification depends on 'src':\n *\n * - If 'src' is 0, the source bits are taken from a field in the flow to\n * which this action is attached. (This should be a wildcarded field. 
If\n * its value is fully specified then the source bits being copied have\n * constant values.)\n *\n * The source specification is an ovs_be32 'field' and an ovs_be16 'ofs'.\n * 'field' is an nxm_header with nxm_hasmask=0, and 'ofs' the starting bit\n * offset within that field. The source bits are field[ofs:ofs+n_bits-1].\n * 'field' and 'ofs' are subject to the same restrictions as the source\n * field in NXAST_REG_MOVE.\n *\n * - If 'src' is 1, the source bits are a constant value. The source\n * specification is (n_bits+15)/16*2 bytes long. Taking those bytes as a\n * number in network order, the source bits are the 'n_bits'\n * least-significant bits. The switch will report an error if other bits\n * in the constant are nonzero.\n *\n * The flow_mod_spec destination specification, for 'dst' of 0 or 1, is an\n * ovs_be32 'field' and an ovs_be16 'ofs'. 'field' is an nxm_header with\n * nxm_hasmask=0 and 'ofs' is a starting bit offset within that field. The\n * meaning of the flow_mod_spec depends on 'dst':\n *\n * - If 'dst' is 0, the flow_mod_spec specifies match criteria for the new\n * flow. The new flow matches only if bits field[ofs:ofs+n_bits-1] in a\n * packet equal the source bits. 'field' may be any nxm_header with\n * nxm_hasmask=0 that is allowed in NXT_FLOW_MOD.\n *\n * Order is significant. Earlier flow_mod_specs must satisfy any\n * prerequisites for matching fields specified later, by copying constant\n * values into prerequisite fields.\n *\n * The switch will reject flow_mod_specs that do not satisfy NXM masking\n * restrictions.\n *\n * - If 'dst' is 1, the flow_mod_spec specifies an NXAST_REG_LOAD action for\n * the new flow. The new flow copies the source bits into\n * field[ofs:ofs+n_bits-1]. 
Actions are executed in the same order as the\n * flow_mod_specs.\n *\n * A single NXAST_REG_LOAD action writes no more than 64 bits, so n_bits\n * greater than 64 yields multiple NXAST_REG_LOAD actions.\n *\n * The flow_mod_spec destination spec for 'dst' of 2 (when 'src' is 0) is\n * empty. It has the following meaning:\n *\n * - The flow_mod_spec specifies an OFPAT_OUTPUT action for the new flow.\n * The new flow outputs to the OpenFlow port specified by the source field.\n * Of the special output ports with value OFPP_MAX or larger, OFPP_IN_PORT,\n * OFPP_FLOOD, OFPP_LOCAL, and OFPP_ALL are supported. Other special ports\n * may not be used.\n *\n * Resource Management\n * -------------------\n *\n * A switch has a finite amount of flow table space available for learning.\n * When this space is exhausted, no new learning table entries will be learned\n * until some existing flow table entries expire. The controller should be\n * prepared to handle this by flooding (which can be implemented as a\n * low-priority flow).\n *\n * If a learned flow matches a single TCP stream with a relatively long\n * timeout, one may make the best of resource constraints by setting\n * 'fin_idle_timeout' or 'fin_hard_timeout' (both measured in seconds), or\n * both, to shorter timeouts. When either of these is specified as a nonzero\n * value, OVS adds a NXAST_FIN_TIMEOUT action, with the specified timeouts, to\n * the learned flow.\n *\n * Examples\n * --------\n *\n * The following examples give a prose description of the flow_mod_specs along\n * with informal notation for how those would be represented and a hex dump of\n * the bytes that would be required.\n *\n * These examples could work with various nx_action_learn parameters. Typical\n * values would be idle_timeout=OFP_FLOW_PERMANENT, hard_timeout=60,\n * priority=OFP_DEFAULT_PRIORITY, flags=0, table_id=10.\n *\n * 1. 
Learn input port based on the source MAC, with lookup into\n * NXM_NX_REG1[16:31] by resubmit to in_port=99:\n *\n * Match on in_port=99:\n * ovs_be16(src=1, dst=0, n_bits=16), 20 10\n * ovs_be16(99), 00 63\n * ovs_be32(NXM_OF_IN_PORT), ovs_be16(0) 00 00 00 02 00 00\n *\n * Match Ethernet destination on Ethernet source from packet:\n * ovs_be16(src=0, dst=0, n_bits=48), 00 30\n * ovs_be32(NXM_OF_ETH_SRC), ovs_be16(0) 00 00 04 06 00 00\n * ovs_be32(NXM_OF_ETH_DST), ovs_be16(0) 00 00 02 06 00 00\n *\n * Set NXM_NX_REG1[16:31] to the packet's input port:\n * ovs_be16(src=0, dst=1, n_bits=16), 08 10\n * ovs_be32(NXM_OF_IN_PORT), ovs_be16(0) 00 00 00 02 00 00\n * ovs_be32(NXM_NX_REG1), ovs_be16(16) 00 01 02 04 00 10\n *\n * Given a packet that arrived on port A with Ethernet source address B,\n * this would set up the flow \"in_port=99, dl_dst=B,\n * actions=load:A->NXM_NX_REG1[16..31]\".\n *\n * In syntax accepted by ovs-ofctl, this action is: learn(in_port=99,\n * NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],\n * load:NXM_OF_IN_PORT[]->NXM_NX_REG1[16..31])\n *\n * 2. 
Output to input port based on the source MAC and VLAN VID, with lookup\n * into NXM_NX_REG1[16:31]:\n *\n * Match on same VLAN ID as packet:\n * ovs_be16(src=0, dst=0, n_bits=12), 00 0c\n * ovs_be32(NXM_OF_VLAN_TCI), ovs_be16(0) 00 00 08 02 00 00\n * ovs_be32(NXM_OF_VLAN_TCI), ovs_be16(0) 00 00 08 02 00 00\n *\n * Match Ethernet destination on Ethernet source from packet:\n * ovs_be16(src=0, dst=0, n_bits=48), 00 30\n * ovs_be32(NXM_OF_ETH_SRC), ovs_be16(0) 00 00 04 06 00 00\n * ovs_be32(NXM_OF_ETH_DST), ovs_be16(0) 00 00 02 06 00 00\n *\n * Output to the packet's input port:\n * ovs_be16(src=0, dst=2, n_bits=16), 10 10\n * ovs_be32(NXM_OF_IN_PORT), ovs_be16(0) 00 00 00 02 00 00\n *\n * Given a packet that arrived on port A with Ethernet source address B in\n * VLAN C, this would set up the flow \"dl_dst=B, vlan_vid=C,\n * actions=output:A\".\n *\n * In syntax accepted by ovs-ofctl, this action is:\n * learn(NXM_OF_VLAN_TCI[0..11], NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],\n * output:NXM_OF_IN_PORT[])\n *\n * 3. Here's a recipe for a very simple-minded MAC learning switch. It uses a\n * 10-second MAC expiration time to make it easier to see what's going on\n *\n * ovs-vsctl del-controller br0\n * ovs-ofctl del-flows br0\n * ovs-ofctl add-flow br0 \"table=0 actions=learn(table=1, \\\n hard_timeout=10, NXM_OF_VLAN_TCI[0..11], \\\n NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[], \\\n output:NXM_OF_IN_PORT[]), resubmit(,1)\"\n * ovs-ofctl add-flow br0 \"table=1 priority=0 actions=flood\"\n *\n * You can then dump the MAC learning table with:\n *\n * ovs-ofctl dump-flows br0 table=1\n *\n * Usage Advice\n * ------------\n *\n * For best performance, segregate learned flows into a table that is not used\n * for any other flows except possibly for a lowest-priority \"catch-all\" flow\n * (a flow with no match criteria). 
If different learning actions specify\n * different match criteria, use different tables for the learned flows.\n *\n * The meaning of 'hard_timeout' and 'idle_timeout' can be counterintuitive.\n * These timeouts apply to the flow that is added, which means that a flow with\n * an idle timeout will expire when no traffic has been sent *to* the learned\n * address. This is not usually the intent in MAC learning; instead, we want\n * the MAC learn entry to expire when no traffic has been sent *from* the\n * learned address. Use a hard timeout for that.\n */\n '''\n def _nx_flow_mod_spec_formatter(x):\n if NX_FLOWMODSPEC_SRC(x['header']):\n srcdesc = '0x' + ''.join('%02x' % (c,) for c in bytearray(x['value']))\n else:\n srcdesc = '%s[%d..%d]' % (x['src'], x['src_ofs'], x['src_ofs'] + NX_FLOWMODSPEC_NBITS(x['header']) - 1)\n dstv = NX_FLOWMODSPEC_DST(x['header'])\n if dstv != NX_LEARN_DST_OUTPUT:\n dstdesc = '%s[%d..%d]' % (x['dst'], x['dst_ofs'], x['dst_ofs'] + NX_FLOWMODSPEC_NBITS(x['header']) - 1)\n if dstv == NX_LEARN_DST_MATCH:\n x['_desc'] = '%s=%s' % (dstdesc, srcdesc)\n elif dstv == NX_LEARN_DST_LOAD:\n x['_desc'] = 'load:%s->%s' % (srcdesc, dstdesc)\n elif NX_FLOWMODSPEC_SRC(x['header']):\n x['_desc'] = 'output:%s' % nxm_port_no_raw.formatter(common.create_binary(x['value'], 2))\n else:\n x['_desc'] = 'output:%s' % (srcdesc,)\n x['header'] = nx_flow_mod_spec_header.formatter(x['header'])\n return x\n \n nx_flow_mod_spec = nstruct(\n (uint16, 'header'),\n (_nx_flow_mod_spec_src,),\n (_nx_flow_mod_spec_dst,),\n name = 'nx_flow_mod_spec',\n padding = 1,\n formatter = _nx_flow_mod_spec_formatter,\n lastextra = False\n # if x.header == 0, size is 14, the padding should not be so large so it will not be successfully parsed\n )\n namespace['nx_flow_mod_spec'] = nx_flow_mod_spec\n def create_nxfms_matchfield(src, dst, src_ofs = 0, dst_ofs = 0, n_bits = None):\n if n_bits is None:\n n_bits = min(NXM_LENGTH(dst) * 8 - dst_ofs, NXM_LENGTH(src) * 8 - src_ofs)\n if n_bits 
<= 0:\n raise ValueError('Cannot create flow mod spec with 0 bits')\n return nx_flow_mod_spec.parse(_create_header(NX_LEARN_SRC_FIELD, NX_LEARN_DST_MATCH, n_bits) + _create_field(src, src_ofs) + _create_field(dst, dst_ofs))[0]\n namespace['create_nxfms_matchfield'] = create_nxfms_matchfield\n def create_nxfms_matchvalue(dst, value, dst_ofs, n_bits = None):\n if n_bits is None:\n n_bits = NXM_LENGTH(dst) * 8 - dst_ofs\n if n_bits <= 0:\n raise ValueError('Cannot create flow mod spec with 0 bits')\n return nx_flow_mod_spec.parse(_create_header(NX_LEARN_SRC_IMMEDIATE, NX_LEARN_DST_MATCH, n_bits) + common.create_binary(value, (n_bits + 15) // 16 * 2) + _create_field(dst, dst_ofs))[0]\n namespace['create_nxfms_matchvalue'] = create_nxfms_matchvalue\n def create_nxfms_loadfield(src, dst, src_ofs = 0, dst_ofs = 0, n_bits = None):\n if n_bits is None:\n n_bits = min(NXM_LENGTH(dst) * 8 - dst_ofs, NXM_LENGTH(src) * 8 - src_ofs)\n if n_bits <= 0:\n raise ValueError('Cannot create flow mod spec with 0 bits')\n return nx_flow_mod_spec.parse(_create_header(NX_LEARN_SRC_FIELD, NX_LEARN_DST_LOAD, n_bits) + _create_field(src, src_ofs) + _create_field(dst, dst_ofs))[0]\n namespace['create_nxfms_loadfield'] = create_nxfms_loadfield\n def create_nxfms_loadvalue(dst, value, dst_ofs, n_bits = None):\n if n_bits is None:\n n_bits = NXM_LENGTH(dst) * 8 - dst_ofs\n if n_bits <= 0:\n raise ValueError('Cannot create flow mod spec with 0 bits')\n return nx_flow_mod_spec.parse(_create_header(NX_LEARN_SRC_IMMEDIATE, NX_LEARN_DST_LOAD, n_bits) + common.create_binary(value, (n_bits + 15) // 16 * 2) + _create_field(dst, dst_ofs))[0]\n namespace['create_nxfms_loadvalue'] = create_nxfms_loadvalue\n def create_nxfms_outputfield(src, src_ofs = 0, n_bits = None):\n if n_bits is None:\n n_bits = NXM_LENGTH(src) * 8 - src_ofs\n if n_bits <= 0:\n raise ValueError('Cannot create flow mod spec with 0 bits')\n return nx_flow_mod_spec.parse(_create_header(NX_LEARN_SRC_FIELD, NX_LEARN_DST_OUTPUT, n_bits) + 
_create_field(src, src_ofs))[0]\n namespace['create_nxfms_outputfield'] = create_nxfms_outputfield\n def create_nxfms_outputvalue(dst, value):\n return nx_flow_mod_spec.parse(_create_header(NX_LEARN_SRC_IMMEDIATE, NX_LEARN_DST_OUTPUT, 16) + common.create_binary(value, 2))[0] \n namespace['create_nxfms_outputvalue'] = create_nxfms_outputvalue\n \n ofp_flow_mod_flags = namespace['ofp_flow_mod_flags']\n \n nx_action_learn = nstruct(\n (uint16, 'idle_timeout'), # /* Idle time before discarding (seconds). */\n (uint16, 'hard_timeout'), # /* Max time before discarding (seconds). */\n (uint16, 'priority'), # /* Priority level of flow entry. */\n (uint64, 'cookie'), # /* Cookie for new flow. */\n (ofp_flow_mod_flags, 'flags'), # /* Either 0 or OFPFF_SEND_FLOW_REM. */\n (uint8, 'table_id'), # /* Table to insert flow entry. */\n (uint8,), # /* Must be zero. */\n (uint16, 'fin_idle_timeout'),# /* Idle timeout after FIN, if nonzero. */\n (uint16, 'fin_hard_timeout'),# /* Hard timeout after FIN, if nonzero. */\n (nx_flow_mod_spec[0], 'specs'),\n base = nx_action,\n name = 'nx_action_learn',\n classifyby = (NXAST_LEARN,),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_LEARN,\n init = packvalue(NXAST_LEARN, action_subtype),\n )\n namespace['nx_action_learn'] = nx_action_learn\n \n '''\n /* Action structure for NXAST_FIN_TIMEOUT.\n *\n * This action changes the idle timeout or hard timeout, or both, of this\n * OpenFlow rule when the rule matches a TCP packet with the FIN or RST flag.\n * When such a packet is observed, the action reduces the rule's idle timeout\n * to 'fin_idle_timeout' and its hard timeout to 'fin_hard_timeout'. This\n * action has no effect on an existing timeout that is already shorter than the\n * one that the action specifies. 
A 'fin_idle_timeout' or 'fin_hard_timeout'\n * of zero has no effect on the respective timeout.\n *\n * 'fin_idle_timeout' and 'fin_hard_timeout' are measured in seconds.\n * 'fin_hard_timeout' specifies time since the flow's creation, not since the\n * receipt of the FIN or RST.\n *\n * This is useful for quickly discarding learned TCP flows that otherwise will\n * take a long time to expire.\n *\n * This action is intended for use with an OpenFlow rule that matches only a\n * single TCP flow. If the rule matches multiple TCP flows (e.g. it wildcards\n * all TCP traffic, or all TCP traffic to a particular port), then any FIN or\n * RST in any of those flows will cause the entire OpenFlow rule to expire\n * early, which is not normally desirable.\n */\n '''\n nx_action_fin_timeout = nstruct(\n (uint16, 'fin_idle_timeout'), # /* New idle timeout, if nonzero. */\n (uint16, 'fin_hard_timeout'), # /* New hard timeout, if nonzero. */\n (uint16,),\n base = nx_action,\n name = 'nx_action_fin_timeout',\n criteria = lambda x: getattr(x, action_subtype) == NXAST_FIN_TIMEOUT,\n classifyby = (NXAST_FIN_TIMEOUT,),\n init = packvalue(NXAST_FIN_TIMEOUT, action_subtype)\n )\n namespace['nx_action_fin_timeout'] = nx_action_fin_timeout\n '''\n /* Action structure for NXAST_BUNDLE and NXAST_BUNDLE_LOAD.\n *\n * The bundle actions choose a slave from a supplied list of options.\n * NXAST_BUNDLE outputs to its selection. NXAST_BUNDLE_LOAD writes its\n * selection to a register.\n *\n * The list of possible slaves follows the nx_action_bundle structure. The size\n * of each slave is governed by its type as indicated by the 'slave_type'\n * parameter. The list of slaves should be padded at its end with zeros to make\n * the total length of the action a multiple of 8.\n *\n * Switches infer from the 'slave_type' parameter the size of each slave. 
All\n * implementations must support the NXM_OF_IN_PORT 'slave_type' which indicates\n * that the slaves are OpenFlow port numbers with NXM_LENGTH(NXM_OF_IN_PORT) ==\n * 2 byte width. Switches should reject actions which indicate unknown or\n * unsupported slave types.\n *\n * Switches use a strategy dictated by the 'algorithm' parameter to choose a\n * slave. If the switch does not support the specified 'algorithm' parameter,\n * it should reject the action.\n *\n * Several algorithms take into account liveness when selecting slaves. The\n * liveness of a slave is implementation defined (with one exception), but will\n * generally take into account things like its carrier status and the results\n * of any link monitoring protocols which happen to be running on it. In order\n * to give controllers a place-holder value, the OFPP_NONE port is always\n * considered live.\n *\n * Some slave selection strategies require the use of a hash function, in which\n * case the 'fields' and 'basis' parameters should be populated. The 'fields'\n * parameter (one of NX_HASH_FIELDS_*) designates which parts of the flow to\n * hash. Refer to the definition of \"enum nx_hash_fields\" for details. The\n * 'basis' parameter is used as a universal hash parameter. Different values\n * of 'basis' yield different hash results.\n *\n * The 'zero' parameter at the end of the action structure is reserved for\n * future use. Switches are required to reject actions which have nonzero\n * bytes in the 'zero' field.\n *\n * NXAST_BUNDLE actions should have 'ofs_nbits' and 'dst' zeroed. Switches\n * should reject actions which have nonzero bytes in either of these fields.\n *\n * NXAST_BUNDLE_LOAD stores the OpenFlow port number of the selected slave in\n * dst[ofs:ofs+n_bits]. The format and semantics of 'dst' and 'ofs_nbits' are\n * similar to those for the NXAST_REG_LOAD action. */\n '''\n nx_action_bundle = nstruct( \n # /* Slave choice algorithm to apply to hash value. 
*/\n (nx_bd_algorithm, 'algorithm'), # /* One of NX_BD_ALG_*. */\n \n # /* What fields to hash and how. */\n (nx_hash_fields, 'fields'), # /* One of NX_HASH_FIELDS_*. */\n (uint16, 'basis'), # /* Universal hash parameter. */\n \n (nxm_header, 'slave_type'), # /* NXM_OF_IN_PORT. */\n (uint16, 'n_slaves'), # /* Number of slaves. */\n \n (uint16, 'ofs_nbits'), # /* (ofs << 6) | (n_bits - 1). */\n (nxm_header, 'dst'), # /* Destination. */\n \n (uint8[4],), # /* Reserved. Must be zero. */\n name = 'nx_action_bundle',\n base = nx_action,\n criteria = lambda x: getattr(x, action_subtype) == NXAST_BUNDLE or getattr(x, action_subtype) == NXAST_BUNDLE_LOAD,\n classifyby = (NXAST_BUNDLE, NXAST_BUNDLE_LOAD),\n init = packvalue(NXAST_BUNDLE, action_subtype)\n )\n namespace['nx_action_bundle'] = nx_action_bundle\n \n def _nx_slave_ports_prepack(x):\n x.n_slaves = len(x.bundles)\n _nx_slave_ports = nstruct(\n (nx_port_no[0], 'bundles'),\n name = '_nx_slave_ports',\n size = lambda x: x.n_slaves * 2,\n prepack = _nx_slave_ports_prepack,\n padding = 1\n )\n \n nx_action_bundle_port = nstruct(\n (_nx_slave_ports,),\n base = nx_action_bundle,\n name = 'nx_action_bundle_port',\n criteria = lambda x: x.slave_type == NXM_OF_IN_PORT,\n init = packvalue(NXM_OF_IN_PORT, 'slave_type'),\n lastextra = False,\n formatter = _createdesc(lambda x: 'bundle_load(%s,%d,%s,%s,%s[%d..%d],slaves:%r)' % \\\n (x['fields'], x['basis'], x['algorithm'], x['slave_type'], x['dst'], x['ofs_nbits'] >> 6, (x['ofs_nbits'] >> 6) + (x['ofs_nbits'] & 0x3f), x['bundles']) \\\n if x[action_subtype] == 'NXAST_BUNDLE_LOAD' else 'bundle(%s,%d,%s,%s,slaves:%r)' % (x['fields'], x['basis'], x['algorithm'], x['slave_type'], x['bundles']))\n )\n namespace['nx_action_bundle_port'] = nx_action_bundle_port\n \n def _nx_slave_others_prepack(x):\n x.n_slaves = len(x.bundlesraw) // NXM_LENGTH(x.slave_type)\n \n _nx_slave_others = nstruct(\n (raw, 'bundlesraw'),\n name = '_nx_slave_others',\n size = lambda x: x.n_slaves * 
NXM_LENGTH(x.slave_type),\n prepack = _nx_slave_others_prepack,\n padding = 1\n )\n \n nx_action_bundle_others = nstruct(\n (_nx_slave_others,),\n base = nx_action_bundle,\n name = 'nx_action_bundle_others',\n criteria = lambda x: x.slave_type != NXM_OF_IN_PORT,\n lastextra = False,\n init = packvalue(NXM_OF_ETH_DST, 'slave_type'),\n formatter = _createdesc(lambda x: 'bundle_load(%s,%d,%s,%s,%s[%d..%d],slaves:%r)' % \\\n (x['fields'], x['basis'], x['algorithm'], x['slave_type'], x['dst'], x['ofs_nbits'] >> 6, (x['ofs_nbits'] >> 6) + (x['ofs_nbits'] & 0x3f), x['bundlesraw']) \\\n if x[action_subtype] == 'NXAST_BUNDLE_LOAD' else 'bundle(%s,%d,%s,%s,slaves:%r)' % (x['fields'], x['basis'], x['algorithm'], x['slave_type'], x['bundlesraw']))\n )\n namespace['nx_action_bundle_others'] = nx_action_bundle_others\n '''\n /* Action structure for NXAST_DEC_TTL_CNT_IDS.\n *\n * If the packet is not IPv4 or IPv6, does nothing. For IPv4 or IPv6, if the\n * TTL or hop limit is at least 2, decrements it by 1. Otherwise, if TTL or\n * hop limit is 0 or 1, sends a packet-in to the controllers with each of the\n * 'n_controllers' controller IDs specified in 'cnt_ids'.\n *\n * (This differs from NXAST_DEC_TTL in that for NXAST_DEC_TTL the packet-in is\n * sent only to controllers with id 0.)\n */\n '''\n \n def _nx_action_cnt_ids_ids_prepack(x):\n x.n_controllers = len(x.cnt_ids)\n _nx_action_cnt_ids_ids = nstruct(\n (uint16[0], 'cnt_ids'),\n name = '_nx_action_cnt_ids_ids',\n size = lambda x: 2 * x.n_controllers,\n prepack = _nx_action_cnt_ids_ids_prepack\n )\n \n nx_action_cnt_ids = nstruct(\n (uint16, 'n_controllers'), # /* Number of controllers. */\n (uint8[4],), # /* Must be zero. 
*/\n (_nx_action_cnt_ids_ids,),\n base = nx_action,\n classifyby = (NXAST_DEC_TTL_CNT_IDS,),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_DEC_TTL_CNT_IDS,\n init = packvalue(NXAST_DEC_TTL_CNT_IDS, action_subtype),\n lastextra = False,\n name = 'nx_action_cnt_ids'\n )\n namespace['nx_action_cnt_ids'] = nx_action_cnt_ids\n \n '''\n /* Action structure for NXAST_OUTPUT_REG.\n *\n * Outputs to the OpenFlow port number written to src[ofs:ofs+nbits].\n *\n * The format and semantics of 'src' and 'ofs_nbits' are similar to those for\n * the NXAST_REG_LOAD action.\n *\n * The acceptable nxm_header values for 'src' are the same as the acceptable\n * nxm_header values for the 'src' field of NXAST_REG_MOVE.\n *\n * The 'max_len' field indicates the number of bytes to send when the chosen\n * port is OFPP_CONTROLLER. Its semantics are equivalent to the 'max_len'\n * field of OFPAT_OUTPUT.\n *\n * The 'zero' field is required to be zeroed for forward compatibility. */\n '''\n nx_action_output_reg = nstruct(\n (uint16, 'ofs_nbits'), # /* (ofs << 6) | (n_bits - 1). */\n (nxm_header, 'src'), # /* Source. */\n (uint16, 'max_len'), # /* Max length to send to controller. */\n (uint8[6],), # /* Reserved, must be zero. */\n base = nx_action,\n classifyby = (NXAST_OUTPUT_REG,),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_OUTPUT_REG,\n init = packvalue(NXAST_OUTPUT_REG, action_subtype),\n name = 'nx_action_output_reg',\n formatter = _createdesc(lambda x: 'output:%s[%d..%d]' % (x['src'], x['ofs_nbits'] >> 6, (x['ofs_nbits'] >> 6) + (x['ofs_nbits'] & 0x3f)))\n )\n namespace['nx_action_output_reg'] = nx_action_output_reg\n \n '''\n /* NXAST_EXIT\n *\n * Discontinues action processing.\n *\n * The NXAST_EXIT action causes the switch to immediately halt processing\n * actions for the flow. Any actions which have already been processed are\n * executed by the switch. 
However, any further actions, including those which\n * may be in different tables, or different levels of the NXAST_RESUBMIT\n * hierarchy, will be ignored.\n *\n * Uses the nx_action_header structure. */\n \n /* ## --------------------- ## */\n /* ## Requests and replies. ## */\n /* ## --------------------- ## */\n '''\n \n '''\n /* NXT_SET_FLOW_FORMAT request. */\n '''\n nx_set_flow_format = nstruct(\n (nx_flow_format, 'format'), # /* One of NXFF_*. */\n name = 'nx_set_flow_format',\n base = nicira_header,\n criteria = lambda x: getattr(x, msg_subtype) == NXT_SET_FLOW_FORMAT,\n classifyby = (NXT_SET_FLOW_FORMAT,),\n init = packvalue(NXT_SET_FLOW_FORMAT, msg_subtype)\n )\n namespace['nx_set_flow_format'] = nx_set_flow_format\n \n '''\n /* NXT_FLOW_MOD (analogous to OFPT_FLOW_MOD).\n *\n * It is possible to limit flow deletions and modifications to certain\n * cookies by using the NXM_NX_COOKIE(_W) matches. The \"cookie\" field\n * is used only to add or modify flow cookies.\n */\n '''\n ofp_flow_mod_command = namespace['ofp_flow_mod_command']\n nx_flow_mod = nstruct(\n (uint64, 'cookie'), # /* Opaque controller-issued identifier. */\n (ofp_flow_mod_command, 'command'), # /* OFPFC_* + possibly a table ID (see comment\n # * on struct nx_flow_mod_table_id). */\n (uint16, 'idle_timeout'), # /* Idle time before discarding (seconds). */\n (uint16, 'hard_timeout'), # /* Max time before discarding (seconds). */\n (uint16, 'priority'), # /* Priority level of flow entry. */\n (uint32, 'buffer_id'), # /* Buffered packet to apply to (or -1).\n # Not meaningful for OFPFC_DELETE*. */\n (nx_port_no, 'out_port'), # /* For OFPFC_DELETE* commands, require\n # matching entries to include this as an\n # output port. A value of OFPP_NONE\n # indicates no restriction. */\n (ofp_flow_mod_flags, 'flags'), # /* One of OFPFF_*. */\n (uint16, 'match_len'), # /* Size of nx_match. */\n (uint8[6],), # /* Align to 64-bits. 
*/\n (nx_matches,),\n base = nicira_header,\n criteria = lambda x: getattr(x, msg_subtype) == NXT_FLOW_MOD,\n classifyby = (NXT_FLOW_MOD,),\n init = packvalue(NXT_FLOW_MOD, msg_subtype),\n name = 'nx_flow_mod'\n )\n namespace['nx_flow_mod'] = nx_flow_mod\n \n '''\n /* NXT_FLOW_REMOVED (analogous to OFPT_FLOW_REMOVED).\n *\n * 'table_id' is present only in Open vSwitch 1.11 and later. In earlier\n * versions of Open vSwitch, this is a padding byte that is always zeroed.\n * Therefore, a 'table_id' value of 0 indicates that the table ID is not known,\n * and other values may be interpreted as one more than the flow's former table\n * ID. */\n '''\n nx_flow_removed = nstruct(\n (uint64, 'cookie'), # /* Opaque controller-issued identifier. */\n (uint16, 'priority'), # /* Priority level of flow entry. */\n (ofp_flow_removed_reason, 'reason'), # /* One of OFPRR_*. */\n (uint8, 'table_id'), # /* Flow's former table ID, plus one. */\n (uint32, 'duration_sec'), # /* Time flow was alive in seconds. */\n (uint32, 'duration_nsec'), # /* Time flow was alive in nanoseconds beyond\n # duration_sec. */\n (uint16, 'idle_timeout'), # /* Idle timeout from original flow mod. */\n (uint16, 'match_len'), # /* Size of nx_match. */\n (uint64, 'packet_count'),\n (uint64, 'byte_count'),\n (nx_matches,),\n base = nicira_header,\n criteria = lambda x: getattr(x, msg_subtype) == NXT_FLOW_REMOVED,\n classifyby = (NXT_FLOW_REMOVED,),\n init = packvalue(NXT_FLOW_REMOVED, msg_subtype),\n name = 'nx_flow_removed' \n )\n namespace['nx_flow_removed'] = nx_flow_removed\n \n '''\n /* Nicira vendor stats request of type NXST_FLOW (analogous to OFPST_FLOW\n * request).\n *\n * It is possible to limit matches to certain cookies by using the\n * NXM_NX_COOKIE and NXM_NX_COOKIE_W matches.\n */\n '''\n nx_flow_stats_request = nstruct(\n (nx_port_no, 'out_port'), #/* Require matching entries to include this\n # as an output port. A value of OFPP_NONE\n # indicates no restriction. 
*/\n (uint16, 'match_len'), # /* Length of nx_match. */\n (uint8, 'table_id'), # /* ID of table to read (from ofp_table_stats)\n # or 0xff for all tables. */\n (uint8[3],), # /* Align to 64 bits. */\n (nx_matches,),\n base = nx_stats_request,\n criteria = lambda x: getattr(x, stats_subtype) == NXST_FLOW,\n classifyby = (NXST_FLOW,),\n init = packvalue(NXST_FLOW, stats_subtype),\n name = 'nx_flow_stats_request'\n )\n namespace['nx_flow_stats_request'] = nx_flow_stats_request\n '''\n /* Body for Nicira vendor stats reply of type NXST_FLOW (analogous to\n * OFPST_FLOW reply).\n *\n * The values of 'idle_age' and 'hard_age' are only meaningful when talking to\n * a switch that implements the NXT_FLOW_AGE extension. Zero means that the\n * true value is unknown, perhaps because hardware does not track the value.\n * (Zero is also the value that one should ordinarily expect to see talking to\n * a switch that does not implement NXT_FLOW_AGE, since those switches zero the\n * padding bytes that these fields replaced.) A nonzero value X represents X-1\n * seconds. A value of 65535 represents 65534 or more seconds.\n *\n * 'idle_age' is the number of seconds that the flow has been idle, that is,\n * the number of seconds since a packet passed through the flow. 'hard_age' is\n * the number of seconds since the flow was last modified (e.g. OFPFC_MODIFY or\n * OFPFC_MODIFY_STRICT). (The 'duration_*' fields are the elapsed time since\n * the flow was added, regardless of subsequent modifications.)\n *\n * For a flow with an idle or hard timeout, 'idle_age' or 'hard_age',\n * respectively, will ordinarily be smaller than the timeout, but flow\n * expiration times are only approximate and so one must be prepared to\n * tolerate expirations that occur somewhat early or late.\n */\n '''\n ofp_action = namespace['ofp_action']\n \n nx_flow_stats = nstruct(\n (uint16, 'length'), # /* Length of this entry. */\n (uint8, 'table_id'), # /* ID of table flow came from. 
*/\n (uint8,),\n (uint32, 'duration_sec'), # /* Time flow has been alive in seconds. */\n (uint32, 'duration_nsec'), # /* Time flow has been alive in nanoseconds\n # beyond duration_sec. */\n (uint16, 'priority'), # /* Priority of the entry. */\n (uint16, 'idle_timeout'), # /* Number of seconds idle before expiration. */\n (uint16, 'hard_timeout'), # /* Number of seconds before expiration. */\n (uint16, 'match_len'), # /* Length of nx_match. */\n (uint16, 'idle_age'), # /* Seconds since last packet, plus one. */\n (uint16, 'hard_age'), # /* Seconds since last modification, plus one. */\n (uint64, 'cookie'), # /* Opaque controller-issued identifier. */\n (uint64, 'packet_count'), # /* Number of packets, UINT64_MAX if unknown. */\n (uint64, 'byte_count'), # /* Number of bytes, UINT64_MAX if unknown. */\n #=======================================================================\n # /* Followed by:\n # * - Exactly match_len (possibly 0) bytes containing the nx_match, then\n # * - Exactly (match_len + 7)/8*8 - match_len (between 0 and 7) bytes of\n # * all-zero bytes, then\n # * - Actions to fill out the remainder 'length' bytes (always a multiple\n # * of 8).\n # */\n #=======================================================================\n (nx_matches,),\n (ofp_action[0], 'actions'),\n name = 'nx_flow_stats',\n size = sizefromlen(65536, 'length'),\n prepack = packsize('length') \n )\n namespace['nx_flow_stats'] = nx_flow_stats\n \n nx_flow_stats_reply = nstruct(\n (nx_flow_stats[0], 'stats'),\n base = nx_stats_reply,\n classifyby = (NXST_FLOW,),\n criteria = lambda x: getattr(x, stats_subtype) == NXST_FLOW,\n init = packvalue(NXST_FLOW, stats_subtype),\n name = 'nx_flow_stats_reply'\n )\n namespace['nx_flow_stats_reply'] = nx_flow_stats_reply\n \n '''\n /* Nicira vendor stats request of type NXST_AGGREGATE (analogous to\n * OFPST_AGGREGATE request).\n *\n * The reply format is identical to the reply format for OFPST_AGGREGATE,\n * except for the header. 
*/\n '''\n nx_aggregate_stats_request = nstruct(\n (nx_port_no, 'out_port'), # /* Require matching entries to include this\n # as an output port. A value of OFPP_NONE\n # indicates no restriction. */\n (uint16, 'match_len'), # /* Length of nx_match. */\n (uint8, 'table_id'), # /* ID of table to read (from ofp_table_stats)\n # or 0xff for all tables. */\n (uint8[3],), # /* Align to 64 bits. */\n #=======================================================================\n # /* Followed by:\n # * - Exactly match_len (possibly 0) bytes containing the nx_match, then\n # * - Exactly (match_len + 7)/8*8 - match_len (between 0 and 7) bytes of\n # * all-zero bytes, which must also exactly fill out the length of the\n # * message.\n # */\n #=======================================================================\n (nx_matches,),\n base = nx_stats_request,\n name = 'nx_aggregate_stats_request',\n criteria = lambda x: getattr(x, stats_subtype) == NXST_AGGREGATE,\n classifyby = (NXST_AGGREGATE,),\n init = packvalue(NXST_AGGREGATE, stats_subtype),\n lastextra = False\n )\n namespace['nx_aggregate_stats_request'] = nx_aggregate_stats_request\n \n nx_aggregate_stats_reply = nstruct(\n (uint64, 'packet_count'), # /* Number of packets in flows. */\n (uint64, 'byte_count'), # /* Number of bytes in flows. */\n (uint32, 'flow_count'), # /* Number of flows. */\n (uint8[4],),\n base = nx_stats_reply,\n name = 'nx_aggregate_stats_reply',\n criteria = lambda x: getattr(x, stats_subtype) == NXST_AGGREGATE,\n classifyby = (NXST_AGGREGATE,),\n init = packvalue(NXST_AGGREGATE, stats_subtype)\n )\n namespace['nx_aggregate_stats_reply'] = nx_aggregate_stats_reply\n \n '''\n /* NXT_SET_CONTROLLER_ID.\n *\n * Each OpenFlow controller connection has a 16-bit identifier that is\n * initially 0. This message changes the connection's ID to 'id'.\n *\n * Controller connection IDs need not be unique.\n *\n * The NXAST_CONTROLLER action is the only current user of controller\n * connection IDs. 
*/\n '''\n nx_controller_id = nstruct(\n (uint8[6],), # /* Must be zero. */\n (uint16, 'controller_id'), # /* New controller connection ID. */\n base = nicira_header,\n name = 'nx_controller_id',\n criteria = lambda x: getattr(x, msg_subtype) == NXT_SET_CONTROLLER_ID,\n init = packvalue(NXT_SET_CONTROLLER_ID, msg_subtype),\n classifyby = (NXT_SET_CONTROLLER_ID,)\n )\n namespace['nx_controller_id'] = nx_controller_id\n \n '''\n /* Action structure for NXAST_CONTROLLER.\n *\n * This generalizes using OFPAT_OUTPUT to send a packet to OFPP_CONTROLLER. In\n * addition to the 'max_len' that OFPAT_OUTPUT supports, it also allows\n * specifying:\n *\n * - 'reason': The reason code to use in the ofp_packet_in or nx_packet_in.\n *\n * - 'controller_id': The ID of the controller connection to which the\n * ofp_packet_in should be sent. The ofp_packet_in or nx_packet_in is\n * sent only to controllers that have the specified controller connection\n * ID. See \"struct nx_controller_id\" for more information. */\n '''\n nx_action_controller = nstruct(\n (uint16, 'max_len'), # /* Maximum length to send to controller. */\n (uint16, 'controller_id'), # /* Controller ID to send packet-in. */\n (ofp_packet_in_reason, 'reason'), # /* enum ofp_packet_in_reason (OFPR_*). */\n (uint8,),\n base = nx_action,\n name = 'nx_action_controller',\n criteria = lambda x: getattr(x, action_subtype) == NXAST_CONTROLLER,\n classifyby = (NXAST_CONTROLLER,),\n init = packvalue(NXAST_CONTROLLER, action_subtype)\n )\n namespace['nx_action_controller'] = nx_action_controller\n '''\n /* Flow Table Monitoring\n * =====================\n *\n * NXST_FLOW_MONITOR allows a controller to keep track of changes to OpenFlow\n * flow table(s) or subsets of them, with the following workflow:\n *\n * 1. The controller sends an NXST_FLOW_MONITOR request to begin monitoring\n * flows. 
The 'id' in the request must be unique among all monitors that\n * the controller has started and not yet canceled on this OpenFlow\n * connection.\n *\n * 2. The switch responds with an NXST_FLOW_MONITOR reply. If the request's\n * 'flags' included NXFMF_INITIAL, the reply includes all the flows that\n * matched the request at the time of the request (with event NXFME_ADDED).\n * If 'flags' did not include NXFMF_INITIAL, the reply is empty.\n *\n * The reply uses the xid of the request (as do all replies to OpenFlow\n * requests).\n *\n * 3. Whenever a change to a flow table entry matches some outstanding monitor\n * request's criteria and flags, the switch sends a notification to the\n * controller as an additional NXST_FLOW_MONITOR reply with xid 0.\n *\n * When multiple outstanding monitors match a single change, only a single\n * notification is sent. This merged notification includes the information\n * requested in any of the individual monitors. That is, if any of the\n * matching monitors requests actions (NXFMF_ACTIONS), the notification\n * includes actions, and if any of the monitors request full changes for the\n * controller's own changes (NXFMF_OWN), the controller's own changes will\n * be included in full.\n *\n * 4. The controller may cancel a monitor with NXT_FLOW_MONITOR_CANCEL. No\n * further notifications will be sent on the basis of the canceled monitor\n * afterward.\n *\n *\n * Buffer Management\n * =================\n *\n * OpenFlow messages for flow monitor notifications can overflow the buffer\n * space available to the switch, either temporarily (e.g. due to network\n * conditions slowing OpenFlow traffic) or more permanently (e.g. the sustained\n * rate of flow table change exceeds the network bandwidth between switch and\n * controller).\n *\n * When Open vSwitch's notification buffer space reaches a limiting threshold,\n * OVS reacts as follows:\n *\n * 1. 
OVS sends an NXT_FLOW_MONITOR_PAUSED message to the controller, following\n * all the already queued notifications. After it receives this message,\n * the controller knows that its view of the flow table, as represented by\n * flow monitor notifications, is incomplete.\n *\n * 2. As long as the notification buffer is not empty:\n *\n * - NXFME_ADDED and NXFME_MODIFIED notifications will not be sent.\n *\n * - NXFME_DELETED notifications will still be sent, but only for flows\n * that existed before OVS sent NXT_FLOW_MONITOR_PAUSED.\n *\n * - NXFME_ABBREV notifications will not be sent. They are treated as\n * the expanded version (and therefore only the NXFME_DELETED\n * components, if any, are sent).\n *\n * 3. When the notification buffer empties, OVS sends NXFME_ADDED notifications\n * for flows added since the buffer reached its limit and NXFME_MODIFIED\n * notifications for flows that existed before the limit was reached and\n * changed after the limit was reached.\n *\n * 4. OVS sends an NXT_FLOW_MONITOR_RESUMED message to the controller. After\n * it receives this message, the controller knows that its view of the flow\n * table, as represented by flow monitor notifications, is again complete.\n *\n * This allows the maximum buffer space requirement for notifications to be\n * bounded by the limit plus the maximum number of supported flows.\n *\n *\n * \"Flow Removed\" messages\n * =======================\n *\n * The flow monitor mechanism is independent of OFPT_FLOW_REMOVED and\n * NXT_FLOW_REMOVED. Flow monitor updates for deletion are sent if\n * NXFMF_DELETE is set on a monitor, regardless of whether the\n * OFPFF_SEND_FLOW_REM flag was set when the flow was added. */\n \n /* NXST_FLOW_MONITOR request.\n *\n * The NXST_FLOW_MONITOR request's body consists of an array of zero or more\n * instances of this structure. 
The request arranges to monitor the flows\n * that match the specified criteria, which are interpreted in the same way as\n * for NXST_FLOW.\n *\n * 'id' identifies a particular monitor for the purpose of allowing it to be\n * canceled later with NXT_FLOW_MONITOR_CANCEL. 'id' must be unique among\n * existing monitors that have not already been canceled.\n *\n * The reply includes the initial flow matches for monitors that have the\n * NXFMF_INITIAL flag set. No single flow will be included in the reply more\n * than once, even if more than one requested monitor matches that flow. The\n * reply will be empty if none of the monitors has NXFMF_INITIAL set or if none\n * of the monitors initially matches any flows.\n *\n * For NXFMF_ADD, an event will be reported if 'out_port' matches against the\n * actions of the flow being added or, for a flow that is replacing an existing\n * flow, if 'out_port' matches against the actions of the flow being replaced.\n * For NXFMF_DELETE, 'out_port' matches against the actions of a flow being\n * deleted. For NXFMF_MODIFY, an event will be reported if 'out_port' matches\n * either the old or the new actions. */\n '''\n ofp_table = namespace['ofp_table']\n nx_flow_monitor_request = nstruct(\n (uint32, 'id'), # /* Controller-assigned ID for this monitor. */\n (nx_flow_monitor_flags, 'flags'), # /* NXFMF_*. */\n (nx_port_no, 'out_port'), # /* Required output port, if not OFPP_NONE. */\n (uint16, 'match_len'), # /* Length of nx_match. */\n (ofp_table, 'table_id'), # /* One table's ID or 0xff for all tables. */\n (uint8[5],), # /* Align to 64 bits (must be zero). 
*/\n (nx_matches,),\n name = 'nx_flow_monitor_request',\n base = nx_stats_request,\n criteria = lambda x: getattr(x, stats_subtype) == NXST_FLOW_MONITOR,\n init = packvalue(NXST_FLOW_MONITOR, stats_subtype),\n classifyby = (NXST_FLOW_MONITOR,)\n )\n namespace['nx_flow_monitor_request'] = nx_flow_monitor_request\n '''\n /* NXST_FLOW_MONITOR reply header.\n *\n * The body of an NXST_FLOW_MONITOR reply is an array of variable-length\n * structures, each of which begins with this header. The 'length' member may\n * be used to traverse the array, and the 'event' member may be used to\n * determine the particular structure.\n *\n * Every instance is a multiple of 8 bytes long. */\n '''\n nx_flow_update = nstruct(\n (uint16, 'length'), #/* Length of this entry. */\n (nx_flow_update_event, 'event'), # /* One of NXFME_*. */\n name = 'nx_flow_update',\n size = sizefromlen(65536, 'length'),\n prepack = packsize('length')\n )\n namespace['nx_flow_update'] = nx_flow_update\n \n '''\n /* NXST_FLOW_MONITOR reply for NXFME_ADDED, NXFME_DELETED, and\n * NXFME_MODIFIED. */\n '''\n nx_flow_update_full = nstruct(\n (ofp_flow_removed_reason, 'reason'), # /* OFPRR_* for NXFME_DELETED, else zero. */\n (uint16, 'priority'), # /* Priority of the entry. */\n (uint16, 'idle_timeout'), # /* Number of seconds idle before expiration. */\n (uint16, 'hard_timeout'), # /* Number of seconds before expiration. */\n (uint16, 'match_len'), # /* Length of nx_match. */\n (uint8, 'table_id'), # /* ID of flow's table. */\n (uint8,), # /* Reserved, currently zeroed. */\n (uint64, 'cookie'), # /* Opaque controller-issued identifier. */\n #=======================================================================\n # /* Followed by:\n # * - Exactly match_len (possibly 0) bytes containing the nx_match, then\n # * - Exactly (match_len + 7)/8*8 - match_len (between 0 and 7) bytes of\n # * all-zero bytes, then\n # * - Actions to fill out the remainder 'length' bytes (always a multiple\n # * of 8). 
If NXFMF_ACTIONS was not specified, or 'event' is\n # * NXFME_DELETED, no actions are included.\n # */\n #=======================================================================\n (nx_matches,),\n (ofp_action[0], 'actions'),\n name = 'nx_flow_update_full',\n base = nx_flow_update,\n criteria = lambda x: x.event in (NXFME_ADDED, NXFME_DELETED, NXFME_MODIFIED),\n init = packvalue(NXFME_ADDED, 'event')\n )\n namespace['nx_flow_update_full'] = nx_flow_update_full\n '''\n /* NXST_FLOW_MONITOR reply for NXFME_ABBREV.\n *\n * When the controller does not specify NXFMF_OWN in a monitor request, any\n * flow tables changes due to the controller's own requests (on the same\n * OpenFlow channel) will be abbreviated, when possible, to this form, which\n * simply specifies the 'xid' of the OpenFlow request (e.g. an OFPT_FLOW_MOD or\n * NXT_FLOW_MOD) that caused the change.\n *\n * Some changes cannot be abbreviated and will be sent in full:\n *\n * - Changes that only partially succeed. This can happen if, for example,\n * a flow_mod with type OFPFC_MODIFY affects multiple flows, but only some\n * of those modifications succeed (e.g. due to hardware limitations).\n *\n * This cannot occur with the current implementation of the Open vSwitch\n * software datapath. It could happen with other datapath implementations.\n *\n * - Changes that race with conflicting changes made by other controllers or\n * other flow_mods (not separated by barriers) by the same controller.\n *\n * This cannot occur with the current Open vSwitch implementation\n * (regardless of datapath) because Open vSwitch internally serializes\n * potentially conflicting changes.\n *\n * A flow_mod that does not change the flow table will not trigger any\n * notification, even an abbreviated one. 
For example, a \"modify\" or \"delete\"\n * flow_mod that does not match any flows will not trigger a notification.\n * Whether an \"add\" or \"modify\" that specifies all the same parameters that a\n * flow already has triggers a notification is unspecified and subject to\n * change in future versions of Open vSwitch.\n *\n * OVS will always send the notifications for a given flow table change before\n * the reply to a OFPT_BARRIER_REQUEST request that follows the flow table\n * change. Thus, if the controller does not receive an abbreviated (or\n * unabbreviated) notification for a flow_mod before the next\n * OFPT_BARRIER_REPLY, it will never receive one. */\n '''\n nx_flow_update_abbrev = nstruct(\n (uint32, 'xid'), # /* Controller-specified xid from flow_mod. */\n name = 'nx_flow_update_abbrev',\n base = nx_flow_update,\n criteria = lambda x: x.event == NXFME_ABBREV,\n init = packvalue(NXFME_ABBREV, 'event')\n )\n namespace['nx_flow_update_abbrev'] = nx_flow_update_abbrev\n \n nx_flow_monitor_reply = nstruct(\n (nx_flow_update[0], 'stats'),\n base = nx_stats_reply,\n classifyby = (NXST_FLOW_MONITOR,),\n name = 'nx_flow_monitor_reply',\n criteria = lambda x: getattr(x, stats_subtype) == NXST_FLOW_MONITOR,\n init = packvalue(NXST_FLOW_MONITOR, stats_subtype)\n )\n namespace['nx_flow_monitor_reply'] = nx_flow_monitor_reply\n '''\n /* NXT_FLOW_MONITOR_CANCEL.\n *\n * Used by a controller to cancel an outstanding monitor. */\n '''\n nx_flow_monitor_cancel = nstruct(\n (uint32, 'id'), # /* 'id' from nx_flow_monitor_request. */\n name = 'nx_flow_monitor_cancel',\n base = nicira_header,\n classifyby = (NXT_FLOW_MONITOR_CANCEL,),\n criteria = lambda x: getattr(x, msg_subtype) == NXT_FLOW_MONITOR_CANCEL,\n init = packvalue(NXT_FLOW_MONITOR_CANCEL, msg_subtype)\n )\n namespace['nx_flow_monitor_cancel'] = nx_flow_monitor_cancel\n \n '''\n /* Action structure for NXAST_WRITE_METADATA.\n *\n * Modifies the 'mask' bits of the metadata value. 
*/\n '''\n nx_action_write_metadata = nstruct(\n (uint8[6],), # /* Must be zero. */\n (uint64, 'metadata'), # /* Metadata register. */\n (uint64, 'mask'), # /* Metadata mask. */\n base = nx_action,\n classifyby = (NXAST_WRITE_METADATA,),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_WRITE_METADATA,\n init = packvalue(NXAST_WRITE_METADATA, action_subtype),\n name = 'nx_action_write_metadata'\n )\n namespace['nx_action_write_metadata'] = nx_action_write_metadata\n \n '''\n /* Action structure for NXAST_PUSH_MPLS. */\n '''\n nx_action_push_mpls = nstruct(\n (ethertype, 'ethertype'), # /* Ethertype */\n (uint8[4],),\n base = nx_action,\n classifyby = (NXAST_PUSH_MPLS,),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_PUSH_MPLS,\n init = packvalue(NXAST_PUSH_MPLS, action_subtype),\n name = 'nx_action_push_mpls'\n )\n namespace['nx_action_push_mpls'] = nx_action_push_mpls\n \n '''\n /* Action structure for NXAST_POP_MPLS. */\n '''\n nx_action_pop_mpls = nstruct(\n (ethertype, 'ethertype'), # /* Ethertype */\n (uint8[4],),\n base = nx_action,\n classifyby = (NXAST_POP_MPLS,),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_POP_MPLS,\n init = packvalue(NXAST_POP_MPLS, action_subtype),\n name = 'nx_action_pop_mpls'\n )\n namespace['nx_action_pop_mpls'] = nx_action_pop_mpls\n \n '''\n /* Action structure for NXAST_SET_MPLS_LABEL. */\n '''\n nx_action_mpls_label = nstruct(\n (uint8[2],), # /* Must be zero. */\n (uint32, 'label'), # /* LABEL */\n base = nx_action,\n classifyby = (NXAST_SET_MPLS_LABEL,),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_SET_MPLS_LABEL,\n init = packvalue(NXAST_SET_MPLS_LABEL, action_subtype),\n name = 'nx_action_mpls_label'\n )\n namespace['nx_action_mpls_label'] = nx_action_mpls_label\n \n '''\n /* Action structure for NXAST_SET_MPLS_TC. 
*/\n '''\n nx_action_mpls_tc = nstruct(\n (uint8, 'tc'), # /* TC */\n (uint8[5],),\n base = nx_action,\n classifyby = (NXAST_SET_MPLS_TC,),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_SET_MPLS_TC,\n init = packvalue(NXAST_SET_MPLS_TC, action_subtype),\n name = 'nx_action_mpls_tc'\n )\n namespace['nx_action_mpls_tc'] = nx_action_mpls_tc\n \n '''\n /* Action structure for NXAST_SET_MPLS_TTL. */\n '''\n nx_action_mpls_ttl = nstruct(\n (uint8, 'ttl'), # /* TTL */\n (uint8[5],),\n base = nx_action,\n classifyby = (NXAST_SET_MPLS_TTL,),\n criteria = lambda x: getattr(x, action_subtype) == NXAST_SET_MPLS_TTL,\n init = packvalue(NXAST_SET_MPLS_TTL, action_subtype),\n name = 'nx_action_mpls_ttl'\n )\n namespace['nx_action_mpls_ttl'] = nx_action_mpls_ttl\n \n '''\n /* Action structure for NXAST_SAMPLE.\n *\n * Samples matching packets with the given probability and sends them\n * each to the set of collectors identified with the given ID. The\n * probability is expressed as a number of packets to be sampled out\n * of USHRT_MAX packets, and must be >0.\n *\n * When sending packet samples to IPFIX collectors, the IPFIX flow\n * record sent for each sampled packet is associated with the given\n * observation domain ID and observation point ID. Each IPFIX flow\n * record contain the sampled packet's headers when executing this\n * rule. If a sampled packet's headers are modified by previous\n * actions in the flow, those modified headers are sent. */\n '''\n nx_action_sample = nstruct(\n (uint16, 'probability'), # /* Fraction of packets to sample. */\n (uint32, 'collector_set_id'), # /* ID of collector set in OVSDB. */\n (uint32, 'obs_domain_id'), # /* ID of sampling observation domain. */\n (uint32, 'obs_point_id'), # /* ID of sampling observation point. 
*/\n        base = nx_action,\n        classifyby = (NXAST_SAMPLE,),\n        criteria = lambda x: getattr(x, action_subtype) == NXAST_SAMPLE,\n        init = packvalue(NXAST_SAMPLE, action_subtype),\n        name = 'nx_action_sample'\n    )\n    namespace['nx_action_sample'] = nx_action_sample", "response": "Define the Nicira (Open vSwitch) extension structures for flow monitoring (NXST_FLOW_MONITOR request and reply, flow update entries, NXT_FLOW_MONITOR_CANCEL) and the NXAST write-metadata, MPLS and sample actions, and register them in the namespace."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef find_longest_match(self, alo, ahi, blo, bhi):\n    besti, bestj, bestsize = _cdifflib.find_longest_match(self, alo, ahi, blo, bhi)\n    return _Match(besti, bestj, bestsize)", "response": "Find the longest matching block in a[alo:ahi] and b[blo:bhi], delegating to the C implementation in _cdifflib and wrapping the result in a _Match tuple."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_seq1(self, a):\n    if a is self.a:\n        return\n    self.a = a\n    if not isinstance(self.a, list):\n        self.a = list(self.a)\n    # Types must be hashable to work in the c layer. This will raise if\n    # list items are *not* hashable.\n    [hash(x) for x in self.a]", "response": "Same as SequenceMatcher.
set_seq1, but converts non-list inputs to lists and verifies that list items are hashable (required by the C layer)."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _tostream(parser, obj, stream, skipprepack = False):\n    if hasattr(parser, 'tostream'):\n        return parser.tostream(obj, stream, skipprepack)\n    else:\n        data = parser.tobytes(obj, skipprepack)\n        cls = type(parser)\n        if cls not in _deprecated_parsers:\n            _deprecated_parsers.add(cls)\n            warnings.warn(\"Parser %r does not have 'tostream' interfaces\" % (cls,), UserWarning)\n        return stream.write(data)", "response": "Internal helper to write a parsed object to a file-like stream, using the parser's tostream interface when available and falling back to tobytes() plus stream.write() with a one-time warning for parsers lacking tostream."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts bytes in a dump value to str allowing json encode", "response": "def _to_str(dumped_val, encoding='utf-8', ordered=True):\n    \"\"\"\n    Convert bytes in a dump value to str, allowing json encode\n    \"\"\"\n    _dict = OrderedDict if ordered else dict\n    if isinstance(dumped_val, dict):\n        return OrderedDict((k, _to_str(v, encoding)) for k,v in dumped_val.items())\n    elif isinstance(dumped_val, (list, tuple)):\n        return [_to_str(v, encoding) for v in dumped_val]\n    elif isinstance(dumped_val, bytes):\n        try:\n            d = dumped_val.decode('utf-8')\n        except Exception:\n            d = repr(dumped_val)\n        return d\n    else:\n        return dumped_val"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndump a value into a JSON-friendly format.", "response": "def dump(val, humanread = True, dumpextra = False, typeinfo = DUMPTYPE_FLAT, ordered=True,\n         tostr=False, encoding='utf-8'):\n    '''\n    Convert a parsed NamedStruct (probably with additional NamedStruct as fields) into a\n    JSON-friendly format, with only Python primitives (dictionaries, lists, bytes, integers etc.)\n    Then you may use json.dumps, or pprint to further process the result.\n    \n    :param val: parsed result, may contain NamedStruct\n    \n    :param humanread: if True
(default), convert raw data into readable format with type-defined formatters.\n For example, enumerators are converted into names, IP addresses are converted into dotted formats, etc.\n \n :param dumpextra: if True, dump \"extra\" data in '_extra' field. False (default) to ignore them.\n \n :param typeinfo: Add struct type information in the dump result. May be the following values:\n \n DUMPTYPE_FLAT ('flat')\n add a field '_type' for the type information (default)\n \n DUMPTYPE_KEY ('key')\n convert the value to dictionary like: {'': value}\n \n DUMPTYPE_NONE ('none')\n do not add type information\n \n :param tostr: if True, convert all bytes to str\n \n :param encoding: if tostr=`True`, first try to decode bytes in `encoding`. If failed, use `repr()` instead.\n \n :returns: \"dump\" format of val, suitable for JSON-encode or print.\n '''\n dumped = _dump(val, humanread, dumpextra, typeinfo, ordered)\n if tostr:\n dumped = _to_str(dumped, encoding, ordered)\n return dumped"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a function which gets size from specified fields with limits.", "response": "def sizefromlen(limit, *properties):\n '''\n Factory to generate a function which get size from specified field with limits.\n Often used in nstruct \"size\" parameter.\n \n To retrieve size without limit, simply use lambda expression: lambda x: x.header.length\n \n :param limit: the maximum size limit, if the acquired value if larger then the limit, BadLenError is raised\n to protect against serious result like memory overflow or dead loop.\n \n :param properties: the name of the specified fields. 
Specify more than one string to form a property path,\n like: sizefromlen(256, 'header', 'length') -> s.header.length\n \n :returns: a function which takes a NamedStruct as parameter, and returns the length value from specified\n property path.\n '''\n def func(namedstruct):\n v = namedstruct._target\n for p in properties:\n v = getattr(v, p)\n if v > limit:\n raise BadLenError('Struct length exceeds limit ' + str(limit))\n return v\n return func"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef packsize(*properties):\n '''\n Revert to sizefromlen, store the struct size (len(struct)) to specified property path. The size includes\n padding. To store the size without padding, use packrealsize() instead. Often used in nstruct \"prepack\"\n parameter.\n \n :param properties: specified field name, same as sizefromlen.\n \n :returns: a function which takes a NamedStruct as parameter, and pack the size to specified field.\n \n '''\n def func(namedstruct):\n v = namedstruct._target\n for p in properties[:-1]:\n v = getattr(v, p)\n setattr(v, properties[-1], len(namedstruct))\n return func", "response": "Returns a function that packs the size of a named struct to the specified property path."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef packrealsize(*properties):\n '''\n Revert to sizefromlen, pack the struct real size (struct._realsize()) to specified property path.\n Unlike packsize, the size without padding is stored. 
Often used in nstruct \"prepack\" parameter.\n \n :param properties: specified field name, same as sizefromlen.\n \n :returns: a function which takes a NamedStruct as parameter, and pack the size to specified field.\n \n '''\n def func(namedstruct):\n v = namedstruct._target\n for p in properties[:-1]:\n v = getattr(v, p)\n setattr(v, properties[-1], namedstruct._realsize())\n return func", "response": "Returns a function that packs the real size of the namedstruct to the specified property path."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\npack a value into a property path.", "response": "def packvalue(value, *properties):\n '''\n Store a specified value to specified property path. Often used in nstruct \"init\" parameter.\n \n :param value: a fixed value\n \n :param properties: specified field name, same as sizefromlen.\n \n :returns: a function which takes a NamedStruct as parameter, and store the value to property path.\n \n '''\n def func(namedstruct):\n v = namedstruct._target\n for p in properties[:-1]:\n v = getattr(v, p)\n setattr(v, properties[-1], value)\n return func"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstores a evaluated value to specified property path. Often used in nstruct \"prepack\" parameter. :param func: a function which takes a NamedStruct as parameter and returns a value, often a lambda expression :param properties: specified field name, same as sizefromlen. :returns: a function which takes a NamedStruct as parameter, and store the return value of func to property path.", "response": "def packexpr(func, *properties):\n '''\n Store a evaluated value to specified property path. 
Often used in nstruct \"prepack\" parameter.\n \n :param func: a function which takes a NamedStruct as parameter and returns a value, often a lambda expression\n \n :param properties: specified field name, same as sizefromlen.\n \n :returns: a function which takes a NamedStruct as parameter, and store the return value of func to property path.\n '''\n def func2(namedstruct):\n v = namedstruct._target\n for p in properties[:-1]:\n v = getattr(v, p)\n setattr(v, properties[-1], func(namedstruct))\n return func2"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates indices for all the embedded structs. For parser internal use.", "response": "def _create_embedded_indices(self):\n '''\n Create indices for all the embedded structs. For parser internal use.\n '''\n try:\n _set(self, '_embedded_indices', dict((k,(self,v)) for k,v in getattr(self._parser.typedef, 'inline_names', {}).items()))\n except AttributeError:\n _set(self, '_embedded_indices', {})"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _unpack(self, data):\n '''\n Unpack a struct from bytes. For parser internal use.\n '''\n #self._logger.log(logging.DEBUG, 'unpacking %r', self)\n current = self\n while current is not None:\n data = current._parser.unpack(data, current)\n last = current\n current = getattr(current, '_sub', None)\n _set(last, '_extra', data)", "response": "Unpack a struct from bytes. For parser internal use."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\npack the current structure into a stream. For parser internal use.", "response": "def _packto(self, stream):\n '''\n Pack current struct into stream. 
For parser internal use.\n \n :param stream: a buffered stream (File or BytesIO)\n \n :return: packed bytes length\n '''\n #self._logger.log(logging.DEBUG, 'packing %r', self)\n total_size = 0\n current = self\n while current is not None:\n total_size += current._parser.packto(current, stream)\n last = current\n current = getattr(current, '_sub', None)\n if hasattr(last, '_extra'):\n _extra = last._extra\n total_size += stream.write(_extra)\n return total_size"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert the named struct to bytes.", "response": "def _tobytes(self, skipprepack = False):\n '''\n Convert the struct to bytes. This is the standard way to convert a NamedStruct to bytes.\n \n :param skipprepack: if True, the prepack stage is skipped. For parser internal use.\n \n :returns: converted bytes\n '''\n stream = BytesIO()\n self._tostream(stream, skipprepack)\n return stream.getvalue()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert the named struct into a bytes stream.", "response": "def _tostream(self, stream, skipprepack= False):\n '''\n Convert the struct into a bytes stream. This is the standard way to convert a NamedStruct to bytes.\n \n :param stream: a list of bytes to get the result\n \n :param skipprepack: if True, the prepack stage is skipped. 
For parser internal use.\n \n :returns: total appended size\n '''\n if not skipprepack:\n self._prepack()\n datasize = self._packto(stream)\n paddingSize = self._parser.paddingsize2(datasize)\n if paddingSize > datasize:\n stream.write(b'\\x00' * (paddingSize - datasize))\n return paddingSize"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _realsize(self):\n '''\n Get the struct size without padding (or the \"real size\")\n \n :returns: the \"real size\" in bytes\n \n '''\n current = self\n size= 0\n while current is not None:\n size += current._parser.sizeof(current)\n last = current\n current = getattr(current, '_sub', None)\n size += len(getattr(last, '_extra', b''))\n return size", "response": "Get the struct size without padding or the real size"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _subclass(self, parser):\n '''\n Create sub-classed struct from extra data, with specified parser. For parser internal use.\n \n :param parser: parser of subclass \n '''\n _set(self, '_sub', parser._create(memoryview(getattr(self, '_extra', b'')), self._target))\n try:\n object.__delattr__(self, '_extra')\n except:\n pass", "response": "Create sub - classed struct from extra data with specified parser. For parser internal use."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nextends the base class with a new subclass.", "response": "def _extend(self, newsub):\n '''\n Append a subclass (extension) after the base class. 
For parser internal use.\n '''\n current = self\n while hasattr(current, '_sub'):\n current = current._sub\n _set(current, '_sub', newsub)\n try:\n object.__delattr__(self, '_extra')\n except:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning current type of this struct", "response": "def _gettype(self):\n '''\n Return current type of this struct\n \n :returns: a typedef object (e.g. nstruct)\n \n '''\n current = self\n lastname = getattr(current._parser, 'typedef', None)\n while hasattr(current, '_sub'):\n current = current._sub\n tn = getattr(current._parser, 'typedef', None)\n if tn is not None:\n lastname = tn\n return lastname"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the _extra field in the struct which stands for the additional data after the defined fields.", "response": "def _setextra(self, extradata):\n '''\n Set the _extra field in the struct, which stands for the additional (\"extra\") data after the\n defined fields.\n '''\n current = self\n while hasattr(current, '_sub'):\n current = current._sub\n _set(current, '_extra', extradata)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the extra data of this struct.", "response": "def _getextra(self):\n '''\n Get the extra data of this struct.\n '''\n current = self\n while hasattr(current, '_sub'):\n current = current._sub\n return getattr(current, '_extra', None)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreplacing the embedded struct with a newly - created struct of another type.", "response": "def _replace_embedded_type(self, name, newtype):\n '''\n Replace the embedded struct to a newly-created struct of another type (usually based on the\n original type). The attributes of the old struct is NOT preserved.\n \n :param name: either the original type, or the name of the original type. 
It is always the type\n used in type definitions, even if it is already replaced once or more.\n \n :param newtype: the new type to replace\n '''\n if hasattr(name, 'readablename'):\n name = name.readablename\n t,i = self._target._embedded_indices[name]\n t._seqs[i] = newtype.parser().new(self._target)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_embedded(self, name):\n '''\n Return an embedded struct object to calculate the size or use _tobytes(True) to convert just the\n embedded parts.\n \n :param name: either the original type, or the name of the original type. It is always the type\n used in type definitions, even if it is already replaced once or more.\n \n :returns: an embedded struct\n '''\n if hasattr(name, 'readablename'):\n name = name.readablename\n t,i = self._target._embedded_indices[name]\n return t._seqs[i]", "response": "Returns an embedded struct object for the specified type."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nregistering a type with the specified name.", "response": "def _registerPickleType(name, typedef):\n '''\n Register a type with the specified name. After registration, NamedStruct with this type\n (and any sub-types) can be successfully pickled and transfered.\n '''\n NamedStruct._pickleNames[typedef] = name\n NamedStruct._pickleTypes[name] = typedef"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates indices for all the embedded structs. For parser internal use.", "response": "def _create_embedded_indices(self):\n '''\n Create indices for all the embedded structs. 
For parser internal use.\n '''\n try:\n self._target._embedded_indices.update(((k,(self,v)) for k,v in getattr(self._parser.typedef, 'inline_names', {}).items()))\n except AttributeError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntries to parse the named struct from bytes sequence.", "response": "def parse(self, buffer, inlineparent = None):\n '''\n Try to parse the struct from bytes sequence. The bytes sequence is taken from a streaming source.\n \n :param buffer: bytes sequence to be parsed from.\n \n :param inlineparent: if specified, this struct is embedded in another struct.\n \n :returns: None if the buffer does not have enough data for this struct (e.g. incomplete read\n from socket); (struct, size) else, where struct is the parsed result (usually a NamedStruct object)\n and size is the used bytes length, so you can start another parse from buffer[size:].\n '''\n if self.base is not None:\n return self.base.parse(buffer, inlineparent)\n r = self._parse(buffer, inlineparent)\n if r is None:\n return None\n (s, size) = r\n self.subclass(s)\n return (s, (size + self.padding - 1) // self.padding * self.padding)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate an empty struct of this type.", "response": "def new(self, inlineparent = None):\n '''\n Create an empty struct of this type. 
\"initfunc\" is called on the created struct to initialize it.\n \n :param inlineparent: if specified, this struct is embedded into another struct \"inlineparent\"\n \n :returns: a created struct (usually a NamedStruct object)\n '''\n if self.base is not None:\n s = self.base.new(inlineparent)\n s._extend(self._new(s._target))\n else:\n s = self._new(inlineparent)\n if self.initfunc is not None:\n self.initfunc(s)\n return s"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create(self, data, inlineparent = None):\n '''\n Create a struct and use all bytes of data. Different from parse(), this takes all data,\n store unused bytes in \"extra\" data of the struct. Some types like variable-length array\n may have different parse result with create() and parse().\n \n :param data: bytes of a packed struct.\n \n :param inlineparent: if specified, this struct is embedded in another struct \"inlineparent\"\n \n :returns: a created NamedStruct object.\n '''\n if self.base is not None:\n return self.base.create(data, inlineparent)\n c = self._create(data, inlineparent)\n self.subclass(c)\n return c", "response": "Create a new NamedStruct and use all bytes of data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef paddingsize(self, namedstruct):\n '''\n Return the size of the padded struct (including the \"real\" size and the padding bytes)\n \n :param namedstruct: a NamedStruct object of this type.\n \n :returns: size including both data and padding.\n '''\n if self.base is not None:\n return self.base.paddingsize(namedstruct)\n realsize = namedstruct._realsize()\n return (realsize + self.padding - 1) // self.padding * self.padding", "response": "Returns the size of the padded struct including the real size and the padding bytes."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 
function\ndef paddingsize2(self, realsize):\n    '''\n    Return a padded size from realsize, for NamedStruct internal use.\n    '''\n    if self.base is not None:\n        return self.base.paddingsize2(realsize)\n    return (realsize + self.padding - 1) // self.padding * self.padding", "response": "Return a padded size from realsize for NamedStruct internal use."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert a NamedStruct to packed bytes, appending the bytes to the stream", "response": "def tostream(self, namedstruct, stream, skipprepack = False):\n    '''\n    Convert a NamedStruct to packed bytes, append the bytes to the stream\n    \n    :param namedstruct: a NamedStruct object of this type to pack.\n    \n    :param skipprepack: if True, the prepack stage is skipped.\n    \n    :param stream: a buffered stream\n    \n    :return: appended bytes size\n    '''\n    return namedstruct._tostream(stream, skipprepack)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef prepack(self, namedstruct, skip_self=False, skip_sub=False):\n    '''\n    Run prepack\n    '''\n    if not skip_self and self.prepackfunc is not None:\n        self.prepackfunc(namedstruct)", "response": "Run prepackfunc on the namedstruct."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\npack a named struct to a buffered stream", "response": "def packto(self, namedstruct, stream):\n    \"\"\"\n    Pack a struct to a stream\n    \n    :param namedstruct: struct to pack\n    \n    :param stream: a buffered stream\n    \n    :return: appended bytes size\n    \"\"\"\n    # Default implementation\n    data = self.pack(namedstruct)\n    return stream.write(data)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nunpacking the struct from specified bytes.", "response": "def unpack(self, data, namedstruct):\n    '''\n    Unpack the struct from specified bytes.
If the struct is sub-classed, definitions from the sub type\n is not unpacked.\n \n :param data: bytes of the struct, including fields of sub type and \"extra\" data.\n \n :param namedstruct: a NamedStruct object of this type\n \n :returns: unused bytes from data, which forms data of the sub type and \"extra\" data. \n '''\n try:\n result = self.struct.unpack(data[0:self.struct.size])\n except struct.error as exc:\n raise BadFormatError(exc)\n start = 0\n t = namedstruct._target\n for p in self.properties:\n if len(p) > 1:\n if isinstance(result[start], bytes):\n v = [r.rstrip(b'\\x00') for r in result[start:start + p[1]]]\n else:\n v = list(result[start:start + p[1]])\n start += p[1]\n else:\n v = result[start]\n if isinstance(v, bytes):\n v = v.rstrip(b'\\x00')\n start += 1\n setin = t\n for sp in p[0][0:-1]:\n if not hasattr(setin, sp):\n setin2 = InlineStruct(namedstruct._target)\n setattr(setin, sp, setin2)\n setin = setin2\n else:\n setin = getattr(setin, sp)\n setattr(setin, p[0][-1], v)\n return data[self.struct.size:]"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\npacks the struct and return the packed bytes.", "response": "def pack(self, namedstruct):\n '''\n Pack the struct and return the packed bytes.\n \n :param namedstruct: a NamedStruct of this type.\n \n :returns: packed bytes, only contains fields of definitions in this type, not the sub type and \"extra\" data.\n '''\n elements = []\n t = namedstruct._target\n for p in self.properties:\n v = t\n for sp in p[0]:\n v = getattr(v, sp)\n if len(p) > 1:\n elements.extend(v[0:p[1]])\n else:\n elements.append(v)\n return self.struct.pack(*elements)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse(self, buffer, inlineparent = None):\n '''\n Compatible to Parser.parse()\n '''\n if len(buffer) < self.struct.size:\n return None\n try:\n return 
(self.struct.unpack(buffer[:self.struct.size])[0], self.struct.size)\n    except struct.error as exc:\n        raise BadFormatError(exc)", "response": "Compatible to Parser.parse(): unpack a single primitive value from the buffer, raising BadFormatError on malformed data."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse(self, buffer, inlineparent = None):\n    '''\n    Compatible to Parser.parse()\n    '''\n    size = 0\n    v = []\n    for i in range(0, self.size):    # @UnusedVariable\n        r = self.innerparser.parse(buffer[size:], None)\n        if r is None:\n            return None\n        v.append(r[0])\n        size += r[1]\n    return (v, size)", "response": "Compatible to Parser.parse(): parse a fixed-size array by invoking the inner parser once per element."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef new(self, inlineparent = None):\n    '''\n    Compatible to Parser.new()\n    '''\n    v = list(range(0, self.size))\n    for i in range(0, self.size):\n        v[i] = self.innerparser.new()\n    return v", "response": "Compatible to Parser.new(): build a new array whose elements are created by the inner parser."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create(self, data, inlineparent = None):\n    '''\n    Compatible to Parser.create()\n    '''\n    if self.size > 0:\n        r = self.parse(data)\n        if r is None:\n            raise ParseError('data is not enough to create an array of size ' + self.size)\n        else:\n            return r[0]\n    else:\n        v = []\n        start = 0\n        while start < len(data):\n            r = self.innerparser.parse(data[start:], None)\n            if r is None:\n                break\n            v.append(r[0])\n            start += r[1]\n        return v", "response": "Compatible to Parser.
create"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting the object to bytes.", "response": "def tobytes(self, prim, skipprepack = False):\n '''\n Compatible to Parser.tobytes()\n '''\n stream = BytesIO()\n self.tostream(prim, stream, skipprepack=skipprepack)\n return stream.getvalue()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create(self, data, inlineparent = None):\n '''\n Compatible to Parser.create()\n '''\n if self.cstr:\n return _copy(data).rstrip(b'\\x00')\n else:\n return _copy(data)", "response": "Create a new instance of the class."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the parser for this type. Create the parser on first call.", "response": "def parser(self):\n '''\n Get parser for this type. Create the parser on first call.\n '''\n if not hasattr(self, '_parser'):\n self._parser = self._compile()\n return self._parser"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a new object of this type with the specified arguments and keyword arguments set on the new object.", "response": "def new(self, *args, **kwargs):\n '''\n Create a new object of this type. It is also available as __call__, so you can create a new object\n just like creating a class instance: a = mytype(a=1,b=2)\n \n :param args: Replace the embedded struct type. Each argument is a tuple (name, newtype).\n It is equivalent to call _replace_embedded_type with *name* and *newtype*\n one by one. Both the \"directly\" embedded struct and the embedded struct inside another\n embedded struct can be set. If you want to replace an embedded struct in a\n replaced struct type, make sure the outer struct is replaced first. 
The embeded\n struct type must have a *name* to be replaced by specify *name* option.\n \n :param kwargs: extra key-value arguments, each one will be set on the new object, to set value\n to the fields conveniently.\n \n :returns: a new object, with the specified \"kwargs\" set.\n '''\n obj = self.parser().new()\n for k,v in args:\n obj._replace_embedded_type(k,v)\n for k,v in kwargs.items():\n setattr(obj, k, v)\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getName(self, value, defaultName = None):\n '''\n Get the enumerate name of a specified value.\n :param value: the enumerate value\n :param defaultName: returns if the enumerate value is not defined\n :returns: the corresponding enumerate value or *defaultName* if not found\n '''\n for k,v in self._values.items():\n if v == value:\n return k\n return defaultName", "response": "Get the enumerate name of a specified value."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nimport all the enumerate values from this enumerate to gs", "response": "def importAll(self, gs):\n '''\n Import all the enumerate values from this enumerate to *gs*\n :param gs: usually globals(), a dictionary. 
At least __setitem__ should be implemented if not a dictionary.\n        '''\n        for k,v in self._values.items():\n            gs[k] = v"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef extend(self, namespace = None, name = None, **kwargs):\n        '''\n        Create a new enumerate with current values merged with new enumerate values\n        :param namespace: same as __init__\n        :param name: same as __init__\n        :param kwargs: same as __init__\n        :returns: a new enumerate type\n        '''\n        if name is None:\n            name = self._readablename\n        d = dict(self._values)\n        d.update(kwargs)\n        return enum(name, namespace, self, self._bitwise, **d)", "response": "Create a new enumerate with current values merged with new enumerate values."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nformat an enumerate value to enumerate names if possible. Used to generate a human readable dump result.", "response": "def formatter(self, value):\n        '''\n        Format an enumerate value to enumerate names if possible. 
Used to generate human readable\n dump result.\n '''\n if not self._bitwise:\n n = self.getName(value)\n if n is None:\n return value\n else:\n return n\n else:\n names = []\n for k,v in sorted(self._values.items(), key=lambda x: x[1], reverse=True):\n if (v & value) == v:\n names.append(k)\n value = value ^ v\n names.reverse()\n if value != 0:\n names.append(hex(value))\n if not names:\n return 0 \n return ' '.join(names)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\npack a struct to a stream", "response": "def packto(self, namedstruct, stream):\n \"\"\"\n Pack a struct to a stream\n \"\"\"\n if hasattr(namedstruct, self.name):\n return _tostream(self.basetypeparser, getattr(namedstruct, self.name), stream, True)\n else:\n return 0"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nruns prepack on the namedstruct.", "response": "def prepack(self, namedstruct, skip_self=False, skip_sub=False):\n '''\n Run prepack\n '''\n if not skip_sub and hasattr(namedstruct, self.name) and hasattr(self.basetypeparser, 'fullprepack'):\n self.basetypeparser.fullprepack(getattr(namedstruct, self.name))\n Parser.prepack(self, namedstruct, skip_self, skip_sub)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef prepack(self, namedstruct, skip_self=False, skip_sub=False):\n '''\n Run prepack\n '''\n if not skip_sub and hasattr(self.innertypeparser, 'fullprepack'):\n for v in getattr(namedstruct, self.name):\n self.innertypeparser.fullprepack(v)\n Parser.prepack(self, namedstruct, skip_self, skip_sub)", "response": "Run prepack on the namedstruct."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef prepack(self, namedstruct, skip_self=False, skip_sub=False):\n '''\n Run prepack\n '''\n if not skip_sub and self.header is not None and hasattr(self.header, 'fullprepack'):\n 
self.header.fullprepack(namedstruct._seqs[0])\n Parser.prepack(self, namedstruct, skip_self, skip_sub)", "response": "Run prepack on the namedstruct."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _checkresponse(response):\n\n results_error = 'No results were found matching your query'\n auth_error = 'The token or API key is not valid, please contact Josh Clark at joshua.m.clark@utah.edu to ' \\\n 'resolve this'\n rule_error = 'This request violates a rule of the API. Please check the guidelines for formatting a data ' \\\n 'request and try again'\n catch_error = 'Something went wrong. Check all your calls and try again'\n\n if response['SUMMARY']['RESPONSE_CODE'] == 1:\n return response\n elif response['SUMMARY']['RESPONSE_CODE'] == 2:\n if response['SUMMARY']['NUMBER_OF_OBJECTS'] == 0:\n return None\n raise MesoPyError(results_error)\n elif response['SUMMARY']['RESPONSE_CODE'] == 200:\n raise MesoPyError(auth_error)\n elif response['SUMMARY']['RESPONSE_CODE'] == 400:\n raise MesoPyError(rule_error)\n elif response['SUMMARY']['RESPONSE_CODE'] == -1:\n format_error = response['SUMMARY']['RESPONSE_MESSAGE']\n raise MesoPyError(format_error)\n else:\n raise MesoPyError(catch_error)", "response": "Checks the response from the API and returns the data requested by the other methods."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_response(self, endpoint, request_dict):\n http_error = 'Could not connect to the API. This could be because you have no internet connection, a parameter' \\\n ' was input incorrectly, or the API is currently down. Please try again.'\n \n json_error = 'Could not retrieve JSON values. Try again with a shorter date range.'\n \n # For python 3.4\n try:\n qsp = urllib.parse.urlencode(request_dict, doseq=True)\n resp = urllib.request.urlopen(self.base_url + endpoint + '?' 
+ qsp).read()\n\n        # For python 2.7\n        except (AttributeError, NameError):\n            try:\n                qsp = urllib.urlencode(request_dict, doseq=True)\n                resp = urllib2.urlopen(self.base_url + endpoint + '?' + qsp).read()\n            except urllib2.URLError:\n                raise MesoPyError(http_error)\n        except urllib.error.URLError:\n            raise MesoPyError(http_error)\n        \n        try:\n            json_data = json.loads(resp.decode('utf-8'))\n        except ValueError:\n            raise MesoPyError(json_error)\n        \n        return self._checkresponse(json_data)", "response": "Returns a dictionary of data requested by each function."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _check_geo_param(self, arg_list):\n\n        geo_func = lambda a, b: any(i in b for i in a)\n        check = geo_func(self.geo_criteria, arg_list)\n        if check is False:\n            raise MesoPyError('No stations or geographic search criteria specified. Please provide one of the '\n                              'following: stid, state, county, country, radius, bbox, cwa, nwsfirezone, gacc, subgacc')", "response": "Checks each function call to make sure that the user has provided at least one of the following geographic parameters."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef timeseries(self, start, end, **kwargs):\n\n        self._check_geo_param(kwargs)\n        kwargs['start'] = start\n        kwargs['end'] = end\n        kwargs['token'] = self.token\n\n        return self._get_response('stations/timeseries', kwargs)", "response": "Returns a time series of observations at a user specified location for a specified time."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef climate_stats(self, startclim, endclim, type, **kwargs):\n\n        self._check_geo_param(kwargs)\n        kwargs['type'] = type\n        kwargs['startclim'] = startclim\n        kwargs['endclim'] = endclim\n        kwargs['token'] = self.token\n\n        return self._get_response('stations/climatology', kwargs)", "response": "
Returns a dictionary of aggregated yearly climate statistics for a specified time series."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef metadata(self, **kwargs):\n\n        self._check_geo_param(kwargs)\n        kwargs['token'] = self.token\n\n        return self._get_response('stations/metadata', kwargs)", "response": "Returns the metadata for a given MesoWest station or stations."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef latency(self, start, end, **kwargs):\n\n        self._check_geo_param(kwargs)\n        kwargs['start'] = start\n        kwargs['end'] = end\n        kwargs['token'] = self.token\n\n        return self._get_response('stations/latency', kwargs)", "response": "Returns the latency values for a given station based on a start and end date."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef avg(self, key=None):\n        count = self.count()\n\n        if count:\n            return self.sum(key) / count", "response": "Get the average value of a given key."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef chunk(self, size):\n        chunks = self._chunk(size)\n\n        return self.__class__(list(map(self.__class__, chunks)))", "response": "Breaks the collection into a collection of chunks of the given size."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _chunk(self, size):\n        items = self.items\n\n        return [items[i:i + size] for i in range(0, len(items), size)]", "response": "Returns a list of size items from the underlying collection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef contains(self, key, value=None):\n        if value is not None:\n            return self.contains(lambda x: data_get(x, key) == value)\n\n        if self._use_as_callable(key):\n            return self.first(key) is not None\n\n        
return key in self.items", "response": "Returns True if the element with the given key is in the collection False otherwise."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef collapse(self):\n results = []\n\n items = self.items\n\n for values in items:\n if isinstance(values, BaseCollection):\n values = values.all()\n\n results += values\n\n return self.__class__(results)", "response": "Collapse the collection items into a single element"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef each(self, callback):\n items = self.items\n\n for item in items:\n if callback(item) is False:\n break\n\n return self", "response": "Execute a callback over each item in the collection."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a new collection consisting of every n - th element.", "response": "def every(self, step, offset=0):\n \"\"\"\n Create a new collection consisting of every n-th element.\n\n :param step: The step size\n :type step: int\n\n :param offset: The start offset\n :type offset: int\n\n :rtype: Collection\n \"\"\"\n new = []\n\n for position, item in enumerate(self.items):\n if position % step == offset:\n new.append(item)\n\n return self.__class__(new)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a new Collection with all items except the specified keys.", "response": "def without(self, *keys):\n \"\"\"\n Get all items except for those with the specified keys.\n\n :param keys: The keys to remove\n :type keys: tuple\n\n :rtype: Collection\n \"\"\"\n items = copy(self.items)\n\n keys = reversed(sorted(keys))\n\n for key in keys:\n del items[key]\n\n return self.__class__(items)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef only(self, *keys):\n items = 
[]\n\n for key, value in enumerate(self.items):\n if key in keys:\n items.append(value)\n\n return self.__class__(items)", "response": "Returns a new Collection with only the specified keys."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef filter(self, callback=None):\n if callback:\n return self.__class__(list(filter(callback, self.items)))\n\n return self.__class__(list(filter(None, self.items)))", "response": "Run a filter over each of the items."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfiltering items by the given key value pair.", "response": "def where(self, key, value):\n \"\"\"\n Filter items by the given key value pair.\n\n :param key: The key to filter by\n :type key: str\n\n :param value: The value to filter by\n :type value: mixed\n\n :rtype: Collection\n \"\"\"\n return self.filter(lambda item: data_get(item, key) == value)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the first item of the collection.", "response": "def first(self, callback=None, default=None):\n \"\"\"\n Get the first item of the collection.\n\n :param default: The default value\n :type default: mixed\n \"\"\"\n if callback is not None:\n for val in self.items:\n if callback(val):\n return val\n\n return value(default)\n\n if len(self.items) > 0:\n return self.items[0]\n else:\n return default"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets a flattened list of the items in the collection.", "response": "def flatten(self):\n \"\"\"\n Get a flattened list of the items in the collection.\n\n :rtype: Collection\n \"\"\"\n\n def _flatten(d):\n if isinstance(d, dict):\n for v in d.values():\n for nested_v in _flatten(v):\n yield nested_v\n elif isinstance(d, list):\n for list_v in d:\n for nested_v in _flatten(list_v):\n yield nested_v\n else:\n yield d\n\n return 
self.__class__(list(_flatten(self.items)))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef forget(self, *keys):\n keys = reversed(sorted(keys))\n\n for key in keys:\n del self[key]\n\n return self", "response": "Removes an item from the collection by the given keys."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get(self, key, default=None):\n try:\n return self.items[key]\n except IndexError:\n return value(default)", "response": "Get an element of the collection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef implode(self, value, glue=''):\n first = self.first()\n\n if not isinstance(first, (basestring)):\n return glue.join(self.pluck(value).all())\n\n return value.join(self.items)", "response": "Returns the string representation of the key as a string."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef last(self, callback=None, default=None):\n if callback is not None:\n for val in reversed(self.items):\n if callback(val):\n return val\n\n return value(default)\n\n if len(self.items) > 0:\n return self.items[-1]\n else:\n return default", "response": "Get the last item of the collection."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a list with the values of a given key.", "response": "def pluck(self, value, key=None):\n \"\"\"\n Get a list with the values of a given key.\n\n :rtype: Collection\n \"\"\"\n if key:\n return dict(map(lambda x: (data_get(x, key), data_get(x, value)), self.items))\n else:\n results = list(map(lambda x: data_get(x, value), self.items))\n\n return self.__class__(results)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nruns a map over each of the items in the collection.", "response": "def 
map(self, callback):\n        \"\"\"\n        Run a map over each of the items.\n\n        :param callback: The map function\n        :type callback: callable\n\n        :rtype: Collection\n        \"\"\"\n        return self.__class__(list(map(callback, self.items)))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef max(self, key=None):\n\n        def _max(result, item):\n            val = data_get(item, key)\n\n            if result is None or val > result:\n                return val\n\n            return result\n\n        return self.reduce(_max)", "response": "Get the max value of a given key."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef min(self, key=None):\n\n        def _min(result, item):\n            val = data_get(item, key)\n\n            if result is None or val < result:\n                return val\n\n            return result\n\n        return self.reduce(_min)", "response": "Get the minimum value of a given key."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef for_page(self, page, per_page):\n        start = (page - 1) * per_page\n\n        return self[start:start + per_page]", "response": "Paginate the collection by slicing it into a smaller collection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef pull(self, key, default=None):\n        val = self.get(key, default)\n\n        self.forget(key)\n\n        return val", "response": "Pulls an item from the collection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef reject(self, callback):\n        if self._use_as_callable(callback):\n            return self.filter(lambda item: not callback(item))\n\n        return self.filter(lambda item: item != callback)", "response": "Create a collection of all elements that do not pass a given truth test."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef sort(self, callback=None):\n        items = self.items\n\n        
if callback:\n return self.__class__(sorted(items, key=callback))\n else:\n return self.__class__(sorted(items))", "response": "Sort through each item in the collection with a callback."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the sum of the given values.", "response": "def sum(self, callback=None):\n \"\"\"\n Get the sum of the given values.\n\n :param callback: The callback\n :type callback: callable or string or None\n\n :rtype: mixed\n \"\"\"\n if callback is None:\n return sum(self.items)\n\n callback = self._value_retriever(callback)\n\n return self.reduce(lambda result, item: (result or 0) + callback(item))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef unique(self, key=None):\n if key is None:\n seen = set()\n seen_add = seen.add\n\n return self.__class__([x for x in self.items if not (x in seen or seen_add(x))])\n\n key = self._value_retriever(key)\n\n exists = []\n\n def _check(item):\n id_ = key(item)\n if id_ in exists:\n return True\n\n exists.append(id_)\n\n return self.reject(_check)", "response": "Return only unique items from the collection list."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nzips the collection together with one or more arrays.", "response": "def zip(self, *items):\n \"\"\"\n Zip the collection together with one or more arrays.\n\n :param items: The items to zip\n :type items: list\n\n :rtype: Collection\n \"\"\"\n return self.__class__(list(zip(self.items, *items)))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef merge(self, items):\n if isinstance(items, BaseCollection):\n items = items.all()\n\n if not isinstance(items, list):\n raise ValueError('Unable to merge uncompatible types')\n\n self._items += items\n\n return self", "response": "Merge the items with the given items."} {"SOURCE": "codesearchnet", "instruction": "Can you 
implement a function in Python 3 that\ntransforms each item in the collection using a callback.", "response": "def transform(self, callback):\n \"\"\"\n Transform each item in the collection using a callback.\n\n :param callback: The callback\n :type callback: callable\n\n :rtype: Collection\n \"\"\"\n self._items = self.map(callback).all()\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget a value retrieving callback.", "response": "def _value_retriever(self, value):\n \"\"\"\n Get a value retrieving callback.\n\n :type value: mixed\n\n :rtype: callable\n \"\"\"\n if self._use_as_callable(value):\n return value\n\n return lambda item: data_get(item, value)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef serialize(self):\n\n def _serialize(value):\n if hasattr(value, 'serialize'):\n return value.serialize()\n elif hasattr(value, 'to_dict'):\n return value.to_dict()\n else:\n return value\n\n return list(map(_serialize, self.items))", "response": "Returns the collection of items as a serialized object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nstep 4 ( 6 for invariant. Process contract ( validator )", "response": "def validate(self, *args, **kwargs):\n \"\"\"\n Step 4 (6 for invariant). 
Process contract (validator)\n \"\"\"\n # Schemes validation interface\n if is_scheme(self.validator):\n params = getcallargs(self.function, *args, **kwargs)\n params.update(kwargs)\n validator = self.validator(data=params, request=None)\n if validator.is_valid():\n return\n raise self.exception(validator.errors)\n\n # Simple validation interface\n if hasattr(self.validator, 'is_valid'):\n validator = self.validator(*args, **kwargs)\n # is valid\n if validator.is_valid():\n return\n # is invalid\n if hasattr(validator, 'errors'):\n raise self.exception(validator.errors)\n if hasattr(validator, '_errors'):\n raise self.exception(validator._errors)\n raise self.exception\n\n validation_result = self.validator(*args, **kwargs)\n # is invalid (validator return error message)\n if isinstance(validation_result, string_types):\n raise self.exception(validation_result)\n # is valid (truely result)\n if validation_result:\n return\n # is invalid (falsy result)\n raise self.exception"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef patched_function(self, *args, **kwargs):\n self.validate(*args, **kwargs)\n return self.function(*args, **kwargs)", "response": "Step 3. Wrapped function calling."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsteps 3. Wrapped function calling.", "response": "def patched_function(self, *args, **kwargs):\n \"\"\"\n Step 3. Wrapped function calling.\n \"\"\"\n result = self.function(*args, **kwargs)\n self.validate(result)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nperforms validation of the object.", "response": "def _validate(self):\n \"\"\"\n Step 5 (1st flow) or Step 4 (2nd flow). 
Process contract for object.\n \"\"\"\n # disable methods matching before validation\n self._disable_patching = True\n # validation by Invariant.validate\n self._validate_base(self)\n # enable methods matching after validation\n self._disable_patching = False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _patched_method(self, method, *args, **kwargs):\n self._validate()\n result = method(*args, **kwargs)\n self._validate()\n return result", "response": "Patched method to validate the result of the method."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef buildvrt(input_file_list, output_file,\n relative=True, **kwargs):\n \"\"\"Build a VRT\n\n See also: https://www.gdal.org/gdalbuildvrt.html\n\n You can find the possible BuildVRTOptions (**kwargs**) here:\n https://github.com/nextgis/pygdal/blob/78a793057d2162c292af4f6b240e19da5d5e52e2/2.1.0/osgeo/gdal.py#L1051\n\n Arguments:\n input_file_list {list of str or Path objects} -- List of input files.\n output_file {str or Path object} -- Output file (VRT).\n\n Keyword Arguments:\n relative {bool} -- If ``True``, the ``input_file_list`` paths are converted to relative\n paths (relative to the output file) and the VRT works even if the data is moved somewhere else -\n given that the relative location of theVRT and the input files does not chance!\n **kwargs {} -- BuildVRTOptions - see function description for a link to .\n\n Returns:\n [int] -- If successful, 0 is returned as exit code.\n \"\"\"\n\n # create destination directory\n if not Path(output_file).parent.exists():\n Path(output_file).parent.mkdir(parents=True, exist_ok=True)\n\n # make sure we have absolute paths and strings since BuildVRT does not like something else\n input_file_list = [str(Path(p).absolute()) for p in input_file_list]\n output_file = str(Path(output_file).absolute())\n\n vrt_options = gdal.BuildVRTOptions(**kwargs)\n\n 
vrt = gdal.BuildVRT(output_file,\n input_file_list,\n options=vrt_options)\n vrt = None\n\n # if needed, create the input file paths relative to the output vrt path\n # and replace them in the vrt.\n # if desired, fix the paths and the relativeToVRT tag in the VRT\n if relative:\n input_file_list_relative = [relpath(p, Path(output_file).parent) for p in input_file_list]\n\n with open(output_file, 'r') as file:\n # read a list of lines into data\n lines = file.readlines()\n\n new_lines = []\n counter = -1\n for line in lines:\n # sometimes it is relative by default\n # maybe when all files contain the parent directory of the output file (?)\n if \"relativeToVRT=\\\"1\\\"\" in line:\n counter += 1\n elif \"relativeToVRT=\\\"0\\\"\" in line:\n counter += 1\n input_file = str(input_file_list[counter])\n input_file_relative = str(input_file_list_relative[counter])\n if input_file not in line:\n raise Exception(f\"Expect path {input_file} not part of line {line}.\")\n line = line.replace(input_file,\n input_file_relative)\n line = line.replace(\"relativeToVRT=\\\"0\\\"\",\n \"relativeToVRT=\\\"1\\\"\")\n else:\n pass\n new_lines.append(line)\n\n with open(output_file, 'w') as file:\n file.writelines(new_lines)\n return 0", "response": "Builds a VRT from a list of input files and outputs a new VRT."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef reproject_on_template_raster(src_file, dst_file, template_file, resampling=\"near\", compress=None, overwrite=False):\n \n if not overwrite and Path(dst_file).exists():\n print(\"Processing skipped. 
Destination file exists.\")\n return 0\n \n GDAL_RESAMPLING_ALGORITHMS = {\n \"bilinear\": \"GRA_Bilinear\",\n \"cubic\": \"GRA_Cubic\",\n \"cubicspline\": \"GRA_CubicSpline\",\n \"lanczos\": \"GRA_Lanczos\",\n \"average\": \"GRA_Average\",\n \"mode\": \"GRA_Mode\",\n \"max\": \"GRA_Max\",\n \"min\": \"GRA_Min\",\n \"med\": \"GRA_Med\",\n \"near\": \"GRA_NearestNeighbour\",\n \"q1\": \"GRA_Q1\",\n \"q3\": \"GRA_Q3\"\n }\n\n compressions = [\"lzw\", \"packbits\", \"deflate\"]\n\n if resampling not in GDAL_RESAMPLING_ALGORITHMS.keys():\n raise ValueError(f\"'resampling must be one of {', '.join(GDAL_RESAMPLING_ALGORITHMS.keys())}\")\n\n if compress is None:\n options = []\n else:\n if compress.lower() not in compressions:\n raise ValueError(f\"'compress must be one of {', '.join(compressions)}\")\n else:\n options = [f'COMPRESS={compress.upper()}']\n \n # Source\n src = gdal.Open(src_file, gdalconst.GA_ReadOnly)\n src_band = src.GetRasterBand(1)\n src_proj = src.GetProjection()\n\n # We want a section of source that matches this:\n match_ds = gdal.Open(template_file, gdalconst.GA_ReadOnly)\n match_proj = match_ds.GetProjection()\n match_geotrans = match_ds.GetGeoTransform()\n wide = match_ds.RasterXSize\n high = match_ds.RasterYSize\n\n # Output / destination\n Path(dst_file).parent.mkdir(parents=True, exist_ok=True)\n dst = gdal.GetDriverByName('GTiff').Create(dst_file, wide, high, 1, src_band.DataType, options=options)\n dst.SetGeoTransform( match_geotrans )\n dst.SetProjection( match_proj)\n\n # Do the work\n gdal.ReprojectImage(src, dst, src_proj, match_proj, \n getattr(gdalconst, GDAL_RESAMPLING_ALGORITHMS[resampling]))\n\n\n del dst # Flush\n return 0", "response": "Reproject a one - band raster to fit the projection extend pixel size etc. 
of a template raster."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef rasterize(src_vector: str,\n burn_attribute: str,\n src_raster_template: str,\n dst_rasterized: str,\n gdal_dtype: int = 4):\n \"\"\"Rasterize the values of a spatial vector file.\n\n Arguments:\n src_vector {str}} -- A OGR vector file (e.g. GeoPackage, ESRI Shapefile) path containing the\n data to be rasterized.\n burn_attribute {str} -- The attribute of the vector data to be burned in the raster.\n src_raster_template {str} -- Path to a GDAL raster file to be used as template for the\n rasterized data.\n dst_rasterized {str} -- Path of the destination file.\n gdal_dtype {int} -- Numeric GDAL data type, defaults to 4 which is UInt32.\n See https://github.com/mapbox/rasterio/blob/master/rasterio/dtypes.py for useful look-up\n tables.\n Returns:\n None\n \"\"\"\n\n data = gdal.Open(str(src_raster_template), # str for the case that a Path instance arrives here\n gdalconst.GA_ReadOnly)\n geo_transform = data.GetGeoTransform()\n #source_layer = data.GetLayer()\n # x_max = x_min + geo_transform[1] * data.RasterXSize\n # y_min = y_max + geo_transform[5] * data.RasterYSize\n x_res = data.RasterXSize\n y_res = data.RasterYSize\n mb_v = ogr.Open(src_vector)\n mb_l = mb_v.GetLayer()\n target_ds = gdal.GetDriverByName('GTiff').Create(dst_rasterized,\n x_res, y_res, 1,\n gdal_dtype) # gdal.GDT_Byte\n # import osr\n target_ds.SetGeoTransform((geo_transform[0], # x_min\n geo_transform[1], # pixel_width\n 0,\n geo_transform[3], # y_max\n 0,\n geo_transform[5] # pixel_height\n ))\n prj = data.GetProjection()\n # srs = osr.SpatialReference(wkt=prj) # Where was this needed?\n target_ds.SetProjection(prj)\n band = target_ds.GetRasterBand(1)\n # NoData_value = 0\n # band.SetNoDataValue(NoData_value)\n band.FlushCache()\n gdal.RasterizeLayer(target_ds, [1], mb_l, options=[f\"ATTRIBUTE={burn_attribute}\"])\n\n target_ds = None", "response": "Rasterize 
the values of a spatial vector file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef calc_distance_to_border(polygons, template_raster, dst_raster, overwrite=False,\n keep_interim_files=False):\n \"\"\"Calculate the distance of each raster cell (in and outside the polygons) to the next polygon border.\n \n Arguments:\n polygons {str} -- Filename to a geopandas-readable file with polygon features. \n template_raster {str} -- Filename to a rasterio-readable file.\n dst_raster {str} -- Destination filename for the distance to polygon border raster file (tif).\n \n Keyword Arguments:\n overwrite {bool} -- Overwrite files if they exist? (default: {False})\n keep_interim_files {bool} -- Keep the interim line vector and raster files (default: {False})\n \n Returns:\n int -- 0 if successful.\n \"\"\"\n if Path(dst_raster).exists() and not overwrite:\n print(f\"Returning 0 - File exists: {dst_raster}\")\n return 0\n\n with rasterio.open(template_raster) as tmp:\n crs = tmp.crs\n \n dst_raster = Path(dst_raster)\n dst_raster.parent.mkdir(exist_ok=True, parents=True)\n\n tempdir = Path(tempfile.mkdtemp(prefix=f\"TEMPDIR_{dst_raster.stem}_\", dir=dst_raster.parent))\n interim_file_lines_vector = tempdir / \"interim_sample_vector_dataset_lines.shp\"\n interim_file_lines_raster = tempdir / \"interim_sample_vector_dataset_lines.tif\"\n\n exit_code = convert_polygons_to_lines(polygons, \n interim_file_lines_vector, \n crs=crs, \n add_allone_col=True)\n \n rasterize(src_vector=str(interim_file_lines_vector),\n burn_attribute=\"ALLONE\",\n src_raster_template=str(template_raster),\n dst_rasterized=str(interim_file_lines_raster),\n gdal_dtype=1)\n\n cmd = f\"{PROXIMITY_PATH} \" \\\n f\"{str(Path(interim_file_lines_raster).absolute())} \" \\\n f\"{str(Path(dst_raster).absolute())} \" \\\n f\"-ot Float32 -distunits PIXEL -values 1 -maxdist 255\"\n subprocess.check_call(cmd, shell=True)\n \n if not keep_interim_files:\n 
shutil.rmtree(tempdir)\n else:\n print(f\"Interim files are in {tempdir}\")\n return 0", "response": "Calculate the distance of each raster cell to the next polygon border."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef convert_polygons_to_lines(src_polygons, dst_lines, crs=None, add_allone_col=False):\n gdf = gpd.read_file(src_polygons)\n geom_coords = gdf[\"geometry\"] # featureset.get(5)[\"geometry\"][\"coordinates\"]\n lines = []\n row_ids = []\n for i_row, pol in tqdm(enumerate(geom_coords), total=len(geom_coords)):\n boundary = pol.boundary\n if boundary.type == 'MultiLineString':\n for line in boundary:\n lines.append(line)\n row_ids.append(i_row)\n else:\n lines.append(boundary)\n row_ids.append(i_row)\n\n gdf_lines = gdf.drop(\"geometry\", axis=1).iloc[row_ids, :]\n gdf_lines[\"Coordinates\"] = lines\n gdf_lines = gpd.GeoDataFrame(gdf_lines, geometry='Coordinates', crs=gdf.crs)\n if crs is not None:\n gdf_lines = gdf_lines.to_crs(crs)\n if add_allone_col:\n gdf_lines[\"ALLONE\"] = 1\n Path(dst_lines).parent.mkdir(exist_ok=True, parents=True)\n gdf_lines.to_file(dst_lines)\n return 0", "response": "Convert polygons to lines."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef dtype_checker_df(df, dtype, return_=None):\n dtype_range = dtype_ranges[dtype]\n df_out_of_range = (df < dtype_range[0]) | (df > dtype_range[1]) | (~np.isfinite(df))\n if df_out_of_range.any().any():\n if return_== \"colsums\":\n df_out_of_range = df_out_of_range.apply(sum, axis=0) # column\n elif return_== \"rowsums\":\n df_out_of_range = df_out_of_range.apply(sum, axis=1) # row\n elif return_== \"all\":\n df_out_of_range = df_out_of_range\n else:\n df_out_of_range = 1\n else:\n df_out_of_range = 0\n return df_out_of_range", "response": "Checks if the values of a given datatype are out of range."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting 
positions of rows matching specific band and date.", "response": "def get_df_ilocs(self, band, date):\n \"\"\"Get positions of rows matching specific band(s) and date(s).\n\n The method supports three typical queries:\n\n * one band and one date (both given as strings)\n\n * one band and of several dates (band given as strings, date as list of strings)\n\n * several band and of one date (date given as strings, band as list of strings)\n\n Arguments:\n band {str or list} -- Band(s) for which to derive the iloc index.\n date {str or list} -- Date(s) for which to derive the iloc index.\n\n Returns:\n int or list -- Integer (if band and date are str) or list of iloc indices.\n \"\"\"\n\n df = self.df_layers.copy()\n df[\"index\"] = range(df.shape[0])\n\n idx_layers = []\n if isinstance(band, str) and isinstance(date, str):\n idx_layers = df[(df[\"date\"] == date) & (df[\"band\"] == band)][\"index\"].values[0]\n if isinstance(band, list) and isinstance(date, str):\n for b in band:\n idx = df[(df[\"date\"] == date) & (df[\"band\"] == b)][\"index\"].values[0]\n idx_layers.append(idx)\n elif isinstance(band, str) and isinstance(date, list):\n for d in date:\n idx = df[(df[\"band\"] == band) & (df[\"date\"] == d)][\"index\"].values[0]\n idx_layers.append(idx)\n return idx_layers"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the spatial bounds of the chunk.", "response": "def _get_spatial_bounds(self):\n \"\"\"Get the spatial bounds of the chunk.\"\"\" \n # This should be a MultiRasterIO method\n with rasterio.open(self._mrio._get_template_for_given_resolution(self._mrio.dst_res, \"path\")) as src_layer:\n pass # later we need src_layer for src_layer.window_transform(win)\n win_transform = src_layer.window_transform(self._window)\n bounds = rasterio.windows.bounds(window=self._window,\n transform=win_transform,\n height=0, width=0)\n return bounds"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following 
Python 3 code\ndef convert_data_to_ndarray(self):\n if self._data_structure != \"DataFrame\":\n raise Exception(f\"Data is not a DataFrame but {self._data_structure}.\")\n self._data = self._convert_to_ndarray(self._data)\n self._update_data_structure()\n return self", "response": "Converts the data from dataframe to ndarray format."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _convert_to_ndarray(self, data):\n if data.__class__.__name__ != \"DataFrame\":\n raise Exception(f\"data is not a DataFrame but {data.__class__.__name__}.\")\n shape_ndarray = (self._height, self._width, data.shape[1])\n data_ndarray = data.values.reshape(shape_ndarray)\n return data_ndarray", "response": "Converts data from dataframe to ndarray format."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nwriting results to disc.", "response": "def write_dataframe(self, result, dst_paths, nodata=None, compress='lzw'):\n \"\"\"Write results (dataframe) to disc.\"\"\"\n result = self._convert_to_ndarray(result)\n self.write_ndarray(result, dst_paths, nodata=nodata, compress=compress)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef write_ndarray(self, result, dst_paths, nodata=None, compress='lzw'):\n\n assert len(dst_paths) == result.shape[2]\n assert result.shape[0] == self._height\n assert result.shape[1] == self._width\n assert result.shape[2] == len(dst_paths)\n with rasterio.open(self._mrio._get_template_for_given_resolution(self._mrio.dst_res, \"path\")) as src_layer:\n pass # later we need src_layer for src_layer.window_transform(win)\n for i, pth in enumerate(dst_paths):\n dst_path_chunk = self.get_chunk_path_from_layer_path(pth, self.ji)\n\n result_layer_i = result[:, :, [i]]\n assert result_layer_i.shape[2] == 1\n kwargs = self._mrio._get_template_for_given_resolution(\n res=self._mrio.dst_res, return_=\"meta\").copy()\n 
kwargs.update({\"driver\": \"GTiff\",\n \"compress\": compress,\n \"nodata\": nodata,\n \"height\": self._height,\n \"width\": self._width,\n \"dtype\": result_layer_i.dtype,\n \"transform\": src_layer.window_transform(self._window)})\n with rasterio.open(dst_path_chunk, \"w\", **kwargs) as dst:\n dst.write(result_layer_i[:, :, 0], 1)", "response": "Write the result to disc."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a robust data range i. e. 2nd and 98th percentile for vmin vmax parameters.", "response": "def robust_data_range(arr, robust=False, vmin=None, vmax=None):\n \"\"\"Get a robust data range, i.e. 2nd and 98th percentile for vmin, vmax parameters.\"\"\"\n # from the seaborn code \n # https://github.com/mwaskom/seaborn/blob/3a3ec75befab52c02650c62772a90f8c23046038/seaborn/matrix.py#L201\n\n def _get_vmin_vmax(arr2d, vmin=None, vmax=None):\n if vmin is None:\n vmin = np.percentile(arr2d, 2) if robust else arr2d.min()\n if vmax is None:\n vmax = np.percentile(arr2d, 98) if robust else arr2d.max()\n return vmin, vmax\n\n if len(arr.shape) == 3 and vmin is None and vmax is None:\n vmin = []\n vmax = []\n for i in range(arr.shape[2]):\n arr_i = arr[:, :, i]\n vmin_i, vmax_i = _get_vmin_vmax(arr_i, vmin=None, vmax=None)\n vmin.append(vmin_i)\n vmax.append(vmax_i)\n else:\n vmin, vmax = _get_vmin_vmax(arr, vmin=vmin, vmax=vmax)\n return vmin, vmax"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a EOCubeChunk object from an EOCube object.", "response": "def from_eocube(eocube, ji):\n \"\"\"Create a EOCubeChunk object from an EOCube object.\"\"\"\n eocubewin = EOCubeChunk(ji, eocube.df_layers, eocube.chunksize, eocube.wdir)\n return eocubewin"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget a EOCubeChunk object", "response": "def get_chunk(self, ji):\n \"\"\"Get a EOCubeChunk\"\"\"\n return EOCubeSceneCollectionChunk(ji=ji,\n df_layers=self.df_layers,\n 
chunksize=self.chunksize,\n variables=self.variables,\n qa=self.qa,\n qa_valid=self.qa_valid,\n wdir=self.wdir)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nread and masks the data and converts it in one dataframe per variable.", "response": "def read_data_by_variable(self, mask=True):\n \"\"\"Reads and masks (if desired) the data and converts it in one dataframe per variable.\"\"\"\n def print_elapsed_time(start, last_stopped, prefix):\n # print(f\"{prefix} - Elapsed time [s] since start / last stopped: \\\n # {(int(time.time() - start_time))} / {(int(time.time() - last_stopped))}\")\n return time.time()\n start_time = time.time()\n last_stopped = time.time()\n last_stopped = print_elapsed_time(start_time, last_stopped, \"Starting chunk function\")\n\n verbose = False\n\n self.read_data()\n last_stopped = print_elapsed_time(start_time, last_stopped, \"Data read\")\n\n # 2.\n sc_chunk = self.convert_data_to_dataframe()\n last_stopped = print_elapsed_time(start_time, last_stopped, \"Data converted to df\")\n\n\n # 3.B.\n if mask:\n # 3.A.\n ilocs_qa = np.where((self.df_layers[\"band\"] == self.qa).values)[0]\n df_qa = self.data.iloc[:, ilocs_qa]\n df_qa.columns = self.df_layers[\"date\"].iloc[ilocs_qa]\n df_clearsky = df_qa.isin(self.qa_valid)\n last_stopped = print_elapsed_time(start_time, last_stopped, \"Clearsky df created\")\n\n return_bands = self.variables\n else:\n return_bands = self.variables + [self.qa]\n\n dfs_variables = {}\n for var in return_bands:\n if verbose:\n print(\"VARIABLE:\", var)\n ilocs_var = np.where((self.df_layers[\"band\"] == var).values)[0]\n df_var = self.data.iloc[:, ilocs_var]\n df_var.columns = self.df_layers[\"date\"].iloc[ilocs_var]\n if mask:\n df_var = df_var.where(df_clearsky, other=np.nan)\n dfs_variables[var] = df_var\n last_stopped = print_elapsed_time(start_time, last_stopped, \"Clearsky df created\")\n self._data = dfs_variables\n return self"} {"SOURCE": "codesearchnet", "instruction": "How 
would you code a function in Python 3 to\nget a specific sampledata for a specific dataset.", "response": "def get_dataset(dataset=\"s2l1c\"):\n \"\"\"Get a specific sampledata to play around.\n\n So far the following sampledata exist:\n\n * 's2l1c': One Sentinel-2 Level 1C scene with a reference dataset.\n * 'lsts': A time series of 105 Landsat scenes each with the bands b3 (red), b4 (nir), b5 (swir1) and fmask.\n\n Keyword Arguments:\n dataset {str} -- The name of the dataset (default: {'s2l1c'}).\n\n Returns:\n [dict] -- A dictionary with paths and information about the sampledata.\n \"\"\"\n if dataset == \"s2l1c\":\n search_string = os.path.join(DIR_DATA, dataset, \"**\", \"*_B??.jp2\")\n files = glob.glob(search_string, recursive=True)\n if not files:\n raise IOError(f\"Could not find raster files of the s2l1c dataset. Search string: {search_string}\")\n basename_splitted = [pth.replace(\".jp2\", \"\").split(\"_\")[-2:] for pth in files]\n dset = {\"raster_files\": files,\n \"raster_bands\": [ele[1] for ele in basename_splitted],\n \"raster_times\": [ele[0] for ele in basename_splitted],\n \"vector_file\": os.path.join(DIR_DATA, \"s2l1c\", \"s2l1c_ref.gpkg\"),\n \"vector_file_osm\": os.path.join(DIR_DATA, \"s2l1c\", \"gis_osm_landuse-water_a_free_1_area-10000-to-500000.gpkg\")}\n\n elif dataset == \"lsts\":\n search_string = os.path.join(DIR_DATA, dataset, \"**\", \"*.tif\")\n files = glob.glob(search_string, recursive=True)\n if not files:\n raise IOError(f\"Could not find raster files of the lsts dataset. 
Search string: {search_string}\")\n basename_splitted = [os.path.basename(pth).replace(\".tif\", \"\").split(\"_\") for pth in files]\n dset = {\"raster_files\": files,\n \"raster_bands\": [ele[1] for ele in basename_splitted],\n \"raster_times\": [ele[0][9:16] for ele in basename_splitted]}\n\n # If you want to add a new dataset here, do not forget to do all of the following steps:\n # 1) add the dataset in the eo-box/sampledata/eobox/sampledata/data/\n # 2) write the code here to get the paths of the data and possibly some additional information\n # 3) write a test to make sure you get the data\n # 4) add the new dataset to package_data in eo-box/sampledata/eobox/setup.py\n # 5) add the new dataset to package_data in eo-box/sampledata/MANIFEST.in\n # 6) change the version number in eo-box/sampledata/eobox/sampledata/__init__.py to '..0'\n\n return dset"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates rasterio.windows.Window instances with given size which fully cover a raster.", "response": "def windows_from_blocksize(blocksize_xy, width, height):\n \"\"\"Create rasterio.windows.Window instances with given size which fully cover a raster.\n\n Arguments:\n blocksize_xy {int or list of two int} -- Size of the windows. One integer defines the width and\n height of each window; a list of two integers defines the width and the height separately.\n width {int} -- Width of the raster for which to create the windows.\n height {int} -- Height of the raster for which to create the windows.\n\n Returns:\n list -- List of windows according to the following format\n ``[[<row>, <col>], rasterio.windows.Window(<col_off>, <row_off>, <width>, <height>)]``.\n \"\"\"\n\n # checks the blocksize input\n value_error_msg = \"'blocksize_xy' must be an integer or a list of two integers.\"\n if isinstance(blocksize_xy, int):\n blockxsize, blockysize = (blocksize_xy, blocksize_xy)\n elif isinstance(blocksize_xy, list):\n if len(blocksize_xy) != 2:\n raise ValueError(value_error_msg)\n else:\n if not all([isinstance(blocksize_xy[0], int), isinstance(blocksize_xy[1], int)]):\n raise ValueError(value_error_msg)\n 
blockxsize, blockysize = blocksize_xy\n else:\n raise ValueError(value_error_msg)\n\n # create the col_off and row_off elements for all windows\n n_cols = int(np.ceil(width / blockxsize))\n n_rows = int(np.ceil(height / blockysize))\n col = list(range(n_cols)) * n_rows\n col_off = np.array(col) * blockxsize\n row = np.repeat(list(range(n_rows)), n_cols)\n row_off = row * blockysize\n\n # create the windows\n # if necessary, reduce the width and/or height of the border windows\n blocksize_wins = []\n for ridx, roff, cidx, coff, in zip(row, row_off, col, col_off):\n if coff + blockxsize > width:\n bxsize = width - coff\n else:\n bxsize = blockxsize\n if roff + blockysize > height:\n bysize = height - roff\n else:\n bysize = blockysize\n blocksize_wins.append([[ridx, cidx], rasterio.windows.Window(coff, roff, bxsize, bysize)])\n return blocksize_wins"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_dst_resolution(self, dst_res=None):\n if dst_res is None:\n dst_res = min(self._res_indices.keys())\n return dst_res", "response": "Get default resolution i. e. 
the highest resolution or smallest cell size."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads windows for chunks - wise processing from raster internal tiling.", "response": "def block_windows(self, res=None): # setter and getter ?\n \"\"\"Load windows for chunks-wise processing from raster internal tiling (first raster of given resolution).\n\n Arguments:\n res {numeric} -- Resolution determining the raster (1st of resolution group) from which to take the tiling.\n \"\"\"\n\n if res is None:\n res = max(self._res_indices.keys())\n self._windows_res = res\n a_file_index_given_res = self._res_indices[res][0]\n with rasterio.open(self._layer_files[a_file_index_given_res]) as src:\n wins_of_first_dst_res_layer = tuple(src.block_windows())\n self.windows = np.array([win[1] for win in wins_of_first_dst_res_layer])\n self.windows_row = np.array([win[0][0] for win in wins_of_first_dst_res_layer])\n self.windows_col = np.array([win[0][1] for win in wins_of_first_dst_res_layer])"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate rasterio. windows. Window instances with given size which fully cover the raster.", "response": "def windows_from_blocksize(self, blocksize_xy=512):\n \"\"\"Create rasterio.windows.Window instances with given size which fully cover the raster.\n\n Arguments:\n blocksize_xy {int or list of two int} -- Size of the window. If one integer is given it defines\n the width and height of the window. 
If a list of two integers is given, the first defines the\n width and the second the height.\n\n Returns:\n None -- But the attributes ``windows``, ``windows_row`` and ``windows_col`` are updated.\n \"\"\"\n meta = self._get_template_for_given_resolution(self.dst_res, \"meta\")\n width = meta[\"width\"]\n height = meta[\"height\"]\n blocksize_wins = windows_from_blocksize(blocksize_xy, width, height)\n\n self.windows = np.array([win[1] for win in blocksize_wins])\n self.windows_row = np.array([win[0][0] for win in blocksize_wins])\n self.windows_col = np.array([win[0][1] for win in blocksize_wins])\n return self"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_template_for_given_resolution(self, res, return_):\n path = self._layer_files[self._res_indices[res][0]]\n if return_ == \"path\":\n return_value = path\n else:\n with rasterio.open(str(path)) as src:\n if return_ == \"meta\":\n return_value = src.meta\n elif return_ == \"windows\":\n return_value = tuple(src.block_windows())\n else:\n raise ValueError(\"'return_' must be 'path', 'meta' or 'windows'.\")\n return return_value", "response": "Given the specified resolution ('res'), return the template layer path, meta or windows."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting Windows (W), W-row, W-col and W-index of windows, e.g. loaded with block_windows, as a dataframe.", "response": "def windows_df(self):\n \"\"\"Get Windows (W) W-row, W-col and W-index of windows e.g. 
loaded with :meth:`block_windows` as a dataframe.\n\n Returns:\n [dataframe] -- A dataframe with the window information and indices (row, col, index).\n \"\"\"\n\n import pandas as pd\n if self.windows is None:\n raise Exception(\"You need to call 'block_windows' or 'windows_from_blocksize' first.\")\n df_wins = []\n for row, col, win in zip(self.windows_row, self.windows_col, self.windows):\n df_wins.append(pd.DataFrame({\"row\": [row], \"col\": [col], \"Window\": [win]}))\n df_wins = pd.concat(df_wins).set_index([\"row\", \"col\"])\n df_wins[\"window_index\"] = range(df_wins.shape[0])\n df_wins = df_wins.sort_index()\n return df_wins"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef ji_windows(self, ij_win): # what can be given to ij_win NOT intuitive/right name by now!!!\n ji_windows = {}\n transform_src = self._layer_meta[self._res_indices[self._windows_res][0]][\"transform\"]\n for res in self._res_indices:\n transform_dst = self._layer_meta[self._res_indices[res][0]][\"transform\"]\n ji_windows[res] = window_from_window(window_src=self.windows[ij_win],\n transform_src=transform_src,\n transform_dst=transform_dst)\n return ji_windows", "response": "Returns a dictionary with the resolution-specific windows for a given window index."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_arrays(self, ji_win):\n if isinstance(ji_win, dict):\n ji_windows = ji_win\n else:\n ji_windows = self.ji_windows(ji_win)\n\n arrays = []\n for filename, res in zip(self._layer_files, self._layer_resolution):\n with rasterio.open(filename) as src:\n arr = src.read(1, window=ji_windows[res])\n arrays.append(arr)\n if self.dst_res is not None:\n arrays = self._resample(arrays=arrays, ji_windows=ji_windows)\n return arrays", "response": "Get the data of a window given the ji_windows derived with ji_windows."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the 
following Python 3 code\ndef _resample(self, arrays, ji_windows):\n # get a destination array template\n win_dst = ji_windows[self.dst_res]\n aff_dst = self._layer_meta[self._res_indices[self.dst_res][0]][\"transform\"]\n arrays_dst = list()\n for i, array in enumerate(arrays):\n arr_dst = np.zeros((int(win_dst.height), int(win_dst.width)))\n if self._layer_resolution[i] > self.dst_res:\n resampling = getattr(Resampling, self.upsampler)\n elif self._layer_resolution[i] < self.dst_res:\n resampling = getattr(Resampling, self.downsampler)\n else:\n arrays_dst.append(array.copy())\n continue\n reproject(array, arr_dst, # arr_dst[0, :, :, i],\n src_transform=self._layer_meta[i][\"transform\"],\n dst_transform=aff_dst,\n src_crs=self._layer_meta[0][\"crs\"],\n dst_crs=self._layer_meta[0][\"crs\"],\n resampling=resampling)\n arrays_dst.append(arr_dst.copy())\n arrays_dst = np.stack(arrays_dst, axis=2) # n_images x n x m x 10 would be the synergise format\n return arrays_dst", "response": "Resample all arrays with potentially different resolutions to a common resolution."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _process_windows_merge_stack(self, func, **kwargs):\n ji_results = self._process_windows(func, **kwargs)\n for idx_layer in range(len(ji_results[0])): # this is the number of output layers\n for j in np.unique(self.windows_row):\n win_indices_j = np.where(self.windows_row == j)[0]\n layer_merged_j = np.hstack([ji_results[idx][idx_layer] for idx in win_indices_j])\n if j == 0:\n layer_merged = layer_merged_j\n else:\n layer_merged = np.vstack([layer_merged, layer_merged_j])\n if idx_layer == 0:\n layers_merged = layer_merged\n else:\n layers_merged = np.dstack([layers_merged, layer_merged]) # np.stack(..., axis=2) would fail once layers_merged is 3-d\n return layers_merged", "response": "Load the (resampled) arrays of all windows, apply a custom function on them, then merge and stack the results into one array."} {"SOURCE": "codesearchnet", "instruction": "Implement a 
Python 3 function for\nloading the (resampled) arrays of all windows and applying a custom function on them.", "response": "def _process_windows(self, func, **kwargs):\n \"\"\"Load the (resampled) arrays of all windows and apply a custom function on them.\"\"\"\n ji_results = []\n for ji_win in range(len(self.windows)):\n ji_results.append(self._process_window(ji_win, func, **kwargs))\n return ji_results"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nprocess a window and apply a custom function on it.", "response": "def _process_window(self, ji_win, func, **kwargs):\n \"\"\"Load the (resampled) array of window ji_win and apply a custom function on it.\"\"\"\n arr = self.get_arrays(ji_win)\n result = func(arr, **kwargs)\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_window_from_xy(self, xy):\n a_transform = self._get_template_for_given_resolution(res=self.dst_res, return_=\"meta\")[\"transform\"]\n row, col = transform.rowcol(a_transform, xy[0], xy[1])\n ij_containing_xy = None\n for ji, win in enumerate(self.windows):\n (row_start, row_end), (col_start, col_end) = rasterio.windows.toranges(win)\n # print(row, col, row_start, row_end, col_start, col_end)\n if ((col >= col_start) & (col < col_end)) & ((row >= row_start) & (row < row_end)):\n ij_containing_xy = ji\n break\n if ij_containing_xy is None:\n raise ValueError(\"The given 'xy' value is not contained in any window.\")\n return ij_containing_xy", "response": "Get the window index given a coordinate (raster CRS)."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nextracting values from single-band raster files for pixels overlapping with a vector dataset.", "response": "def extract(src_vector: str,\n burn_attribute: str,\n src_raster: list,\n dst_names: list,\n dst_dir: str,\n src_raster_template: str = None,\n gdal_dtype: int = 4,\n n_jobs: int = 1):\n \"\"\"Extract values from 
a list of single-band rasters for pixels overlapping with a vector dataset.\n\n The extracted data will be stored in the ``dst_dir`` by using the ``dst_names`` for the\n filename. If a file with a given name already exists, the raster will be skipped.\n\n Arguments:\n src_vector {str} -- Filename of the vector dataset. Currently it must have the same CRS as\n the raster.\n burn_attribute {str} -- Name of the attribute column in the ``src_vector`` dataset to be\n stored with the extracted data. This should usually be a unique ID for the features\n (points, lines, polygons) in the vector dataset.\n src_raster {list} -- List of filenames of the single-band raster files from which to\n extract.\n dst_names {list} -- List of names corresponding to ``src_raster``, used to store and later\n identify the extracted data.\n dst_dir {str} -- Directory in which to store the data.\n\n Keyword Arguments:\n src_raster_template {str} -- A template raster to be used for rasterizing the vector file.\n Usually the first element of ``src_raster``. 
(default: {None})\n gdal_dtype {int} -- Numeric GDAL data type, defaults to 4 which is UInt32.\n See https://github.com/mapbox/rasterio/blob/master/rasterio/dtypes.py for useful look-up\n tables.\n\n Returns:\n [int] -- If successful, 0 is returned as exit code.\n \"\"\"\n if src_raster_template is None:\n src_raster_template = src_raster[0]\n path_rasterized = os.path.join(dst_dir, f\"burn_attribute_rasterized_{burn_attribute}.tif\")\n paths_extracted_aux = {ele: os.path.join(dst_dir, f\"{ele}.npy\") \\\n for ele in [f\"aux_vector_{burn_attribute}\",\n \"aux_coord_x\",\n \"aux_coord_y\"]}\n paths_extracted_raster = {}\n for path, name in zip(src_raster, dst_names):\n dst = f\"{os.path.join(dst_dir, name)}.npy\"\n if not os.path.exists(dst):\n paths_extracted_raster[path] = dst\n\n if not os.path.exists(dst_dir):\n os.makedirs(dst_dir)\n # if it does not already exist, here we first create the rasterized data\n if not os.path.exists(path_rasterized):\n if src_raster_template is None:\n src_raster_template = src_raster[0]\n # print(\"Rasterizing vector attribute.\")\n rasterize(src_vector=src_vector,\n burn_attribute=burn_attribute,\n src_raster_template=src_raster_template,\n dst_rasterized=path_rasterized,\n gdal_dtype=gdal_dtype)\n\n # if any of the destination files do not exist we need the locations of the pixels to be\n # extracted in form of a numpy array bool (mask_arr) that fits the rasters from which we will\n # extract below\n if not (all([os.path.exists(path) for path in paths_extracted_aux.values()]) and \\\n all([os.path.exists(path) for path in paths_extracted_raster.values()])):\n # print(\"Creating mask array for pixels to be extracted.\")\n mask_arr = _get_mask_array(path_rasterized, paths_extracted_aux, burn_attribute)\n else:\n return 0\n\n # create the pixel coordinates if they do not exist\n if not all([os.path.exists(paths_extracted_aux[\"aux_coord_x\"]),\n os.path.exists(paths_extracted_aux[\"aux_coord_y\"])]):\n 
_create_and_save_coords(path_rasterized, paths_extracted_aux, mask_arr)\n\n # lets extract the raster values in case of sequential processing\n # or remove existing raster layers to prepare parallel processing\n if n_jobs == 1:\n for path_src, path_dst in tqdm(paths_extracted_raster.items(),\n total=len(paths_extracted_raster)):\n _extract_and_save_one_layer(path_src, path_dst, mask_arr)\n else:\n import multiprocessing as mp\n if n_jobs == -1:\n n_jobs = mp.cpu_count()\n pool = mp.Pool(processes=n_jobs)\n _ = [pool.apply_async(_extract_and_save_one_layer,\n args=(src, dst, mask_arr)) for \\\n src, dst in paths_extracted_raster.items()]\n pool.close()\n pool.join()\n return 0"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads data extracted and stored by extract.", "response": "def load_extracted(src_dir: str,\n patterns=\"*.npy\",\n vars_in_cols: bool = True,\n index: pd.Series = None):\n \"\"\"Load data extracted and stored by :py:func:`extract`\n\n Arguments:\n src_dir {str} -- The directory where the data is stored.\n\n Keyword Arguments:\n patterns {str, or list of str} -- A pattern (str) or list of patterns (list)\n to identify the variables to be loaded.\n The default loads all variables, i.e. all .npy files. 
(default: {'*.npy'})\n vars_in_cols {bool} -- Return the variables in columns (``True``) or rows ``False``\n (default: {True})\n index {pd.Series} -- A boolean pandas Series which indicates with ``True`` which samples to\n load.\n\n Returns:\n pandas.DataFrame -- A dataframe with the data.\n \"\"\"\n def _load(path, index):\n if index is None:\n arr = np.load(str(path))\n else:\n arr = np.load(str(path), mmap_mode=\"r\")[index]\n return arr\n\n src_dir = Path(src_dir)\n paths = []\n if isinstance(patterns, str):\n patterns = [patterns]\n for pat in patterns:\n paths += src_dir.glob(pat)\n\n if vars_in_cols:\n df_data = {}\n for path in paths:\n df_data[path.stem] = _load(path, index)\n df_data = pd.DataFrame(df_data)\n if index is not None:\n df_data.index = index.index[index]\n else:\n df_data = []\n for path in paths:\n arr = _load(path, index)\n df_data.append(pd.DataFrame(np.expand_dims(arr, 0), index=[path.stem]))\n df_data = pd.concat(df_data)\n if index is not None:\n df_data.columns = index.index[index]\n return df_data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef extrema(self, x0, y0, w, h):\n minimum = 9223372036854775807\n maximum = 0\n for y in range(y0, y0 + h):\n for x in range(x0, x0 + w):\n value = self[x, y]\n if value != self.filler:\n minimum = min(minimum, value)\n maximum = max(maximum, value)\n return minimum, maximum", "response": "Returns the minimum and maximum values contained in a given area."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef rgb2short(rgb):\n incs = (0x00, 0x5f, 0x87, 0xaf, 0xd7, 0xff)\n # Break 6-char RGB code into 3 integer vals.\n parts = [ int(h, 16) for h in re.split(r'(..)(..)(..)', rgb)[1:4] ]\n res = []\n for part in parts:\n i = 0\n while i < len(incs)-1:\n s, b = incs[i], incs[i+1] # smaller, bigger\n if s <= part <= b:\n s1 = abs(s - part)\n b1 = abs(b - part)\n if s1 < b1: closest = s\n 
else: closest = b\n res.append(closest)\n break\n i += 1\n #print '***', res\n res = ''.join([ ('%02.x' % i) for i in res ])\n equiv = RGB2SHORT_DICT[ res ]\n #print '***', res, equiv\n return equiv, res", "response": "Find the closest xterm - 256 approximation to the given RGB value."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets the cursor to the desired position.", "response": "def set_cursor(self, x, y):\n \"\"\"\n Sets the cursor to the desired position.\n\n :param x: X position\n :param y: Y position\n \"\"\"\n curses.curs_set(1)\n self.screen.move(y, x)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef put(self, x, y, text, fg, bg):\n if x < self.width and y < self.height:\n try:\n self.screen.addstr(y, x, symbols.encode(text), self.pairs[fg, bg])\n except curses.error:\n # Ignore out of bounds error\n pass", "response": "Writes a string at the desired coordinates using the provided colors."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwait for an event to happen and returns a string related to the event.", "response": "def poll_event(self):\n \"\"\"\n Waits for an event to happen and returns a string related to the event.\n\n If the event is a normal (letter) key press, the letter is returned (case sensitive)\n\n :return: Event type\n \"\"\"\n # Flush all inputs before this one that were done since last poll\n curses.flushinp()\n\n ch = self.screen.getch()\n\n if ch == 27:\n return EVENT_ESC\n elif ch == -1 or ch == curses.KEY_RESIZE:\n return EVENT_RESIZE\n elif ch == 10 or ch == curses.KEY_ENTER:\n return EVENT_ENTER\n elif ch == 127 or ch == curses.KEY_BACKSPACE:\n return EVENT_BACKSPACE\n elif ch == curses.KEY_UP:\n return EVENT_UP\n elif ch == curses.KEY_DOWN:\n return EVENT_DOWN\n elif ch == curses.KEY_LEFT:\n return EVENT_LEFT\n elif ch == curses.KEY_RIGHT:\n return EVENT_RIGHT\n elif ch == 3:\n 
return EVENT_CTRL_C\n elif 0 <= ch < 256:\n return chr(ch)\n else:\n return EVENT_UNHANDLED"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncomputing a single capture value for the multiprocessing pool.", "response": "def compute_capture(args):\n \"\"\"Callable function for the multiprocessing pool.\"\"\"\n x, y, w, h, params = args\n return x, y, mandelbrot_capture(x, y, w, h, params)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndrawing the main panel of the current application.", "response": "def draw_panel(cb, pool, params, plane):\n \"\"\"\n Draws the application's main panel, displaying the current Mandelbrot view.\n\n :param cb: Cursebox instance.\n :type cb: cursebox.Cursebox\n :param params: Current application parameters.\n :type params: params.Params\n :param plane: Plane containing the current Mandelbrot values.\n :type plane: plane.Plane\n \"\"\"\n w = cb.width - MENU_WIDTH - 1\n h = cb.height - 1\n\n params.plane_w = w\n params.plane_h = h\n params.resize(w, h)\n\n palette = PALETTES[params.palette][1]\n if params.reverse_palette:\n palette = palette[::-1]\n\n # draw_gradient(t, 1, 1, w, h, palette, params.dither_type)\n\n generated = 0\n missing_coords = []\n\n # Check for coordinates that have no value in current plane\n xs = range(params.plane_x0, params.plane_x0 + params.plane_w - 1)\n ys = range(params.plane_y0, params.plane_y0 + params.plane_h - 1)\n for x in xs:\n for y in ys:\n if plane[x, y] is None:\n missing_coords.append((x, y, params))\n generated += 1\n\n # Compute all missing values via multiprocessing\n n_processes = 0\n if len(missing_coords) > 0:\n n_cores = pool._processes\n n_processes = len(missing_coords) // 256\n if n_processes > n_cores:\n n_processes = n_cores\n\n start = time.time()\n for i, result in enumerate(pool.imap_unordered(compute, missing_coords, chunksize=256)):\n plane[result[0], result[1]] = result[2]\n if time.time() - start > 2:\n if i % 200 == 0:\n draw_progress_bar(cb, "Render is 
taking a longer time...\", i, len(missing_coords))\n cb.refresh()\n\n if generated > 0:\n params.log(\"Added %d missing cells\" % generated)\n if n_processes > 1:\n params.log(\"(Used %d processes)\" % n_processes)\n\n min_value = 0.0\n max_value = params.max_iterations\n max_iterations = params.max_iterations\n\n if params.adaptive_palette:\n min_value, max_value = plane.extrema(params.plane_x0, params.plane_y0,\n params.plane_w, params.plane_h)\n\n crosshairs_coord = None\n if params.crosshairs:\n crosshairs_coord = params.crosshairs_coord\n\n # Draw all values in cursebox\n for x in xs:\n for y in ys:\n value = (plane[x, y] + params.palette_offset) % (params.max_iterations + 1)\n if params.adaptive_palette:\n # Remap values from (min_value, max_value) to (0, max_iterations)\n if max_value - min_value > 0:\n value = ((value - min_value) / (max_value - min_value)) * max_iterations\n else:\n value = max_iterations\n\n # Dithered mode\n if params.dither_type < 2:\n draw_dithered_color(cb, x - params.plane_x0 + 1,\n y - params.plane_y0 + 1,\n palette, params.dither_type,\n value, max_iterations,\n crosshairs_coord=crosshairs_coord)\n # 256 colors mode\n else:\n draw_color(cb, x - params.plane_x0 + 1,\n y - params.plane_y0 + 1,\n value, max_iterations, palette,\n crosshairs_coord=crosshairs_coord)\n\n # Draw bounding box\n draw_box(cb, 0, 0, w + 1, h + 1)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef draw_menu(cb, params, qwertz):\n w = cb.width\n h = cb.height\n\n x0 = w - MENU_WIDTH + 1\n\n # Clear buffer inside the box\n fill(cb, x0, 1, MENU_WIDTH, h - 2, \" \")\n\n def draw_option(key, value, shortcuts):\n \"\"\"\n Helper function to draw options. 
Self-increments own counter.\n\n :param key: Name of the option.\n :param value: Value of the option.\n :param shortcuts: Keyboard shortcut keys.\n :return:\n \"\"\"\n draw_text(cb, x0 + 1, 2 + draw_option.counter,\n \"%s %s %s\" % (key, str(value).rjust(MENU_WIDTH - 14 - len(key)), shortcuts))\n draw_option.counter += 1\n draw_option.counter = 1\n\n z = \"Z\"\n y = \"Y\"\n if qwertz:\n z, y = y, z\n\n h_seps = [2]\n # Draw title\n draw_text(cb, x0, 1, (\"Almonds %s\" % __version__).center(MENU_WIDTH - 2))\n # Write options (and stats)\n # Mandelbrot position\n draw_option(\"Real\", \"{0:.13g}\".format(params.mb_cx),\n \"$[\" + symbols[\"ARROW_LEFT\"] + \"]$, $[\" + symbols[\"ARROW_RIGHT\"] + \"]$\")\n draw_option(\"Imaginary\", \"{0:.13g}\".format(params.mb_cy),\n \"$[\" + symbols[\"ARROW_UP\"] + \"]$, $[\" + symbols[\"ARROW_DOWN\"] + \"]$\")\n # FIXME: try to find a way to avoid this hack\n if is_native_windows():\n cb.put_arrow(x0 + 30, 3, \"up\", colors.default_bg(), colors.default_fg())\n cb.put_arrow(x0 + 35, 3, \"down\", colors.default_bg(), colors.default_fg())\n cb.put_arrow(x0 + 30, 4, \"left\", colors.default_bg(), colors.default_fg())\n cb.put_arrow(x0 + 35, 4, \"right\", colors.default_bg(), colors.default_fg())\n draw_option(\"Input coordinates...\", \"\", \"$[Enter]$\")\n draw_option.counter += 1\n h_seps.append(draw_option.counter + 1)\n # Mandelbrot options\n draw_option(\"Move speed\", params.move_speed, \"$[C]$, $[V]$\")\n draw_option(\"Zoom\", \"{0:.13g}\".format(params.zoom), \"$[\" + y + \"]$, $[U]$\")\n draw_option(\"Iterations\", params.max_iterations, \"$[I]$, $[O]$\")\n draw_option(\"Julia mode\", \"On\" if params.julia else \"Off\", \"$[J]$\")\n draw_option.counter += 1\n h_seps.append(draw_option.counter + 1)\n # Palette options\n draw_option(\"Palette\", PALETTES[params.palette][0], \"$[P]$\")\n draw_option(\"Color mode\", DITHER_TYPES[params.dither_type][0], \"$[D]$\")\n draw_option(\"Order\", \"Reversed\" if params.reverse_palette 
else \"Normal\", \"$[R]$\")\n draw_option(\"Mode\", \"Adaptive\" if params.adaptive_palette else \"Linear\", \"$[A]$\")\n draw_option(\"Cycle!\", \"\", \"$[\" + z + \"]$\")\n draw_option.counter += 1\n h_seps.append(draw_option.counter + 1)\n # Misc.\n draw_option(\"Hi-res capture\", \"\", \"$[H]$\")\n draw_option(\"Crosshairs\", \"On\" if params.crosshairs else \"Off\", \"$[X]$\")\n draw_option(\"Theme\", \"Dark\" if colors.dark else \"Light\", \"$[T]$\")\n draw_option(\"Save\", \"\", \"$[S]$\")\n draw_option(\"Load...\", \"\", \"$[L]$\")\n draw_option(\"Exit\", \"\", \"$[ESC]$\")\n\n # Draw box with separators\n middle = 3 + draw_option.counter\n draw_box(cb, w - MENU_WIDTH, 0, MENU_WIDTH, h, h_seps=h_seps + [middle - 1, middle + 1])\n\n # Draw log\n draw_text(cb, x0, middle, \"Event log\".center(MENU_WIDTH - 2))\n latest_logs = params.log.get_latest(h - middle)\n latest_logs = [textwrap.wrap(l, MENU_WIDTH - 4)[::-1] for l in latest_logs] # Wrap all messages\n latest_logs = [l for ls in latest_logs for l in ls] # Flatten [[str]] -> [str]\n i = h - 2\n for l in latest_logs:\n draw_text(cb, x0 + 1, i, l)\n i -= 1\n if i == middle + 1:\n break", "response": "Draws the main menu and options."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update_display(cb, pool, params, plane, qwertz):\n cb.clear()\n draw_panel(cb, pool, params, plane)\n update_position(params) # Update Mandelbrot-space coordinates before drawing them\n draw_menu(cb, params, qwertz)\n cb.refresh()", "response": "Draw all the panels and menu."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef save(params):\n if is_python3():\n import pickle\n cPickle = pickle\n else:\n import cPickle\n ts = datetime.datetime.fromtimestamp(time.time()).strftime(\"%Y-%m-%d_%H-%M-%S\")\n if not os.path.exists(\"saves/\"):\n os.makedirs(\"saves/\")\n with open(\"saves/almonds_%s.params\" % ts, \"wb\") as 
f:\n cPickle.dump(params, f)\n params.log(\"Current scene saved!\")", "response": "Saves the current parameters to a file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef capture(cb, pool, params):\n w, h = screen_resolution()\n\n # Re-adapt dimensions to match current plane ratio\n old_ratio = w / h\n new_ratio = params.plane_ratio\n if old_ratio > new_ratio:\n w = int(h * new_ratio)\n else:\n h = int(w / new_ratio)\n\n image = Image.new(\"RGB\", (w, h), \"white\")\n pixels = image.load()\n\n # FIXME: refactor common code to get_palette(params)\n palette = PALETTES[params.palette][1]\n if params.reverse_palette:\n palette = palette[::-1]\n\n # All coordinates to be computed as single arguments for processes\n coords = [(x, y, w, h, params) for x in range(w) for y in range(h)]\n\n results = []\n # Dispatch work to pool and draw results as they come in\n for i, result in enumerate(pool.imap_unordered(compute_capture, coords, chunksize=256)):\n results.append(result)\n if i % 2000 == 0:\n draw_progress_bar(cb, \"Capturing current scene...\", i, w * h)\n cb.refresh()\n\n min_value = 0.0\n max_value = params.max_iterations\n max_iterations = params.max_iterations\n\n if params.adaptive_palette:\n from operator import itemgetter\n min_value = min(results, key=itemgetter(2))[2]\n max_value = max(results, key=itemgetter(2))[2]\n\n # Draw pixels\n for result in results:\n value = result[2]\n if params.adaptive_palette:\n # Remap values from (min_value, max_value) to (0, max_iterations)\n if max_value - min_value > 0:\n value = ((value - min_value) / (max_value - min_value)) * max_iterations\n else:\n value = max_iterations\n pixels[result[0], result[1]] = get_color(value, params.max_iterations, palette)\n\n if not os.path.exists(\"captures/\"):\n os.makedirs(\"captures/\")\n\n ts = datetime.datetime.fromtimestamp(time.time()).strftime(\"%Y-%m-%d_%H-%M-%S\")\n filename = \"captures/almonds_%s.png\" % ts\n image.save(filename, 
\"PNG\")\n params.log(\"Current scene captured!\")\n params.log(\"(Used %d processes)\" % pool._processes)\n\n open_file(filename)", "response": "Renders and saves a screen - sized picture of the current position."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef cycle(cb, pool, params, plane):\n step = params.max_iterations // 20\n if step == 0:\n step = 1\n for i in range(0, params.max_iterations, step):\n params.palette_offset = i\n draw_panel(cb, pool, params, plane)\n cb.refresh()\n params.palette_offset = 0", "response": "Cycle the current Mandelbrot by drawing a palette."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninitializes coordinates and zoom for first use.", "response": "def init_coords(cb, params):\n \"\"\"\n Initializes coordinates and zoom for first use.\n\n Loads coordinates from Mandelbrot-space.\n\n :param cb: Cursebox instance.\n :type cb: cursebox.Cursebox\n :param params: Current application parameters.\n :type params: params.Params\n :return:\n \"\"\"\n w = cb.width - MENU_WIDTH - 1\n h = cb.height - 1\n\n params.plane_w = w\n params.plane_h = h\n params.resize(w, h)\n\n zoom(params, 1)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef clamp(n, lower, upper):\n if lower > upper:\n lower, upper = upper, lower\n return max(min(upper, n), lower)", "response": "Returns the given number n clamping the given lower and upper bounds."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the current screen s resolution. 
Should be multi - platform.", "response": "def screen_resolution():\n \"\"\"\n Returns the current screen's resolution.\n\n Should be multi-platform.\n\n :return: A tuple containing the width and height of the screen.\n \"\"\"\n w = 0\n h = 0\n try:\n # Windows\n import ctypes\n user32 = ctypes.windll.user32\n w, h = user32.GetSystemMetrics(0), user32.GetSystemMetrics(1)\n except AttributeError:\n try:\n # Mac OS X\n import AppKit\n size = AppKit.NSScreen.screens()[0].frame().size\n w, h = int(size.width), int(size.height)\n except ImportError:\n try:\n # Linux\n import Xlib\n import Xlib.display\n display = Xlib.display.Display()\n root = display.screen().root\n size = root.get_geometry()\n w, h = size.width, size.height\n except ImportError:\n w = 1920\n h = 1080\n\n return w, h"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef open_file(filename):\n if sys.platform.startswith(\"darwin\"):\n subprocess.call((\"open\", filename))\n elif sys.platform == \"cygwin\":\n subprocess.call((\"cygstart\", filename))\n elif os.name == \"nt\":\n os.system(\"start %s\" % filename)\n elif os.name == \"posix\":\n subprocess.call((\"xdg-open\", filename))", "response": "Open a file with its default application."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the appropriate block drawing symbol for the given intensity.", "response": "def dither_symbol(value, dither):\n \"\"\"\n Returns the appropriate block drawing symbol for the given intensity.\n :param value: intensity of the color, in the range [0.0, 1.0]\n :return: dithered symbol representing that intensity\n \"\"\"\n dither = DITHER_TYPES[dither][1]\n return dither[int(round(value * (len(dither) - 1)))]"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndraws a dithered color block on the terminal.", "response": "def draw_dithered_color(cb, x, y, palette, dither, n, n_max, 
crosshairs_coord=None):\n \"\"\"\n Draws a dithered color block on the terminal, given a palette.\n :type cb: cursebox.CurseBox\n \"\"\"\n i = n * (len(palette) - 1) / n_max\n c1 = palette[int(math.floor(i))]\n c2 = palette[int(math.ceil(i))]\n value = i - int(math.floor(i))\n\n symbol = dither_symbol(value, dither)\n\n if crosshairs_coord is not None:\n old_symbol = symbol\n symbol, crosshairs = get_crosshairs_symbol(x, y, old_symbol, crosshairs_coord)\n if crosshairs:\n sorted_palette = sort_palette(palette)\n if old_symbol == DITHER_TYPES[dither][1][0]:\n c2 = c1\n sorted_index = sorted_palette.index(c2)\n if sorted_index > len(sorted_palette) // 2:\n c1 = sorted_palette[0]\n else:\n c1 = sorted_palette[-1]\n\n cb.put(x, y, symbol, c1(), c2())"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef draw_box(cb, x0, y0, w, h, fg=colors.default_fg, bg=colors.default_bg, h_seps=[], v_seps=[]):\n w -= 1\n h -= 1\n corners = [(x0, y0), (x0 + w, y0), (x0, y0 + h), (x0 + w, y0 + h)]\n\n fg = fg()\n bg = bg()\n\n for i, c in enumerate(corners):\n cb.put(c[0], c[1], BOX_CORNERS[i], fg, bg)\n\n for s in h_seps + [0, h]:\n cb.put(x0 + 1, y0 + s, symbols[\"BOX_HORIZONTAL\"] * (w - 1), fg, bg)\n\n for y in range(1, h):\n for s in v_seps + [0, w]:\n cb.put(x0 + s, y0 + y, symbols[\"BOX_VERTICAL\"], fg, bg)\n\n for s in h_seps:\n cb.put(x0, y0 + s, symbols[\"BOX_X_LEFT\"], fg, bg)\n cb.put(x0 + w, y0 + s, symbols[\"BOX_X_RIGHT\"], fg, bg)\n\n for s in v_seps:\n cb.put(x0 + s, y0, symbols[\"BOX_X_TOP\"], fg, bg)\n cb.put(x0 + s, y0 + h, symbols[\"BOX_X_BOTTOM\"], fg, bg)", "response": "Draws a box in the given terminal."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef draw_progress_bar(cb, message, value, max_value):\n m_x = cb.width // 2\n m_y = cb.height // 2\n w = len(message) + 4\n h = 3\n draw_box(cb, m_x - w // 2, m_y - 1, w, h)\n message = \" %s \" % message\n i = int((value / 
max_value) * (len(message) + 2))\n message = \"$\" + message[:i] + \"$\" + message[i:]\n draw_text(cb, m_x - w // 2 + 1, m_y, message)", "response": "Draws a progress bar."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef mandelbrot_iterate(c, max_iterations, julia_seed=None):\n z = c\n if julia_seed is not None:\n c = julia_seed\n for iterations in range(max_iterations):\n z = z * z + c\n if abs(z) > 1000:\n return z, iterations\n return z, max_iterations", "response": "This function is used to iterate over the Mandelbrot fractal sequence."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntransforming the given coordinates from plane - space to Mandelbrot - space.", "response": "def get_coords(x, y, params):\n \"\"\"\n Transforms the given coordinates from plane-space to Mandelbrot-space (real and imaginary).\n\n :param x: X coordinate on the plane.\n :param y: Y coordinate on the plane.\n :param params: Current application parameters.\n :type params: params.Params\n :return: Tuple containing the re-mapped coordinates in Mandelbrot-space.\n \"\"\"\n n_x = x * 2.0 / params.plane_w * params.plane_ratio - 1.0\n n_y = y * 2.0 / params.plane_h - 1.0\n mb_x = params.zoom * n_x\n mb_y = params.zoom * n_y\n return mb_x, mb_y"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncomputes the number of iterations of the given plane - space coordinates.", "response": "def mandelbrot(x, y, params):\n \"\"\"\n Computes the number of iterations of the given plane-space coordinates.\n\n :param x: X coordinate on the plane.\n :param y: Y coordinate on the plane.\n :param params: Current application parameters.\n :type params: params.Params\n :return: Discrete number of iterations.\n \"\"\"\n mb_x, mb_y = get_coords(x, y, params)\n mb = mandelbrot_iterate(mb_x + 1j * mb_y, params.max_iterations, params.julia_seed)\n\n return mb[1]"} {"SOURCE": "codesearchnet", "instruction": 
"How would you implement a function in Python 3 that\ncomputes the number of iterations of the given pixel-space coordinates for high-res capture purposes.", "response": "def mandelbrot_capture(x, y, w, h, params):\n \"\"\"\n Computes the number of iterations of the given pixel-space coordinates,\n for high-res capture purposes.\n\n Contrary to :func:`mandelbrot`, this function returns a continuous\n number of iterations to avoid banding.\n\n :param x: X coordinate on the picture\n :param y: Y coordinate on the picture\n :param w: Width of the picture\n :param h: Height of the picture\n :param params: Current application parameters.\n :type params: params.Params\n :return: Continuous number of iterations.\n \"\"\"\n\n # FIXME: Figure out why these corrections are necessary or how to make them perfect\n # Viewport is offset compared to window when capturing without these (found empirically)\n if params.plane_ratio >= 1.0:\n x -= params.plane_w\n else:\n x += 3.0 * params.plane_w\n\n ratio = w / h\n n_x = x * 2.0 / w * ratio - 1.0\n n_y = y * 2.0 / h - 1.0\n mb_x = params.zoom * n_x + params.mb_cx\n mb_y = params.zoom * n_y + params.mb_cy\n\n mb = mandelbrot_iterate(mb_x + 1j * mb_y, params.max_iterations, params.julia_seed)\n z, iterations = mb\n\n # Continuous iteration count for no banding\n # https://en.wikipedia.org/wiki/Mandelbrot_set#Continuous_.28smooth.29_coloring\n nu = params.max_iterations\n if iterations < params.max_iterations:\n nu = iterations + 2 - abs(cmath.log(cmath.log(abs(z)) / cmath.log(params.max_iterations), 2))\n\n return clamp(nu, 0, params.max_iterations)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update_position(params):\n cx = params.plane_x0 + params.plane_w / 2.0\n cy = params.plane_y0 + params.plane_h / 2.0\n params.mb_cx, params.mb_cy = get_coords(cx, cy, params)", "response": "Updates the Mandelbrot-space coordinates of the current application."} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\napplying a zoom on the current parameters.", "response": "def zoom(params, factor):\n \"\"\"\n Applies a zoom on the current parameters.\n\n Computes the top-left plane-space coordinates from the Mandelbrot-space coordinates.\n\n :param params: Current application parameters.\n :param factor: Zoom factor by which the zoom ratio is divided (bigger factor, more zoom)\n \"\"\"\n params.zoom /= factor\n\n n_x = params.mb_cx / params.zoom\n n_y = params.mb_cy / params.zoom\n\n params.plane_x0 = int((n_x + 1.0) * params.plane_w / (2.0 * params.plane_ratio)) - params.plane_w // 2\n params.plane_y0 = int((n_y + 1.0) * params.plane_h / 2.0) - params.plane_h // 2"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef resize(self, w, h):\n self.plane_w = w\n self.plane_h = h\n self.plane_ratio = self.char_ratio * w / h\n\n if self.crosshairs:\n self.crosshairs_coord = ((w + 2) // 2, (h + 2) // 2)", "response": "Resets the plane ratio factor and the crosshairs coordinate."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking that the sender and entity handles match.", "response": "def check_sender_and_entity_handle_match(sender_handle, entity_handle):\n \"\"\"Ensure that sender and entity handles match.\n\n Basically we've already verified the sender is who they say when receiving the payload. However, the sender might\n be trying to set another author in the payload itself, since Diaspora has the sender in both the payload headers\n AND the object. We must ensure they're the same.\n \"\"\"\n if sender_handle != entity_handle:\n logger.warning(\"sender_handle and entity_handle don't match, aborting! 
sender_handle: %s, entity_handle: %s\",\n sender_handle, entity_handle)\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef element_to_objects(\n element: etree.ElementTree, sender: str, sender_key_fetcher:Callable[[str], str]=None, user: UserType =None,\n) -> List:\n \"\"\"Transform an Element to a list of entities recursively.\n\n Possible child entities are added to each entity ``_children`` list.\n\n :param tree: Element\n :param sender: Payload sender id\n :param sender_key_fetcher: Function to fetch sender public key. If not given, key will always be fetched\n over network. The function should take sender handle as the only parameter.\n :param user: Optional receiving user object. If given, should have an ``id``.\n :returns: list of entities\n \"\"\"\n entities = []\n cls = MAPPINGS.get(element.tag)\n if not cls:\n return []\n\n attrs = xml_children_as_dict(element)\n transformed = transform_attributes(attrs, cls)\n if hasattr(cls, \"fill_extra_attributes\"):\n transformed = cls.fill_extra_attributes(transformed)\n entity = cls(**transformed)\n # Add protocol name\n entity._source_protocol = \"diaspora\"\n # Save element object to entity for possible later use\n entity._source_object = etree.tostring(element)\n # Save receiving id to object\n if user:\n entity._receiving_actor_id = user.id\n if issubclass(cls, DiasporaRelayableMixin):\n # If relayable, fetch sender key for validation\n entity._xml_tags = get_element_child_info(element, \"tag\")\n if sender_key_fetcher:\n entity._sender_key = sender_key_fetcher(entity.actor_id)\n else:\n profile = retrieve_and_parse_profile(entity.handle)\n if profile:\n entity._sender_key = profile.public_key\n else:\n # If not relayable, ensure handles match\n if not check_sender_and_entity_handle_match(sender, entity.handle):\n return []\n try:\n entity.validate()\n except ValueError as ex:\n logger.error(\"Failed to validate entity %s: %s\", entity, 
ex, extra={\n \"attrs\": attrs,\n \"transformed\": transformed,\n })\n return []\n # Extract mentions\n entity._mentions = entity.extract_mentions()\n # Do child elements\n for child in element:\n entity._children.extend(element_to_objects(child, sender, user=user))\n # Add to entities list\n entities.append(entity)\n return entities", "response": "Transform an element tree to a list of entities."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef message_to_objects(\n message: str, sender: str, sender_key_fetcher:Callable[[str], str]=None, user: UserType =None,\n) -> List:\n \"\"\"Takes in a message extracted by a protocol and maps it to entities.\n\n :param message: XML payload\n :type message: str\n :param sender: Payload sender id\n :type message: str\n :param sender_key_fetcher: Function to fetch sender public key. If not given, key will always be fetched\n over network. The function should take sender handle as the only parameter.\n :param user: Optional receiving user object. If given, should have a `handle`.\n :returns: list of entities\n \"\"\"\n doc = etree.fromstring(message)\n if doc.tag in TAGS:\n return element_to_objects(doc, sender, sender_key_fetcher, user)\n return []", "response": "Takes in a message extracted by a protocol and maps it to a list of entities."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntransforming some attribute keys.", "response": "def transform_attributes(attrs, cls):\n \"\"\"Transform some attribute keys.\n\n :param attrs: Properties from the XML\n :type attrs: dict\n :param cls: Class of the entity\n :type cls: class\n \"\"\"\n transformed = {}\n for key, value in attrs.items():\n if value is None:\n value = \"\"\n if key == \"text\":\n transformed[\"raw_content\"] = value\n elif key == \"author\":\n if cls == DiasporaProfile:\n # Diaspora Profile XML message contains no GUID. We need the guid. 
Fetch it.\n profile = retrieve_and_parse_profile(value)\n transformed['id'] = value\n transformed[\"guid\"] = profile.guid\n else:\n transformed[\"actor_id\"] = value\n transformed[\"handle\"] = value\n elif key == 'guid':\n if cls != DiasporaProfile:\n transformed[\"id\"] = value\n transformed[\"guid\"] = value\n elif key in (\"root_author\", \"recipient\"):\n transformed[\"target_id\"] = value\n transformed[\"target_handle\"] = value\n elif key in (\"target_guid\", \"root_guid\", \"parent_guid\"):\n transformed[\"target_id\"] = value\n transformed[\"target_guid\"] = value\n elif key in (\"first_name\", \"last_name\"):\n values = [attrs.get('first_name'), attrs.get('last_name')]\n values = [v for v in values if v]\n transformed[\"name\"] = \" \".join(values)\n elif key == \"image_url\":\n if \"image_urls\" not in transformed:\n transformed[\"image_urls\"] = {}\n transformed[\"image_urls\"][\"large\"] = value\n elif key == \"image_url_small\":\n if \"image_urls\" not in transformed:\n transformed[\"image_urls\"] = {}\n transformed[\"image_urls\"][\"small\"] = value\n elif key == \"image_url_medium\":\n if \"image_urls\" not in transformed:\n transformed[\"image_urls\"] = {}\n transformed[\"image_urls\"][\"medium\"] = value\n elif key == \"tag_string\":\n if value:\n transformed[\"tag_list\"] = value.replace(\"#\", \"\").split(\" \")\n elif key == \"bio\":\n transformed[\"raw_content\"] = value\n elif key == \"searchable\":\n transformed[\"public\"] = True if value == \"true\" else False\n elif key in [\"target_type\"] and cls == DiasporaRetraction:\n transformed[\"entity_type\"] = DiasporaRetraction.entity_type_from_remote(value)\n elif key == \"remote_photo_path\":\n transformed[\"remote_path\"] = value\n elif key == \"remote_photo_name\":\n transformed[\"remote_name\"] = value\n elif key == \"status_message_guid\":\n transformed[\"linked_guid\"] = value\n transformed[\"linked_type\"] = \"Post\"\n elif key == \"author_signature\":\n transformed[\"signature\"] = 
value\n elif key in BOOLEAN_KEYS:\n transformed[key] = True if value == \"true\" else False\n elif key in DATETIME_KEYS:\n transformed[key] = datetime.strptime(value, \"%Y-%m-%dT%H:%M:%SZ\")\n elif key in INTEGER_KEYS:\n transformed[key] = int(value)\n else:\n transformed[key] = value\n return transformed"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the correct entity for this protocol.", "response": "def get_outbound_entity(entity: BaseEntity, private_key: RsaKey):\n \"\"\"Get the correct outbound entity for this protocol.\n\n We might have to look at entity values to decide the correct outbound entity.\n If we cannot find one, we should raise as conversion cannot be guaranteed to the given protocol.\n\n Private key of author is needed to be passed for signing the outbound entity.\n\n :arg entity: An entity instance which can be of a base or protocol entity class.\n :arg private_key: Private key of sender as an RSA object\n :returns: Protocol specific entity class instance.\n :raises ValueError: If conversion cannot be done.\n \"\"\"\n if getattr(entity, \"outbound_doc\", None):\n # If the entity already has an outbound doc, just return the entity as is\n return entity\n outbound = None\n cls = entity.__class__\n if cls in [DiasporaPost, DiasporaImage, DiasporaComment, DiasporaLike, DiasporaProfile, DiasporaRetraction,\n DiasporaContact, DiasporaReshare]:\n # Already fine\n outbound = entity\n elif cls == Post:\n outbound = DiasporaPost.from_base(entity)\n elif cls == Comment:\n outbound = DiasporaComment.from_base(entity)\n elif cls == Reaction:\n if entity.reaction == \"like\":\n outbound = DiasporaLike.from_base(entity)\n elif cls == Follow:\n outbound = DiasporaContact.from_base(entity)\n elif cls == Profile:\n outbound = DiasporaProfile.from_base(entity)\n elif cls == Retraction:\n outbound = DiasporaRetraction.from_base(entity)\n elif cls == Share:\n outbound = DiasporaReshare.from_base(entity)\n if not 
outbound:\n raise ValueError(\"Don't know how to convert this base entity to Diaspora protocol entities.\")\n if isinstance(outbound, DiasporaRelayableMixin) and not outbound.signature:\n # Sign by author if not signed yet. We don't want to overwrite any existing signature in the case\n # that this is being sent by the parent author\n outbound.sign(private_key)\n # If missing, also add same signature to `parent_author_signature`. This is required at the moment\n # in all situations but is apparently being removed.\n # TODO: remove this once Diaspora removes the extra signature\n outbound.parent_signature = outbound.signature\n return outbound"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget information from child elements of this element as a list since order is important.", "response": "def get_element_child_info(doc, attr):\n \"\"\"Get information from child elements of this elementas a list since order is important.\n\n Don't include signature tags.\n\n :param doc: XML element\n :param attr: Attribute to get from the elements, for example \"tag\" or \"text\".\n \"\"\"\n props = []\n for child in doc:\n if child.tag not in [\"author_signature\", \"parent_author_signature\"]:\n props.append(getattr(child, attr))\n return props"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nverify that the signed XML elements are signed by the claimed author.", "response": "def verify_relayable_signature(public_key, doc, signature):\n \"\"\"\n Verify the signed XML elements to have confidence that the claimed\n author did actually generate this message.\n \"\"\"\n sig_hash = _create_signature_hash(doc)\n cipher = PKCS1_v1_5.new(RSA.importKey(public_key))\n return cipher.verify(sig_hash, b64decode(signature))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef parse_diaspora_webfinger(document):\n webfinger = {\n \"hcard_url\": None,\n }\n try:\n 
doc = json.loads(document)\n for link in doc[\"links\"]:\n if link[\"rel\"] == \"http://microformats.org/profile/hcard\":\n webfinger[\"hcard_url\"] = link[\"href\"]\n break\n else:\n logger.warning(\"parse_diaspora_webfinger: found JSON webfinger but it has no hcard href\")\n raise ValueError\n except Exception:\n try:\n xrd = XRD.parse_xrd(document)\n webfinger[\"hcard_url\"] = xrd.find_link(rels=\"http://microformats.org/profile/hcard\").href\n except xml.parsers.expat.ExpatError:\n logger.warning(\"parse_diaspora_webfinger: found XML webfinger but it fails to parse (ExpatError)\")\n pass\n return webfinger", "response": "Parses a Diaspora webfinger document, which is in either JSON or XRD format."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef retrieve_diaspora_hcard(handle):\n webfinger = retrieve_and_parse_diaspora_webfinger(handle)\n document, code, exception = fetch_document(webfinger.get(\"hcard_url\"))\n if exception:\n return None\n return document", "response": "Retrieve a remote Diaspora hCard document."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef retrieve_and_parse_diaspora_webfinger(handle):\n try:\n host = handle.split(\"@\")[1]\n except AttributeError:\n logger.warning(\"retrieve_and_parse_diaspora_webfinger: invalid handle given: %s\", handle)\n return None\n document, code, exception = fetch_document(\n host=host, path=\"/.well-known/webfinger?resource=acct:%s\" % quote(handle),\n )\n if document:\n return parse_diaspora_webfinger(document)\n hostmeta = retrieve_diaspora_host_meta(host)\n if not hostmeta:\n return None\n url = hostmeta.find_link(rels=\"lrdd\").template.replace(\"{uri}\", quote(handle))\n document, code, exception = fetch_document(url)\n if exception:\n return None\n return parse_diaspora_webfinger(document)", "response": "Retrieve and parse a remote Diaspora webfinger document."} {"SOURCE": 
"codesearchnet", "instruction": "Can you implement a function in Python 3 that\nretrieves a remote Diaspora host - meta document.", "response": "def retrieve_diaspora_host_meta(host):\n \"\"\"\n Retrieve a remote Diaspora host-meta document.\n\n :arg host: Host to retrieve from\n :returns: ``XRD`` instance\n \"\"\"\n document, code, exception = fetch_document(host=host, path=\"/.well-known/host-meta\")\n if exception:\n return None\n xrd = XRD.parse_xrd(document)\n return xrd"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the element text or None if no element.", "response": "def _get_element_text_or_none(document, selector):\n \"\"\"\n Using a CSS selector, get the element and return the text, or None if no element.\n\n :arg document: ``HTMLElement`` document\n :arg selector: CSS selector\n :returns: str or None\n \"\"\"\n element = document.cssselect(selector)\n if element:\n return element[0].text\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_element_attr_or_none(document, selector, attribute):\n element = document.cssselect(selector)\n if element:\n return element[0].get(attribute)\n return None", "response": "Get the element and return the given attribute value or None if no element is found."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses all the fields we can from a hCard document to get a Profile.", "response": "def parse_profile_from_hcard(hcard: str, handle: str):\n \"\"\"\n Parse all the fields we can from a hCard document to get a Profile.\n\n :arg hcard: HTML hcard document (str)\n :arg handle: User handle in username@domain.tld format\n :returns: ``federation.entities.diaspora.entities.DiasporaProfile`` instance\n \"\"\"\n from federation.entities.diaspora.entities import DiasporaProfile # Circulars\n doc = html.fromstring(hcard)\n profile = 
DiasporaProfile(\n name=_get_element_text_or_none(doc, \".fn\"),\n image_urls={\n \"small\": _get_element_attr_or_none(doc, \".entity_photo_small .photo\", \"src\"),\n \"medium\": _get_element_attr_or_none(doc, \".entity_photo_medium .photo\", \"src\"),\n \"large\": _get_element_attr_or_none(doc, \".entity_photo .photo\", \"src\"),\n },\n public=True if _get_element_text_or_none(doc, \".searchable\") == \"true\" else False,\n id=handle,\n handle=handle,\n guid=_get_element_text_or_none(doc, \".uid\"),\n public_key=_get_element_text_or_none(doc, \".key\"),\n )\n return profile"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef retrieve_and_parse_content(\n guid: str, handle: str, entity_type: str, sender_key_fetcher: Callable[[str], str]=None,\n):\n \"\"\"Retrieve remote content and return an Entity class instance.\n\n This is basically the inverse of receiving an entity. Instead, we fetch it, then call \"handle_receive\".\n\n :param sender_key_fetcher: Function to use to fetch sender public key. If not given, network will be used\n to fetch the profile and the key. Function must take handle as only parameter and return a public key.\n :returns: Entity object instance or ``None``\n \"\"\"\n if not validate_handle(handle):\n return\n _username, domain = handle.split(\"@\")\n url = get_fetch_content_endpoint(domain, entity_type.lower(), guid)\n document, status_code, error = fetch_document(url)\n if status_code == 200:\n request = RequestType(body=document)\n _sender, _protocol, entities = handle_receive(request, sender_key_fetcher=sender_key_fetcher)\n if len(entities) > 1:\n logger.warning(\"retrieve_and_parse_content - more than one entity parsed from remote even though we\"\n \"expected only one! 
ID %s\", guid)\n if entities:\n return entities[0]\n return\n elif status_code == 404:\n logger.warning(\"retrieve_and_parse_content - remote content %s not found\", guid)\n return\n if error:\n raise error\n raise Exception(\"retrieve_and_parse_content - unknown problem when fetching document: %s, %s, %s\" % (\n document, status_code, error,\n ))", "response": "Retrieve remote content and parse it."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef retrieve_and_parse_profile(handle):\n hcard = retrieve_diaspora_hcard(handle)\n if not hcard:\n return None\n profile = parse_profile_from_hcard(hcard, handle)\n try:\n profile.validate()\n except ValueError as ex:\n logger.warning(\"retrieve_and_parse_profile - found profile %s but it didn't validate: %s\",\n profile, ex)\n return None\n return profile", "response": "Retrieve the remote user and return a Profile object."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget remote endpoint for delivering private payloads.", "response": "def get_private_endpoint(id: str, guid: str) -> str:\n \"\"\"Get remote endpoint for delivering private payloads.\"\"\"\n _username, domain = id.split(\"@\")\n return \"https://%s/receive/users/%s\" % (domain, guid)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ensure_timezone(dt, tz=None):\n if dt.tzinfo is None:\n return dt.replace(tzinfo=tz or tzlocal())\n else:\n return dt", "response": "Ensure the datetime
has a timezone set, using the given timezone or the local timezone by default."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nturns a list of dicts into XML nodes with tag names taken from the dict keys and element text taken from the dict values.", "response": "def struct_to_xml(node, struct):\n \"\"\"\n Turn a list of dicts into XML nodes with tag names taken from the dict\n keys and element text taken from dict values. This is a list of dicts\n so that the XML nodes can be ordered in the XML output.\n \"\"\"\n for obj in struct:\n for k, v in obj.items():\n etree.SubElement(node, k).text = v"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting full XML representation of an entity.", "response": "def get_full_xml_representation(entity, private_key):\n \"\"\"Get full XML representation of an entity.\n\n This contains the .. wrapper.\n\n Accepts either a Base entity or a Diaspora entity.\n\n Author `private_key` must be given so that certain entities can be signed.\n \"\"\"\n from federation.entities.diaspora.mappers import get_outbound_entity\n diaspora_entity = get_outbound_entity(entity, private_key)\n xml = diaspora_entity.to_xml()\n return \"%s\" % etree.tostring(xml).decode(\"utf-8\")"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_element_to_doc(doc, tag, value):\n element = doc.find(\".//%s\" % tag)\n if element is None:\n element = etree.SubElement(doc, tag)\n element.text = value", "response": "Set text value of an etree. 
Element of tag appending a new element with given tag."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef generate_host_meta(template=None, *args, **kwargs):\n if template == \"diaspora\":\n hostmeta = DiasporaHostMeta(*args, **kwargs)\n else:\n hostmeta = BaseHostMeta(*args, **kwargs)\n return hostmeta.render()", "response": "Generate a host - meta XRD document."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef generate_legacy_webfinger(template=None, *args, **kwargs):\n if template == \"diaspora\":\n webfinger = DiasporaWebFinger(*args, **kwargs)\n else:\n webfinger = BaseLegacyWebFinger(*args, **kwargs)\n return webfinger.render()", "response": "Generate a legacy webfinger XRD document."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef generate_nodeinfo2_document(**kwargs):\n return {\n \"version\": \"1.0\",\n \"server\": {\n \"baseUrl\": kwargs['server']['baseUrl'],\n \"name\": kwargs['server']['name'],\n \"software\": kwargs['server']['software'],\n \"version\": kwargs['server']['version'],\n },\n \"organization\": {\n \"name\": kwargs.get('organization', {}).get('name', None),\n \"contact\": kwargs.get('organization', {}).get('contact', None),\n \"account\": kwargs.get('organization', {}).get('account', None),\n },\n \"protocols\": kwargs.get('protocols', [\"diaspora\"]),\n \"relay\": kwargs.get('relay', ''),\n \"services\": {\n \"inbound\": kwargs.get('service', {}).get('inbound', []),\n \"outbound\": kwargs.get('service', {}).get('outbound', []),\n },\n \"openRegistrations\": kwargs['openRegistrations'],\n \"usage\": {\n \"users\": {\n \"total\": kwargs.get('usage', {}).get('users', {}).get('total'),\n \"activeHalfyear\": kwargs.get('usage', {}).get('users', {}).get('activeHalfyear'),\n \"activeMonth\": kwargs.get('usage', {}).get('users', {}).get('activeMonth'),\n 
\"activeWeek\": kwargs.get('usage', {}).get('users', {}).get('activeWeek'),\n },\n \"localPosts\": kwargs.get('usage', {}).get('localPosts'),\n \"localComments\": kwargs.get('usage', {}).get('localComments'),\n }\n }", "response": "Generate a NodeInfo2 document."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef generate_hcard(template=None, **kwargs):\n if template == \"diaspora\":\n hcard = DiasporaHCard(**kwargs)\n else:\n raise NotImplementedError()\n return hcard.render()", "response": "Generate a hCard document."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef retrieve_remote_content(\n id: str, guid: str=None, handle: str=None, entity_type: str=None, sender_key_fetcher: Callable[[str], str]=None,\n):\n \"\"\"Retrieve remote content and return an Entity object.\n\n Currently, due to no other protocols supported, always use the Diaspora protocol.\n\n :param sender_key_fetcher: Function to use to fetch sender public key. If not given, network will be used\n to fetch the profile and the key. 
Function must take handle as only parameter and return a public key.\n :returns: Entity class instance or ``None``\n \"\"\"\n # TODO add support for AP\n protocol_name = \"diaspora\"\n if not guid:\n guid = id\n utils = importlib.import_module(\"federation.utils.%s\" % protocol_name)\n return utils.retrieve_and_parse_content(\n guid=guid, handle=handle, entity_type=entity_type, sender_key_fetcher=sender_key_fetcher,\n )", "response": "Retrieve remote content and return an Entity object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef retrieve_remote_profile(id: str) -> Optional[Profile]:\n protocol = identify_protocol_by_id(id)\n utils = importlib.import_module(f\"federation.utils.{protocol.PROTOCOL_NAME}\")\n return utils.retrieve_and_parse_profile(id)", "response": "High level retrieve profile method."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef handle_receive(\n request: RequestType,\n user: UserType = None,\n sender_key_fetcher: Callable[[str], str] = None,\n skip_author_verification: bool = False\n) -> Tuple[str, str, List]:\n \"\"\"Takes a request and passes it to the correct protocol.\n\n Returns a tuple of:\n - sender id\n - protocol name\n - list of entities\n\n NOTE! The returned sender is NOT necessarily the *author* of the entity. By sender here we're\n talking about the sender of the *request*. 
If this object is being relayed by the sender, the author\n could actually be a different identity.\n\n :arg request: Request object of type RequestType - note not a HTTP request even though the structure is similar\n :arg user: User that will be passed to `protocol.receive` (only required on private encrypted content)\n MUST have a `private_key` and `id` if given.\n :arg sender_key_fetcher: Function that accepts sender handle and returns public key (optional)\n :arg skip_author_verification: Don't verify sender (test purposes, false default)\n :returns: Tuple of sender id, protocol name and list of entity objects\n \"\"\"\n logger.debug(\"handle_receive: processing request: %s\", request)\n found_protocol = identify_protocol_by_request(request)\n\n logger.debug(\"handle_receive: using protocol %s\", found_protocol.PROTOCOL_NAME)\n protocol = found_protocol.Protocol()\n sender, message = protocol.receive(\n request, user, sender_key_fetcher, skip_author_verification=skip_author_verification)\n logger.debug(\"handle_receive: sender %s, message %s\", sender, message)\n\n mappers = importlib.import_module(\"federation.entities.%s.mappers\" % found_protocol.PROTOCOL_NAME)\n entities = mappers.message_to_objects(message, sender, sender_key_fetcher, user)\n logger.debug(\"handle_receive: entities %s\", entities)\n\n return sender, found_protocol.PROTOCOL_NAME, entities", "response": "Takes a request and passes it to the correct protocol."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef identify_id(id: str) -> bool:\n return re.match(r'^https?://', id, flags=re.IGNORECASE) is not None", "response": "Try to identify whether this is an ActivityPub ID."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef identify_request(request: RequestType) -> bool:\n # noinspection PyBroadException\n try:\n data = json.loads(decode_if_bytes(request.body))\n if \"@context\" in data:\n return True\n 
except Exception:\n pass\n return False", "response": "Identify whether this is an ActivityPub request."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef build_send(self, entity: BaseEntity, from_user: UserType, to_user_key: RsaKey = None) -> Union[str, Dict]:\n if hasattr(entity, \"outbound_doc\") and entity.outbound_doc is not None:\n # Use pregenerated outbound document\n rendered = entity.outbound_doc\n else:\n rendered = entity.to_as2()\n return rendered", "response": "Builds the POST data for sending out to remotes."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef receive(\n self,\n request: RequestType,\n user: UserType = None,\n sender_key_fetcher: Callable[[str], str] = None,\n skip_author_verification: bool = False) -> Tuple[str, dict]:\n \"\"\"\n Receive a request.\n\n For testing purposes, `skip_author_verification` can be passed. Authorship will not be verified.\n \"\"\"\n self.user = user\n self.get_contact_key = sender_key_fetcher\n self.payload = json.loads(decode_if_bytes(request.body))\n self.request = request\n self.extract_actor()\n # Verify the message is from who it claims to be\n if not skip_author_verification:\n self.verify_signature()\n return self.actor, self.payload", "response": "Receive a request.\n\n For testing purposes, `skip_author_verification` can be passed. 
Authorship will not be verified."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncombining defaults with the Django configuration.", "response": "def get_configuration():\n \"\"\"\n Combine defaults with the Django configuration.\n \"\"\"\n configuration = {\n \"get_object_function\": None,\n \"hcard_path\": \"/hcard/users/\",\n \"nodeinfo2_function\": None,\n \"process_payload_function\": None,\n \"search_path\": None,\n # TODO remove or default to True once AP support is more ready\n \"activitypub\": False,\n }\n configuration.update(settings.FEDERATION)\n if not all([\n \"get_private_key_function\" in configuration,\n \"get_profile_function\" in configuration,\n \"base_url\" in configuration,\n ]):\n raise ImproperlyConfigured(\"Missing required FEDERATION settings, please check documentation.\")\n return configuration"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_function_from_config(item):\n config = get_configuration()\n func_path = config.get(item)\n module_path, func_name = func_path.rsplit(\".\", 1)\n module = importlib.import_module(module_path)\n func = getattr(module, func_name)\n return func", "response": "Import the function to get profile by handle."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef post_receive(self) -> None:\n from federation.utils.activitypub import retrieve_and_parse_profile # Circulars\n try:\n from federation.utils.django import get_function_from_config\n except ImportError:\n logger.warning(\"ActivitypubFollow.post_receive - Unable to send automatic Accept back, only supported on \"\n \"Django currently\")\n return\n get_private_key_function = get_function_from_config(\"get_private_key_function\")\n key = get_private_key_function(self.target_id)\n if not key:\n logger.warning(\"ActivitypubFollow.post_receive - Failed to send automatic Accept back: could not find 
\"\n \"profile to sign it with\")\n return\n accept = ActivitypubAccept(\n activity_id=f\"{self.target_id}#accept-{uuid.uuid4()}\",\n actor_id=self.target_id,\n target_id=self.activity_id,\n )\n try:\n profile = retrieve_and_parse_profile(self.actor_id)\n except Exception:\n profile = None\n if not profile:\n logger.warning(\"ActivitypubFollow.post_receive - Failed to fetch remote profile for sending back Accept\")\n return\n try:\n handle_send(\n accept,\n UserType(id=self.target_id, private_key=key),\n recipients=[{\n \"fid\": profile.inboxes[\"private\"],\n \"protocol\": \"activitypub\",\n \"public\": False,\n }],\n )\n except Exception:\n logger.exception(\"ActivitypubFollow.post_receive - Failed to send Accept back\")", "response": "Send back an Accept message to the Activitypub server."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nretrieve remote document by ID and return the entity.", "response": "def retrieve_and_parse_document(fid: str) -> Optional[Any]:\n \"\"\"\n Retrieve remote document by ID and return the entity.\n \"\"\"\n document, status_code, ex = fetch_document(fid, extra_headers={'accept': 'application/activity+json'})\n if document:\n document = json.loads(decode_if_bytes(document))\n entities = message_to_objects(document, fid)\n if entities:\n return entities[0]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef retrieve_and_parse_profile(fid: str) -> Optional[ActivitypubProfile]:\n profile = retrieve_and_parse_document(fid)\n if not profile:\n return\n try:\n profile.validate()\n except ValueError as ex:\n logger.warning(\"retrieve_and_parse_profile - found profile %s but it didn't validate: %s\",\n profile, ex)\n return\n return profile", "response": "Retrieve the remote fid and return a Profile object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfind the protocol that matches the given method and 
value.", "response": "def identify_protocol(method, value):\n # type: (str, Union[str, RequestType]) -> str\n \"\"\"\n Loop through protocols, import the protocol module and try to identify the id or request.\n \"\"\"\n for protocol_name in PROTOCOLS:\n protocol = importlib.import_module(f\"federation.protocols.{protocol_name}.protocol\")\n if getattr(protocol, f\"identify_{method}\")(value):\n return protocol\n else:\n raise NoSuitableProtocolFoundError()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntries to identify whether this is a Diaspora request.", "response": "def identify_request(request: RequestType):\n \"\"\"Try to identify whether this is a Diaspora request.\n\n Try first public message. Then private message. The check if this is a legacy payload.\n \"\"\"\n # Private encrypted JSON payload\n try:\n data = json.loads(decode_if_bytes(request.body))\n if \"encrypted_magic_envelope\" in data:\n return True\n except Exception:\n pass\n # Public XML payload\n try:\n xml = etree.fromstring(encode_if_text(request.body))\n if xml.tag == MAGIC_ENV_TAG:\n return True\n except Exception:\n pass\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the Magic Envelope and store it in the self. 
doc attribute.", "response": "def store_magic_envelope_doc(self, payload):\n \"\"\"Get the Magic Envelope, trying JSON first.\"\"\"\n try:\n json_payload = json.loads(decode_if_bytes(payload))\n except ValueError:\n # XML payload\n xml = unquote(decode_if_bytes(payload))\n xml = xml.lstrip().encode(\"utf-8\")\n logger.debug(\"diaspora.protocol.store_magic_envelope_doc: xml payload: %s\", xml)\n self.doc = etree.fromstring(xml)\n else:\n logger.debug(\"diaspora.protocol.store_magic_envelope_doc: json payload: %s\", json_payload)\n self.doc = self.get_json_payload_magic_envelope(json_payload)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreceives a message from the client.", "response": "def receive(\n self,\n request: RequestType,\n user: UserType = None,\n sender_key_fetcher: Callable[[str], str] = None,\n skip_author_verification: bool = False) -> Tuple[str, str]:\n \"\"\"Receive a payload.\n\n For testing purposes, `skip_author_verification` can be passed. 
Authorship will not be verified.\"\"\"\n self.user = user\n self.get_contact_key = sender_key_fetcher\n self.store_magic_envelope_doc(request.body)\n # Open payload and get actual message\n self.content = self.get_message_content()\n # Get sender handle\n self.sender_handle = self.get_sender()\n # Verify the message is from who it claims to be\n if not skip_author_verification:\n self.verify_signature()\n return self.sender_handle, self.content"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_message_content(self):\n body = self.doc.find(\n \".//{http://salmon-protocol.org/ns/magic-env}data\").text\n\n body = urlsafe_b64decode(body.encode(\"ascii\"))\n\n logger.debug(\"diaspora.protocol.get_message_content: %s\", body)\n return body", "response": "Given the Salmon Slap XML, extract out the payload."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef verify_signature(self):\n if self.get_contact_key:\n sender_key = self.get_contact_key(self.sender_handle)\n else:\n sender_key = fetch_public_key(self.sender_handle)\n if not sender_key:\n raise NoSenderKeyFoundError(\"Could not find a sender contact to retrieve key\")\n MagicEnvelope(doc=self.doc, public_key=sender_key, verify=True)", "response": "Verify that the signed XML elements have the confidence of the claimed author."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef build_send(self, entity: BaseEntity, from_user: UserType, to_user_key: RsaKey = None) -> Union[str, Dict]:\n if entity.outbound_doc is not None:\n # Use pregenerated outbound document\n xml = entity.outbound_doc\n else:\n xml = entity.to_xml()\n me = MagicEnvelope(etree.tostring(xml), private_key=from_user.private_key, author_handle=from_user.handle)\n rendered = me.render()\n if to_user_key:\n return EncryptedPayload.encrypt(rendered, to_user_key)\n return rendered", "response": "Builds the 
POST data for sending out to remotes."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef validate_reaction(self):\n if self.reaction not in self._reaction_valid_values:\n raise ValueError(\"reaction should be one of: {valid}\".format(\n valid=\", \".join(self._reaction_valid_values)\n ))", "response": "Ensure that the reaction is of a certain type."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate_relationship(self):\n if self.relationship not in self._relationship_valid_values:\n raise ValueError(\"relationship should be one of: {valid}\".format(\n valid=\", \".join(self._relationship_valid_values)\n ))", "response": "Ensure that the relationship is of a certain type."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntransforms an Element to a list of entities recursively.", "response": "def element_to_objects(payload: Dict) -> List:\n \"\"\"\n Transform an Element to a list of entities recursively.\n \"\"\"\n entities = []\n cls = MAPPINGS.get(payload.get('type'))\n if not cls:\n return []\n\n transformed = transform_attributes(payload, cls)\n entity = cls(**transformed)\n\n if hasattr(entity, \"post_receive\"):\n entity.post_receive()\n\n entities.append(entity)\n\n return entities"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_outbound_entity(entity: BaseEntity, private_key):\n if getattr(entity, \"outbound_doc\", None):\n # If the entity already has an outbound doc, just return the entity as is\n return entity\n outbound = None\n cls = entity.__class__\n if cls in [ActivitypubAccept, ActivitypubFollow, ActivitypubProfile]:\n # Already fine\n outbound = entity\n elif cls == Accept:\n outbound = ActivitypubAccept.from_base(entity)\n elif cls == Follow:\n outbound = ActivitypubFollow.from_base(entity)\n elif cls == Profile:\n outbound = 
ActivitypubProfile.from_base(entity)\n if not outbound:\n raise ValueError(\"Don't know how to convert this base entity to ActivityPub protocol entities.\")\n # TODO LDS signing\n # if isinstance(outbound, DiasporaRelayableMixin) and not outbound.signature:\n # # Sign by author if not signed yet. We don't want to overwrite any existing signature in the case\n # # that this is being sent by the parent author\n # outbound.sign(private_key)\n # # If missing, also add same signature to `parent_author_signature`. This is required at the moment\n # # in all situations but is apparently being removed.\n # # TODO: remove this once Diaspora removes the extra signature\n # outbound.parent_signature = outbound.signature\n return outbound", "response": "Returns the correct entity for this protocol."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef message_to_objects(\n message: Dict, sender: str, sender_key_fetcher: Callable[[str], str] = None, user: UserType = None,\n) -> List:\n \"\"\"\n Takes in a message extracted by a protocol and maps it to entities.\n \"\"\"\n # We only really expect one element here for ActivityPub.\n return element_to_objects(message)", "response": "Takes in a message extracted by a protocol and maps it to entities."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndoes validation. 1) Check `_required` have been given 2) Make sure all attrs in required have a non-empty value 3) Loop through attributes and call their `validate_` methods, if any. 
4) Validate allowed children 5) Validate signatures", "response": "def validate(self):\n \"\"\"Do validation.\n\n 1) Check `_required` have been given\n 2) Make sure all attrs in required have a non-empty value\n 3) Loop through attributes and call their `validate_` methods, if any.\n 4) Validate allowed children\n 5) Validate signatures\n \"\"\"\n attributes = []\n validates = []\n # Collect attributes and validation methods\n for attr in dir(self):\n if not attr.startswith(\"_\"):\n attr_type = type(getattr(self, attr))\n if attr_type != \"method\":\n if getattr(self, \"validate_{attr}\".format(attr=attr), None):\n validates.append(getattr(self, \"validate_{attr}\".format(attr=attr)))\n attributes.append(attr)\n self._validate_empty_attributes(attributes)\n self._validate_required(attributes)\n self._validate_attributes(validates)\n self._validate_children()\n self._validate_signatures()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _validate_required(self, attributes):\n required_fulfilled = set(self._required).issubset(set(attributes))\n if not required_fulfilled:\n raise ValueError(\n \"Not all required attributes fulfilled. 
Required: {required}\".format(required=set(self._required))\n )", "response": "Ensure required attributes are present."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _validate_empty_attributes(self, attributes):\n attrs_to_check = set(self._required) & set(attributes)\n for attr in attrs_to_check:\n value = getattr(self, attr) # We should always have a value here\n if value is None or value == \"\":\n raise ValueError(\n \"Attribute %s cannot be None or an empty string since it is required.\" % attr\n )", "response": "Check that required attributes are not empty."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck that the children we have are allowed here.", "response": "def _validate_children(self):\n \"\"\"Check that the children we have are allowed here.\"\"\"\n for child in self._children:\n if child.__class__ not in self._allowed_children:\n raise ValueError(\n \"Child %s is not allowed as a children for this %s type entity.\" % (\n child, self.__class__\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nensuring participation is of a certain type.", "response": "def validate_participation(self):\n \"\"\"Ensure participation is of a certain type.\"\"\"\n if self.participation not in self._participation_valid_values:\n raise ValueError(\"participation should be one of: {valid}\".format(\n valid=\", \".join(self._participation_valid_values)\n ))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a set of unique tags contained in raw_content.", "response": "def tags(self):\n \"\"\"Returns a `set` of unique tags contained in `raw_content`.\"\"\"\n if not self.raw_content:\n return set()\n return {word.strip(\"#\").lower() for word in self.raw_content.split() if word.startswith(\"#\") and len(word) > 1}"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef 
handle_create_payload(\n entity: BaseEntity,\n author_user: UserType,\n protocol_name: str,\n to_user_key: RsaKey = None,\n parent_user: UserType = None,\n) -> str:\n \"\"\"Create a payload with the given protocol.\n\n Any given user arguments must have ``private_key`` and ``handle`` attributes.\n\n :arg entity: Entity object to send. Can be a base entity or a protocol specific one.\n :arg author_user: User authoring the object.\n :arg protocol_name: Protocol to create payload for.\n :arg to_user_key: Public key of user private payload is being sent to, required for private payloads.\n :arg parent_user: (Optional) User object of the parent object, if there is one. This must be given for the\n Diaspora protocol if a parent object exists, so that a proper ``parent_author_signature`` can\n be generated. If given, the payload will be sent as this user.\n :returns: Built payload message (str)\n \"\"\"\n mappers = importlib.import_module(f\"federation.entities.{protocol_name}.mappers\")\n protocol = importlib.import_module(f\"federation.protocols.{protocol_name}.protocol\")\n protocol = protocol.Protocol()\n outbound_entity = mappers.get_outbound_entity(entity, author_user.private_key)\n if parent_user:\n outbound_entity.sign_with_parent(parent_user.private_key)\n send_as_user = parent_user if parent_user else author_user\n data = protocol.build_send(entity=outbound_entity, from_user=send_as_user, to_user_key=to_user_key)\n return data", "response": "Create a payload for the given entity."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef handle_send(\n entity: BaseEntity,\n author_user: UserType,\n recipients: List[Dict],\n parent_user: UserType = None,\n) -> None:\n \"\"\"Send an entity to remote servers.\n\n Using this we will build a list of payloads per protocol. After that, each recipient will get the generated\n protocol payload delivered. 
Delivery to the same endpoint will only be done once so it's ok to include\n the same endpoint as a receiver multiple times.\n\n Any given user arguments must have ``private_key`` and ``fid`` attributes.\n\n :arg entity: Entity object to send. Can be a base entity or a protocol specific one.\n :arg author_user: User authoring the object.\n :arg recipients: A list of recipients to delivery to. Each recipient is a dict\n containing at minimum the \"fid\", \"public\" and \"protocol\" keys.\n\n For ActivityPub and Diaspora payloads, \"fid\" should be an URL of the endpoint to deliver to.\n\n The \"protocol\" should be a protocol name that is known for this recipient.\n\n The \"public\" value should be a boolean to indicate whether the payload should be flagged as a\n public payload.\n\n TODO: support guessing the protocol over networks? Would need caching of results\n\n For private deliveries to Diaspora protocol recipients, \"public_key\" is also required.\n\n For example\n [\n {\n \"fid\": \"https://domain.tld/receive/users/1234-5678-0123-4567\",\n \"protocol\": \"diaspora\",\n \"public\": False,\n \"public_key\": ,\n },\n {\n \"fid\": \"https://domain2.tld/receive/public\",\n \"protocol\": \"diaspora\",\n \"public\": True,\n },\n {\n \"fid\": \"https://domain4.tld/sharedinbox/\",\n \"protocol\": \"activitypub\",\n \"public\": True,\n },\n {\n \"fid\": \"https://domain4.tld/profiles/jill\",\n \"protocol\": \"activitypub\",\n \"public\": False,\n },\n ]\n :arg parent_user: (Optional) User object of the parent object, if there is one. This must be given for the\n Diaspora protocol if a parent object exists, so that a proper ``parent_author_signature`` can\n be generated. 
If given, the payload will be sent as this user.\n \"\"\"\n payloads = []\n public_payloads = {\n \"activitypub\": {\n \"auth\": None,\n \"payload\": None,\n \"urls\": set(),\n },\n \"diaspora\": {\n \"auth\": None,\n \"payload\": None,\n \"urls\": set(),\n },\n }\n\n # Flatten to unique recipients\n unique_recipients = unique_everseen(recipients)\n\n # Generate payloads and collect urls\n for recipient in unique_recipients:\n fid = recipient[\"fid\"]\n public_key = recipient.get(\"public_key\")\n protocol = recipient[\"protocol\"]\n public = recipient[\"public\"]\n\n if protocol == \"activitypub\":\n try:\n payload = handle_create_payload(entity, author_user, protocol, parent_user=parent_user)\n if public:\n payload[\"to\"] = \"https://www.w3.org/ns/activitystreams#Public\"\n else:\n payload[\"to\"] = fid\n payload = json.dumps(payload).encode(\"utf-8\")\n except Exception as ex:\n logger.error(\"handle_send - failed to generate private payload for %s: %s\", fid, ex)\n continue\n payloads.append({\n \"auth\": get_http_authentication(author_user.private_key, f\"{author_user.id}#main-key\"),\n \"payload\": payload,\n \"content_type\": 'application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"',\n \"urls\": {fid},\n })\n elif protocol == \"diaspora\":\n if public:\n if public_key:\n raise ValueError(\"handle_send - Diaspora recipient cannot be public and use encrypted delivery\")\n if not public_payloads[protocol][\"payload\"]:\n public_payloads[protocol][\"payload\"] = handle_create_payload(\n entity, author_user, protocol, parent_user=parent_user,\n )\n public_payloads[\"diaspora\"][\"urls\"].add(fid)\n else:\n if not public_key:\n raise ValueError(\"handle_send - Diaspora recipient cannot be private without a public key for \"\n \"encrypted delivery\")\n # Private payload\n try:\n payload = handle_create_payload(\n entity, author_user, \"diaspora\", to_user_key=public_key, parent_user=parent_user,\n )\n payload = json.dumps(payload)\n except Exception 
as ex:\n logger.error(\"handle_send - failed to generate private payload for %s: %s\", fid, ex)\n continue\n payloads.append({\n \"urls\": {fid}, \"payload\": payload, \"content_type\": \"application/json\", \"auth\": None,\n })\n\n # Add public diaspora payload\n if public_payloads[\"diaspora\"][\"payload\"]:\n payloads.append({\n \"urls\": public_payloads[\"diaspora\"][\"urls\"], \"payload\": public_payloads[\"diaspora\"][\"payload\"],\n \"content_type\": \"application/magic-envelope+xml\", \"auth\": None,\n })\n\n logger.debug(\"handle_send - %s\", payloads)\n\n # Do actual sending\n for payload in payloads:\n for url in payload[\"urls\"]:\n try:\n send_document(\n url,\n payload[\"payload\"],\n auth=payload[\"auth\"],\n headers={\"Content-Type\": payload[\"content_type\"]},\n )\n except Exception as ex:\n logger.error(\"handle_send - failed to send payload to %s: %s, payload: %s\", url, ex, payload[\"payload\"])", "response": "Send an entity to remote servers, building one payload per protocol and delivering it once to each unique recipient endpoint."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the key_id from the sig element which contains urlsafe_b64encoded Diaspora handle.", "response": "def get_sender(doc):\n \"\"\"Get the key_id from the `sig` element which contains urlsafe_b64encoded Diaspora handle.\n\n :param doc: ElementTree document\n :returns: Diaspora handle\n \"\"\"\n key_id = doc.find(\".//{%s}sig\" % NAMESPACE).get(\"key_id\")\n return urlsafe_b64decode(key_id).decode(\"utf-8\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_payload(self):\n doc = etree.fromstring(self.message)\n self.payload = etree.tostring(doc, encoding=\"utf-8\")\n self.payload = urlsafe_b64encode(self.payload).decode(\"ascii\")\n return self.payload", "response": "Create the payload doc by serializing the message XML and urlsafe base64 encoding it."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef 
_build_signature(self):\n sig_contents = \\\n self.payload + \".\" + \\\n b64encode(b\"application/xml\").decode(\"ascii\") + \".\" + \\\n b64encode(b\"base64url\").decode(\"ascii\") + \".\" + \\\n b64encode(b\"RSA-SHA256\").decode(\"ascii\")\n sig_hash = SHA256.new(sig_contents.encode(\"ascii\"))\n cipher = PKCS1_v1_5.new(self.private_key)\n sig = urlsafe_b64encode(cipher.sign(sig_hash))\n key_id = urlsafe_b64encode(bytes(self.author_handle, encoding=\"utf-8\"))\n return sig, key_id", "response": "Create the signature using the private key."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nverify Magic Envelope document against public key.", "response": "def verify(self):\n \"\"\"Verify Magic Envelope document against public key.\"\"\"\n if not self.public_key:\n self.fetch_public_key()\n data = self.doc.find(\".//{http://salmon-protocol.org/ns/magic-env}data\").text\n sig = self.doc.find(\".//{http://salmon-protocol.org/ns/magic-env}sig\").text\n sig_contents = '.'.join([\n data,\n b64encode(b\"application/xml\").decode(\"ascii\"),\n b64encode(b\"base64url\").decode(\"ascii\"),\n b64encode(b\"RSA-SHA256\").decode(\"ascii\")\n ])\n sig_hash = SHA256.new(sig_contents.encode(\"ascii\"))\n cipher = PKCS1_v1_5.new(RSA.importKey(self.public_key))\n if not cipher.verify(sig_hash, urlsafe_b64decode(sig)):\n raise SignatureVerificationError(\"Signature cannot be verified using the given public key\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nextracting mentions from an entity with raw_content.", "response": "def extract_mentions(self):\n \"\"\"\n Extract mentions from an entity with ``raw_content``.\n\n :return: set\n \"\"\"\n if not hasattr(self, \"raw_content\"):\n return set()\n mentions = re.findall(r'@{[^;]+; [\\w.-]+@[^}]+}', self.raw_content)\n if not mentions:\n return set()\n mentions = {s.split(';')[1].strip(' }') for s in mentions}\n return mentions"} {"SOURCE": 
"codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fetch_country_by_ip(ip):\n iplookup = ipdata.ipdata()\n data = iplookup.lookup(ip)\n if data.get('status') != 200:\n return ''\n\n return data.get('response', {}).get('country_code', '')", "response": "Fetch the country code by IP"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef fetch_host_ip(host: str) -> str:\n try:\n ip = socket.gethostbyname(host)\n except socket.gaierror:\n return ''\n\n return ip", "response": "Fetch ip by host"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fetch_host_ip_and_country(host: str) -> Tuple:\n ip = fetch_host_ip(host)\n if not ip:\n return '', ''\n\n country = fetch_country_by_ip(ip)\n\n return ip, country", "response": "Fetch ip and country by host"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse a date string as specified by HTTP RFC 7231 section 7. 1. 
1.", "response": "def parse_http_date(date):\n \"\"\"\n Parse a date format as specified by HTTP RFC7231 section 7.1.1.1.\n\n The three formats allowed by the RFC are accepted, even if only the first\n one is still in widespread use.\n\n Return an integer expressed in seconds since the epoch, in UTC.\n\n Implementation copied from Django.\n https://github.com/django/django/blob/master/django/utils/http.py#L157\n License: BSD 3-clause\n \"\"\"\n MONTHS = 'jan feb mar apr may jun jul aug sep oct nov dec'.split()\n __D = r'(?P<day>\\d{2})'\n __D2 = r'(?P<day>[ \\d]\\d)'\n __M = r'(?P<mon>\\w{3})'\n __Y = r'(?P<year>\\d{4})'\n __Y2 = r'(?P<year>\\d{2})'\n __T = r'(?P<hour>\\d{2}):(?P<min>\\d{2}):(?P<sec>\\d{2})'\n RFC1123_DATE = re.compile(r'^\\w{3}, %s %s %s %s GMT$' % (__D, __M, __Y, __T))\n RFC850_DATE = re.compile(r'^\\w{6,9}, %s-%s-%s %s GMT$' % (__D, __M, __Y2, __T))\n ASCTIME_DATE = re.compile(r'^\\w{3} %s %s %s %s$' % (__M, __D2, __T, __Y))\n # email.utils.parsedate() does the job for RFC1123 dates; unfortunately\n # RFC7231 makes it mandatory to support RFC850 dates too.
So we roll\n # our own RFC-compliant parsing.\n for regex in RFC1123_DATE, RFC850_DATE, ASCTIME_DATE:\n m = regex.match(date)\n if m is not None:\n break\n else:\n raise ValueError(\"%r is not in a valid HTTP date format\" % date)\n try:\n year = int(m.group('year'))\n if year < 100:\n if year < 70:\n year += 2000\n else:\n year += 1900\n month = MONTHS.index(m.group('mon').lower()) + 1\n day = int(m.group('day'))\n hour = int(m.group('hour'))\n min = int(m.group('min'))\n sec = int(m.group('sec'))\n result = datetime.datetime(year, month, day, hour, min, sec)\n return calendar.timegm(result.utctimetuple())\n except Exception as exc:\n raise ValueError(\"%r is not a valid date\" % date) from exc"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting HTTP signature authentication for a request.", "response": "def get_http_authentication(private_key: RsaKey, private_key_id: str) -> HTTPSignatureHeaderAuth:\n \"\"\"\n Get HTTP signature authentication for a request.\n \"\"\"\n key = private_key.exportKey()\n return HTTPSignatureHeaderAuth(\n headers=[\"(request-target)\", \"user-agent\", \"host\", \"date\"],\n algorithm=\"rsa-sha256\",\n key=key,\n key_id=private_key_id,\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nverifying the HTTP signature in a request.", "response": "def verify_request_signature(request: RequestType, public_key: Union[str, bytes]):\n \"\"\"\n Verify HTTP signature in request against a public key.\n \"\"\"\n key = encode_if_text(public_key)\n date_header = request.headers.get(\"Date\")\n if not date_header:\n raise ValueError(\"Request Date header is missing\")\n\n ts = parse_http_date(date_header)\n dt = datetime.datetime.utcfromtimestamp(ts).replace(tzinfo=pytz.utc)\n delta = datetime.timedelta(seconds=30)\n now = datetime.datetime.utcnow().replace(tzinfo=pytz.utc)\n if dt < now - delta or dt > now + delta:\n raise ValueError(\"Request Date is too far in future or past\")\n\n 
HTTPSignatureHeaderAuth.verify(request, key_resolver=lambda **kwargs: key)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_xml(self):\n element = etree.Element(self._tag_name)\n struct_to_xml(element, [\n {\"text\": self.raw_content},\n {\"guid\": self.guid},\n {\"author\": self.handle},\n {\"public\": \"true\" if self.public else \"false\"},\n {\"created_at\": format_dt(self.created_at)},\n {\"provider_display_name\": self.provider_display_name},\n ])\n return element", "response": "Convert to XML message."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert to XML message.", "response": "def to_xml(self):\n \"\"\"Convert to XML message.\"\"\"\n element = etree.Element(self._tag_name)\n struct_to_xml(element, [\n {\"author\": self.handle},\n {\"recipient\": self.target_handle},\n {\"following\": \"true\" if self.following else \"false\"},\n {\"sharing\": \"true\" if self.following else \"false\"},\n ])\n return element"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts to XML message.", "response": "def to_xml(self):\n \"\"\"Convert to XML message.\"\"\"\n element = etree.Element(self._tag_name)\n struct_to_xml(element, [\n {\"author\": self.handle},\n {\"first_name\": self.name},\n {\"last_name\": \"\"}, # We only have one field - splitting it would be artificial\n {\"image_url\": self.image_urls[\"large\"]},\n {\"image_url_small\": self.image_urls[\"small\"]},\n {\"image_url_medium\": self.image_urls[\"medium\"]},\n {\"gender\": self.gender},\n {\"bio\": self.raw_content},\n {\"location\": self.location},\n {\"searchable\": \"true\" if self.public else \"false\"},\n {\"nsfw\": \"true\" if self.nsfw else \"false\"},\n {\"tag_string\": \" \".join([\"#%s\" % tag for tag in self.tag_list])},\n ])\n return element"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
to_xml(self):\n element = etree.Element(self._tag_name)\n struct_to_xml(element, [\n {\"author\": self.handle},\n {\"target_guid\": self.target_guid},\n {\"target_type\": DiasporaRetraction.entity_type_to_remote(self.entity_type)},\n ])\n return element", "response": "Convert to XML message."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting entity type between our Entity names and Diaspora names.", "response": "def entity_type_to_remote(value):\n \"\"\"Convert entity type between our Entity names and Diaspora names.\"\"\"\n if value in DiasporaRetraction.mapped.values():\n values = list(DiasporaRetraction.mapped.values())\n index = values.index(value)\n return list(DiasporaRetraction.mapped.keys())[index]\n return value"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nbuild a dict of attributes and their values ignoring any properties functions and anything that starts with an underscore.", "response": "def get_base_attributes(entity):\n \"\"\"Build a dict of attributes of an entity.\n\n Returns attributes and their values, ignoring any properties, functions and anything that starts\n with an underscore.\n \"\"\"\n attributes = {}\n cls = entity.__class__\n for attr, _ in inspect.getmembers(cls, lambda o: not isinstance(o, property) and not inspect.isroutine(o)):\n if not attr.startswith(\"_\"):\n attributes[attr] = getattr(entity, attr)\n return attributes"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\npads an input string to be a multiple of block_size bytes.", "response": "def pkcs7_pad(inp, block_size):\n \"\"\"\n Using the PKCS#7 padding scheme, pad to be a multiple of\n bytes. 
Ruby's AES encryption pads with this scheme, but\n pycrypto doesn't support it.\n\n Implementation copied from pyaspora:\n https://github.com/mjnovice/pyaspora/blob/master/pyaspora/diaspora/protocol.py#L209\n \"\"\"\n val = block_size - len(inp) % block_size\n if val == 0:\n return inp + (bytes([block_size]) * block_size)\n else:\n return inp + (bytes([val]) * val)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef pkcs7_unpad(data):\n if isinstance(data, str):\n return data[0:-ord(data[-1])]\n else:\n return data[0:-data[-1]]", "response": "Remove the padding bytes that were added at point of encryption."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef encrypt(payload, public_key):\n iv, key, encrypter = EncryptedPayload.get_iv_key_encrypter()\n aes_key_json = EncryptedPayload.get_aes_key_json(iv, key)\n cipher = PKCS1_v1_5.new(public_key)\n aes_key = b64encode(cipher.encrypt(aes_key_json))\n padded_payload = pkcs7_pad(payload.encode(\"utf-8\"), AES.block_size)\n encrypted_me = b64encode(encrypter.encrypt(padded_payload))\n return {\n \"aes_key\": aes_key.decode(\"utf-8\"),\n \"encrypted_magic_envelope\": encrypted_me.decode(\"utf8\"),\n }", "response": "Encrypt a payload using an encrypted JSON wrapper."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the template in JSON form", "response": "def template(self):\n \"\"\"Returns the template in JSON form\"\"\"\n if self._template:\n return self._template\n\n template_json = self.read_template(self.args.tmplname)\n\n self._template = loads(template_json)\n return self._template"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the creation time of the specified LogRecord as formatted text.", "response": "def formatTime(self, record, datefmt=None):\n \"\"\"Return the creation time of the specified LogRecord as formatted text.\n\n If 
``datefmt`` (a string) is specified, it is used to format the creation time of the record.\n If ``datefmt`` is 'Z' then creation time of the record will be in Zulu Time Zone.\n Otherwise, the ISO8601 format is used.\n \"\"\"\n ct = self.converter(record.created)\n if datefmt:\n if datefmt == 'Z':\n t = time.strftime(\"%Y-%m-%dT%H:%M:%S\", ct)\n s = \"{}.{:03.0f}Z\".format(t, record.msecs)\n else:\n s = time.strftime(datefmt, ct)\n else:\n t = time.strftime(self.default_time_format, ct)\n s = self.default_msec_format % (t, record.msecs)\n\n return s"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nrenaming level names. :param mapping: Mapping level names to new ones :type mapping: dict", "response": "def override_level_names(self, mapping):\n \"\"\"Rename level names.\n\n :param mapping: Mapping level names to new ones\n :type mapping: dict\n \"\"\"\n if not isinstance(mapping, dict):\n return\n for key, val in mapping.items():\n if key in self._level_names:\n self._level_names[key] = val"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\noverride default color of elements.", "response": "def override_colors(self, colors):\n \"\"\"Override default color of elements.\n\n :param colors: New color value for given elements\n :type colors: dict\n \"\"\"\n if not isinstance(colors, dict):\n return\n for key in self._color[True]:\n if key in colors:\n self._color[True][key] = colors[key]"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\npreparing log record with given fields.", "response": "def __prepare_record(self, record, enabled_fields):\n \"\"\"Prepare log record with given fields.\"\"\"\n message = record.getMessage()\n if hasattr(record, 'prefix'):\n message = \"{}{}\".format((str(record.prefix) + ' ') if record.prefix else '', message)\n\n obj = {\n 'name': record.name,\n 'asctime': self.formatTime(record, self.datefmt),\n 'created': record.created,\n 'msecs': record.msecs,\n 
'relativeCreated': record.relativeCreated,\n 'levelno': record.levelno,\n 'levelname': self._level_names[record.levelname],\n 'thread': record.thread,\n 'threadName': record.threadName,\n 'process': record.process,\n 'pathname': record.pathname,\n 'filename': record.filename,\n 'module': record.module,\n 'lineno': record.lineno,\n 'funcName': record.funcName,\n 'message': message,\n 'exception': record.exc_info[0].__name__ if record.exc_info else None,\n 'stacktrace': record.exc_text,\n }\n\n if not isinstance(enabled_fields, list):\n enabled_fields = [str(enabled_fields)]\n\n ef = {}\n for item in enabled_fields:\n if not isinstance(item, (str, tuple)):\n continue\n if not isinstance(item, tuple):\n ef[item] = item\n else:\n ef[item[0]] = item[1]\n\n result = {}\n for key, val in obj.items():\n if key in ef:\n result[ef[key]] = val\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef __obj2json(self, obj):\n return json.dumps(obj, indent=self._indent, sort_keys=self._sort_keys)", "response": "Serialize obj to a JSON formatted string."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef validate_host(cert, name):\n cn = None\n for t, v in cert.get_subject().get_components():\n if t == b'CN':\n cn = v\n break\n\n if cn == name:\n return True\n\n # checking SAN\n s_name = name.decode('ascii')\n for i in range(cert.get_extension_count()):\n ext = cert.get_extension(i)\n if ext.get_short_name() == b'subjectAltName':\n s = str(ext)\n # SANs usually have a form like: DNS:hostname\n if s.startswith('DNS:') and s[4:] == s_name:\n return True\n\n # TODO handle wildcards\n return False", "response": "Validates host name against certificate.\n Returns True if the host name matches the certificate's CN or a subjectAltName entry,\n False otherwise.\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 
3 code\ndef revert_to_clear(tds_sock):\n enc_conn = tds_sock.conn.sock\n clear_conn = enc_conn._transport\n enc_conn.shutdown()\n tds_sock.conn.sock = clear_conn\n tds_sock._writer._transport = clear_conn\n tds_sock._reader._transport = clear_conn", "response": "Reverts connection back to non - encrypted mode\n Used when client sent ENCRYPT_OFF flag."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef tds7_crypt_pass(password):\n encoded = bytearray(ucs2_codec.encode(password)[0])\n for i, ch in enumerate(encoded):\n encoded[i] = ((ch << 4) & 0xff | (ch >> 4)) ^ 0xA5\n return encoded", "response": "Mangle password according to tds rules\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef unpack(self, struc):\n buf, offset = readall_fast(self, struc.size)\n return struc.unpack_from(buf, offset)", "response": "Unpacks given structure from stream\n "} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread num_chars UCS2 string from the stream", "response": "def read_ucs2(self, num_chars):\n \"\"\" Reads num_chars UCS2 string from the stream \"\"\"\n buf = readall(self, num_chars * 2)\n return ucs2_codec.decode(buf)[0]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_collation(self):\n buf = readall(self, Collation.wire_size)\n return Collation.unpack(buf)", "response": "Reads a Collation object from the stream."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _read_packet(self):\n try:\n pos = 0\n while pos < _header.size:\n received = self._transport.recv_into(self._bufview[pos:_header.size-pos])\n if received == 0:\n raise tds_base.ClosedConnectionError()\n pos += received\n except tds_base.TimeoutError:\n self._session.put_cancel()\n raise\n self._pos = _header.size\n self._type, self._status, self._size, 
self._session._spid, _ = _header.unpack_from(self._bufview, 0)\n self._have = pos\n while pos < self._size:\n received = self._transport.recv_into(self._bufview[pos:], self._size - pos)\n if received == 0:\n raise tds_base.ClosedConnectionError()\n pos += received\n self._have += received", "response": "Reads next TDS packet from the underlying transport."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreading a single packet and returns the payload of the packet.", "response": "def read_whole_packet(self):\n \"\"\" Reads single packet and returns bytes payload of the packet\n\n Can only be called when transport's read pointer is at the beginning\n of the packet.\n \"\"\"\n self._read_packet()\n return readall(self, self._size - _header.size)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write(self, data):\n data_off = 0\n while data_off < len(data):\n left = len(self._buf) - self._pos\n if left <= 0:\n self._write_packet(final=False)\n else:\n to_write = min(left, len(data) - data_off)\n self._buf[self._pos:self._pos + to_write] = data[data_off:data_off + to_write]\n self._pos += to_write\n data_off += to_write", "response": "Writes given bytes buffer into the stream\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nwriting a string to the stream.", "response": "def write_string(self, s, codec):\n \"\"\" Write string encoding it with codec into stream \"\"\"\n for i in range(0, len(s), self.bufsize):\n chunk = s[i:i + self.bufsize]\n buf, consumed = codec.encode(chunk)\n assert consumed == len(chunk)\n self.write(buf)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _write_packet(self, final):\n status = 1 if final else 0\n _header.pack_into(self._buf, 0, self._type, status, self._pos, 0, self._packet_no)\n self._packet_no = (self._packet_no + 1) % 256\n 
self._transport.sendall(self._buf[:self._pos])\n self._pos = 8", "response": "Writes a single TDS packet into underlying transport."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef raise_db_exception(self):\n if not self.messages:\n raise tds_base.Error(\"Request failed, server didn't send error message\")\n msg = None\n while True:\n msg = self.messages[-1]\n if msg['msgno'] == 3621: # the statement has been terminated\n self.messages = self.messages[:-1]\n else:\n break\n\n error_msg = ' '.join(m['message'] for m in self.messages)\n ex = _create_exception_by_message(msg, error_msg)\n raise ex", "response": "Raises exception from last server message"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_type_info(self, curcol):\n r = self._reader\n # User defined data type of the column\n if tds_base.IS_TDS72_PLUS(self):\n user_type = r.get_uint()\n else:\n user_type = r.get_usmallint()\n curcol.column_usertype = user_type\n curcol.flags = r.get_usmallint() # Flags\n type_id = r.get_byte()\n serializer_class = self._tds.type_factory.get_type_serializer(type_id)\n curcol.serializer = serializer_class.from_stream(r)", "response": "Reads TYPE_INFO structure from the reader and stores it in the curcol."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreads and processes COLMETADATA stream and returns a list of COLINFO structs.", "response": "def tds7_process_result(self):\n \"\"\" Reads and processes COLMETADATA stream\n\n This stream contains a list of returned columns.\n Stream format link: http://msdn.microsoft.com/en-us/library/dd357363.aspx\n \"\"\"\n self.log_response_message('got COLMETADATA')\n r = self._reader\n\n # read number of columns and allocate the columns structure\n\n num_cols = r.get_smallint()\n\n # This can be a DUMMY results token from a cursor fetch\n\n if num_cols == 
-1:\n return\n\n self.param_info = None\n self.has_status = False\n self.ret_status = None\n self.skipped_to_status = False\n self.rows_affected = tds_base.TDS_NO_COUNT\n self.more_rows = True\n self.row = [None] * num_cols\n self.res_info = info = _Results()\n\n #\n # loop through the columns populating COLINFO struct from\n # server response\n #\n header_tuple = []\n for col in range(num_cols):\n curcol = tds_base.Column()\n info.columns.append(curcol)\n self.get_type_info(curcol)\n\n curcol.column_name = r.read_ucs2(r.get_byte())\n precision = curcol.serializer.precision\n scale = curcol.serializer.scale\n size = curcol.serializer.size\n header_tuple.append(\n (curcol.column_name,\n curcol.serializer.get_typeid(),\n None,\n size,\n precision,\n scale,\n curcol.flags & tds_base.Column.fNullable))\n info.description = tuple(header_tuple)\n return info"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef process_param(self):\n self.log_response_message('got RETURNVALUE message')\n r = self._reader\n if tds_base.IS_TDS72_PLUS(self):\n ordinal = r.get_usmallint()\n else:\n r.get_usmallint() # ignore size\n ordinal = self._out_params_indexes[self.return_value_index]\n name = r.read_ucs2(r.get_byte())\n r.get_byte() # 1 - OUTPUT of sp, 2 - result of udf\n param = tds_base.Column()\n param.column_name = name\n self.get_type_info(param)\n param.value = param.serializer.read(r)\n self.output_params[ordinal] = param\n self.return_value_index += 1", "response": "Reads and processes the next parameter from the stream."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprocess the incoming token stream until the cancel flag is set.", "response": "def process_cancel(self):\n \"\"\"\n Process the incoming token stream until it finds\n an end token DONE with the cancel flag set.\n At that point the connection should be ready to handle a new query.\n\n In case when no cancel request is pending 
this function does nothing.\n \"\"\"\n self.log_response_message('got CANCEL message')\n # silly cases, nothing to do\n if not self.in_cancel:\n return\n\n while True:\n token_id = self.get_token_id()\n self.process_token(token_id)\n if not self.in_cancel:\n return"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nread and processes an error or info message from the server.", "response": "def process_msg(self, marker):\n \"\"\" Reads and processes ERROR/INFO streams\n\n Stream formats:\n\n - ERROR: http://msdn.microsoft.com/en-us/library/dd304156.aspx\n - INFO: http://msdn.microsoft.com/en-us/library/dd303398.aspx\n\n :param marker: TDS_ERROR_TOKEN or TDS_INFO_TOKEN\n \"\"\"\n self.log_response_message('got ERROR/INFO message')\n r = self._reader\n r.get_smallint() # size\n msg = {'marker': marker, 'msgno': r.get_int(), 'state': r.get_byte(), 'severity': r.get_byte(),\n 'sql_state': None}\n if marker == tds_base.TDS_INFO_TOKEN:\n msg['priv_msg_type'] = 0\n elif marker == tds_base.TDS_ERROR_TOKEN:\n msg['priv_msg_type'] = 1\n else:\n logger.error('tds_process_msg() called with unknown marker \"{0}\"'.format(marker))\n msg['message'] = r.read_ucs2(r.get_smallint())\n # server name\n msg['server'] = r.read_ucs2(r.get_byte())\n # stored proc name if available\n msg['proc_name'] = r.read_ucs2(r.get_byte())\n msg['line_number'] = r.get_int() if tds_base.IS_TDS72_PLUS(self) else r.get_smallint()\n # in case extended error data is sent, we just try to discard it\n\n # special case\n self.messages.append(msg)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef process_row(self):\n self.log_response_message(\"got ROW message\")\n r = self._reader\n info = self.res_info\n info.row_count += 1\n for i, curcol in enumerate(info.columns):\n curcol.value = self.row[i] = curcol.serializer.read(r)", "response": "Reads and handles ROW stream."} {"SOURCE": "codesearchnet", 
"instruction": "Here you have a function in Python 3, explain what it does\ndef process_nbcrow(self):\n self.log_response_message(\"got NBCROW message\")\n r = self._reader\n info = self.res_info\n if not info:\n self.bad_stream('got row without info')\n assert len(info.columns) > 0\n info.row_count += 1\n\n # reading bitarray for nulls, 1 represent null values for\n # corresponding fields\n nbc = readall(r, (len(info.columns) + 7) // 8)\n for i, curcol in enumerate(info.columns):\n if tds_base.my_ord(nbc[i // 8]) & (1 << (i % 8)):\n value = None\n else:\n value = curcol.serializer.read(r)\n self.row[i] = value", "response": "Reads and handles NBCROW stream."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreading and processes the end of the resource.", "response": "def process_end(self, marker):\n \"\"\" Reads and processes DONE/DONEINPROC/DONEPROC streams\n\n Stream format urls:\n\n - DONE: http://msdn.microsoft.com/en-us/library/dd340421.aspx\n - DONEINPROC: http://msdn.microsoft.com/en-us/library/dd340553.aspx\n - DONEPROC: http://msdn.microsoft.com/en-us/library/dd340753.aspx\n\n :param marker: Can be TDS_DONE_TOKEN or TDS_DONEINPROC_TOKEN or TDS_DONEPROC_TOKEN\n \"\"\"\n code_to_str = {\n tds_base.TDS_DONE_TOKEN: 'DONE',\n tds_base.TDS_DONEINPROC_TOKEN: 'DONEINPROC',\n tds_base.TDS_DONEPROC_TOKEN: 'DONEPROC',\n }\n self.end_marker = marker\n self.more_rows = False\n r = self._reader\n status = r.get_usmallint()\n r.get_usmallint() # cur_cmd\n more_results = status & tds_base.TDS_DONE_MORE_RESULTS != 0\n was_cancelled = status & tds_base.TDS_DONE_CANCELLED != 0\n done_count_valid = status & tds_base.TDS_DONE_COUNT != 0\n if self.res_info:\n self.res_info.more_results = more_results\n rows_affected = r.get_int8() if tds_base.IS_TDS72_PLUS(self) else r.get_int()\n self.log_response_message(\"got {} message, more_res={}, cancelled={}, rows_affected={}\".format(\n code_to_str[marker], more_results, was_cancelled, rows_affected))\n 
if was_cancelled or (not more_results and not self.in_cancel):\n self.in_cancel = False\n self.set_state(tds_base.TDS_IDLE)\n if done_count_valid:\n self.rows_affected = rows_affected\n else:\n self.rows_affected = -1\n self.done_flags = status\n if self.done_flags & tds_base.TDS_DONE_ERROR and not was_cancelled and not self.in_cancel:\n self.raise_db_exception()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef process_env_chg(self):\n self.log_response_message(\"got ENVCHANGE message\")\n r = self._reader\n size = r.get_smallint()\n type_id = r.get_byte()\n if type_id == tds_base.TDS_ENV_SQLCOLLATION:\n size = r.get_byte()\n self.conn.collation = r.get_collation()\n logger.info('switched collation to %s', self.conn.collation)\n skipall(r, size - 5)\n # discard old one\n skipall(r, r.get_byte())\n elif type_id == tds_base.TDS_ENV_BEGINTRANS:\n size = r.get_byte()\n assert size == 8\n self.conn.tds72_transaction = r.get_uint8()\n skipall(r, r.get_byte())\n elif type_id == tds_base.TDS_ENV_COMMITTRANS or type_id == tds_base.TDS_ENV_ROLLBACKTRANS:\n self.conn.tds72_transaction = 0\n skipall(r, r.get_byte())\n skipall(r, r.get_byte())\n elif type_id == tds_base.TDS_ENV_PACKSIZE:\n newval = r.read_ucs2(r.get_byte())\n r.read_ucs2(r.get_byte())\n new_block_size = int(newval)\n if new_block_size >= 512:\n # Is possible to have a shrink if server limits packet\n # size more than what we specified\n #\n # Reallocate buffer if possible (strange values from server or out of memory) use older buffer */\n self._writer.bufsize = new_block_size\n elif type_id == tds_base.TDS_ENV_DATABASE:\n newval = r.read_ucs2(r.get_byte())\n logger.info('switched to database %s', newval)\n r.read_ucs2(r.get_byte())\n self.conn.env.database = newval\n elif type_id == tds_base.TDS_ENV_LANG:\n newval = r.read_ucs2(r.get_byte())\n logger.info('switched language to %s', newval)\n r.read_ucs2(r.get_byte())\n self.conn.env.language 
= newval\n elif type_id == tds_base.TDS_ENV_CHARSET:\n newval = r.read_ucs2(r.get_byte())\n logger.info('switched charset to %s', newval)\n r.read_ucs2(r.get_byte())\n self.conn.env.charset = newval\n remap = {'iso_1': 'iso8859-1'}\n self.conn.server_codec = codecs.lookup(remap.get(newval, newval))\n elif type_id == tds_base.TDS_ENV_DB_MIRRORING_PARTNER:\n newval = r.read_ucs2(r.get_byte())\n logger.info('got mirroring partner %s', newval)\n r.read_ucs2(r.get_byte())\n elif type_id == tds_base.TDS_ENV_LCID:\n lcid = int(r.read_ucs2(r.get_byte()))\n logger.info('switched lcid to %s', lcid)\n self.conn.server_codec = codecs.lookup(lcid2charset(lcid))\n r.read_ucs2(r.get_byte())\n elif type_id == tds_base.TDS_ENV_UNICODE_DATA_SORT_COMP_FLAGS:\n old_comp_flags = r.read_ucs2(r.get_byte())\n comp_flags = r.read_ucs2(r.get_byte())\n self.conn.comp_flags = comp_flags\n elif type_id == 20:\n # routing\n sz = r.get_usmallint()\n protocol = r.get_byte()\n protocol_property = r.get_usmallint()\n alt_server = r.read_ucs2(r.get_usmallint())\n logger.info('got routing info proto=%d proto_prop=%d alt_srv=%s', protocol, protocol_property, alt_server)\n self.conn.route = {\n 'server': alt_server,\n 'port': protocol_property,\n }\n # OLDVALUE = 0x00, 0x00\n r.get_usmallint()\n else:\n logger.warning(\"unknown env type: {0}, skipping\".format(type_id))\n # discard byte values, not yet supported\n skipall(r, size - 1)", "response": "Reads and processes the ENVCHANGE stream."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef process_auth(self):\n r = self._reader\n w = self._writer\n pdu_size = r.get_smallint()\n if not self.authentication:\n raise tds_base.Error('Got unexpected token')\n packet = self.authentication.handle_next(readall(r, pdu_size))\n if packet:\n w.write(packet)\n w.flush()", "response": "Reads and processes SSPI stream."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nswitching 
state of the TDS session.", "response": "def set_state(self, state):\n \"\"\" Switches state of the TDS session.\n\n It also does state transition checks.\n :param state: New state, one of TDS_PENDING/TDS_READING/TDS_IDLE/TDS_DEAD/TDS_QUERYING\n \"\"\"\n prior_state = self.state\n if state == prior_state:\n return state\n if state == tds_base.TDS_PENDING:\n if prior_state in (tds_base.TDS_READING, tds_base.TDS_QUERYING):\n self.state = tds_base.TDS_PENDING\n else:\n raise tds_base.InterfaceError('logic error: cannot change query state from {0} to {1}'.\n format(tds_base.state_names[prior_state], tds_base.state_names[state]))\n elif state == tds_base.TDS_READING:\n # transitions to READING are valid only from PENDING\n if self.state != tds_base.TDS_PENDING:\n raise tds_base.InterfaceError('logic error: cannot change query state from {0} to {1}'.\n format(tds_base.state_names[prior_state], tds_base.state_names[state]))\n else:\n self.state = state\n elif state == tds_base.TDS_IDLE:\n if prior_state == tds_base.TDS_DEAD:\n raise tds_base.InterfaceError('logic error: cannot change query state from {0} to {1}'.\n format(tds_base.state_names[prior_state], tds_base.state_names[state]))\n self.state = state\n elif state == tds_base.TDS_DEAD:\n self.state = state\n elif state == tds_base.TDS_QUERYING:\n if self.state == tds_base.TDS_DEAD:\n raise tds_base.InterfaceError('logic error: cannot change query state from {0} to {1}'.\n format(tds_base.state_names[prior_state], tds_base.state_names[state]))\n elif self.state != tds_base.TDS_IDLE:\n raise tds_base.InterfaceError('logic error: cannot change query state from {0} to {1}'.\n format(tds_base.state_names[prior_state], tds_base.state_names[state]))\n else:\n self.rows_affected = tds_base.TDS_NO_COUNT\n self.internal_sp_called = 0\n self.state = state\n else:\n assert False\n return self.state"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngenerates a parameter from value and 
name.", "response": "def make_param(self, name, value):\n \"\"\" Generates instance of :class:`Column` from value and name\n\n Value can also be of a special types:\n\n - An instance of :class:`Column`, in which case it is just returned.\n - An instance of :class:`output`, in which case parameter will become\n an output parameter.\n - A singleton :var:`default`, in which case default value will be passed\n into a stored proc.\n\n :param name: Name of the parameter, will populate column_name property of returned column.\n :param value: Value of the parameter, also used to guess the type of parameter.\n :return: An instance of :class:`Column`\n \"\"\"\n if isinstance(value, tds_base.Column):\n value.column_name = name\n return value\n column = tds_base.Column()\n column.column_name = name\n column.flags = 0\n \n if isinstance(value, output):\n column.flags |= tds_base.fByRefValue\n if isinstance(value.type, six.string_types):\n column.type = tds_types.sql_type_by_declaration(value.type)\n elif value.type:\n column.type = self.conn.type_inferrer.from_class(value.type)\n value = value.value\n\n if value is default:\n column.flags |= tds_base.fDefaultValue\n value = None\n\n column.value = value\n if column.type is None:\n column.type = self.conn.type_inferrer.from_value(value)\n return column"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting a dict of list of parameters into a list of Column instances.", "response": "def _convert_params(self, parameters):\n \"\"\" Converts a dict of list of parameters into a list of :class:`Column` instances.\n\n :param parameters: Can be a list of parameter values, or a dict of parameter names to values.\n :return: A list of :class:`Column` instances.\n \"\"\"\n if isinstance(parameters, dict):\n return [self.make_param(name, value)\n for name, value in parameters.items()]\n else:\n params = []\n for parameter in parameters:\n params.append(self.make_param('', parameter))\n return params"} 
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncancel current pending request.", "response": "def cancel_if_pending(self):\n \"\"\" Cancels current pending request.\n\n Does nothing if no request is pending, otherwise sends cancel request,\n and waits for response.\n \"\"\"\n if self.state == tds_base.TDS_IDLE:\n return\n if not self.in_cancel:\n self.put_cancel()\n self.process_cancel()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef submit_rpc(self, rpc_name, params, flags=0):\n logger.info('Sending RPC %s flags=%d', rpc_name, flags)\n self.messages = []\n self.output_params = {}\n self.cancel_if_pending()\n self.res_info = None\n w = self._writer\n with self.querying_context(tds_base.PacketType.RPC):\n if tds_base.IS_TDS72_PLUS(self):\n self._start_query()\n if tds_base.IS_TDS71_PLUS(self) and isinstance(rpc_name, tds_base.InternalProc):\n w.put_smallint(-1)\n w.put_smallint(rpc_name.proc_id)\n else:\n if isinstance(rpc_name, tds_base.InternalProc):\n rpc_name = rpc_name.name\n w.put_smallint(len(rpc_name))\n w.write_ucs2(rpc_name)\n #\n # TODO support flags\n # bit 0 (1 as flag) in TDS7/TDS5 is \"recompile\"\n # bit 1 (2 as flag) in TDS7+ is \"no metadata\" bit, this will prevent sending of column infos\n #\n w.put_usmallint(flags)\n self._out_params_indexes = []\n for i, param in enumerate(params):\n if param.flags & tds_base.fByRefValue:\n self._out_params_indexes.append(i)\n w.put_byte(len(param.column_name))\n w.write_ucs2(param.column_name)\n #\n # TODO support other flags (use default null/no metadata)\n # bit 1 (2 as flag) in TDS7+ is \"default value\" bit\n # (what's the meaning of \"default value\" ?)\n #\n w.put_byte(param.flags)\n\n # TYPE_INFO structure: https://msdn.microsoft.com/en-us/library/dd358284.aspx\n serializer = param.choose_serializer(\n type_factory=self._tds.type_factory,\n collation=self._tds.collation or raw_collation\n )\n type_id = 
serializer.type\n w.put_byte(type_id)\n serializer.write_info(w)\n\n serializer.write(w, param.value)", "response": "Sends an RPC request to the server."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsend a plain query to the server.", "response": "def submit_plain_query(self, operation):\n \"\"\" Sends a plain query to server.\n\n This call will transition session into pending state.\n If some operation is currently pending on the session, it will be\n cancelled before sending this request.\n\n Spec: http://msdn.microsoft.com/en-us/library/dd358575.aspx\n\n :param operation: A string representing sql statement.\n \"\"\"\n self.messages = []\n self.cancel_if_pending()\n self.res_info = None\n logger.info(\"Sending query %s\", operation[:100])\n w = self._writer\n with self.querying_context(tds_base.PacketType.QUERY):\n if tds_base.IS_TDS72_PLUS(self):\n self._start_query()\n w.write_ucs2(operation)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef submit_bulk(self, metadata, rows):\n logger.info('Sending INSERT BULK')\n num_cols = len(metadata)\n w = self._writer\n serializers = []\n with self.querying_context(tds_base.PacketType.BULK):\n w.put_byte(tds_base.TDS7_RESULT_TOKEN)\n w.put_usmallint(num_cols)\n for col in metadata:\n if tds_base.IS_TDS72_PLUS(self):\n w.put_uint(col.column_usertype)\n else:\n w.put_usmallint(col.column_usertype)\n w.put_usmallint(col.flags)\n serializer = col.choose_serializer(\n type_factory=self._tds.type_factory,\n collation=self._tds.collation,\n )\n type_id = serializer.type\n w.put_byte(type_id)\n serializers.append(serializer)\n serializer.write_info(w)\n w.put_byte(len(col.column_name))\n w.write_ucs2(col.column_name)\n for row in rows:\n w.put_byte(tds_base.TDS_ROW_TOKEN)\n for i, col in enumerate(metadata):\n serializers[i].write(w, row[i])\n\n # https://msdn.microsoft.com/en-us/library/dd340421.aspx\n w.put_byte(tds_base.TDS_DONE_TOKEN)\n 
w.put_usmallint(tds_base.TDS_DONE_FINAL)\n w.put_usmallint(0) # curcmd\n # row count\n if tds_base.IS_TDS72_PLUS(self):\n w.put_int8(0)\n else:\n w.put_int(0)", "response": "Sends an INSERT BULK command."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsending a cancel packet to the server.", "response": "def put_cancel(self):\n \"\"\" Sends a cancel request to the server.\n\n Switches connection to IN_CANCEL state.\n \"\"\"\n logger.info('Sending CANCEL')\n self._writer.begin_packet(tds_base.PacketType.CANCEL)\n self._writer.flush()\n self.in_cancel = 1"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef iterdecode(iterable, codec):\n decoder = codec.incrementaldecoder()\n for chunk in iterable:\n yield decoder.decode(chunk)\n yield decoder.decode(b'', True)", "response": "Uses an incremental decoder to decode each chunk in iterable."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef skipall(stm, size):\n res = stm.recv(size)\n if len(res) == size:\n return\n elif len(res) == 0:\n raise ClosedConnectionError()\n left = size - len(res)\n while left:\n buf = stm.recv(left)\n if len(buf) == 0:\n raise ClosedConnectionError()\n left -= len(buf)", "response": "Skips exactly size bytes in stm."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreading exactly size bytes from stm and produces chunks", "response": "def read_chunks(stm, size):\n \"\"\" Reads exactly size bytes from stm and produces chunks\n\n May call stm.read multiple times until required\n number of bytes is read.\n If EOF is reached before size bytes are read\n will raise :class:`ClosedConnectionError`\n\n :param stm: Stream to read bytes from, should have read method,\n this read method can return less than requested\n number of bytes.\n :param size: Number of bytes to read.\n \"\"\"\n if size == 0:\n yield b''\n return\n\n 
res = stm.recv(size)\n if len(res) == 0:\n raise ClosedConnectionError()\n yield res\n left = size - len(res)\n while left:\n buf = stm.recv(left)\n if len(buf) == 0:\n raise ClosedConnectionError()\n yield buf\n left -= len(buf)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef readall_fast(stm, size):\n buf, offset = stm.read_fast(size)\n if len(buf) - offset < size:\n # slow case\n buf = buf[offset:]\n buf += stm.recv(size - len(buf))\n return buf, 0\n return buf, offset", "response": "Slightly faster version of readall."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndecode little - endian integer from buffer", "response": "def _decode_num(buf):\n \"\"\" Decodes little-endian integer from buffer\n\n Buffer can be of any size\n \"\"\"\n return functools.reduce(lambda acc, val: acc * 256 + tds_base.my_ord(val), reversed(buf), 0)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngenerating chunks from stream.", "response": "def chunks(self):\n \"\"\" Generates chunks from stream, each chunk is an instance of bytes.\n \"\"\"\n if self.is_null():\n return\n total = 0\n while True:\n chunk_len = self._rdr.get_uint()\n if chunk_len == 0:\n if not self.is_unknown_len() and total != self._size:\n msg = \"PLP actual length (%d) doesn't match reported length (%d)\" % (total, self._size)\n self._rdr.session.bad_stream(msg)\n\n return\n\n total += chunk_len\n left = chunk_len\n while left:\n buf = self._rdr.recv(left)\n yield buf\n left -= len(buf)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate sql date object from Python date object.", "response": "def from_pydate(cls, pydate):\n \"\"\"\n Creates sql date object from Python date object.\n @param pydate: Python date\n @return: sql date\n \"\"\"\n return cls(days=(datetime.datetime.combine(pydate, datetime.time(0, 0, 0)) - _datetime2_base_date).days)"} 
{"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert sql time object into Python s time object", "response": "def to_pytime(self):\n \"\"\"\n Converts sql time object into Python's time object\n this will truncate nanoseconds to microseconds\n @return: naive time\n \"\"\"\n nanoseconds = self._nsec\n hours = nanoseconds // 1000000000 // 60 // 60\n nanoseconds -= hours * 60 * 60 * 1000000000\n minutes = nanoseconds // 1000000000 // 60\n nanoseconds -= minutes * 60 * 1000000000\n seconds = nanoseconds // 1000000000\n nanoseconds -= seconds * 1000000000\n return datetime.time(hours, minutes, seconds, nanoseconds // 1000)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef from_pytime(cls, pytime):\n secs = pytime.hour * 60 * 60 + pytime.minute * 60 + pytime.second\n nsec = secs * 10 ** 9 + pytime.microsecond * 1000\n return cls(nsec=nsec)", "response": "Converts Python time object to sql time object ignoring timezone\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert datetime2 object into Python s datetime. 
datetime object", "response": "def to_pydatetime(self):\n \"\"\"\n Converts datetime2 object into Python's datetime.datetime object\n @return: naive datetime.datetime\n \"\"\"\n return datetime.datetime.combine(self._date.to_pydate(), self._time.to_pytime())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef from_pydatetime(cls, pydatetime):\n return cls(date=Date.from_pydate(pydatetime.date),\n time=Time.from_pytime(pydatetime.time))", "response": "Creates sql datetime2 object from Python datetime object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef to_pydatetime(self):\n dt = datetime.datetime.combine(self._date.to_pydate(), self._time.to_pytime())\n from .tz import FixedOffsetTimezone\n return dt.replace(tzinfo=_utc).astimezone(FixedOffsetTimezone(self._offset))", "response": "Converts datetimeoffset object into Python s datetime. datetime object"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwriting the info of the current record set to the given writer.", "response": "def write_info(self, w):\n \"\"\"\n Writes TVP_TYPENAME structure\n\n spec: https://msdn.microsoft.com/en-us/library/dd302994.aspx\n @param w: TdsWriter\n @return:\n \"\"\"\n w.write_b_varchar(\"\") # db_name, should be empty\n w.write_b_varchar(self._table_type.typ_schema)\n w.write_b_varchar(self._table_type.typ_name)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nwriting remaining part of TVP_TYPE_INFO structure and TVP_ROWMETADATA structure.", "response": "def write(self, w, val):\n \"\"\"\n Writes remaining part of TVP_TYPE_INFO structure, resuming from TVP_COLMETADATA\n\n specs:\n https://msdn.microsoft.com/en-us/library/dd302994.aspx\n https://msdn.microsoft.com/en-us/library/dd305261.aspx\n https://msdn.microsoft.com/en-us/library/dd303230.aspx\n\n @param w: TdsWriter\n @param val: TableValuedParam or 
None\n @return:\n \"\"\"\n if val.is_null():\n w.put_usmallint(tds_base.TVP_NULL_TOKEN)\n else:\n columns = self._table_type.columns\n w.put_usmallint(len(columns))\n for i, column in enumerate(columns):\n w.put_uint(column.column_usertype)\n\n w.put_usmallint(column.flags)\n\n # TYPE_INFO structure: https://msdn.microsoft.com/en-us/library/dd358284.aspx\n\n serializer = self._columns_serializers[i]\n type_id = serializer.type\n w.put_byte(type_id)\n serializer.write_info(w)\n\n w.write_b_varchar('') # ColName, must be empty in TVP according to spec\n\n # here can optionally send TVP_ORDER_UNIQUE and TVP_COLUMN_ORDERING\n # https://msdn.microsoft.com/en-us/library/dd305261.aspx\n\n # terminating optional metadata\n w.put_byte(tds_base.TVP_END_TOKEN)\n\n # now sending rows using TVP_ROW\n # https://msdn.microsoft.com/en-us/library/dd305261.aspx\n if val.rows:\n for row in val.rows:\n w.put_byte(tds_base.TVP_ROW_TOKEN)\n for i, col in enumerate(self._table_type.columns):\n if not col.flags & tds_base.TVP_COLUMN_DEFAULT_FLAG:\n self._columns_serializers[i].write(w, row[i])\n\n # terminating rows\n w.put_byte(tds_base.TVP_END_TOKEN)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing sql type declaration and return instance of corresponding type class", "response": "def parse(self, declaration):\n \"\"\"\n Parse sql type declaration, e.g. varchar(10) and return instance of corresponding type class,\n e.g. VarCharType(10)\n\n @param declaration: Sql declaration to parse, e.g. 
varchar(10)\n @return: instance of SqlTypeMetaclass\n \"\"\"\n declaration = declaration.strip()\n for regex, constructor in self._compiled:\n m = regex.match(declaration)\n if m:\n return constructor(*m.groups())\n raise ValueError('Unable to parse type declaration', declaration)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfunctioning infers TDS type from Python value.", "response": "def from_value(self, value):\n \"\"\" Function infers TDS type from Python value.\n\n :param value: value from which to infer TDS type\n :return: An instance of subclass of :class:`BaseType`\n \"\"\"\n if value is None:\n sql_type = NVarCharType(size=1)\n else:\n sql_type = self._from_class_value(value, type(value))\n return sql_type"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef dict_row_strategy(column_names):\n # replace empty column names with indices\n column_names = [(name or idx) for idx, name in enumerate(column_names)]\n\n def row_factory(row):\n return dict(zip(column_names, row))\n\n return row_factory", "response": "Dict row strategy for generating dictionaries of the given column names."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef namedtuple_row_strategy(column_names):\n import collections\n # replace empty column names with placeholders\n column_names = [name if is_valid_identifier(name) else 'col%s_' % idx for idx, name in enumerate(column_names)]\n row_class = collections.namedtuple('Row', column_names)\n\n def row_factory(row):\n return row_class(*row)\n\n return row_factory", "response": "Namedtuple row strategy, rows returned as named tuples\n\n Column names that are not valid Python identifiers will be replaced\n with col_"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef recordtype_row_strategy(column_names):\n try:\n from namedlist import 
namedlist as recordtype # optional dependency\n except ImportError:\n from recordtype import recordtype # optional dependency\n # replace empty column names with placeholders\n column_names = [name if is_valid_identifier(name) else 'col%s_' % idx for idx, name in enumerate(column_names)]\n recordtype_row_class = recordtype('Row', column_names)\n\n # custom extension class that supports indexing\n class Row(recordtype_row_class):\n def __getitem__(self, index):\n if isinstance(index, slice):\n return tuple(getattr(self, x) for x in self.__slots__[index])\n return getattr(self, self.__slots__[index])\n\n def __setitem__(self, index, value):\n setattr(self, self.__slots__[index], value)\n\n def row_factory(row):\n return Row(*row)\n\n return row_factory", "response": "Recordtype row strategy that returns rows returned as recordtypes\n Column names that are not valid Python identifiers will be replaced with col_\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_servers_deque(servers, database):\n key = (servers, database)\n if key not in _servers_deques:\n _servers_deques[key] = deque(servers)\n return _servers_deques[key]", "response": "Returns deque of servers for given tuple of servers and database name."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconnects to a TDS database.", "response": "def connect(dsn=None, database=None, user=None, password=None, timeout=None,\n login_timeout=15, as_dict=None,\n appname=None, port=None, tds_version=tds_base.TDS74,\n autocommit=False,\n blocksize=4096, use_mars=False, auth=None, readonly=False,\n load_balancer=None, use_tz=None, bytes_to_unicode=True,\n row_strategy=None, failover_partner=None, server=None,\n cafile=None, validate_host=True, enc_login_only=False,\n disable_connect_retry=False,\n pooling=False,\n ):\n \"\"\"\n Opens connection to the database\n\n :keyword dsn: SQL server host and instance: [\\]\n :type dsn: string\n 
:keyword failover_partner: secondary database host, used if primary is not accessible\n :type failover_partner: string\n :keyword database: the database to initially connect to\n :type database: string\n :keyword user: database user to connect as\n :type user: string\n :keyword password: user's password\n :type password: string\n :keyword timeout: query timeout in seconds, default 0 (no timeout)\n :type timeout: int\n :keyword login_timeout: timeout for connection and login in seconds, default 15\n :type login_timeout: int\n :keyword as_dict: whether rows should be returned as dictionaries instead of tuples.\n :type as_dict: boolean\n :keyword appname: Set the application name to use for the connection\n :type appname: string\n :keyword port: the TCP port to use to connect to the server\n :type port: int\n :keyword tds_version: Maximum TDS version to use, should only be used for testing\n :type tds_version: int\n :keyword autocommit: Enable or disable database level autocommit\n :type autocommit: bool\n :keyword blocksize: Size of block for the TDS protocol, usually should not be used\n :type blocksize: int\n :keyword use_mars: Enable or disable MARS\n :type use_mars: bool\n :keyword auth: An instance of authentication method class, e.g. 
Ntlm or Sspi\n :keyword readonly: Allows to enable read-only mode for connection, only supported by MSSQL 2012,\n earlier versions will ignore this parameter\n :type readonly: bool\n :keyword load_balancer: An instance of load balancer class to use, if not provided will not use load balancer\n :keyword use_tz: Provides timezone for naive database times, if not provided date and time will be returned\n in naive format\n :keyword bytes_to_unicode: If true single byte database strings will be converted to unicode Python strings,\n otherwise will return strings as ``bytes`` without conversion.\n :type bytes_to_unicode: bool\n :keyword row_strategy: strategy used to create rows, determines type of returned rows, can be custom or one of:\n :func:`tuple_row_strategy`, :func:`list_row_strategy`, :func:`dict_row_strategy`,\n :func:`namedtuple_row_strategy`, :func:`recordtype_row_strategy`\n :type row_strategy: function of list of column names returning row factory\n :keyword cafile: Name of the file containing trusted CAs in PEM format, if provided will enable TLS\n :type cafile: str\n :keyword validate_host: Host name validation during TLS connection is enabled by default, if you disable it you\n will be vulnerable to MitM type of attack.\n :type validate_host: bool\n :keyword enc_login_only: Allows you to scope TLS encryption only to an authentication portion. 
This means that\n anyone who can observe traffic on your network will be able to see all your SQL requests and potentially modify\n them.\n :type enc_login_only: bool\n :returns: An instance of :class:`Connection`\n \"\"\"\n login = _TdsLogin()\n login.client_host_name = socket.gethostname()[:128]\n login.library = \"Python TDS Library\"\n login.user_name = user or ''\n login.password = password or ''\n login.app_name = appname or 'pytds'\n login.port = port\n login.language = '' # use database default\n login.attach_db_file = ''\n login.tds_version = tds_version\n if tds_version < tds_base.TDS70:\n raise ValueError('This TDS version is not supported')\n login.database = database or ''\n login.bulk_copy = False\n login.client_lcid = lcid.LANGID_ENGLISH_US\n login.use_mars = use_mars\n login.pid = os.getpid()\n login.change_password = ''\n login.client_id = uuid.getnode() # client mac address\n login.cafile = cafile\n login.validate_host = validate_host\n login.enc_login_only = enc_login_only\n if cafile:\n if not tls.OPENSSL_AVAILABLE:\n raise ValueError(\"You are trying to use encryption but pyOpenSSL does not work, you probably \"\n \"need to install it first\")\n login.tls_ctx = tls.create_context(cafile)\n if login.enc_login_only:\n login.enc_flag = PreLoginEnc.ENCRYPT_OFF\n else:\n login.enc_flag = PreLoginEnc.ENCRYPT_ON\n else:\n login.tls_ctx = None\n login.enc_flag = PreLoginEnc.ENCRYPT_NOT_SUP\n\n if use_tz:\n login.client_tz = use_tz\n else:\n login.client_tz = pytds.tz.local\n\n # that will set:\n # ANSI_DEFAULTS to ON,\n # IMPLICIT_TRANSACTIONS to OFF,\n # TEXTSIZE to 0x7FFFFFFF (2GB) (TDS 7.2 and below), TEXTSIZE to infinite (introduced in TDS 7.3),\n # and ROWCOUNT to infinite\n login.option_flag2 = tds_base.TDS_ODBC_ON\n\n login.connect_timeout = login_timeout\n login.query_timeout = timeout\n login.blocksize = blocksize\n login.auth = auth\n login.readonly = readonly\n login.load_balancer = load_balancer\n login.bytes_to_unicode = 
bytes_to_unicode\n\n if server and dsn:\n raise ValueError(\"Both server and dsn shouldn't be specified\")\n\n if server:\n warnings.warn(\"server parameter is deprecated, use dsn instead\", DeprecationWarning)\n dsn = server\n\n if load_balancer and failover_partner:\n raise ValueError(\"Both load_balancer and failover_partner shouldn't be specified\")\n if load_balancer:\n servers = [(srv, None) for srv in load_balancer.choose()]\n else:\n servers = [(dsn or 'localhost', port)]\n if failover_partner:\n servers.append((failover_partner, port))\n\n parsed_servers = []\n for srv, port in servers:\n host, instance = _parse_server(srv)\n if instance and port:\n raise ValueError(\"Both instance and port shouldn't be specified\")\n parsed_servers.append((host, port, instance))\n\n login.servers = _get_servers_deque(tuple(parsed_servers), database)\n\n # unique connection identifier used to pool connection\n key = (\n dsn,\n login.user_name,\n login.app_name,\n login.tds_version,\n login.database,\n login.client_lcid,\n login.use_mars,\n login.cafile,\n login.blocksize,\n login.readonly,\n login.bytes_to_unicode,\n login.auth,\n login.client_tz,\n autocommit,\n )\n\n conn = Connection()\n conn._use_tz = use_tz\n conn._autocommit = autocommit\n conn._login = login\n conn._pooling = pooling\n conn._key = key\n\n assert row_strategy is None or as_dict is None,\\\n 'Both row_strategy and as_dict were specified, you should use either one or another'\n if as_dict is not None:\n conn.as_dict = as_dict\n elif row_strategy is not None:\n conn._row_strategy = row_strategy\n else:\n conn._row_strategy = tuple_row_strategy # default row strategy\n\n conn._isolation_level = 0\n conn._dirty = False\n from .tz import FixedOffsetTimezone\n conn._tzinfo_factory = None if use_tz is None else FixedOffsetTimezone\n if disable_connect_retry:\n conn._try_open(timeout=login.connect_timeout)\n else:\n conn._open()\n return conn"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the 
documentation for the following Python 3 function\ndef commit(self):\n self._assert_open()\n if self._autocommit:\n return\n if not self._conn.tds72_transaction:\n return\n self._main_cursor._commit(cont=True, isolation_level=self._isolation_level)", "response": "Commit the current transaction which is currently in progress."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef cursor(self):\n self._assert_open()\n if self.mars_enabled:\n in_tran = self._conn.tds72_transaction\n if in_tran and self._dirty:\n try:\n return _MarsCursor(self,\n self._conn.create_session(self._tzinfo_factory),\n self._tzinfo_factory)\n except (socket.error, OSError) as e:\n self._conn.close()\n raise\n else:\n try:\n return _MarsCursor(self,\n self._conn.create_session(self._tzinfo_factory),\n self._tzinfo_factory)\n except (socket.error, OSError) as e:\n if e.errno not in (errno.EPIPE, errno.ECONNRESET):\n raise\n self._conn.close()\n except ClosedConnectionError:\n pass\n self._assert_open()\n return _MarsCursor(self,\n self._conn.create_session(self._tzinfo_factory),\n self._tzinfo_factory)\n else:\n return Cursor(self,\n self._conn.main_session,\n self._tzinfo_factory)", "response": "Returns a Cursor object that can be used to make queries and fetch\n results from the database."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef rollback(self):\n try:\n if self._autocommit:\n return\n\n if not self._conn or not self._conn.is_connected():\n return\n\n if not self._conn.tds72_transaction:\n return\n\n self._main_cursor._rollback(cont=True,\n isolation_level=self._isolation_level)\n except socket.error as e:\n if e.errno in (errno.ENETRESET, errno.ECONNRESET, errno.EPIPE):\n return\n self._conn.close()\n raise\n except ClosedConnectionError:\n pass", "response": "Roll back the current transaction which is currently in progress."} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 function that can\nclose the connection to an MS SQL Server.", "response": "def close(self):\n \"\"\" Close connection to an MS SQL Server.\n\n This function tries to close the connection and free all memory used.\n It can be called more than once in a row. No exception is raised in\n this case.\n \"\"\"\n if self._conn:\n if self._pooling:\n _connection_pool.add(self._key, (self._conn, self._main_cursor._session))\n else:\n self._conn.close()\n self._active_cursor = None\n self._main_cursor = None\n self._conn = None\n self._closed = True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the values of OUTPUT parameters.", "response": "def get_proc_outputs(self):\n \"\"\"\n If stored procedure has result sets and OUTPUT parameters use this method\n after you processed all result sets to get values of OUTPUT parameters.\n :return: A list of output parameter values.\n \"\"\"\n\n self._session.complete_rpc()\n results = [None] * len(self._session.output_params.items())\n for key, param in self._session.output_params.items():\n results[key] = param.value\n return results"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef callproc(self, procname, parameters=()):\n conn = self._assert_open()\n conn._try_activate_cursor(self)\n return self._callproc(procname, parameters)", "response": "Call a stored procedure with the given name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_proc_return_status(self):\n if self._session is None:\n return None\n if not self._session.has_status:\n self._session.find_return_status()\n return self._session.ret_status if self._session.has_status else None", "response": "Get the status of the last stored proc result"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cancel(self):\n conn = self._assert_open()\n 
conn._try_activate_cursor(self)\n self._session.cancel_if_pending()", "response": "Cancel the current statement."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nclosing the cursor. The cursor is unusable from this point.", "response": "def close(self):\n \"\"\"\n Closes the cursor. The cursor is unusable from this point.\n \"\"\"\n conn = self._conn\n if conn is not None:\n conn = conn()\n if conn is not None:\n if self is conn._active_cursor:\n conn._active_cursor = conn._main_cursor\n self._session = None\n self._conn = None"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef execute(self, operation, params=()):\n conn = self._assert_open()\n conn._try_activate_cursor(self)\n self._execute(operation, params)\n # for compatibility with pyodbc\n return self", "response": "Execute the query\n operation."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the description of the current object.", "response": "def description(self):\n \"\"\" Cursor description, see http://legacy.python.org/dev/peps/pep-0249/#description\n \"\"\"\n if self._session is None:\n return None\n res = self._session.res_info\n if res:\n return res.description\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a list of all the messages generated by the server.", "response": "def messages(self):\n \"\"\" Messages generated by server, see http://legacy.python.org/dev/peps/pep-0249/#cursor-messages\n \"\"\"\n if self._session:\n result = []\n for msg in self._session.messages:\n ex = _create_exception_by_message(msg)\n result.append((type(ex), ex))\n return result\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef fetchone(self):\n row = self._session.fetchone()\n if row:\n return self._row_factory(row)", "response": "Returns the next row 
from the result set or None if there are no more rows."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef copy_to(self, file=None, table_or_view=None, sep='\\t', columns=None,\n check_constraints=False, fire_triggers=False, keep_nulls=False,\n kb_per_batch=None, rows_per_batch=None, order=None, tablock=False,\n schema=None, null_string=None, data=None):\n \"\"\" *Experimental*. Efficiently load data to database from file using ``BULK INSERT`` operation\n\n :param file: Source file-like object, should be in csv format. Specify\n either this or data, not both.\n :param table_or_view: Destination table or view in the database\n :type table_or_view: str\n\n Optional parameters:\n\n :keyword sep: Separator used in csv file\n :type sep: str\n :keyword columns: List of Column objects or column names in target\n table to insert to. SQL Server will do some conversions, so these\n may not have to match the actual table definition exactly.\n If not provided will insert into all columns assuming nvarchar(4000)\n NULL for all columns.\n If only the column name is provided, the type is assumed to be\n nvarchar(4000) NULL.\n If rows are given with file, you cannot specify non-string data\n types.\n If rows are given with data, the values must be a type supported by\n the serializer for the column in tds_types.\n :type columns: list\n :keyword check_constraints: Check table constraints for incoming data\n :type check_constraints: bool\n :keyword fire_triggers: Enable or disable triggers for table\n :type fire_triggers: bool\n :keyword keep_nulls: If enabled null values inserted as-is, instead of\n inserting default value for column\n :type keep_nulls: bool\n :keyword kb_per_batch: Kilobytes per batch can be used to optimize performance, see MSSQL\n server documentation for details\n :type kb_per_batch: int\n :keyword rows_per_batch: Rows per batch can be used to optimize performance, see MSSQL\n server 
documentation for details\n :type rows_per_batch: int\n :keyword order: The ordering of the data in source table. List of columns with ASC or DESC suffix.\n E.g. ``['order_id ASC', 'name DESC']``\n Can be used to optimize performance, see MSSQL server documentation for details\n :type order: list\n :keyword tablock: Enable or disable table lock for the duration of bulk load\n :keyword schema: Name of schema for table or view, if not specified default schema will be used\n :keyword null_string: String that should be interpreted as a NULL when\n reading the CSV file. Has no meaning if using data instead of file.\n :keyword data: The data to insert as an iterable of rows, which are\n iterables of values. Specify either this or file, not both.\n \"\"\"\n conn = self._conn()\n rows = None\n if data is None:\n import csv\n reader = csv.reader(file, delimiter=sep)\n\n if null_string is not None:\n def _convert_null_strings(csv_reader):\n for row in csv_reader:\n yield [r if r != null_string else None for r in row]\n\n reader = _convert_null_strings(reader)\n\n rows = reader\n else:\n rows = data\n\n obj_name = tds_base.tds_quote_id(table_or_view)\n if schema:\n obj_name = '{0}.{1}'.format(tds_base.tds_quote_id(schema), obj_name)\n if columns:\n metadata = []\n for column in columns:\n if isinstance(column, Column):\n metadata.append(column)\n else:\n metadata.append(Column(name=column, type=NVarCharType(size=4000), flags=Column.fNullable))\n else:\n self.execute('select top 1 * from {} where 1<>1'.format(obj_name))\n metadata = [Column(name=col[0], type=NVarCharType(size=4000), flags=Column.fNullable if col[6] else 0)\n for col in self.description]\n col_defs = ','.join('{0} {1}'.format(tds_base.tds_quote_id(col.column_name), col.type.get_declaration())\n for col in metadata)\n with_opts = []\n if check_constraints:\n with_opts.append('CHECK_CONSTRAINTS')\n if fire_triggers:\n with_opts.append('FIRE_TRIGGERS')\n if keep_nulls:\n with_opts.append('KEEP_NULLS')\n if 
kb_per_batch:\n with_opts.append('KILOBYTES_PER_BATCH = {0}'.format(kb_per_batch))\n if rows_per_batch:\n with_opts.append('ROWS_PER_BATCH = {0}'.format(rows_per_batch))\n if order:\n with_opts.append('ORDER({0})'.format(','.join(order)))\n if tablock:\n with_opts.append('TABLOCK')\n with_part = ''\n if with_opts:\n with_part = 'WITH ({0})'.format(','.join(with_opts))\n operation = 'INSERT BULK {0}({1}) {2}'.format(obj_name, col_defs, with_part)\n self.execute(operation)\n self._session.submit_bulk(metadata, rows)\n self._session.process_simple_request()", "response": "Copy data from file to table_or_view in database."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nclose the cursor. The cursor is unusable from this point.", "response": "def close(self):\n \"\"\"\n Closes the cursor. The cursor is unusable from this point.\n \"\"\"\n if self._session is not None:\n try:\n self._session.close()\n self._session = None\n except socket.error as e:\n if e.errno != errno.ECONNRESET:\n raise"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef callproc(self, procname, parameters=()):\n self._assert_open()\n return self._callproc(procname, parameters)", "response": "Call a stored procedure with the given name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef __posix_to_local_path(path, local_path_module=os.path):\n partial_path = deque()\n while True:\n if not path or path == '/':\n break\n (path, base) = posixpath.split(path)\n partial_path.appendleft(base)\n return local_path_module.join(*partial_path)", "response": "Converts a posix path to a local path."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef calculate_path(self, remote_path, input_type):\n directory, allow_nested_files = self._directory_for_file_type(input_type)\n path = 
get_mapped_file(directory, remote_path, allow_nested_files=allow_nested_files)\n return path", "response": "Calculate path for a file in the cache."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef setup(self, tool_id=None, tool_version=None, preserve_galaxy_python_environment=None):\n setup_args = {\"job_id\": self.job_id}\n if tool_id:\n setup_args[\"tool_id\"] = tool_id\n if tool_version:\n setup_args[\"tool_version\"] = tool_version\n if preserve_galaxy_python_environment:\n setup_args[\"preserve_galaxy_python_environment\"] = preserve_galaxy_python_environment\n return self.setup_handler.setup(**setup_args)", "response": "Setup remote Pulsar server to run this job."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef launch(self, command_line, dependencies_description=None, env=[], remote_staging=[], job_config=None):\n launch_params = dict(command_line=command_line, job_id=self.job_id)\n submit_params_dict = submit_params(self.destination_params)\n if submit_params_dict:\n launch_params['params'] = json_dumps(submit_params_dict)\n if dependencies_description:\n launch_params['dependencies_description'] = json_dumps(dependencies_description.to_dict())\n if env:\n launch_params['env'] = json_dumps(env)\n if remote_staging:\n launch_params['remote_staging'] = json_dumps(remote_staging)\n if job_config and 'touch_outputs' in job_config:\n # message clients pass the entire job config\n launch_params['submit_extras'] = json_dumps({'touch_outputs': job_config['touch_outputs']})\n\n if job_config and self.setup_handler.local:\n # Setup not yet called, job properties were inferred from\n # destination arguments. 
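The `__posix_to_local_path` entry above peels path components off the right with `posixpath.split` and rejoins them with a local path module. A stand-alone sketch of that technique (the module-level function name and the `ntpath` default are our illustration choices, not the dataset's):

```python
import posixpath
import ntpath  # used as the "local" path module so behavior is deterministic
from collections import deque

def posix_to_local_path(path, local_path_module=ntpath):
    """Convert a POSIX-style path to the target platform's form.

    Components are peeled off the right with posixpath.split,
    collected left-to-right in a deque, then rejoined with the
    target path module's join.
    """
    partial_path = deque()
    while path and path != "/":
        path, base = posixpath.split(path)
        partial_path.appendleft(base)
    return local_path_module.join(*partial_path)
```

With `ntpath` as the target module, `posix_to_local_path("a/b/c")` yields `a\b\c`; a leading `/` is dropped, matching the original's use of `/` as the loop terminator.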
Hence, must have Pulsar setup job\n # before queueing.\n setup_params = _setup_params_from_job_config(job_config)\n launch_params['setup_params'] = json_dumps(setup_params)\n return self._raw_execute(\"submit\", launch_params)", "response": "Queue up the execution of the supplied command_line on the remote cluster."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfetch an output from the remote Pulsar server.", "response": "def fetch_output(self, path, name, working_directory, action_type, output_type):\n \"\"\"\n Fetch (transfer, copy, etc...) an output from the remote Pulsar server.\n\n **Parameters**\n\n path : str\n Local path of the dataset.\n name : str\n Remote name of file (i.e. path relative to remote staging output\n or working directory).\n working_directory : str\n Local working_directory for the job.\n action_type : str\n Where to find file on Pulsar (output_workdir or output). legacy is also\n an option in this case Pulsar is asked for location - this will only be\n used if targeting an older Pulsar server that didn't return statuses\n allowing this to be inferred.\n \"\"\"\n if output_type in ['output_workdir', 'output_metadata']:\n self._populate_output_path(name, path, action_type, output_type)\n elif output_type == 'output':\n self._fetch_output(path=path, name=name, action_type=action_type)\n else:\n raise Exception(\"Unknown output_type %s\" % output_type)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nis a pipe-safe version of subprocess.
check_output", "response": "def check_output(args):\n \"\"\"Pipe-safe (and 2.6 compatible) version of subprocess.check_output\n \"\"\"\n proc = Popen(args, stdout=PIPE)\n out = proc.communicate()[0]\n if proc.returncode:\n raise CalledProcessError(proc.returncode, args, output=out)\n return out"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef unstructured_mappers(self):\n return filter(lambda m: path_type.UNSTRUCTURED in m.path_types, self.mappers)", "response": "Return a list of all mappers that will map unstructured files."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _psutil_kill_pid(pid):\n try:\n parent = Process(pid)\n for child in parent.children(recursive=True):\n child.kill()\n parent.kill()\n except NoSuchProcess:\n return", "response": "Kills the process with the given PID."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_external_id(output, type=EXTERNAL_ID_TYPE_ANY):\n external_id = None\n for pattern_type, pattern in EXTERNAL_ID_PATTERNS:\n if type != EXTERNAL_ID_TYPE_ANY and type != pattern_type:\n continue\n\n match = search(pattern, output)\n if match:\n external_id = match.group(1)\n break\n\n return external_id", "response": "Parses the output of a job submission command for an external id."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the default script path if it is not set.", "response": "def _handle_default(value, script_name):\n \"\"\" There are two potential variants of these scripts,\n the Bash scripts that are meant to be run within PULSAR_ROOT\n for older-style installs and the binaries created by setup.py\n as part of a proper pulsar installation.\n\n This method first looks for the newer style variant of these\n scripts and returns the full path to them if needed and falls\n back to the bash scripts if these 
cannot be found.\n \"\"\"\n if value:\n return value\n\n installed_script = which(\"pulsar-%s\" % script_name.replace(\"_\", \"-\"))\n if installed_script:\n return installed_script\n else:\n return \"scripts/%s.bash\" % script_name"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _read(path):\n input = open(path, \"r\", encoding=\"utf-8\")\n try:\n return input.read()\n finally:\n input.close()", "response": "Utility method to quickly read small files into memory as bytes."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef find_referenced_subfiles(self, directory):\n if directory is None:\n return []\n\n pattern = r'''[\\'\\\"]?(%s%s[^\\s\\'\\\"]+)[\\'\\\"]?''' % (escape(directory), escape(sep))\n return self.find_pattern_references(pattern)", "response": "Find all files below specified directory in job inputs."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef rewrite_paths(self, local_path, remote_path):\n self.__rewrite_command_line(local_path, remote_path)\n self.__rewrite_config_files(local_path, remote_path)", "response": "Rewrite references to local_path with remote_path in job inputs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef rewrite_input_paths(self):\n for local_path, remote_path in self.file_renames.items():\n self.job_inputs.rewrite_paths(local_path, remote_path)", "response": "For each file that has been transferred and renamed update command line and configfiles to reflect that rewrite."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef __get_payload(self, uuid, failed):\n # Caller should have the publish_uuid_store lock\n try:\n return self.publish_uuid_store[uuid]\n except Exception as exc:\n msg = \"Failed to load payload from publish 
store for UUID %s, %s: %s\"\n if uuid in failed:\n log.error(msg, uuid, \"discarding\", str(exc))\n self.__discard_publish_uuid(uuid, failed)\n else:\n log.error(msg, uuid, \"will try again\", str(exc))\n failed.add(uuid)\n return None", "response": "Read the payload for a UUID from the publish_uuid_store, retrying once and discarding the message on a second failure."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef __job_complete_dict(complete_status, manager, job_id):\n return_code = manager.return_code(job_id)\n if return_code == PULSAR_UNKNOWN_RETURN_CODE:\n return_code = None\n stdout_contents = manager.stdout_contents(job_id).decode(\"utf-8\")\n stderr_contents = manager.stderr_contents(job_id).decode(\"utf-8\")\n job_directory = manager.job_directory(job_id)\n as_dict = dict(\n job_id=job_id,\n complete=\"true\", # Is this still used or is it legacy.\n status=complete_status,\n returncode=return_code,\n stdout=stdout_contents,\n stderr=stderr_contents,\n working_directory=job_directory.working_directory(),\n metadata_directory=job_directory.metadata_directory(),\n working_directory_contents=job_directory.working_directory_contents(),\n metadata_directory_contents=job_directory.metadata_directory_contents(),\n outputs_directory_contents=job_directory.outputs_directory_contents(),\n system_properties=manager.system_properties(),\n pulsar_version=pulsar_version,\n )\n return as_dict", "response": "Build final dictionary describing a completed job."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nlaunching a new job from the specified config.", "response": "def submit_job(manager, job_config):\n \"\"\" Launch new job from specified config. 
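The `__get_payload` entry above implements retry-once bookkeeping: the first failed read for a UUID is recorded in a `failed` set and retried later, while a second failure discards the message. A minimal sketch of that pattern over a plain mapping (the `store`, `failed`, and `discard` names are illustrative, not the dataset's API):

```python
def get_payload(store, uuid, failed, discard):
    """Return store[uuid], tolerating one failed read per uuid.

    First failure: remember the uuid in `failed` so a later sweep
    retries it. Second failure: give up and discard the message.
    Returns None whenever the payload could not be loaded.
    """
    try:
        return store[uuid]
    except Exception:
        if uuid in failed:
            discard(uuid)       # second failure for this uuid: drop it
            failed.discard(uuid)
        else:
            failed.add(uuid)    # first failure: retry on the next sweep
        return None
```

The caller is expected to hold whatever lock protects the store, just as the original notes in its leading comment.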
May have been previously 'setup'\n if 'setup_params' in job_config is empty.\n \"\"\"\n # job_config is raw dictionary from JSON (from MQ or HTTP endpoint).\n job_id = job_config.get('job_id')\n try:\n command_line = job_config.get('command_line')\n\n setup_params = job_config.get('setup_params', {})\n force_setup = job_config.get('setup')\n remote_staging = job_config.get('remote_staging', {})\n dependencies_description = job_config.get('dependencies_description', None)\n env = job_config.get('env', [])\n submit_params = job_config.get('submit_params', {})\n touch_outputs = job_config.get('touch_outputs', [])\n job_config = None\n if setup_params or force_setup:\n input_job_id = setup_params.get(\"job_id\", job_id)\n tool_id = setup_params.get(\"tool_id\", None)\n tool_version = setup_params.get(\"tool_version\", None)\n use_metadata = setup_params.get(\"use_metadata\", False)\n job_config = setup_job(\n manager,\n input_job_id,\n tool_id,\n tool_version,\n use_metadata,\n )\n\n if job_config is not None:\n job_directory = job_config[\"job_directory\"]\n jobs_directory = os.path.abspath(os.path.join(job_directory, os.pardir))\n command_line = command_line.replace('__PULSAR_JOBS_DIRECTORY__', jobs_directory)\n\n # TODO: Handle __PULSAR_JOB_DIRECTORY__ config files, metadata files, etc...\n manager.touch_outputs(job_id, touch_outputs)\n launch_config = {\n \"remote_staging\": remote_staging,\n \"command_line\": command_line,\n \"dependencies_description\": dependencies_description,\n \"submit_params\": submit_params,\n \"env\": env,\n \"setup_params\": setup_params,\n }\n manager.preprocess_and_launch(job_id, launch_config)\n except Exception:\n manager.handle_failure_before_launch(job_id)\n raise"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef app_factory(global_conf, **local_conf):\n configuration_file = global_conf.get(\"__file__\", None)\n webapp = init_webapp(ini_path=configuration_file, local_conf=local_conf)\n 
return webapp", "response": "Returns a Pulsar WSGI application."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run_next(self):\n while 1:\n (op, obj) = self.work_queue.get()\n if op is STOP_SIGNAL:\n return\n try:\n (job_id, command_line) = obj\n try:\n os.remove(self._job_file(job_id, JOB_FILE_COMMAND_LINE))\n except Exception:\n log.exception(\"Running command but failed to delete - command may rerun on Pulsar boot.\")\n # _run will not do anything if job has been cancelled.\n self._run(job_id, command_line, background=False)\n except Exception:\n log.warn(\"Uncaught exception running job with job_id %s\" % job_id)\n traceback.print_exc()", "response": "Run the next item in the queue."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check_dependencies():\n\n print('Checking dependencies...')\n if not HAS_VIRTUALENV:\n print('Virtual environment not found.')\n # Try installing it via easy_install...\n if HAS_EASY_INSTALL:\n print('Installing virtualenv via easy_install...', end='')\n run_command(['easy_install', 'virtualenv'],\n die_message='easy_install failed to install virtualenv'\n '\ndevelopment requires virtualenv, please'\n ' install it using your favorite tool')\n if not run_command(['which', 'virtualenv']):\n die('ERROR: virtualenv not found in path.\n\ndevelopment '\n ' requires virtualenv, please install it using your'\n ' favorite package management tool and ensure'\n ' virtualenv is in your path')\n print('virtualenv installation done.')\n else:\n die('easy_install not found.\n\nInstall easy_install'\n ' (python-setuptools in ubuntu) or virtualenv by hand,'\n ' then rerun.')\n print('dependency check done.')", "response": "Check if virtualenv is in the path."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_virtualenv(venv=VENV):\n print('Creating venv...', end='')\n run_command(['virtualenv', '-q', '--no-site-packages', VENV])\n print('done.')\n print('Installing pip in virtualenv...', end='')\n if not run_command([WITH_VENV, 'easy_install', 'pip']).strip():\n die(\"Failed to install pip.\")\n print('done.')\n print('Installing distribute in virtualenv...')\n pip_install('distribute>=0.6.24')\n print('done.')", "response": "Creates the virtual environment and installs pip and distribute into it."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef __remote_path_rewrite(self, dataset_path, dataset_path_type, name=None):\n path = str(dataset_path) # Use false_path if needed.\n action = self.action_mapper.action(path, dataset_path_type)\n if action.staging_needed:\n if name is None:\n name = os.path.basename(path)\n remote_directory = self.__remote_directory(dataset_path_type)\n remote_path_rewrite = self.path_helper.remote_join(remote_directory, name)\n else:\n # Actions which don't require staging MUST define a path_rewrite\n # method.\n remote_path_rewrite = action.path_rewrite(self.path_helper)\n\n return remote_path_rewrite", "response": "Return the remote path of this file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _retry_over_time(fun, catch, args=[], kwargs={}, errback=None,\n max_retries=None, interval_start=2, interval_step=2,\n interval_max=30):\n \"\"\"Retry the function over and over until max retries is exceeded.\n\n For each retry we sleep for a while before we try again, this interval\n is increased for every retry until the max seconds is reached.\n\n :param fun: The function to try\n :param catch: Exceptions to catch, can be either tuple or a single\n exception class.\n :keyword args: Positional arguments passed on to the function.\n :keyword kwargs: Keyword arguments passed on to the function.\n :keyword max_retries: Maximum number of retries before we give up.\n If this is not set, we will retry 
forever.\n :keyword interval_start: How long (in seconds) we start sleeping between\n retries.\n :keyword interval_step: By how much the interval is increased for each\n retry.\n :keyword interval_max: Maximum number of seconds to sleep between retries.\n\n \"\"\"\n retries = 0\n interval_range = __fxrange(interval_start,\n interval_max + interval_start,\n interval_step, repeatlast=True)\n for retries in count():\n try:\n return fun(*args, **kwargs)\n except catch as exc:\n if max_retries and retries >= max_retries:\n raise\n tts = float(errback(exc, interval_range, retries) if errback\n else next(interval_range))\n if tts:\n sleep(tts)", "response": "Attempts the function over and over until max retries is exceeded."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nbuild a client given specific destination parameters and job_id.", "response": "def get_client(self, destination_params, job_id, **kwargs):\n \"\"\"Build a client given specific destination parameters and job_id.\"\"\"\n destination_params = _parse_destination_params(destination_params)\n destination_params.update(**kwargs)\n job_manager_interface_class = self.job_manager_interface_class\n job_manager_interface_args = dict(destination_params=destination_params, **self.job_manager_interface_args)\n job_manager_interface = job_manager_interface_class(**job_manager_interface_args)\n return self.client_class(destination_params, job_id, job_manager_interface, **self.extra_client_kwds)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget shell and job interface defined by and configured via specified params.", "response": "def get_plugins(self, shell_params, job_params):\n \"\"\"\n Return shell and job interface defined by and configured via\n specified params.\n \"\"\"\n shell = self.get_shell_plugin(shell_params)\n job_interface = self.get_job_interface(job_params)\n return shell, job_interface"} {"SOURCE": "codesearchnet", "instruction": 
"Can you tell what is the following Python 3 function doing\ndef atomicish_move(source, destination, tmp_suffix=\"_TMP\"):\n destination_dir = os.path.dirname(destination)\n destination_name = os.path.basename(destination)\n temp_destination = os.path.join(destination_dir, \"%s%s\" % (destination_name, tmp_suffix))\n shutil.move(source, temp_destination)\n os.rename(temp_destination, destination)", "response": "Move source to destination without risk of partial moves."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef env_to_statement(env):\n ''' Return the abstraction description of an environment variable definition\n into a statement for shell script.\n\n >>> env_to_statement(dict(name='X', value='Y'))\n 'X=\"Y\"; export X'\n >>> env_to_statement(dict(name='X', value='Y', raw=True))\n 'X=Y; export X'\n >>> env_to_statement(dict(name='X', value='\"A\",\"B\",\"C\"'))\n 'X=\"\\\\\\\\\"A\\\\\\\\\",\\\\\\\\\"B\\\\\\\\\",\\\\\\\\\"C\\\\\\\\\"\"; export X'\n >>> env_to_statement(dict(file=\"Y\"))\n '. \"Y\"'\n >>> env_to_statement(dict(file=\"'RAW $FILE'\", raw=True))\n \". 'RAW $FILE'\"\n >>> # Source file takes precedence\n >>> env_to_statement(dict(name='X', value='\"A\",\"B\",\"C\"', file=\"S\"))\n '. \"S\"'\n >>> env_to_statement(dict(execute=\"module load java/1.5.1\"))\n 'module load java/1.5.1'\n '''\n source_file = env.get('file', None)\n if source_file:\n return '. 
%s' % __escape(source_file, env)\n execute = env.get('execute', None)\n if execute:\n return execute\n name = env['name']\n value = __escape(env['value'], env)\n return '%s=%s; export %s' % (name, value, name)", "response": "Return the abstraction description of an environment variable definition\n into a statement for shell script."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef copy_to_temp(object):\n temp_file = NamedTemporaryFile(delete=False)\n _copy_and_close(object, temp_file)\n return temp_file.name", "response": "Copy file - like object to temp file and return path."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef build_submit_description(executable, output, error, user_log, query_params):\n all_query_params = DEFAULT_QUERY_CLASSAD.copy()\n all_query_params.update(query_params)\n\n submit_description = []\n for key, value in all_query_params.items():\n submit_description.append('%s = %s' % (key, value))\n submit_description.append('executable = ' + executable)\n submit_description.append('output = ' + output)\n submit_description.append('error = ' + error)\n submit_description.append('log = ' + user_log)\n submit_description.append('queue')\n return '\\n'.join(submit_description)", "response": "Builds the contents of a condor submit description file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef condor_submit(submit_file):\n external_id = None\n try:\n submit = Popen(('condor_submit', submit_file), stdout=PIPE, stderr=STDOUT)\n message, _ = submit.communicate()\n if submit.returncode == 0:\n external_id = parse_external_id(message, type='condor')\n else:\n message = PROBLEM_PARSING_EXTERNAL_ID\n except Exception as e:\n message = str(e)\n return external_id, message", "response": "Submit a condor job described by the given file and return an 
external id and a reason for the failure."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstops a running condor job and returns a failure_message if this fails.", "response": "def condor_stop(external_id):\n \"\"\"\n Stop a running condor job and return a failure_message if this\n fails.\n \"\"\"\n failure_message = None\n try:\n check_call(('condor_rm', external_id))\n except CalledProcessError:\n failure_message = \"condor_rm failed\"\n except Exception as e:\n failure_message = \"error encountered calling condor_rm: %s\" % e\n return failure_message"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a job lock corresponding to the path.", "response": "def get_lock(self, path):\n \"\"\" Get a job lock corresponding to the path - assumes parent\n directory exists but the file itself does not.\n \"\"\"\n if self.lockfile:\n return self.lockfile.LockFile(path)\n else:\n with self.job_locks_lock:\n if path not in self.job_locks:\n lock = threading.Lock()\n self.job_locks[path] = lock\n else:\n lock = self.job_locks[path]\n return lock"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef shutdown(self, timeout=None):\n try:\n shutdown_method = self._proxied_manager.shutdown\n except AttributeError:\n return\n shutdown_method(timeout)", "response": "Shut down the proxied manager if it defines a shutdown method."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef build(client, destination_args):\n # Have defined a remote job directory, let's do the setup locally.\n if client.job_directory:\n handler = LocalSetupHandler(client, destination_args)\n else:\n handler = RemoteSetupHandler(client)\n return handler", "response": "Build a SetupHandler object for a client from destination parameters."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a DRMAA job template 
populates it with specified properties, runs the job and returns the external_job_id.", "response": "def run_job(self, **kwds):\n \"\"\"\n Create a DRMAA job template, populate with specified properties,\n run the job, and return the external_job_id.\n \"\"\"\n template = DrmaaSession.session.createJobTemplate()\n try:\n for key in kwds:\n setattr(template, key, kwds[key])\n with DrmaaSession.session_lock:\n return DrmaaSession.session.runJob(template)\n finally:\n DrmaaSession.session.deleteJobTemplate(template)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef url_to_destination_params(url):\n\n if url.startswith(\"pulsar://\"):\n url = url[len(\"pulsar://\"):]\n\n if not url.endswith(\"/\"):\n url += \"/\"\n\n # Check for private token embedded in the URL. A URL of the form\n # https://moo@cow:8913 will try to contact https://cow:8913\n # with a private key of moo\n private_token_format = \"https?://(.*)@.*/?\"\n private_token_match = match(private_token_format, url)\n private_token = None\n if private_token_match:\n private_token = private_token_match.group(1)\n url = url.replace(\"%s@\" % private_token, '', 1)\n\n destination_args = {\"url\": url,\n \"private_token\": private_token}\n\n return destination_args", "response": "Convert a legacy runner URL to a job destination dict."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a standard option parser instance.", "response": "def standard_parser(cls, verbose=True,\n interactive=False,\n no_interactive=False,\n simulate=False,\n quiet=False,\n overwrite=False):\n \"\"\"\n Create a standard ``OptionParser`` instance.\n\n Typically used like::\n\n class MyCommand(Command):\n parser = Command.standard_parser()\n\n Subclasses may redefine ``standard_parser``, so use the\n nearest superclass's class method.\n \"\"\"\n parser = BoolOptionParser()\n if verbose:\n parser.add_option('-v', '--verbose',\n 
action='count',\n dest='verbose',\n default=0)\n if quiet:\n parser.add_option('-q', '--quiet',\n action='count',\n dest='quiet',\n default=0)\n if no_interactive:\n parser.add_option('--no-interactive',\n action=\"count\",\n dest=\"no_interactive\",\n default=0)\n if interactive:\n parser.add_option('-i', '--interactive',\n action='count',\n dest='interactive',\n default=0)\n if simulate:\n parser.add_option('-n', '--simulate',\n action='store_true',\n dest='simulate',\n default=False)\n if overwrite:\n parser.add_option('-f', '--overwrite',\n dest=\"overwrite\",\n action=\"store_true\",\n help=\"Overwrite files (warnings will be emitted for non-matching files otherwise)\")\n return parser"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef quote_first_command_arg(self, arg):\n if (sys.platform != 'win32'\n or ' ' not in arg):\n # Problem does not apply:\n return arg\n try:\n import win32api\n except ImportError:\n raise ValueError(\n \"The executable %r contains a space, and in order to \"\n \"handle this issue you must have the win32api module \"\n \"installed\" % arg)\n arg = win32api.GetShortPathName(arg)\n return arg", "response": "Quote the first command argument, converting it to a win32api short path name on Windows when it contains a space."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef logging_file_config(self, config_file):\n parser = ConfigParser.ConfigParser()\n parser.read([config_file])\n if parser.has_section('loggers'):\n config_file = os.path.abspath(config_file)\n fileConfig(config_file, dict(__file__=config_file,\n here=os.path.dirname(config_file)))", "response": "Set up logging via the logging module's fileConfig function using the given config file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef finish_job(client, cleanup_job, job_completed_normally, client_outputs, pulsar_outputs):\n collection_failure_exceptions = 
[]\n if job_completed_normally:\n output_collector = ClientOutputCollector(client)\n action_mapper = FileActionMapper(client)\n results_stager = ResultsCollector(output_collector, action_mapper, client_outputs, pulsar_outputs)\n collection_failure_exceptions = results_stager.collect()\n _clean(collection_failure_exceptions, cleanup_job, client)\n return collection_failure_exceptions", "response": "Process for unstaging a completed Pulsar job."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef copy(source, destination):\n source = os.path.abspath(source)\n destination = os.path.abspath(destination)\n if source != destination:\n if not os.path.exists(os.path.dirname(destination)):\n os.makedirs(os.path.dirname(destination))\n shutil.copyfile(source, destination)", "response": "Copy source file to destination if needed."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef shutdown(self, timeout=None):\n try:\n super(BaseDrmaaManager, self).shutdown(timeout)\n except Exception:\n pass\n self.drmaa_session.close()", "response": "Cleanup DRMAA session and call shutdown of parent."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmoves a file from a temporary staging area into the cache.", "response": "def cache_file(self, local_path, ip, path):\n \"\"\"\n Move a file from a temporary staging area into the cache.\n \"\"\"\n destination = self.__destination(ip, path)\n atomicish_move(local_path, destination)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nbuild a dictionary of job manager objects from a config file.", "response": "def build_managers(app, conf):\n \"\"\"\n Takes in a config file as outlined in job_managers.ini.sample and builds\n a dictionary of job manager objects from them.\n \"\"\"\n # Load default options from config file that apply to all\n # managers.\n default_options = 
_get_default_options(conf)\n\n manager_descriptions = ManagerDescriptions()\n if \"job_managers_config\" in conf:\n job_managers_config = conf.get(\"job_managers_config\", None)\n _populate_manager_descriptions_from_ini(manager_descriptions, job_managers_config)\n elif \"managers\" in conf:\n for manager_name, manager_options in conf[\"managers\"].items():\n manager_description = ManagerDescription.from_dict(manager_options, manager_name)\n manager_descriptions.add(manager_description)\n elif \"manager\" in conf:\n manager_description = ManagerDescription.from_dict(conf[\"manager\"])\n manager_descriptions.add(manager_description)\n else:\n manager_descriptions.add(ManagerDescription())\n\n manager_classes = _get_managers_dict()\n managers = {}\n for manager_name, manager_description in manager_descriptions.descriptions.items():\n manager_options = dict(default_options)\n manager_options.update(manager_description.manager_options)\n\n manager_class = manager_classes[manager_description.manager_type]\n manager = _build_manager(manager_class, app, manager_name, manager_options)\n managers[manager_name] = manager\n\n return managers"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fix_type_error(exc_info, callable, varargs, kwargs):\n if exc_info is None:\n exc_info = sys.exc_info()\n if (exc_info[0] != TypeError\n or str(exc_info[1]).find('arguments') == -1\n or getattr(exc_info[1], '_type_error_fixed', False)):\n return exc_info\n exc_info[1]._type_error_fixed = True\n argspec = inspect.formatargspec(*inspect.getargspec(callable))\n args = ', '.join(map(_short_repr, varargs))\n if kwargs and args:\n args += ', '\n if kwargs:\n kwargs = kwargs.items()\n kwargs.sort()\n args += ', '.join(['%s=...' 
% n for n, v in kwargs])\n gotspec = '(%s)' % args\n msg = '%s; got %s, wanted %s' % (exc_info[1], gotspec, argspec)\n exc_info[1].args = (msg,)\n return exc_info", "response": "Given exc_info plus the callable, its varargs, and kwargs, test whether the exception was due to a signature error and annotate the error with better information."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncall a callable with given args and kwargs, fixing any type errors that come out.", "response": "def fix_call(callable, *args, **kw):\n \"\"\"\n Call ``callable(*args, **kw)`` fixing any type errors that come out.\n \"\"\"\n try:\n val = callable(*args, **kw)\n except TypeError:\n exc_info = fix_type_error(None, callable, args, kw)\n reraise(*exc_info)\n return val"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlooks up a module or object from a ``some.module:func_name`` specification.", "response": "def lookup_object(spec):\n \"\"\"\n Looks up a module or object from a some.module:func_name specification.\n To just look up a module, omit the colon and everything after it.\n \"\"\"\n parts, target = spec.split(':') if ':' in spec else (spec, None)\n module = __import__(parts)\n\n for part in parts.split('.')[1:] + ([target] if target else []):\n module = getattr(module, part)\n\n return module"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nflattening a nested list.", "response": "def _flatten(lst):\n \"\"\"\n Flatten a nested list.\n \"\"\"\n if not isinstance(lst, (list, tuple)):\n return [lst]\n result = []\n for item in lst:\n result.extend(_flatten(item))\n return result"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef defaults(self):\n defaults = ConfigParser.defaults(self).copy()\n for key, val in iteritems(defaults):\n defaults[key] = self.get('DEFAULT', key) or val\n return defaults", "response": "Return the defaults dict with their values interpolated"} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nfind the section name with the given name prefix.", "response": "def find_config_section(self, object_type, name=None):\n \"\"\"\n Return the section name with the given name prefix (following the\n same pattern as ``protocol_desc`` in ``config``. It must have the\n given name, or for ``'main'`` an empty name is allowed. The\n prefix must be followed by a ``:``.\n\n Case is *not* ignored.\n \"\"\"\n possible = []\n for name_options in object_type.config_prefixes:\n for name_prefix in name_options:\n found = self._find_sections(\n self.parser.sections(), name_prefix, name)\n if found:\n possible.extend(found)\n break\n if not possible:\n raise LookupError(\n \"No section %r (prefixed by %s) found in config %s\"\n % (name,\n ' or '.join(map(repr, _flatten(object_type.config_prefixes))),\n self.filename))\n if len(possible) > 1:\n raise LookupError(\n \"Ambiguous section names %r for section %r (prefixed by %s) \"\n \"found in config %s\"\n % (possible, name,\n ' or '.join(map(repr, _flatten(object_type.config_prefixes))),\n self.filename))\n return possible[0]"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfinding the entry point for the given object type.", "response": "def find_egg_entry_point(self, object_type, name=None):\n \"\"\"\n Returns the (entry_point, protocol) for the with the given\n ``name``.\n \"\"\"\n if name is None:\n name = 'main'\n possible = []\n for protocol_options in object_type.egg_protocols:\n for protocol in protocol_options:\n pkg_resources.require(self.spec)\n entry = pkg_resources.get_entry_info(\n self.spec,\n protocol,\n name)\n if entry is not None:\n possible.append((entry.load(), protocol, entry.name))\n break\n if not possible:\n # Better exception\n dist = pkg_resources.get_distribution(self.spec)\n raise LookupError(\n \"Entry point %r not found in egg %r (dir: %s; protocols: %s; \"\n \"entry_points: %s)\"\n % (name, 
self.spec,\n dist.location,\n ', '.join(_flatten(object_type.egg_protocols)),\n ', '.join(_flatten([\n dictkeys(pkg_resources.get_entry_info(self.spec, prot, name) or {})\n for prot in protocol_options] or '(no entry points)'))))\n if len(possible) > 1:\n raise LookupError(\n \"Ambiguous entry points for %r in egg %r (protocols: %s)\"\n % (name, self.spec, ', '.join(_flatten(protocol_options))))\n return possible[0]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef calculate_path(self, remote_relative_path, input_type):\n directory, allow_nested_files = self._directory_for_file_type(input_type)\n return self.path_helper.remote_join(directory, remote_relative_path)", "response": "Calculate the remote path for a staged file of the given input type."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompute the status of a job.", "response": "def get_status(self, job_id):\n \"\"\" Compute status using proxied manager and handle state transitions\n and track additional state information needed.\n \"\"\"\n job_directory = self._proxied_manager.job_directory(job_id)\n with job_directory.lock(\"status\"):\n proxy_status, state_change = self.__proxy_status(job_directory, job_id)\n\n if state_change == \"to_complete\":\n self.__deactivate(job_id, proxy_status)\n elif state_change == \"to_running\":\n self.__state_change_callback(status.RUNNING, job_id)\n\n return self.__status(job_directory, proxy_status)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndetermines the status of a job based on the proxied job manager and the job id.", "response": "def __proxy_status(self, job_directory, job_id):\n \"\"\" Determine state with proxied job manager and if this job needs\n to be marked as deactivated (this occurs when job first returns a\n complete status from proxy.\n \"\"\"\n state_change = None\n if job_directory.has_metadata(JOB_FILE_PREPROCESSING_FAILED):\n proxy_status = status.FAILED\n 
job_directory.store_metadata(JOB_FILE_FINAL_STATUS, proxy_status)\n state_change = \"to_complete\"\n elif not job_directory.has_metadata(JOB_FILE_PREPROCESSED):\n proxy_status = status.PREPROCESSING\n elif job_directory.has_metadata(JOB_FILE_FINAL_STATUS):\n proxy_status = job_directory.load_metadata(JOB_FILE_FINAL_STATUS)\n else:\n proxy_status = self._proxied_manager.get_status(job_id)\n if proxy_status == status.RUNNING:\n if not job_directory.has_metadata(JOB_METADATA_RUNNING):\n job_directory.store_metadata(JOB_METADATA_RUNNING, True)\n state_change = \"to_running\"\n elif proxy_status in [status.COMPLETE, status.CANCELLED]:\n job_directory.store_metadata(JOB_FILE_FINAL_STATUS, proxy_status)\n state_change = \"to_complete\"\n return proxy_status, state_change"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncomputes the status of the job based on the proxy status.", "response": "def __status(self, job_directory, proxy_status):\n \"\"\" Use proxied manager's status to compute the real\n (stateful) status of job.\n \"\"\"\n if proxy_status == status.COMPLETE:\n if not job_directory.has_metadata(JOB_FILE_POSTPROCESSED):\n job_status = status.POSTPROCESSING\n else:\n job_status = status.COMPLETE\n else:\n job_status = proxy_status\n return job_status"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a dictionary mapping local path to remote name.", "response": "def output_extras(self, output_file):\n \"\"\"\n Returns dict mapping local path to remote name.\n \"\"\"\n output_directory = dirname(output_file)\n\n def local_path(name):\n return join(output_directory, self.path_helper.local_name(name))\n\n files_directory = \"%s_files%s\" % (basename(output_file)[0:-len(\".dat\")], self.path_helper.separator)\n names = filter(lambda o: o.startswith(files_directory), self.output_directory_contents)\n return dict(map(lambda name: (local_path(name), name), names))"} {"SOURCE": 
"codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef sudo_popen(*args, **kwargs):\n user = kwargs.get(\"user\", None)\n full_command = [SUDO_PATH, SUDO_PRESERVE_ENVIRONMENT_ARG]\n if user:\n full_command.extend([SUDO_USER_ARG, user])\n full_command.extend(args)\n log.info(\"About to execute the following sudo command - [%s]\" % ' '.join(full_command))\n p = Popen(full_command, shell=False, stdout=PIPE, stderr=PIPE)\n return p", "response": "Helper method for building and executing a sudo command with Popen."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_family(self, major_number):\n # Normally, we have separate buckets for bugfixes vs features\n keys = ['unreleased_bugfix', 'unreleased_feature']\n # But unstable prehistorical releases roll all up into just\n # 'unreleased'\n if major_number == 0 and self.config.releases_unstable_prehistory:\n keys = ['unreleased']\n # Either way, the buckets default to an empty list\n self[major_number] = {key: [] for key in keys}", "response": "Add a new release family for the given major_number, initializing its unreleased buckets."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef has_stable_releases(self):\n nonzeroes = self.stable_families\n # Nothing but 0.x releases -> yup we're prehistory\n if not nonzeroes:\n return False\n # Presumably, if there's >1 major family besides 0.x, we're at least\n # one release into the 1.0 (or w/e) line.\n if len(nonzeroes) > 1:\n return True\n # If there's only one, we may still be in the space before its N.0.0 as\n # well; we can check by testing for existence of bugfix buckets\n return any(\n x for x in self[nonzeroes[0]] if not x.startswith('unreleased')\n )", "response": "Returns whether stable releases exist."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload and parse a changelog file from a path and return a dict of data 
structures.", "response": "def parse_changelog(path, **kwargs):\n \"\"\"\n Load and parse changelog file from ``path``, returning data structures.\n\n This function does not alter any files on disk; it is solely for\n introspecting a Releases ``changelog.rst`` and programmatically answering\n questions like \"are there any unreleased bugfixes for the 2.3 line?\" or\n \"what was included in release 1.2.1?\".\n\n For example, answering the above questions is as simple as::\n\n changelog = parse_changelog(\"/path/to/changelog\")\n print(\"Unreleased issues for 2.3.x: {}\".format(changelog['2.3']))\n print(\"Contents of v1.2.1: {}\".format(changelog['1.2.1']))\n\n Aside from the documented arguments, any additional keyword arguments are\n passed unmodified into an internal `get_doctree` call (which then passes\n them to `make_app`).\n\n :param str path: A relative or absolute file path string.\n\n :returns:\n A dict whose keys map to lists of ``releases.models.Issue`` objects, as\n follows:\n\n - Actual releases are full version number keys, such as ``\"1.2.1\"`` or\n ``\"2.0.0\"``.\n - Unreleased bugs (or bug-like issues; see the Releases docs) are\n stored in minor-release buckets, e.g. ``\"1.2\"`` or ``\"2.0\"``.\n - Unreleased features (or feature-like issues) are found in\n ``\"unreleased_N_feature\"``, where ``N`` is one of the major release\n families (so, a changelog spanning only 1.x will only have\n ``unreleased_1_feature``, whereas one with 1.x and 2.x releases will\n have ``unreleased_1_feature`` and ``unreleased_2_feature``, etc).\n\n .. 
versionchanged:: 1.6\n Added support for passing kwargs to `get_doctree`/`make_app`.\n \"\"\"\n app, doctree = get_doctree(path, **kwargs)\n # Have to semi-reproduce the 'find first bullet list' bit from main code,\n # which is unfortunately side-effect-heavy (thanks to Sphinx plugin\n # design).\n first_list = None\n for node in doctree[0]:\n if isinstance(node, bullet_list):\n first_list = node\n break\n # Initial parse into the structures Releases finds useful internally\n releases, manager = construct_releases(first_list.children, app)\n ret = changelog2dict(releases)\n # Stitch them together into something an end-user would find better:\n # - nuke unreleased_N.N_Y as their contents will be represented in the\n # per-line buckets\n for key in ret.copy():\n if key.startswith('unreleased'):\n del ret[key]\n for family in manager:\n # - remove unreleased_bugfix, as they are accounted for in the per-line\n # buckets too. No need to store anywhere.\n manager[family].pop('unreleased_bugfix', None)\n # - bring over each major family's unreleased_feature as\n # unreleased_N_feature\n unreleased = manager[family].pop('unreleased_feature', None)\n if unreleased is not None:\n ret['unreleased_{}_feature'.format(family)] = unreleased\n # - bring over all per-line buckets from manager (flattening)\n # Here, all that's left in the per-family bucket should be lines, not\n # unreleased_*\n ret.update(manager[family])\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_doctree(path, **kwargs):\n root, filename = os.path.split(path)\n docname, _ = os.path.splitext(filename)\n # TODO: this only works for top level changelog files (i.e. ones where\n # their dirname is the project/doc root)\n app = make_app(srcdir=root, **kwargs)\n # Create & init a BuildEnvironment. 
Mm, tasty side effects.\n app._init_env(freshenv=True)\n env = app.env\n # More arity/API changes: Sphinx 1.3/1.4-ish require one to pass in the app\n # obj in BuildEnvironment.update(); modern Sphinx performs that inside\n # Application._init_env() (which we just called above) and so that kwarg is\n # removed from update(). EAFP.\n kwargs = dict(\n config=app.config,\n srcdir=root,\n doctreedir=app.doctreedir,\n app=app,\n )\n try:\n env.update(**kwargs)\n except TypeError:\n # Assume newer Sphinx w/o an app= kwarg\n del kwargs['app']\n env.update(**kwargs)\n # Code taken from sphinx.environment.read_doc; easier to manually call\n # it with a working Environment object, instead of doing more random crap\n # to trick the higher up build system into thinking our single changelog\n # document was \"updated\".\n env.temp_data['docname'] = docname\n env.app = app\n # NOTE: SphinxStandaloneReader API changed in 1.4 :(\n reader_kwargs = {\n 'app': app,\n 'parsers': env.config.source_parsers,\n }\n if sphinx.version_info[:2] < (1, 4):\n del reader_kwargs['app']\n # This monkeypatches (!!!) docutils to 'inject' all registered Sphinx\n # domains' roles & so forth. 
Without this, rendering the doctree lacks\n # almost all Sphinx magic, including things like :ref: and :doc:!\n with sphinx_domains(env):\n try:\n reader = SphinxStandaloneReader(**reader_kwargs)\n except TypeError:\n # If we import from io, this happens automagically, not in API\n del reader_kwargs['parsers']\n reader = SphinxStandaloneReader(**reader_kwargs)\n pub = Publisher(reader=reader,\n writer=SphinxDummyWriter(),\n destination_class=NullOutput)\n pub.set_components(None, 'restructuredtext', None)\n pub.process_programmatic_settings(None, env.settings, None)\n # NOTE: docname derived higher up, from our given path\n src_path = env.doc2path(docname)\n source = SphinxFileInput(\n app,\n env,\n source=None,\n source_path=src_path,\n encoding=env.config.source_encoding,\n )\n pub.source = source\n pub.settings._source = src_path\n pub.set_destination(None, None)\n pub.publish()\n return app, pub.document", "response": "Return a Sphinx doctree from the RST file at path."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef load_conf(srcdir):\n path = os.path.join(srcdir, 'conf.py')\n mylocals = {'__file__': path}\n with open(path) as fd:\n exec(fd.read(), mylocals)\n return mylocals", "response": "Load conf. py from given srcdir."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_app(**kwargs):\n srcdir = kwargs.pop('srcdir', mkdtemp())\n dstdir = kwargs.pop('dstdir', mkdtemp())\n doctreedir = kwargs.pop('doctreedir', mkdtemp())\n load_extensions = kwargs.pop('load_extensions', False)\n real_conf = None\n try:\n # Sphinx <1.6ish\n Sphinx._log = lambda self, message, wfile, nonl=False: None\n # Sphinx >=1.6ish. Technically still lets Very Bad Things through,\n # unlike the total muting above, but probably OK.\n # NOTE: used to just do 'sphinx' but that stopped working, even on\n # sphinx 1.6.x. Weird. 
Unsure why hierarchy not functioning.\n for name in ('sphinx', 'sphinx.sphinx.application'):\n logging.getLogger(name).setLevel(logging.ERROR)\n # App API seems to work on all versions so far.\n app = Sphinx(\n srcdir=srcdir,\n confdir=None,\n outdir=dstdir,\n doctreedir=doctreedir,\n buildername='html',\n )\n # Might as well load the conf file here too.\n if load_extensions:\n real_conf = load_conf(srcdir)\n finally:\n for d in (srcdir, dstdir, doctreedir):\n # Only remove empty dirs; non-empty dirs are implicitly something\n # that existed before we ran, and should not be touched.\n try:\n os.rmdir(d)\n except OSError:\n pass\n setup(app)\n # Mock out the config within. More assumptions by Sphinx :(\n # TODO: just use real config and overlay what truly needs changing? is that\n # feasible given the rest of the weird ordering we have to do? If it is,\n # maybe just literally slap this over the return value of load_conf()...\n config = {\n 'releases_release_uri': 'foo_%s',\n 'releases_issue_uri': 'bar_%s',\n 'releases_debug': False,\n 'master_doc': 'index',\n }\n # Allow tinkering with document filename\n if 'docname' in kwargs:\n app.env.temp_data['docname'] = kwargs.pop('docname')\n # Allow config overrides via kwargs\n for name in kwargs:\n config['releases_{}'.format(name)] = kwargs[name]\n # Stitch together as the sphinx app init() usually does w/ real conf files\n app.config._raw_config = config\n # init_values() requires a 'warn' runner on Sphinx 1.3-1.6, so if we seem\n # to be hitting arity errors, give it a dummy such callable. 
Hopefully\n # calling twice doesn't introduce any wacko state issues :(\n try:\n app.config.init_values()\n except TypeError: # boy I wish Python had an ArityError or w/e\n app.config.init_values(lambda x: x)\n # Initialize extensions (the internal call to this happens at init time,\n # which of course had no valid config yet here...)\n if load_extensions:\n for extension in real_conf.get('extensions', []):\n # But don't set up ourselves again, that causes errors\n if extension == 'releases':\n continue\n app.setup_extension(extension)\n return app", "response": "Create a dummy Sphinx application with some hardcoded assumptions."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _log(txt, config):\n if config.releases_debug:\n sys.stderr.write(str(txt) + \"\\n\")\n sys.stderr.flush()", "response": "Log debug output if debug setting is on."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nattempt to return some sort of Spec from given keyword value. 
Returns None if one could not be derived.", "response": "def scan_for_spec(keyword):\n \"\"\"\n Attempt to return some sort of Spec from given keyword value.\n\n Returns None if one could not be derived.\n \"\"\"\n # Both 'spec' formats are wrapped in parens, discard\n keyword = keyword.lstrip('(').rstrip(')')\n # First, test for intermediate '1.2+' style\n matches = release_line_re.findall(keyword)\n if matches:\n return Spec(\">={}\".format(matches[0]))\n # Failing that, see if Spec can make sense of it\n try:\n return Spec(keyword)\n # I've only ever seen Spec fail with ValueError.\n except ValueError:\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ngenerate a new object for the given issue.", "response": "def issues_role(name, rawtext, text, lineno, inliner, options={}, content=[]):\n \"\"\"\n Use: :issue|bug|feature|support:`ticket_number`\n\n When invoked as :issue:, turns into just a \"#NN\" hyperlink to\n `releases_issue_uri`.\n\n When invoked otherwise, turns into \"[Type] <#NN hyperlink>: \".\n\n Spaces present in the \"ticket number\" are used as fields for keywords\n (major, backported) and/or specs (e.g. '>=1.0'). 
This data is removed &\n used when constructing the object.\n\n May give a 'ticket number' of ``-`` or ``0`` to generate no hyperlink.\n \"\"\"\n parts = utils.unescape(text).split()\n issue_no = parts.pop(0)\n # Lol @ access back to Sphinx\n config = inliner.document.settings.env.app.config\n if issue_no not in ('-', '0'):\n ref = None\n if config.releases_issue_uri:\n # TODO: deal with % vs .format()\n ref = config.releases_issue_uri % issue_no\n elif config.releases_github_path:\n ref = \"https://github.com/{}/issues/{}\".format(\n config.releases_github_path, issue_no)\n # Only generate a reference/link if we were able to make a URI\n if ref:\n identifier = nodes.reference(\n rawtext, '#' + issue_no, refuri=ref, **options\n )\n # Otherwise, just make it regular text\n else:\n identifier = nodes.raw(\n rawtext=rawtext, text='#' + issue_no, format='html',\n **options\n )\n else:\n identifier = None\n issue_no = None # So it doesn't gum up dupe detection later\n # Additional 'new-style changelog' stuff\n if name in ISSUE_TYPES:\n nodelist = issue_nodelist(name, identifier)\n spec = None\n keyword = None\n # TODO: sanity checks re: e.g. >2 parts, >1 instance of keywords, >1\n # instance of specs, etc.\n for part in parts:\n maybe_spec = scan_for_spec(part)\n if maybe_spec:\n spec = maybe_spec\n else:\n if part in ('backported', 'major'):\n keyword = part\n else:\n err = \"Gave unknown keyword {!r} for issue no. 
{}\"\n raise ValueError(err.format(keyword, issue_no))\n # Create temporary node w/ data & final nodes to publish\n node = Issue(\n number=issue_no,\n type_=name,\n nodelist=nodelist,\n backported=(keyword == 'backported'),\n major=(keyword == 'major'),\n spec=spec,\n )\n return [node], []\n # Return old style info for 'issue' for older changelog entries\n else:\n return [identifier], []"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef release_role(name, rawtext, text, lineno, inliner, options={}, content=[]):\n # Make sure year has been specified\n match = year_arg_re.match(text)\n if not match:\n msg = inliner.reporter.error(\"Must specify release date!\")\n return [inliner.problematic(rawtext, rawtext, msg)], [msg]\n number, date = match.group(1), match.group(2)\n # Lol @ access back to Sphinx\n config = inliner.document.settings.env.app.config\n nodelist = [release_nodes(number, number, date, config)]\n # Return intermediate node\n node = Release(number=number, date=date, nodelist=nodelist)\n return [node], []", "response": "This function is used to create a release node."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef append_unreleased_entries(app, manager, releases):\n for family, lines in six.iteritems(manager):\n for type_ in ('bugfix', 'feature'):\n bucket = 'unreleased_{}'.format(type_)\n if bucket not in lines: # Implies unstable prehistory + 0.x fam\n continue\n issues = lines[bucket]\n fam_prefix = \"{}.x \".format(family) if len(manager) > 1 else \"\"\n header = \"Next {}{} release\".format(fam_prefix, type_)\n line = \"unreleased_{}.x_{}\".format(family, type_)\n releases.append(\n generate_unreleased_entry(header, line, issues, manager, app)\n )", "response": "Append unreleased issues to the list of releases."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it 
does\ndef reorder_release_entries(releases):\n order = {'feature': 0, 'bug': 1, 'support': 2}\n for release in releases:\n entries = release['entries'][:]\n release['entries'] = sorted(entries, key=lambda x: order[x.type])", "response": "Mutate releases so that the entries in each release are ordered by feature bug support etc."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef construct_entry_with_release(focus, issues, manager, log, releases, rest):\n log(\"release for line %r\" % focus.minor)\n # Check for explicitly listed issues first\n explicit = None\n if rest[0].children:\n explicit = [x.strip() for x in rest[0][0].split(',')]\n # Do those by themselves since they override all other logic\n if explicit:\n log(\"Explicit issues requested: %r\" % (explicit,))\n # First scan global issue dict, dying if not found\n missing = [i for i in explicit if i not in issues]\n if missing:\n raise ValueError(\n \"Couldn't find issue(s) #{} in the changelog!\".format(\n ', '.join(missing)))\n # Obtain the explicitly named issues from global list\n entries = []\n for i in explicit:\n for flattened_issue_item in itertools.chain(issues[i]):\n entries.append(flattened_issue_item)\n # Create release\n log(\"entries in this release: %r\" % (entries,))\n releases.append({\n 'obj': focus,\n 'entries': entries,\n })\n # Introspect these entries to determine which buckets they should get\n # removed from (it's not \"all of them\"!)\n for obj in entries:\n if obj.type == 'bug':\n # Major bugfix: remove from unreleased_feature\n if obj.major:\n log(\"Removing #%s from unreleased\" % obj.number)\n # TODO: consider making a LineManager method somehow\n manager[focus.family]['unreleased_feature'].remove(obj)\n # Regular bugfix: remove from bucket for this release's\n # line + unreleased_bugfix\n else:\n if obj in manager[focus.family]['unreleased_bugfix']:\n log(\"Removing #%s from unreleased\" % obj.number)\n 
manager[focus.family]['unreleased_bugfix'].remove(obj)\n if obj in manager[focus.family][focus.minor]:\n log(\"Removing #%s from %s\" % (obj.number, focus.minor))\n manager[focus.family][focus.minor].remove(obj)\n # Regular feature/support: remove from unreleased_feature\n # Backported feature/support: remove from bucket for this\n # release's line (if applicable) + unreleased_feature\n else:\n log(\"Removing #%s from unreleased\" % obj.number)\n manager[focus.family]['unreleased_feature'].remove(obj)\n if obj in manager[focus.family].get(focus.minor, []):\n manager[focus.family][focus.minor].remove(obj)\n\n # Implicit behavior otherwise\n else:\n # Unstable prehistory -> just dump 'unreleased' and continue\n if manager.unstable_prehistory:\n # TODO: need to continue making LineManager actually OO, i.e. do\n # away with the subdicts + keys, move to sub-objects with methods\n # answering questions like \"what should I give you for a release\"\n # or whatever\n log(\"in unstable prehistory, dumping 'unreleased'\")\n releases.append({\n 'obj': focus,\n # NOTE: explicitly dumping 0, not focus.family, since this\n # might be the last pre-historical release and thus not 0.x\n 'entries': manager[0]['unreleased'][:],\n })\n manager[0]['unreleased'] = []\n # If this isn't a 0.x release, it signals end of prehistory, make a\n # new release bucket (as is also done below in regular behavior).\n # Also acts like a sentinel that prehistory is over.\n if focus.family != 0:\n manager[focus.family][focus.minor] = []\n # Regular behavior from here\n else:\n # New release line/branch detected. Create it & dump unreleased\n # features.\n if focus.minor not in manager[focus.family]:\n log(\"not seen prior, making feature release & bugfix bucket\")\n manager[focus.family][focus.minor] = []\n # TODO: this used to explicitly say \"go over everything in\n # unreleased_feature and dump if it's feature, support or major\n # bug\". 
But what the hell else would BE in unreleased_feature?\n # Why not just dump the whole thing??\n #\n # Dump only the items in the bucket whose family this release\n # object belongs to, i.e. 1.5.0 should only nab the 1.0\n # family's unreleased feature items.\n releases.append({\n 'obj': focus,\n 'entries': manager[focus.family]['unreleased_feature'][:],\n })\n manager[focus.family]['unreleased_feature'] = []\n\n # Existing line -> empty out its bucket into new release.\n # Skip 'major' bugs as those \"belong\" to the next release (and will\n # also be in 'unreleased_feature' - so safe to nuke the entire\n # line)\n else:\n log(\"pre-existing, making bugfix release\")\n # TODO: as in other branch, I don't get why this wasn't just\n # dumping the whole thing - why would major bugs be in the\n # regular bugfix buckets?\n entries = manager[focus.family][focus.minor][:]\n releases.append({'obj': focus, 'entries': entries})\n manager[focus.family][focus.minor] = []\n # Clean out the items we just released from\n # 'unreleased_bugfix'. 
(Can't nuke it because there might\n # be some unreleased bugs for other release lines.)\n for x in entries:\n if x in manager[focus.family]['unreleased_bugfix']:\n manager[focus.family]['unreleased_bugfix'].remove(x)", "response": "Construct a new entry with the given focus and issues and a list of releases."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nhandles the first release line in the list of entries.", "response": "def handle_first_release_line(entries, manager):\n \"\"\"\n Set up initial line-manager entry for first encountered release line.\n\n To be called at start of overall process; afterwards, subsequent major\n lines are generated by `handle_upcoming_major_release`.\n \"\"\"\n # It's remotely possible the changelog is totally empty...\n if not entries:\n return\n # Obtain (short-circuiting) first Release obj.\n first_release = None\n for obj in entries:\n if isinstance(obj, Release):\n first_release = obj\n break\n # It's also possible it's non-empty but has no releases yet.\n if first_release:\n manager.add_family(obj.family)\n # If God did not exist, man would be forced to invent him.\n else:\n manager.add_family(0)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef minor_releases(self, manager):\n # TODO: yea deffo need a real object for 'manager', heh. E.g. we do a\n # very similar test for \"do you have any actual releases yet?\"\n # elsewhere. 
(This may be fodder for changing how we roll up\n pre-major-release features though...?)\n return [\n key for key, value in six.iteritems(manager)\n if any(x for x in value if not x.startswith('unreleased'))\n ]", "response": "Return all minor release line labels found in manager."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef default_spec(self, manager):\n # TODO: I feel like this + the surrounding bits in add_to_manager()\n # could be consolidated & simplified...\n specstr = \"\"\n # Make sure truly-default spec skips 0.x if prehistory was unstable.\n stable_families = manager.stable_families\n if manager.config.releases_unstable_prehistory and stable_families:\n specstr = \">={}\".format(min(stable_families))\n if self.is_featurelike:\n # TODO: if app->config->releases_always_forwardport_features or w/e\n specstr = \">={}\".format(max(manager.keys()))\n else:\n # Can only meaningfully limit to minor release buckets if they\n # actually exist yet.\n buckets = self.minor_releases(manager)\n if buckets:\n specstr = \">={}\".format(max(buckets))\n return Spec(specstr) if specstr else Spec()", "response": "Returns a Spec object for the current release-lines structure."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_to_manager(self, manager):\n # Derive version spec allowing us to filter against major/minor buckets\n spec = self.spec or self.default_spec(manager)\n # Only look in appropriate major version/family; if self is an issue\n # declared as living in e.g.
>=2, this means we don't even bother\n # looking in the 1.x family.\n families = [Version(str(x)) for x in manager]\n versions = list(spec.filter(families))\n for version in versions:\n family = version.major\n # Within each family, we further limit which bugfix lines match up\n # to what self cares about (ignoring 'unreleased' until later)\n candidates = [\n Version(x)\n for x in manager[family]\n if not x.startswith('unreleased')\n ]\n # Select matching release lines (& stringify)\n buckets = []\n bugfix_buckets = [str(x) for x in spec.filter(candidates)]\n # Add back in unreleased_* as appropriate\n # TODO: probably leverage Issue subclasses for this eventually?\n if self.is_buglike:\n buckets.extend(bugfix_buckets)\n # Don't put into JUST unreleased_bugfix; it implies that this\n # major release/family hasn't actually seen any releases yet\n # and only exists for features to go into.\n if bugfix_buckets:\n buckets.append('unreleased_bugfix')\n # Obtain list of minor releases to check for \"haven't had ANY\n # releases yet\" corner case, in which case ALL issues get thrown in\n # unreleased_feature for the first release to consume.\n # NOTE: assumes first release is a minor or major one,\n # but...really? why would your first release be a bugfix one??\n no_releases = not self.minor_releases(manager)\n if self.is_featurelike or self.backported or no_releases:\n buckets.append('unreleased_feature')\n # Now that we know which buckets are appropriate, add ourself to\n # all of them. 
TODO: or just...do it above...instead...\n for bucket in buckets:\n manager[family][bucket].append(self)", "response": "Add self to one or more of its buckets."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef read(self, source, errors='strict', clean_paragraphs=True):\n\n reader = Rtf15Reader(source, errors, clean_paragraphs)\n return reader.go()", "response": "Read an RTF document from a file-like object and return the parsed document."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves whitespace at start and end of paragraphs and removes empty blocks, etc.", "response": "def cleanParagraph(self):\n \"\"\"\n Compress text runs, remove whitespace at start and end,\n skip empty blocks, etc\n \"\"\"\n\n runs = self.block.content\n\n if not runs:\n self.block = None\n return\n\n if not self.clean_paragraphs:\n return\n\n joinedRuns = []\n hasContent = False\n\n for run in runs:\n\n if run.content[0]:\n hasContent = True\n else:\n continue\n\n # For whitespace-only groups, remove any property stuff,\n # to avoid extra markup in output\n if not run.content[0].strip():\n run.properties = {}\n\n # Join runs only if their properties match\n if joinedRuns and (run.properties == joinedRuns[-1].properties):\n joinedRuns[-1].content[0] += run.content[0]\n else:\n joinedRuns.append(run)\n\n if hasContent:\n # Strip beginning of paragraph\n joinedRuns[0].content[0] = joinedRuns[0].content[0].lstrip()\n # And then strip the end\n joinedRuns[-1].content[0] = joinedRuns[-1].content[0].rstrip()\n self.block.content = joinedRuns\n else:\n self.block = None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nparse a css style sheet into the CSS object.", "response": "def parse_css(self, css):\n \"\"\"\n Parse a css style sheet into the CSS object.\n\n For the moment this will only work for very simple css\n documents.
It works by using regular expression matching css\n syntax. This is not bullet proof.\n \"\"\"\n rulesets = self.ruleset_re.findall(css)\n for (selector, declarations) in rulesets:\n rule = Rule(self.parse_selector(selector))\n rule.properties = self.parse_declarations(declarations)\n self.rules.append(rule)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse_declarations(self, declarations):\n declarations = self.declaration_re.findall(declarations)\n return dict(declarations)", "response": "parse a css declaration list"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nparsing a css selector", "response": "def parse_selector(self, selector):\n \"\"\"\n parse a css selector\n \"\"\"\n tag, klass = self.selector_re.match(selector).groups()\n return Selector(tag, klass)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a dict of all the properties of a given BeautifulSoup node.", "response": "def get_properties(self, node):\n \"\"\"\n return a dict of all the properties of a given BeautifulSoup\n node found by applying the css style.\n \"\"\"\n ret = {}\n # Try all the rules one by one\n for rule in self.rules:\n if rule.selector(node):\n ret.update(rule.properties)\n # Also search for direct 'style' arguments in the html doc\n for style_node in node.findParents(attrs={'style': True}):\n style = style_node.get('style')\n properties = self.parse_declarations(style)\n ret.update(properties)\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef namedModule(name):\n topLevel = __import__(name)\n packages = name.split(\".\")[1:]\n m = topLevel\n for p in packages:\n m = getattr(m, p)\n return m", "response": "Return a module given its name."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef namedObject(name):\n 
classSplit = name.split('.')\n module = namedModule('.'.join(classSplit[:-1]))\n return getattr(module, classSplit[-1])", "response": "Get a fully named module-global object."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nprocess a pyth text and return the formatted string", "response": "def text(self, text):\n \"\"\"\n process a pyth text and return the formatted string\n \"\"\"\n ret = u\"\".join(text.content)\n if 'url' in text.properties:\n return u\"`%s`_\" % ret\n if 'bold' in text.properties:\n return u\"**%s**\" % ret\n if 'italic' in text.properties:\n return u\"*%s*\" % ret\n if 'sub' in text.properties:\n return ur\"\\ :sub:`%s`\\ \" % ret\n if 'super' in text.properties:\n return ur\"\\ :sup:`%s`\\ \" % ret\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef paragraph(self, paragraph, prefix=\"\"):\n content = []\n for text in paragraph.content:\n content.append(self.text(text))\n content = u\"\".join(content).encode(\"utf-8\")\n\n for line in content.split(\"\\n\"):\n self.target.write(\" \" * self.indent)\n self.target.write(prefix)\n self.target.write(line)\n self.target.write(\"\\n\")\n if prefix:\n prefix = \" \"\n\n # handle the links\n if any('url' in text.properties for text in paragraph.content):\n self.target.write(\"\\n\")\n for text in paragraph.content:\n if 'url' in text.properties:\n string = u\"\".join(text.content)\n url = text.properties['url']\n self.target.write(\".. _%s: %s\\n\" % (string, url))", "response": "process a pyth paragraph into the target"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nprocess a pyth list into the target", "response": "def list(self, list, prefix=None):\n \"\"\"\n Process a pyth list into the target\n \"\"\"\n self.indent += 1\n for (i, entry) in enumerate(list.content):\n for (j, paragraph) in enumerate(entry.content):\n prefix = \"- \" if j == 0 else \" \"\n handler = self.paragraphDispatch[paragraph.__class__]\n handler(paragraph, prefix)\n self.target.write(\"\\n\")\n self.indent -= 1"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nformatting a BeautifulSoup document into a single line", "response": "def format(self, soup):\n \"\"\"format a BeautifulSoup document\n\n This will transform the block elements content from\n multi-lines text into single line.\n\n This allow us to avoid having to deal with further text\n rendering once this step has been done.\n \"\"\"\n # Remove all the newline characters before a closing tag.\n for node in soup.findAll(text=True):\n if node.rstrip(\" \").endswith(\"\\n\"):\n node.replaceWith(node.rstrip(\" \").rstrip(\"\\n\"))\n # Join the block elements lines into a single long line\n for tag in ['p', 'li']:\n for node in soup.findAll(tag):\n text = unicode(node)\n lines = [x.strip() for x in text.splitlines()]\n text = ' '.join(lines)\n node.replaceWith(BeautifulSoup.BeautifulSoup(text))\n soup = BeautifulSoup.BeautifulSoup(unicode(soup))\n # replace all <br/>
tag by newline character\n for node in soup.findAll('br'):\n node.replaceWith(\"\\n\")\n soup = BeautifulSoup.BeautifulSoup(unicode(soup))\n return soup"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef url(self, node):\n a_node = node.findParent('a')\n if not a_node:\n return None\n\n if self.link_callback is None:\n return a_node.get('href')\n else:\n return self.link_callback(a_node.get('href'))", "response": "Returns the url of a BeautifulSoup node or None if there is no url."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef process_text(self, node):\n text = node.string.strip()\n if not text:\n return\n\n # Set all the properties\n properties=dict()\n if self.is_bold(node):\n properties['bold'] = True\n if self.is_italic(node):\n properties['italic'] = True\n if self.url(node):\n properties['url'] = self.url(node)\n if self.is_sub(node):\n properties['sub'] = True\n if self.is_super(node):\n properties['super'] = True\n\n content=[node.string]\n\n return document.Text(properties, content)", "response": "Process the text from a BeautifulSoup node."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef process_into(self, node, obj):\n if isinstance(node, BeautifulSoup.NavigableString):\n text = self.process_text(node)\n if text:\n obj.append(text)\n return\n if node.name == 'p':\n # add a new paragraph into the pyth object\n new_obj = document.Paragraph()\n obj.append(new_obj)\n obj = new_obj\n elif node.name == 'ul':\n # add a new list\n new_obj = document.List()\n obj.append(new_obj)\n obj = new_obj\n elif node.name == 'li':\n # add a new list entry\n new_obj = document.ListEntry()\n obj.append(new_obj)\n obj = new_obj\n for child in node:\n self.process_into(child, obj)", "response": "Process a BeautifulSoup node and fill its elements into a pyth object."} {"SOURCE": "codesearchnet", "instruction": 
"Write a Python 3 script for\nappending an item to the element.", "response": "def append(self, item):\n \"\"\"\n Try to add an item to this element.\n\n If the item is of the wrong type, and if this element has a sub-type,\n then try to create such a sub-type and insert the item into that, instead.\n \n This happens recursively, so (in python-markup):\n L [ u'Foo' ]\n actually creates:\n L [ LE [ P [ T [ u'Foo' ] ] ] ]\n\n If that doesn't work, raise a TypeError.\n \"\"\"\n\n okay = True\n if not isinstance(item, self.contentType):\n if hasattr(self.contentType, 'contentType'):\n try:\n item = self.contentType(content=[item])\n except TypeError:\n okay = False\n else:\n okay = False\n \n if not okay:\n raise TypeError(\"Wrong content type for %s: %s (%s)\" % (\n self.__class__.__name__, repr(type(item)), repr(item)))\n\n self.content.append(item)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _MetaPythonBase():\n \n class MagicGetItem(type):\n def __new__(mcs, name, bases, dict):\n klass = type.__new__(mcs, name, bases, dict)\n mcs.__getitem__ = lambda _, k: klass()[k]\n return klass\n \n return MagicGetItem", "response": "Return a metaclass which implements __getitem__ and requires e. g. P [... 
] instead of P"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef write(klass, document, target=None, stylesheet=\"\"):\n writer = LatexWriter(document, target, stylesheet)\n return writer.go()", "response": "write a pyth document to a latex document"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the full style sheet that will be inserted into the latex document.", "response": "def full_stylesheet(self):\n \"\"\"\n Return the style sheet that will ultimately be inserted into\n the latex document.\n\n This is the user given style sheet plus some additional parts\n to add the meta data.\n \"\"\"\n latex_fragment = r\"\"\"\n \\usepackage[colorlinks=true,linkcolor=blue,urlcolor=blue]{hyperref}\n \\hypersetup{\n pdftitle={%s},\n pdfauthor={%s},\n pdfsubject={%s}\n }\n \"\"\" % (self.document.properties.get(\"title\"),\n self.document.properties.get(\"author\"),\n self.document.properties.get(\"subject\"))\n return latex_fragment + self.stylesheet"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_archive_layout_url(self, archive_id):\n url = self.api_url + '/v2/project/' + self.api_key + '/archive/' + archive_id + '/layout'\n return url", "response": "this method returns the url to set the archive layout"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nstops an OpenTok archive that is being recorded.", "response": "def stop(self):\n \"\"\"\n Stops an OpenTok archive that is being recorded.\n\n Archives automatically stop recording after 120 minutes or when all clients have\n disconnected from the session being archived.\n \"\"\"\n temp_archive = self.sdk.stop_archive(self.id)\n for k,v in iteritems(temp_archive.attrs()):\n setattr(self, k, v)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef attrs(self):\n return 
dict((k, v) for k, v in iteritems(self.__dict__) if k is not \"sdk\")", "response": "Returns a dictionary of the archive's attributes."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef generate_token(self, session_id, role=Roles.publisher, expire_time=None, data=None,\n initial_layout_class_list=[]):\n \"\"\"\n Generates a token for a given session.\n\n :param String session_id: The session ID of the session to be accessed by the client using\n the token.\n\n :param String role: The role for the token. Valid values are defined in the Role\n class:\n\n * `Roles.subscriber` -- A subscriber can only subscribe to streams.\n\n * `Roles.publisher` -- A publisher can publish streams, subscribe to\n streams, and signal. (This is the default value if you do not specify a role.)\n\n * `Roles.moderator` -- In addition to the privileges granted to a\n publisher, in clients using the OpenTok.js 2.2 library, a moderator can call the\n `forceUnpublish()` and `forceDisconnect()` method of the\n Session object.\n\n :param int expire_time: The expiration time of the token, in seconds since the UNIX epoch.\n The maximum expiration time is 30 days after the creation time. The default expiration\n time is 24 hours after the token creation time.\n\n :param String data: A string containing connection metadata describing the\n end-user. For example, you can pass the user ID, name, or other data describing the\n end-user. The length of the string is limited to 1000 characters. This data cannot be\n updated once it is set.\n\n :param list initial_layout_class_list: An array of class names (strings)\n to be used as the initial layout classes for streams published by the client.
Layout\n classes are used in customizing the layout of videos in\n `live streaming broadcasts `_ and\n `composed archives `_\n\n :rtype:\n The token string.\n \"\"\"\n\n # normalize\n # expire_time can be an integer, a datetime object, or anything else that can be coerced into an integer\n # after this block it will only be an integer\n if expire_time is not None:\n if isinstance(expire_time, datetime):\n expire_time = calendar.timegm(expire_time.utctimetuple())\n else:\n try:\n expire_time = int(expire_time)\n except (ValueError, TypeError):\n raise OpenTokException(u('Cannot generate token, invalid expire time {0}').format(expire_time))\n else:\n expire_time = int(time.time()) + (60*60*24) # 1 day\n\n # validations\n if not text_type(session_id):\n raise OpenTokException(u('Cannot generate token, session_id was not valid {0}').format(session_id))\n if not isinstance(role, Roles):\n raise OpenTokException(u('Cannot generate token, {0} is not a valid role').format(role))\n now = int(time.time())\n if expire_time < now:\n raise OpenTokException(u('Cannot generate token, expire_time is not in the future {0}').format(expire_time))\n if expire_time > now + (60*60*24*30): # 30 days\n raise OpenTokException(u('Cannot generate token, expire_time is not in the next 30 days {0}').format(expire_time))\n if data and len(data) > 1000:\n raise OpenTokException(u('Cannot generate token, data must be less than 1000 characters'))\n if initial_layout_class_list and not all(text_type(c) for c in initial_layout_class_list):\n raise OpenTokException(u('Cannot generate token, all items in initial_layout_class_list must be strings'))\n initial_layout_class_list_serialized = u(' ').join(initial_layout_class_list)\n if len(initial_layout_class_list_serialized) > 1000:\n raise OpenTokException(u('Cannot generate token, initial_layout_class_list must be less than 1000 characters'))\n\n # decode session id to verify api_key\n sub_session_id = session_id[2:]\n sub_session_id_bytes = 
sub_session_id.encode('utf-8')\n sub_session_id_bytes_padded = sub_session_id_bytes + (b('=') * (-len(sub_session_id_bytes) % 4))\n try:\n decoded_session_id = base64.b64decode(sub_session_id_bytes_padded, b('-_'))\n parts = decoded_session_id.decode('utf-8').split(u('~'))\n except Exception as e:\n raise OpenTokException(u('Cannot generate token, the session_id {0} was not valid').format(session_id))\n if self.api_key not in parts:\n raise OpenTokException(u('Cannot generate token, the session_id {0} does not belong to the api_key {1}').format(session_id, self.api_key))\n\n data_params = dict(\n session_id = session_id,\n create_time = now,\n expire_time = expire_time,\n role = role.value,\n nonce = random.randint(0,999999),\n initial_layout_class_list = initial_layout_class_list_serialized\n )\n if data:\n data_params['connection_data'] = data\n data_string = urlencode(data_params, True)\n\n sig = self._sign_string(data_string, self.api_secret)\n decoded_base64_bytes = u('partner_id={api_key}&sig={sig}:{payload}').format(\n api_key = self.api_key,\n sig = sig,\n payload = data_string\n )\n if PY3:\n decoded_base64_bytes = decoded_base64_bytes.encode('utf-8')\n token = u('{sentinal}{base64_data}').format(\n sentinal = self.TOKEN_SENTINEL,\n base64_data = base64.b64encode(decoded_base64_bytes).decode()\n )\n return token", "response": "Generates a token for a given session."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a new session and return the session ID.", "response": "def create_session(self, location=None, media_mode=MediaModes.relayed, archive_mode=ArchiveModes.manual):\n \"\"\"\n Creates a new OpenTok session and returns the session ID, which uniquely identifies\n the session.\n\n For example, when using the OpenTok JavaScript library, use the session ID when calling the\n OT.initSession() method (to initialize an OpenTok session).\n\n OpenTok sessions do not expire. 
However, authentication tokens do expire (see the\n generateToken() method). Also note that sessions cannot explicitly be destroyed.\n\n A session ID string can be up to 255 characters long.\n\n Calling this method results in an OpenTokException in the event of an error.\n Check the error message for details.\n\n You can also create a session using the OpenTok\n `REST API `_ or\n `the OpenTok dashboard `_.\n\n :param String media_mode: Determines whether the session will transmit streams using the\n OpenTok Media Router (MediaMode.routed) or not (MediaMode.relayed). By default,\n the setting is MediaMode.relayed.\n\n With the media_mode property set to MediaMode.relayed, the session\n will attempt to transmit streams directly between clients. If clients cannot connect\n due to firewall restrictions, the session uses the OpenTok TURN server to relay\n audio-video streams.\n\n The `OpenTok Media\n Router `_\n provides the following benefits:\n\n * The OpenTok Media Router can decrease bandwidth usage in multiparty sessions.\n (When the mediaMode property is set to MediaMode.relayed, each client must send\n a separate audio-video stream to each client subscribing to it.)\n\n * The OpenTok Media Router can improve the quality of the user experience through\n audio fallback and video recovery (see https://tokbox.com/platform/fallback). With\n these features, if a client's connectivity degrades to a degree that\n it does not support video for a stream it's subscribing to, the video is dropped on\n that client (without affecting other clients), and the client receives audio only.\n If the client's connectivity improves, the video returns.\n\n * The OpenTok Media Router supports the archiving feature, which lets\n you record, save, and retrieve OpenTok sessions (see http://tokbox.com/platform/archiving).\n\n :param String archive_mode: Whether the session is automatically archived\n (ArchiveModes.always) or not (ArchiveModes.manual). 
By default,\n the setting is ArchiveModes.manual, and you must call the\n start_archive() method of the OpenTok object to start archiving. To archive the session\n (either automatically or not), you must set the media_mode parameter to\n MediaModes.routed.\n\n :param String location: An IP address that the OpenTok servers will use to\n situate the session in its global network. If you do not set a location hint,\n the OpenTok servers will be based on the first client connecting to the session.\n\n :rtype: The Session object. The session_id property of the object is the session ID.\n \"\"\"\n\n # build options\n options = {}\n if not isinstance(media_mode, MediaModes):\n raise OpenTokException(u('Cannot create session, {0} is not a valid media mode').format(media_mode))\n if not isinstance(archive_mode, ArchiveModes):\n raise OpenTokException(u('Cannot create session, {0} is not a valid archive mode').format(archive_mode))\n if archive_mode == ArchiveModes.always and media_mode != MediaModes.routed:\n raise OpenTokException(u('A session with always archive mode must also have the routed media mode.'))\n options[u('p2p.preference')] = media_mode.value\n options[u('archiveMode')] = archive_mode.value\n if location:\n # validate IP address\n try:\n inet_aton(location)\n except:\n raise OpenTokException(u('Cannot create session. 
Location must be either None or a valid IPv4 address {0}').format(location))\n options[u('location')] = location\n\n try:\n response = requests.post(self.endpoints.session_url(), data=options, headers=self.headers(), proxies=self.proxies, timeout=self.timeout)\n response.encoding = 'utf-8'\n\n if response.status_code == 403:\n raise AuthError('Failed to create session, invalid credentials')\n if not response.content:\n raise RequestError()\n dom = xmldom.parseString(response.content)\n except Exception as e:\n raise RequestError('Failed to create session: %s' % str(e))\n\n try:\n error = dom.getElementsByTagName('error')\n if error:\n error = error[0]\n raise AuthError('Failed to create session (code=%s): %s' % (error.attributes['code'].value, error.firstChild.attributes['message'].value))\n\n session_id = dom.getElementsByTagName('session_id')[0].childNodes[0].nodeValue\n return Session(self, session_id, location=location, media_mode=media_mode, archive_mode=archive_mode)\n except Exception as e:\n raise OpenTokException('Failed to generate session: %s' % str(e))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nstart recording an OpenTok archive.", "response": "def start_archive(self, session_id, has_audio=True, has_video=True, name=None, output_mode=OutputModes.composed, resolution=None):\n \"\"\"\n Starts archiving an OpenTok session.\n\n Clients must be actively connected to the OpenTok session for you to successfully start\n recording an archive.\n\n You can only record one archive at a time for a given session. You can only record archives\n of sessions that use the OpenTok Media Router (sessions with the media mode set to routed);\n you cannot archive sessions with the media mode set to relayed.\n\n For more information on archiving, see the\n `OpenTok archiving `_ programming guide.\n\n :param String session_id: The session ID of the OpenTok session to archive.\n :param String name: This is the name of the archive. 
You can use this name\n to identify the archive. It is a property of the Archive object, and it is a property\n of archive-related events in the OpenTok.js library.\n :param Boolean has_audio: if set to True, an audio track will be inserted to the archive.\n has_audio is an optional parameter that is set to True by default. If you set both\n has_audio and has_video to False, the call to the start_archive() method results in\n an error.\n :param Boolean has_video: if set to True, a video track will be inserted to the archive.\n has_video is an optional parameter that is set to True by default.\n :param OutputModes output_mode: Whether all streams in the archive are recorded\n to a single file (OutputModes.composed, the default) or to individual files\n (OutputModes.individual).\n :param String resolution (Optional): The resolution of the archive, either \"640x480\" (the default)\n or \"1280x720\". This parameter only applies to composed archives. If you set this\n parameter and set the output_mode parameter to OutputModes.individual, the call to the\n start_archive() method results in an error.\n\n :rtype: The Archive object, which includes properties defining the archive,\n including the archive ID.\n \"\"\"\n if not isinstance(output_mode, OutputModes):\n raise OpenTokException(u('Cannot start archive, {0} is not a valid output mode').format(output_mode))\n\n\n if resolution and output_mode == OutputModes.individual:\n raise OpenTokException(u('Invalid parameters: Resolution cannot be supplied for individual output mode.'))\n\n payload = {'name': name,\n 'sessionId': session_id,\n 'hasAudio': has_audio,\n 'hasVideo': has_video,\n 'outputMode': output_mode.value,\n 'resolution': resolution,\n }\n\n response = requests.post(self.endpoints.archive_url(), data=json.dumps(payload), headers=self.json_headers(), proxies=self.proxies, timeout=self.timeout)\n\n if response.status_code < 300:\n return Archive(self, response.json())\n elif response.status_code == 403:\n 
raise AuthError()\n elif response.status_code == 400:\n \"\"\"\n The HTTP response has a 400 status code in the following cases:\n You do not pass in a session ID or you pass in an invalid session ID.\n No clients are actively connected to the OpenTok session.\n You specify an invalid resolution value.\n The outputMode property is set to \"individual\" and you set the resolution property and (which is not supported in individual stream archives).\n \"\"\"\n raise RequestError(response.json().get(\"message\"))\n elif response.status_code == 404:\n raise NotFoundError(\"Session not found\")\n elif response.status_code == 409:\n raise ArchiveError(response.json().get(\"message\"))\n else:\n raise RequestError(\"An unexpected error occurred\", response.status_code)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstopping an OpenTok archive that is being recorded.", "response": "def stop_archive(self, archive_id):\n \"\"\"\n Stops an OpenTok archive that is being recorded.\n\n Archives automatically stop recording after 90 minutes or when all clients have disconnected\n from the session being archived.\n\n @param [String] archive_id The archive ID of the archive you want to stop recording.\n\n :rtype: The Archive object corresponding to the archive being stopped.\n \"\"\"\n response = requests.post(self.endpoints.archive_url(archive_id) + '/stop', headers=self.json_headers(), proxies=self.proxies, timeout=self.timeout)\n\n if response.status_code < 300:\n return Archive(self, response.json())\n elif response.status_code == 403:\n raise AuthError()\n elif response.status_code == 404:\n raise NotFoundError(\"Archive not found\")\n elif response.status_code == 409:\n raise ArchiveError(\"Archive is not in started state\")\n else:\n raise RequestError(\"An unexpected error occurred\", response.status_code)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete_archive(self, 
archive_id):\n response = requests.delete(self.endpoints.archive_url(archive_id), headers=self.json_headers(), proxies=self.proxies, timeout=self.timeout)\n\n if response.status_code < 300:\n pass\n elif response.status_code == 403:\n raise AuthError()\n elif response.status_code == 404:\n raise NotFoundError(\"Archive not found\")\n else:\n raise RequestError(\"An unexpected error occurred\", response.status_code)", "response": "Deletes an OpenTok archive."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_archive(self, archive_id):\n response = requests.get(self.endpoints.archive_url(archive_id), headers=self.json_headers(), proxies=self.proxies, timeout=self.timeout)\n\n if response.status_code < 300:\n return Archive(self, response.json())\n elif response.status_code == 403:\n raise AuthError()\n elif response.status_code == 404:\n raise NotFoundError(\"Archive not found\")\n else:\n raise RequestError(\"An unexpected error occurred\", response.status_code)", "response": "Gets an Archive object for the given archive ID."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_archives(self, offset=None, count=None, session_id=None):\n params = {}\n if offset is not None:\n params['offset'] = offset\n if count is not None:\n params['count'] = count\n if session_id is not None:\n params['sessionId'] = session_id\n\n endpoint = self.endpoints.archive_url() + \"?\" + urlencode(params)\n\n response = requests.get(\n endpoint, headers=self.json_headers(), proxies=self.proxies, timeout=self.timeout\n )\n\n if response.status_code < 300:\n return ArchiveList(self, response.json())\n elif response.status_code == 403:\n raise AuthError()\n elif response.status_code == 404:\n raise NotFoundError(\"Archive not found\")\n else:\n raise RequestError(\"An unexpected error occurred\", response.status_code)", "response": "Returns an ArchiveList 
object which is an array of archives that are completed and in-progress."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting a list of all the archive entries for this session.", "response": "def list_archives(self, offset=None, count=None, session_id=None):\n \"\"\"\n New method to get the archive list; it's an alternative to 'get_archives()'.\n Both methods exist to maintain backwards compatibility.\n \"\"\"\n return self.get_archives(offset, count, session_id)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef signal(self, session_id, payload, connection_id=None):\n response = requests.post(\n self.endpoints.signaling_url(session_id, connection_id),\n data=json.dumps(payload),\n headers=self.json_headers(),\n proxies=self.proxies,\n timeout=self.timeout\n )\n\n if response.status_code == 204:\n pass\n elif response.status_code == 400:\n raise SignalingError('One of the signal properties - data, type, sessionId or connectionId - is invalid.')\n elif response.status_code == 403:\n raise AuthError('You are not authorized to send the signal. 
Check your authentication credentials.')\n elif response.status_code == 404:\n raise SignalingError('The client specified by the connectionId property is not connected to the session.')\n elif response.status_code == 413:\n raise SignalingError('The type string exceeds the maximum length (128 bytes), or the data string exceeds the maximum size (8 kB).')\n else:\n raise RequestError('An unexpected error occurred', response.status_code)", "response": "Send a signal to all participants in an OpenTok session or to a specific client connected to that session."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a Stream object that contains information about the specified stream.", "response": "def get_stream(self, session_id, stream_id):\n \"\"\"\n Returns a Stream object that contains information about an OpenTok stream:\n\n -id: The stream ID\n -videoType: \"camera\" or \"screen\"\n -name: The stream name (if one was set when the client published the stream)\n -layoutClassList: An array of the layout classes for the stream\n \"\"\"\n endpoint = self.endpoints.get_stream_url(session_id, stream_id)\n response = requests.get(\n endpoint, headers=self.json_headers(), proxies=self.proxies, timeout=self.timeout\n )\n\n if response.status_code == 200:\n return Stream(response.json())\n elif response.status_code == 400:\n raise GetStreamError('Invalid request. This response may indicate that data in your request data is invalid JSON. 
Or it may indicate that you do not pass in a session ID or you passed in an invalid stream ID.')\n elif response.status_code == 403:\n raise AuthError('You passed in an invalid OpenTok API key or JWT token.')\n elif response.status_code == 408:\n raise GetStreamError('You passed in an invalid stream ID.')\n else:\n raise RequestError('An unexpected error occurred', response.status_code)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef list_streams(self, session_id):\n endpoint = self.endpoints.get_stream_url(session_id)\n\n response = requests.get(\n endpoint, headers=self.json_headers(), proxies=self.proxies, timeout=self.timeout\n )\n\n if response.status_code == 200:\n return StreamList(response.json())\n elif response.status_code == 400:\n raise GetStreamError('Invalid request. This response may indicate that data in your request data is invalid JSON. Or it may indicate that you do not pass in a session ID or you passed in an invalid stream ID.')\n elif response.status_code == 403:\n raise AuthError('You passed in an invalid OpenTok API key or JWT token.')\n else:\n raise RequestError('An unexpected error occurred', response.status_code)", "response": "Returns a list of all the streams in a session."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef force_disconnect(self, session_id, connection_id):\n endpoint = self.endpoints.force_disconnect_url(session_id, connection_id)\n response = requests.delete(\n endpoint, headers=self.json_headers(), proxies=self.proxies, timeout=self.timeout\n )\n\n if response.status_code == 204:\n pass\n elif response.status_code == 400:\n raise ForceDisconnectError('One of the arguments - sessionId or connectionId - is invalid.')\n elif response.status_code == 403:\n raise AuthError('You are not authorized to forceDisconnect, check your authentication credentials.')\n elif response.status_code == 404:\n raise 
ForceDisconnectError('The client specified by the connectionId property is not connected to the session.')\n else:\n raise RequestError('An unexpected error occurred', response.status_code)", "response": "Sends a request to force a disconnect from an OpenTok session."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nuse this method to change the layout of videos in an OpenTok archive :param String archive_id: The ID of the archive that will be updated :param String layout_type: The layout type for the archive. Valid values are: 'bestFit', 'custom', 'horizontalPresentation', 'pip' and 'verticalPresentation' :param String stylesheet optional: CSS used to style the custom layout. Specify this only if you set the type property to 'custom'", "response": "def set_archive_layout(self, archive_id, layout_type, stylesheet=None):\n \"\"\"\n Use this method to change the layout of videos in an OpenTok archive\n\n :param String archive_id: The ID of the archive that will be updated\n\n :param String layout_type: The layout type for the archive. Valid values are:\n 'bestFit', 'custom', 'horizontalPresentation', 'pip' and 'verticalPresentation'\n\n :param String stylesheet optional: CSS used to style the custom layout.\n Specify this only if you set the type property to 'custom'\n \"\"\"\n payload = {\n 'type': layout_type,\n }\n\n if layout_type == 'custom':\n if stylesheet is not None:\n payload['stylesheet'] = stylesheet\n\n endpoint = self.endpoints.set_archive_layout_url(archive_id)\n response = requests.put(\n endpoint,\n data=json.dumps(payload),\n headers=self.json_headers(),\n proxies=self.proxies,\n timeout=self.timeout\n )\n\n if response.status_code == 200:\n pass\n elif response.status_code == 400:\n raise ArchiveError('Invalid request. This response may indicate that data in your request data is invalid JSON. 
It may also indicate that you passed in invalid layout options.')\n elif response.status_code == 403:\n raise AuthError('Authentication error.')\n else:\n raise RequestError('OpenTok server error.', response.status_code)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef dial(self, session_id, token, sip_uri, options={}):\n payload = {\n 'sessionId': session_id,\n 'token': token,\n 'sip': {\n 'uri': sip_uri\n }\n }\n\n if 'from' in options:\n payload['sip']['from'] = options['from']\n\n if 'headers' in options:\n payload['sip']['headers'] = options['headers']\n\n if 'auth' in options:\n payload['sip']['auth'] = options['auth']\n\n if 'secure' in options:\n payload['sip']['secure'] = options['secure']\n\n endpoint = self.endpoints.dial_url()\n response = requests.post(\n endpoint,\n data=json.dumps(payload),\n headers=self.json_headers(),\n proxies=self.proxies,\n timeout=self.timeout\n )\n\n if response.status_code == 200:\n return SipCall(response.json())\n elif response.status_code == 400:\n raise SipDialError('Invalid request. 
Invalid session ID.')\n elif response.status_code == 403:\n raise AuthError('Authentication error.')\n elif response.status_code == 404:\n raise SipDialError('The session does not exist.')\n elif response.status_code == 409:\n raise SipDialError(\n 'You attempted to start a SIP call for a session that '\n 'does not use the OpenTok Media Router.')\n else:\n raise RequestError('OpenTok server error.', response.status_code)", "response": "This method connects a SIP call to an OpenTok session."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_stream_class_lists(self, session_id, payload):\n items_payload = {'items': payload}\n\n endpoint = self.endpoints.set_stream_class_lists_url(session_id)\n response = requests.put(\n endpoint,\n data=json.dumps(items_payload),\n headers=self.json_headers(),\n proxies=self.proxies,\n timeout=self.timeout\n )\n\n if response.status_code == 200:\n pass\n elif response.status_code == 400:\n raise SetStreamClassError(\n 'Invalid request. This response may indicate that data in your request data '\n 'is invalid JSON. It may also indicate that you passed in invalid layout options.'\n )\n elif response.status_code == 403:\n raise AuthError('Authentication error.')\n else:\n raise RequestError('OpenTok server error.', response.status_code)", "response": "This method allows you to change the layout classes for OpenTok streams."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nstart a broadcast for the given session_id.", "response": "def start_broadcast(self, session_id, options):\n \"\"\"\n Use this method to start a live streaming for an OpenTok session. This broadcasts the\n session to an HLS (HTTP live streaming) or to RTMP streams. To successfully start\n broadcasting a session, at least one client must be connected to the session. 
You can only\n start live streaming for sessions that use the OpenTok Media Router (with the media mode set\n to routed); you cannot use live streaming with sessions that have the media mode set to\n relayed\n\n :param String session_id: The session ID of the OpenTok session you want to broadcast\n\n :param Dictionary options, with the following properties:\n\n Dictionary 'layout' optional: Specify this to assign the initial layout type for the\n broadcast. Valid values for the layout property are \"bestFit\", \"custom\",\n \"horizontalPresentation\", \"pip\" and \"verticalPresentation\". If you specify a \"custom\"\n layout type, set the stylesheet property of the layout object to the stylesheet.\n If you do not specify an initial layout type, the broadcast stream uses the Best Fit\n layout type\n\n Integer 'maxDuration' optional: The maximum duration for the broadcast, in seconds.\n The broadcast will automatically stop when the maximum duration is reached. You can\n set the maximum duration to a value from 60 (60 seconds) to 36000 (10 hours). The\n default maximum duration is 2 hours (7200 seconds)\n\n Dictionary 'outputs': This object defines the types of broadcast streams you want to\n start (both HLS and RTMP). You can include HLS, RTMP, or both as broadcast streams.\n If you include RTMP streaming, you can specify up to five target RTMP streams. 
For\n each RTMP stream, specify 'serverUrl' (the RTMP server URL), 'streamName' (the stream\n name, such as the YouTube Live stream name or the Facebook stream key), and\n (optionally) 'id' (a unique ID for the stream)\n\n String 'resolution' optional: The resolution of the broadcast, either \"640x480\"\n (SD, the default) or \"1280x720\" (HD)\n\n :rtype: A Broadcast object, which contains information of the broadcast: id, sessionId,\n projectId, createdAt, updatedAt, resolution, status and broadcastUrls\n \"\"\"\n payload = {\n 'sessionId': session_id\n }\n\n payload.update(options)\n\n endpoint = self.endpoints.broadcast_url()\n response = requests.post(\n endpoint,\n data=json.dumps(payload),\n headers=self.json_headers(),\n proxies=self.proxies,\n timeout=self.timeout\n )\n\n if response.status_code == 200:\n return Broadcast(response.json())\n elif response.status_code == 400:\n raise BroadcastError(\n 'Invalid request. This response may indicate that data in your request data is '\n 'invalid JSON. It may also indicate that you passed in invalid layout options. '\n 'Or you have exceeded the limit of five simultaneous RTMP streams for an OpenTok '\n 'session. Or you specified an invalid resolution.')\n elif response.status_code == 403:\n raise AuthError('Authentication error.')\n elif response.status_code == 409:\n raise BroadcastError('The broadcast has already started for the session.')\n else:\n raise RequestError('OpenTok server error.', response.status_code)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef stop_broadcast(self, broadcast_id):\n endpoint = self.endpoints.broadcast_url(broadcast_id, stop=True)\n response = requests.post(\n endpoint,\n headers=self.json_headers(),\n proxies=self.proxies,\n timeout=self.timeout\n )\n\n if response.status_code == 200:\n return Broadcast(response.json())\n elif response.status_code == 400:\n raise BroadcastError(\n 'Invalid request. 
This response may indicate that data in your request '\n 'data is invalid JSON.')\n elif response.status_code == 403:\n raise AuthError('Authentication error.')\n elif response.status_code == 409:\n raise BroadcastError(\n 'The broadcast (with the specified ID) was not found or it has already '\n 'stopped.')\n else:\n raise RequestError('OpenTok server error.', response.status_code)", "response": "Stop a live broadcast of an OpenTok session"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_broadcast(self, broadcast_id):\n endpoint = self.endpoints.broadcast_url(broadcast_id)\n response = requests.get(\n endpoint,\n headers=self.json_headers(),\n proxies=self.proxies,\n timeout=self.timeout\n )\n\n if response.status_code == 200:\n return Broadcast(response.json())\n elif response.status_code == 400:\n raise BroadcastError(\n 'Invalid request. This response may indicate that data in your request '\n 'data is invalid JSON.')\n elif response.status_code == 403:\n raise AuthError('Authentication error.')\n elif response.status_code == 409:\n raise BroadcastError('No matching broadcast found (with the specified ID).')\n else:\n raise RequestError('OpenTok server error.', response.status_code)", "response": "This method returns details on a specific broadcast."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nuses this method to change the layout type of a live streaming broadcast :param String broadcast_id: The ID of the broadcast that will be updated :param String layout_type: The layout type for the broadcast. Valid values are: 'bestFit', 'custom', 'horizontalPresentation', 'pip' and 'verticalPresentation' :param String stylesheet optional: CSS used to style the custom layout. 
Specify this only if you set the type property to 'custom'", "response": "def set_broadcast_layout(self, broadcast_id, layout_type, stylesheet=None):\n \"\"\"\n Use this method to change the layout type of a live streaming broadcast\n\n :param String broadcast_id: The ID of the broadcast that will be updated\n\n :param String layout_type: The layout type for the broadcast. Valid values are:\n 'bestFit', 'custom', 'horizontalPresentation', 'pip' and 'verticalPresentation'\n\n :param String stylesheet optional: CSS used to style the custom layout.\n Specify this only if you set the type property to 'custom'\n \"\"\"\n payload = {\n 'type': layout_type,\n }\n\n if layout_type == 'custom':\n if stylesheet is not None:\n payload['stylesheet'] = stylesheet\n\n endpoint = self.endpoints.broadcast_url(broadcast_id, layout=True)\n response = requests.put(\n endpoint,\n data=json.dumps(payload),\n headers=self.json_headers(),\n proxies=self.proxies,\n timeout=self.timeout\n )\n\n if response.status_code == 200:\n pass\n elif response.status_code == 400:\n raise BroadcastError(\n 'Invalid request. This response may indicate that data in your request data is '\n 'invalid JSON. 
It may also indicate that you passed in invalid layout options.')\n elif response.status_code == 403:\n raise AuthError('Authentication error.')\n else:\n raise RequestError('OpenTok server error.', response.status_code)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nauthenticates using XAuth variant of OAuth.", "response": "def login(self, username, password):\n '''Authenticate using XAuth variant of OAuth.\n\n :param str username: Username or email address for the relevant account\n :param str password: Password for the account\n '''\n response = self.request(\n ACCESS_TOKEN,\n {\n 'x_auth_mode': 'client_auth',\n 'x_auth_username': username,\n 'x_auth_password': password\n },\n returns_json=False\n )\n token = dict(parse_qsl(response['data'].decode()))\n self.token = oauth.Token(\n token['oauth_token'], token['oauth_token_secret'])\n self.oauth_client = oauth.Client(self.consumer, self.token)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef request(self, path, params=None, returns_json=True,\n method='POST', api_version=API_VERSION):\n '''Process a request using the OAuth client's request method.\n\n :param str path: Path fragment to the API endpoint, e.g. 
\"resource/ID\"\n :param dict params: Parameters to pass to request\n :param str method: Optional HTTP method, normally POST for Instapaper\n :param str api_version: Optional alternative API version\n :returns: response headers and body\n :retval: dict\n '''\n time.sleep(REQUEST_DELAY_SECS)\n full_path = '/'.join([BASE_URL, 'api/%s' % api_version, path])\n params = urlencode(params) if params else None\n log.debug('URL: %s', full_path)\n request_kwargs = {'method': method}\n if params:\n request_kwargs['body'] = params\n response, content = self.oauth_client.request(\n full_path, **request_kwargs)\n log.debug('CONTENT: %s ...', content[:50])\n if returns_json:\n try:\n data = json.loads(content)\n if isinstance(data, list) and len(data) == 1:\n # ugly -- API always returns a list even when you expect\n # only one item\n if data[0]['type'] == 'error':\n raise Exception('Instapaper error %d: %s' % (\n data[0]['error_code'],\n data[0]['message'])\n )\n # TODO: PyInstapaperException custom class?\n except ValueError:\n # Instapaper API can be unpredictable/inconsistent, e.g.\n # bookmarks/get_text doesn't return JSON\n data = content\n else:\n data = content\n return {\n 'response': response,\n 'data': data\n }", "response": "Process a request using the OAuth client s request method."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a list of user s bookmarks.", "response": "def get_bookmarks(self, folder='unread', limit=25, have=None):\n \"\"\"Return list of user's bookmarks.\n\n :param str folder: Optional. Possible values are unread (default),\n starred, archive, or a folder_id value.\n :param int limit: Optional. A number between 1 and 500, default 25.\n :param list have: Optional. 
A list of IDs to exclude from results\n :returns: List of user's bookmarks\n :rtype: list\n \"\"\"\n path = 'bookmarks/list'\n params = {'folder_id': folder, 'limit': limit}\n if have:\n have_concat = ','.join(str(id_) for id_ in have)\n params['have'] = have_concat\n response = self.request(path, params)\n items = response['data']\n bookmarks = []\n for item in items:\n if item.get('type') == 'error':\n raise Exception(item.get('message'))\n elif item.get('type') == 'bookmark':\n bookmarks.append(Bookmark(self, **item))\n return bookmarks"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn list of user s folders.", "response": "def get_folders(self):\n \"\"\"Return list of user's folders.\n\n :rtype: list\n \"\"\"\n path = 'folders/list'\n response = self.request(path)\n items = response['data']\n folders = []\n for item in items:\n if item.get('type') == 'error':\n raise Exception(item.get('message'))\n elif item.get('type') == 'folder':\n folders.append(Folder(self, **item))\n return folders"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add(self):\n '''Save an object to Instapaper after instantiating it.\n\n Example::\n\n folder = Folder(instapaper, title='stuff')\n result = folder.add()\n '''\n # TODO validation per object type\n submit_attribs = {}\n for attrib in self.ATTRIBUTES:\n val = getattr(self, attrib, None)\n if val:\n submit_attribs[attrib] = val\n path = '/'.join([self.RESOURCE, 'add'])\n result = self.client.request(path, submit_attribs)\n return result", "response": "Save an object to Instapaper after instantiating it."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nissue a request for an API method whose only param is the obj ID.", "response": "def _simple_action(self, action=None):\n '''Issue a request for an API method whose only param is the obj ID.\n\n :param str action: The name of the action for the resource\n 
:returns: Response from the API\n :rtype: dict\n '''\n if not action:\n raise Exception('No simple action defined')\n path = \"/\".join([self.RESOURCE, action])\n response = self.client.request(\n path, {self.RESOURCE_ID_ATTRIBUTE: self.object_id}\n )\n return response"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_highlights(self):\n '''Get highlights for Bookmark instance.\n\n :return: list of ``Highlight`` objects\n :rtype: list\n '''\n # NOTE: all Instapaper API methods use POST except this one!\n path = '/'.join([self.RESOURCE, str(self.object_id), 'highlights'])\n response = self.client.request(path, method='GET', api_version='1.1')\n items = response['data']\n highlights = []\n for item in items:\n if item.get('type') == 'error':\n raise Exception(item.get('message'))\n elif item.get('type') == 'highlight':\n highlights.append(Highlight(self, **item))\n return highlights", "response": "Get highlights for Bookmark instance."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef stop(self):\n if self.interval is not None:\n self._queue.put_nowait(None)\n self._thread.join()\n self.interval = None", "response": "Tell the sender thread to finish and wait for it to finish."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nbuilding a Graphite message to send and return it as a byte string.", "response": "def build_message(self, metric, value, timestamp, tags={}):\n \"\"\"Build a Graphite message to send and return it as a byte string.\"\"\"\n if not metric or metric.split(None, 1)[0] != metric:\n raise ValueError('\"metric\" must not have whitespace in it')\n if not isinstance(value, (int, float)):\n raise TypeError('\"value\" must be an int or a float, not a {}'.format(\n type(value).__name__))\n\n tags_suffix = ''.join(';{}={}'.format(x[0], x[1]) for x in sorted(tags.items()))\n\n message = u'{}{}{} {} 
{}\\n'.format(\n self.prefix + '.' if self.prefix else '',\n metric,\n tags_suffix,\n value,\n int(round(timestamp))\n )\n message = message.encode('utf-8')\n return message"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsend given metric and value to Graphite host.", "response": "def send(self, metric, value, timestamp=None, tags={}):\n \"\"\"Send given metric and (int or float) value to Graphite host.\n Performs send on background thread if \"interval\" was specified when\n creating this Sender.\n\n If a \"tags\" dict is specified, send the tags to the Graphite host along with the metric.\n \"\"\"\n if timestamp is None:\n timestamp = time.time()\n message = self.build_message(metric, value, timestamp, tags)\n\n if self.interval is None:\n self.send_socket(message)\n else:\n try:\n self._queue.put_nowait(message)\n except queue.Full:\n logger.error('queue full when sending {!r}'.format(message))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef send_socket(self, message):\n if self.log_sends:\n start_time = time.time()\n try:\n self.send_message(message)\n except Exception as error:\n logger.error('error sending message {!r}: {}'.format(message, error))\n else:\n if self.log_sends:\n elapsed_time = time.time() - start_time\n logger.info('sent message {!r} to {}:{} in {:.03f} seconds'.format(\n message, self.host, self.port, elapsed_time))", "response": "Low - level function to send a message to this Sender s socket."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nbackground thread used when Sender is in asynchronous/interval mode.", "response": "def _thread_loop(self):\n \"\"\"Background thread used when Sender is in asynchronous/interval mode.\"\"\"\n last_check_time = time.time()\n messages = []\n while True:\n # Get first message from queue, blocking until the next time we\n # should be sending\n time_since_last_check = 
time.time() - last_check_time\n time_till_next_check = max(0, self.interval - time_since_last_check)\n try:\n message = self._queue.get(timeout=time_till_next_check)\n except queue.Empty:\n pass\n else:\n if message is None:\n # None is the signal to stop this background thread\n break\n messages.append(message)\n\n # Get any other messages currently on queue without blocking,\n # paying attention to None (\"stop thread\" signal)\n should_stop = False\n while True:\n try:\n message = self._queue.get_nowait()\n except queue.Empty:\n break\n if message is None:\n should_stop = True\n break\n messages.append(message)\n if should_stop:\n break\n\n # If it's time to send, send what we've collected\n current_time = time.time()\n if current_time - last_check_time >= self.interval:\n last_check_time = current_time\n for i in range(0, len(messages), self.batch_size):\n batch = messages[i:i + self.batch_size]\n self.send_socket(b''.join(batch))\n messages = []\n\n # Send any final messages before exiting thread\n for i in range(0, len(messages), self.batch_size):\n batch = messages[i:i + self.batch_size]\n self.send_socket(b''.join(batch))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting a resource group name from a VMSS ID string", "response": "def get_rg_from_id(vmss_id):\n '''get a resource group name from a VMSS ID string'''\n rgname = re.search('Groups/(.+?)/providers', vmss_id).group(1)\n print('Resource group: ' + rgname)\n return rgname"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the user agent header. Sends library identification to Azure endpoint.", "response": "def get_user_agent():\n '''User-Agent Header. 
Sends library identification to Azure endpoint.\n '''\n version = pkg_resources.require(\"azurerm\")[0].version\n user_agent = \"python/{} ({}) requests/{} azurerm/{}\".format(\n platform.python_version(),\n platform.platform(),\n requests.__version__,\n version)\n return user_agent"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef do_get(endpoint, access_token):\n '''Do an HTTP GET request and return JSON.\n\n Args:\n endpoint (str): Azure Resource Manager management endpoint.\n access_token (str): A valid Azure authentication token.\n\n Returns:\n HTTP response. JSON body.\n '''\n headers = {\"Authorization\": 'Bearer ' + access_token}\n headers['User-Agent'] = get_user_agent()\n return requests.get(endpoint, headers=headers).json()", "response": "Do an HTTP GET request and return JSON."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef do_get_next(endpoint, access_token):\n '''Do an HTTP GET request, follow the nextLink chain and return JSON.\n\n Args:\n endpoint (str): Azure Resource Manager management endpoint.\n access_token (str): A valid Azure authentication token.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n headers = {\"Authorization\": 'Bearer ' + access_token}\n headers['User-Agent'] = get_user_agent()\n looping = True\n value_list = []\n vm_dict = {}\n while looping:\n get_return = requests.get(endpoint, headers=headers).json()\n if not 'value' in get_return:\n return get_return\n if not 'nextLink' in get_return:\n looping = False\n else:\n endpoint = get_return['nextLink']\n value_list += get_return['value']\n vm_dict['value'] = value_list\n return vm_dict", "response": "Do an HTTP GET request, follow the nextLink chain, and return the combined JSON body."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef do_delete(endpoint, access_token):\n '''Do an HTTP DELETE request and return the response.\n\n Args:\n endpoint (str): Azure Resource Manager management endpoint.\n access_token (str): A valid Azure authentication token.\n\n Returns:\n HTTP response.\n '''\n headers = {\"Authorization\": 'Bearer ' + access_token}\n headers['User-Agent'] = get_user_agent()\n return requests.delete(endpoint, headers=headers)", "response": "Do an HTTP DELETE request and return the HTTP response."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndo an HTTP PATCH request and return JSON body.", "response": "def do_patch(endpoint, body, access_token):\n '''Do an HTTP PATCH request and return JSON.\n\n Args:\n endpoint (str): Azure Resource Manager management endpoint.\n body (str): JSON body of information to patch.\n access_token (str): A valid Azure authentication token.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n headers = {\"content-type\": \"application/json\", \"Authorization\": 'Bearer ' + access_token}\n headers['User-Agent'] = get_user_agent()\n return requests.patch(endpoint, data=body, headers=headers)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef do_post(endpoint, body, access_token):\n '''Do an HTTP POST request and return JSON.\n\n Args:\n endpoint (str): Azure Resource Manager management endpoint.\n body (str): JSON body of information to post.\n access_token (str): A valid Azure authentication token.\n\n Returns:\n HTTP response. JSON body.\n '''\n headers = {\"content-type\": \"application/json\", \"Authorization\": 'Bearer ' + access_token}\n headers['User-Agent'] = get_user_agent()\n return requests.post(endpoint, data=body, headers=headers)", "response": "Do an HTTP POST request and return JSON body."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef do_put(endpoint, body, access_token):\n '''Do an HTTP PUT request and return JSON.\n\n Args:\n endpoint (str): Azure Resource Manager management endpoint.\n body (str): JSON body of information to put.\n access_token (str): A valid Azure authentication token.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n headers = {\"content-type\": \"application/json\", \"Authorization\": 'Bearer ' + access_token}\n headers['User-Agent'] = get_user_agent()\n return requests.put(endpoint, data=body, headers=headers)", "response": "Do an HTTP PUT request and return JSON body."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget Media Services Final Endpoint URL.", "response": "def get_url(access_token, endpoint=ams_rest_endpoint, flag=True):\n '''Get Media Services Final Endpoint URL.\n Args:\n access_token (str): A valid Azure authentication token.\n endpoint (str): Azure Media Services Initial Endpoint.\n flag (bol): flag.\n\n Returns:\n HTTP response. JSON body.\n '''\n return do_ams_get_url(endpoint, access_token, flag)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef do_ams_auth(endpoint, body):\n '''Acquire Media Services Authentication Token.\n Args:\n endpoint (str): Azure Media Services Initial Endpoint.\n body (str): A Content Body.\n\n Returns:\n HTTP response. JSON body.\n '''\n headers = {\"content-type\": \"application/x-www-form-urlencoded\",\n \"Accept\": json_acceptformat}\n return requests.post(endpoint, data=body, headers=headers)", "response": "Acquire Media Services Authentication Token."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndo a AMS HTTP PUT request and return JSON. Args: endpoint (str): Azure Media Services Initial Endpoint. path (str): Azure Media Services Endpoint Path. body (str): Azure Media Services Content Body. access_token (str): A valid Azure authentication token. rformat (str): A required JSON Accept Format. ds_min_version (str): A required DS MIN Version. Returns: HTTP response. 
JSON body.", "response": "def do_ams_put(endpoint, path, body, access_token, rformat=\"json\", ds_min_version=\"3.0;NetFx\"):\n '''Do a AMS HTTP PUT request and return JSON.\n Args:\n endpoint (str): Azure Media Services Initial Endpoint.\n path (str): Azure Media Services Endpoint Path.\n body (str): Azure Media Services Content Body.\n access_token (str): A valid Azure authentication token.\n rformat (str): A required JSON Accept Format.\n ds_min_version (str): A required DS MIN Version.\n\n Returns:\n HTTP response. JSON body.\n '''\n min_ds = dsversion_min\n content_acceptformat = json_acceptformat\n if rformat == \"json_only\":\n min_ds = ds_min_version\n content_acceptformat = json_only_acceptformat\n headers = {\"Content-Type\": content_acceptformat,\n \"DataServiceVersion\": min_ds,\n \"MaxDataServiceVersion\": dsversion_max,\n \"Accept\": json_acceptformat,\n \"Accept-Charset\" : charset,\n \"Authorization\": \"Bearer \" + access_token,\n \"x-ms-version\" : xmsversion}\n response = requests.put(endpoint, data=body, headers=headers, allow_redirects=False)\n # AMS response to the first call can be a redirect,\n # so we handle it here to make it transparent for the caller...\n if response.status_code == 301:\n redirected_url = ''.join([response.headers['location'], path])\n response = requests.put(redirected_url, data=body, headers=headers)\n return response"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndo a AMS HTTP POST request and return JSON body.", "response": "def do_ams_post(endpoint, path, body, access_token, rformat=\"json\", ds_min_version=\"3.0;NetFx\"):\n '''Do a AMS HTTP POST request and return JSON.\n Args:\n endpoint (str): Azure Media Services Initial Endpoint.\n path (str): Azure Media Services Endpoint Path.\n body (str): Azure Media Services Content Body.\n access_token (str): A valid Azure authentication token.\n rformat (str): A required JSON Accept Format.\n ds_min_version (str): A required DS MIN Version.\n\n 
Returns:\n HTTP response. JSON body.\n '''\n min_ds = dsversion_min\n content_acceptformat = json_acceptformat\n acceptformat = json_acceptformat\n if rformat == \"json_only\":\n min_ds = ds_min_version\n content_acceptformat = json_only_acceptformat\n if rformat == \"xml\":\n content_acceptformat = xml_acceptformat\n acceptformat = xml_acceptformat + \",application/xml\"\n headers = {\"Content-Type\": content_acceptformat,\n \"DataServiceVersion\": min_ds,\n \"MaxDataServiceVersion\": dsversion_max,\n \"Accept\": acceptformat,\n \"Accept-Charset\" : charset,\n \"Authorization\": \"Bearer \" + access_token,\n \"x-ms-version\" : xmsversion}\n response = requests.post(endpoint, data=body, headers=headers, allow_redirects=False)\n # AMS response to the first call can be a redirect,\n # so we handle it here to make it transparent for the caller...\n if response.status_code == 301:\n redirected_url = ''.join([response.headers['location'], path])\n response = requests.post(redirected_url, data=body, headers=headers)\n return response"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef do_ams_patch(endpoint, path, body, access_token):\n '''Do a AMS PATCH request and return JSON.\n Args:\n endpoint (str): Azure Media Services Initial Endpoint.\n path (str): Azure Media Services Endpoint Path.\n body (str): Azure Media Services Content Body.\n access_token (str): A valid Azure authentication token.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n headers = {\"Content-Type\": json_acceptformat,\n \"DataServiceVersion\": dsversion_min,\n \"MaxDataServiceVersion\": dsversion_max,\n \"Accept\": json_acceptformat,\n \"Accept-Charset\" : charset,\n \"Authorization\": \"Bearer \" + access_token,\n \"x-ms-version\" : xmsversion}\n response = requests.patch(endpoint, data=body, headers=headers, allow_redirects=False)\n # AMS response to the first call can be a redirect,\n # so we handle it here to make it transparent for the caller...\n if response.status_code == 301:\n redirected_url = ''.join([response.headers['location'], path])\n response = requests.patch(redirected_url, data=body, headers=headers)\n return response", "response": "Do a AMS PATCH request and return JSON body."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndo a AMS DELETE request and return JSON.", "response": "def do_ams_delete(endpoint, path, access_token):\n '''Do a AMS DELETE request and return JSON.\n Args:\n endpoint (str): Azure Media Services Initial Endpoint.\n path (str): Azure Media Services Endpoint Path.\n access_token (str): A valid Azure authentication token.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n headers = {\"DataServiceVersion\": dsversion_min,\n \"MaxDataServiceVersion\": dsversion_max,\n \"Accept\": json_acceptformat,\n \"Accept-Charset\" : charset,\n \"Authorization\": 'Bearer ' + access_token,\n \"x-ms-version\" : xmsversion}\n response = requests.delete(endpoint, headers=headers, allow_redirects=False)\n # AMS response to the first call can be a redirect,\n # so we handle it here to make it transparent for the caller...\n if response.status_code == 301:\n redirected_url = ''.join([response.headers['location'], path])\n response = requests.delete(redirected_url, headers=headers)\n return response"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef do_ams_sto_put(endpoint, body, content_length):\n '''Do a PUT request to the Azure Storage API and return JSON.\n Args:\n endpoint (str): Azure Media Services Initial Endpoint.\n body (str): Azure Media Services Content Body.\n content_length (str): Content_length.\n\n Returns:\n HTTP response. JSON body.\n '''\n headers = {\"Accept\": json_acceptformat,\n \"Accept-Charset\" : charset,\n \"x-ms-blob-type\" : \"BlockBlob\",\n \"x-ms-meta-m1\": \"v1\",\n \"x-ms-meta-m2\": \"v2\",\n \"x-ms-version\" : \"2015-02-21\",\n \"Content-Length\" : str(content_length)}\n return requests.put(endpoint, data=body, headers=headers)", "response": "Do a PUT request to the Azure Storage API and return JSON body."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndo an AMS GET request to retrieve the Final AMS Endpoint and return JSON.", "response": "def do_ams_get_url(endpoint, access_token, flag=True):\n '''Do an AMS GET request to retrieve the Final AMS Endpoint and return JSON.\n Args:\n endpoint (str): Azure Media Services Initial Endpoint.\n access_token (str): A valid Azure authentication token.\n flag (str): A Flag to follow the redirect or not.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n headers = {\"Content-Type\": json_acceptformat,\n \"DataServiceVersion\": dsversion_min,\n \"MaxDataServiceVersion\": dsversion_max,\n \"Accept\": json_acceptformat,\n \"Accept-Charset\" : charset,\n \"Authorization\": \"Bearer \" + access_token,\n \"x-ms-version\" : xmsversion}\n body = ''\n response = requests.get(endpoint, headers=headers, allow_redirects=flag)\n if flag:\n if response.status_code == 301:\n response = requests.get(response.headers['location'], data=body, headers=headers)\n return response"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef handle_bad_update(operation, ret):\n '''report error for bad update'''\n print(\"Error \" + operation)\n sys.exit('Return code: ' + str(ret.status_code) + ' Error: ' + ret.text)", "response": "handle error for bad update"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef extract_code(end_mark, current_str, str_array, line_num):\n '''Extract a multi-line string from a string array, up to a specified end marker.\n\n Args:\n end_mark (str): The end mark string to match for.\n current_str (str): The first line of the string array.\n str_array (list): An array of strings (lines).\n line_num (int): The current offset into the array.\n\n Returns:\n Extended string up to line with end marker.\n '''\n if end_mark not in current_str:\n reached_end = False\n line_num += 1\n while reached_end is False:\n next_line = str_array[line_num]\n if end_mark in next_line:\n reached_end = True\n else:\n line_num += 1\n current_str += next_line\n clean_str = current_str.split(end_mark)[0]\n return {'current_str': clean_str, 'line_num': line_num}", "response": "Extracts a multi - line string from a string array up to a specified end marker."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef process_file(pyfile_name):\n '''Process a 
Python source file with Google style docstring comments.\n\n Reads file header comment, function definitions, function docstrings.\n Returns dictionary encapsulation for subsequent writing.\n\n Args:\n pyfile_name (str): file name to read.\n\n Returns:\n Dictionary object containing summary comment, with a list of entries for each function.\n '''\n print('Processing file: ' + pyfile_name)\n\n # load the source file\n with open(pyfile_name) as fpyfile:\n pyfile_str = fpyfile.readlines()\n\n # meta-doc for a source file\n file_dict = {'source_file': pyfile_name.replace('\\\\', '/')}\n\n # get file summary line at the top of the file\n if pyfile_str[0].startswith(\"'''\"):\n file_dict['summary_comment'] = pyfile_str[0][:-1].strip(\"'\")\n else:\n file_dict['summary_comment'] = pyfile_name\n\n file_dict['functions'] = []\n # find every function definition\n for line in pyfile_str:\n # process definition\n if line.startswith('def '):\n line_num = pyfile_str.index(line)\n fn_def = line[4:]\n fn_name = fn_def.split('(')[0]\n function_info = {'name': fn_name}\n extract = extract_code(':', fn_def, pyfile_str, line_num)\n function_info['definition'] = extract['current_str']\n # process docstring\n line_num = extract['line_num'] + 1\n doc_line = pyfile_str[line_num]\n if doc_line.startswith(\" '''\"):\n comment_str = doc_line[7:]\n extract = extract_code(\n \"'''\", comment_str, pyfile_str, line_num)\n function_info['comments'] = extract['current_str']\n file_dict['functions'].append(function_info)\n return file_dict", "response": "Process a Python source file with Google style docstring comments."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a markdown format documentation file.", "response": "def process_output(meta_file, outfile_name, code_links):\n '''Create a markdown format documentation file.\n\n Args:\n meta_file (dict): Dictionary with documentation metadata.\n outfile_name (str): Markdown file to write to.\n '''\n\n # Markdown 
title line\n doc_str = '# ' + meta_file['header'] + '\\n'\n doc_str += 'Generated by [py2md](https://github.com/gbowerman/py2md) on '\n doc_str += strftime(\"%Y-%m-%d %H:%M:%S \") + '\\n\\n'\n\n # Create a table of contents if more than one module (i.e. more than one\n # source file)\n if len(meta_file['modules']) > 1:\n doc_str += \"## Contents\\n\"\n chapter_num = 1\n for meta_doc in meta_file['modules']:\n chapter_name = meta_doc['summary_comment']\n chapter_link = chapter_name.lstrip().replace('.', '').replace(' ', '-').lower()\n doc_str += str(chapter_num) + \\\n '. [' + chapter_name + '](#' + chapter_link + ')\\n'\n chapter_num += 1\n\n\n # Document each meta-file\n for meta_doc in meta_file['modules']:\n doc_str += '## ' + meta_doc['summary_comment'] + '\\n'\n doc_str += '[source file](' + meta_doc['source_file'] + ')' + '\\n'\n for function_info in meta_doc['functions']:\n doc_str += '### ' + function_info['name'] + '\\n'\n doc_str += function_info['definition'] + '\\n\\n'\n if 'comments' in function_info:\n doc_str += '```\\n' + function_info['comments'] + '\\n```\\n\\n'\n\n # write the markdown to file\n print('Writing file: ' + outfile_name)\n out_file = open(outfile_name, 'w')\n out_file.write(doc_str)\n out_file.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_container_service(access_token, subscription_id, resource_group, service_name, \\\n agent_count, agent_vm_size, agent_dns, master_dns, admin_user, location, public_key=None,\\\n master_count=3, orchestrator='DCOS', app_id=None, app_secret=None, admin_password=None, \\\n ostype='Linux'):\n '''Create a new container service - include app_id and app_secret if using Kubernetes.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n service_name (str): Name of container service.\n agent_count (int): The 
number of agent VMs.\n agent_vm_size (str): VM size of agents, e.g. Standard_D1_v2.\n agent_dns (str): A unique DNS string for the agent DNS.\n master_dns (str): A unique string for the master DNS.\n admin_user (str): Admin user name.\n location (str): Azure data center location, e.g. westus.\n public_key (str): RSA public key (utf-8).\n master_count (int): Number of master VMs.\n orchestrator (str): Container orchestrator. E.g. DCOS, Kubernetes.\n app_id (str): Application ID for Kubernetes.\n app_secret (str): Application secret for Kubernetes.\n admin_password (str): Admin user password.\n ostype (str): Operating system. Windows or Linux.\n\n Returns:\n HTTP response. Container service JSON model.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', resource_group,\n '/providers/Microsoft.ContainerService/ContainerServices/', service_name,\n '?api-version=', ACS_API])\n acs_body = {'location': location}\n properties = {'orchestratorProfile': {'orchestratorType': orchestrator}}\n properties['masterProfile'] = {'count': master_count, 'dnsPrefix': master_dns}\n ap_profile = {'name': 'AgentPool1'}\n ap_profile['count'] = agent_count\n ap_profile['vmSize'] = agent_vm_size\n ap_profile['dnsPrefix'] = agent_dns\n properties['agentPoolProfiles'] = [ap_profile]\n if ostype == 'Linux':\n linux_profile = {'adminUsername': admin_user}\n linux_profile['ssh'] = {'publicKeys': [{'keyData': public_key}]}\n properties['linuxProfile'] = linux_profile\n else: # Windows\n windows_profile = {'adminUsername': admin_user, 'adminPassword': admin_password}\n properties['windowsProfile'] = windows_profile\n if orchestrator == 'Kubernetes':\n sp_profile = {'ClientID': app_id}\n sp_profile['Secret'] = app_secret\n properties['servicePrincipalProfile'] = sp_profile\n acs_body['properties'] = properties\n body = json.dumps(acs_body)\n return do_put(endpoint, body, access_token)", "response": "Create a new container service."} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete_container_service(access_token, subscription_id, resource_group, service_name):\n '''Delete a named container service.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n service_name (str): Name of container service.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', resource_group,\n '/providers/Microsoft.ContainerService/ContainerServices/', service_name,\n '?api-version=', ACS_API])\n return do_delete(endpoint, access_token)", "response": "Delete a named container service."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_container_service(access_token, subscription_id, resource_group, service_name):\n '''Get details about an Azure Container Service\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n service_name (str): Name of container service.\n\n Returns:\n HTTP response. 
JSON model.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', resource_group,\n '/providers/Microsoft.ContainerService/ContainerServices/', service_name,\n '?api-version=', ACS_API])\n return do_get(endpoint, access_token)", "response": "Get details about an Azure Container Service"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef list_container_services(access_token, subscription_id, resource_group):\n '''List the container services in a resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n\n Returns:\n HTTP response. JSON model.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', resource_group,\n '/providers/Microsoft.ContainerService/ContainerServices',\n '?api-version=', ACS_API])\n return do_get(endpoint, access_token)", "response": "List the container services in a resource group."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef list_container_services_sub(access_token, subscription_id):\n '''List the container services in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. 
JSON model.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.ContainerService/ContainerServices',\n '?api-version=', ACS_API])\n return do_get(endpoint, access_token)", "response": "List the container services in a subscription."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_storage_account(access_token, subscription_id, rgname, account_name, location,\n storage_type='Standard_LRS'):\n '''Create a new storage account in the named resource group, with the named location.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n account_name (str): Name of the new storage account.\n location (str): Azure data center location. E.g. westus.\n storage_type (str): Premium or Standard, local or globally redundant.\n Defaults to Standard_LRS.\n\n Returns:\n HTTP response. 
JSON body of storage account properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '/providers/Microsoft.Storage/storageAccounts/', account_name,\n '?api-version=', STORAGE_API])\n\n storage_body = {'location': location}\n storage_body['sku'] = {'name': storage_type}\n storage_body['kind'] = 'Storage'\n body = json.dumps(storage_body)\n return do_put(endpoint, body, access_token)", "response": "Create a new storage account in the named resource group with the named location."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndeleting a storage account in the specified resource group.", "response": "def delete_storage_account(access_token, subscription_id, rgname, account_name):\n '''Delete a storage account in the specified resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n account_name (str): Name of the new storage account.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '/providers/Microsoft.Storage/storageAccounts/', account_name,\n '?api-version=', STORAGE_API])\n return do_delete(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the properties for the named storage account.", "response": "def get_storage_account(access_token, subscription_id, rgname, account_name):\n '''Get the properties for the named storage account.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n account_name (str): Name of the new storage account.\n\n Returns:\n HTTP response. 
JSON body of storage account properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '/providers/Microsoft.Storage/storageAccounts/', account_name,\n '?api-version=', STORAGE_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_storage_account_keys(access_token, subscription_id, rgname, account_name):\n '''Get the access keys for the specified storage account.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n account_name (str): Name of the storage account.\n\n Returns:\n HTTP response. JSON body of storage account keys.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '/providers/Microsoft.Storage/storageAccounts/', account_name,\n '/listKeys',\n '?api-version=', STORAGE_API])\n return do_post(endpoint, '', access_token)", "response": "Get the access keys for the specified storage account."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_storage_usage(access_token, subscription_id, location):\n '''Returns storage usage and quota information for the specified location.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n location (str): Azure data center location. E.g. westus.\n\n Returns:\n HTTP response. 
JSON body of storage account usage.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Storage/locations/', location,\n '/usages',\n '?api-version=', STORAGE_API])\n return do_get(endpoint, access_token)", "response": "Returns the storage usage and quota information for the specified location."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef list_storage_accounts_rg(access_token, subscription_id, rgname):\n '''List the storage accounts in the specified resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n\n Returns:\n HTTP response. JSON body list of storage accounts.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '/providers/Microsoft.Storage/storageAccounts',\n '?api-version=', STORAGE_API])\n return do_get(endpoint, access_token)", "response": "List the storage accounts in the specified resource group."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_storage_accounts_sub(access_token, subscription_id):\n '''List the storage accounts in the specified subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. 
JSON body list of storage accounts.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Storage/storageAccounts',\n '?api-version=', STORAGE_API])\n return do_get(endpoint, access_token)", "response": "List the storage accounts in the specified subscription."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef rgfromid(idstr):\n '''get resource group name from the id string'''\n rgidx = idstr.find('resourceGroups/')\n providx = idstr.find('/providers/')\n return idstr[rgidx + 15:providx]", "response": "get resource group name from the id string"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef main():\n '''main routine'''\n # Load Azure app defaults\n try:\n with open('azurermconfig.json') as config_file:\n config_data = json.load(config_file)\n except FileNotFoundError:\n sys.exit('Error: Expecting azurermconfig.json in current folder')\n\n tenant_id = config_data['tenantId']\n app_id = config_data['appId']\n app_secret = config_data['appSecret']\n subscription_id = config_data['subscriptionId']\n\n access_token = azurerm.get_access_token(tenant_id, app_id, app_secret)\n\n # list storage accounts per sub\n sa_list = azurerm.list_storage_accounts_sub(access_token, subscription_id)\n # print(sa_list)\n for sta in sa_list['value']:\n print(sta['name'] + ', ' + sta['properties']['primaryLocation'] + ', '\n + rgfromid(sta['id']))\n\n # get storage account quota\n quota_info = azurerm.get_storage_usage(access_token, subscription_id)\n used = quota_info['value'][0]['currentValue']\n limit = quota_info[\"value\"][0][\"limit\"]\n print('\\nUsing ' + str(used) + ' accounts out of ' + str(limit) + '.')", "response": "main routine for the AZURM."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
check_media_service_name_availability(access_token, subscription_id, msname):\n '''Check media service name availability.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n msname (str): media service name.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/microsoft.media/CheckNameAvailability?',\n 'api-version=', MEDIA_API])\n ms_body = {'name': msname}\n ms_body['type'] = 'mediaservices'\n body = json.dumps(ms_body)\n return do_post(endpoint, body, access_token)", "response": "Check media service name availability."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a media service in a resource group.", "response": "def create_media_service_rg(access_token, subscription_id, rgname, location, stoname, msname):\n '''Create a media service in a resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n location (str): Azure data center location. E.g. westus.\n stoname (str): Azure storage account name.\n msname (str): Media service name.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', rgname,\n '/providers/microsoft.media/mediaservices/', msname,\n '?api-version=', MEDIA_API])\n ms_body = {'name': msname}\n ms_body['location'] = location\n sub_id_str = '/subscriptions/' + subscription_id + '/resourceGroups/' + rgname + \\\n '/providers/Microsoft.Storage/storageAccounts/' + stoname\n storage_account = {'id': sub_id_str}\n storage_account['isPrimary'] = True\n properties = {'storageAccounts': [storage_account]}\n ms_body['properties'] = properties\n body = json.dumps(ms_body)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeletes a media service in a resource group.", "response": "def delete_media_service_rg(access_token, subscription_id, rgname, msname):\n '''Delete a media service.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n msname (str): Media service name.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', rgname,\n '/providers/microsoft.media/mediaservices/', msname,\n '?api-version=', MEDIA_API])\n return do_delete(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef list_media_endpoint_keys(access_token, subscription_id, rgname, msname):\n '''list the media endpoint keys in a media service\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n msname (str): Media service name.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', rgname,\n '/providers/microsoft.media/',\n '/mediaservices/', msname,\n '/listKeys?api-version=', MEDIA_API])\n return do_get(endpoint, access_token)", "response": "list the media endpoint keys in a media service"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_media_services(access_token, subscription_id):\n '''List the media services in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/microsoft.media/mediaservices?api-version=', MEDIA_API])\n return do_get(endpoint, access_token)", "response": "List the media services in a subscription."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef list_media_services_rg(access_token, subscription_id, rgname):\n '''List the media services in a resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n\n Returns:\n HTTP response. JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', rgname,\n '/providers/microsoft.media/mediaservices?api-version=', MEDIA_API])\n return do_get(endpoint, access_token)", "response": "List the media services in a resource group."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_ams_access_token(accountname, accountkey):\n '''Get Media Services Authentication Token.\n\n Args:\n accountname (str): Azure Media Services account name.\n accountkey (str): Azure Media Services Key.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n accountkey_encoded = urllib.parse.quote(accountkey, safe='')\n body = \"grant_type=client_credentials&client_id=\" + accountname + \\\n\t\"&client_secret=\" + accountkey_encoded + \"&scope=urn%3aWindowsAzureMediaServices\"\n return do_ams_auth(ams_auth_endpoint, body)", "response": "Get Media Services Authentication Token."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_media_asset(access_token, name, options=\"0\"):\n '''Create Media Service Asset.\n\n Args:\n access_token (str): A valid Azure authentication token.\n name (str): Media Service Asset Name.\n options (str): Media Service Options.\n\n Returns:\n HTTP response. JSON body.\n '''\n path = '/Assets'\n endpoint = ''.join([ams_rest_endpoint, path])\n body = '{\"Name\": \"' + name + '\", \"Options\": \"' + str(options) + '\"}'\n return do_ams_post(endpoint, path, body, access_token)", "response": "Create Media Service Asset."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates Media Service Asset File.", "response": "def create_media_assetfile(access_token, parent_asset_id, name, is_primary=\"false\", \\\nis_encrypted=\"false\", encryption_scheme=\"None\", encryptionkey_id=\"None\"):\n '''Create Media Service Asset File.\n\n Args:\n access_token (str): A valid Azure authentication token.\n parent_asset_id (str): Media Service Parent Asset ID.\n name (str): Media Service Asset Name.\n is_primary (str): Media Service Primary Flag.\n is_encrypted (str): Media Service Encryption Flag.\n encryption_scheme (str): Media Service Encryption Scheme.\n encryptionkey_id (str): Media Service Encryption Key ID.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n path = '/Files'\n endpoint = ''.join([ams_rest_endpoint, path])\n if encryption_scheme == \"StorageEncryption\":\n body = '{ \\\n\t\t\t\"IsEncrypted\": \"' + is_encrypted + '\", \\\n\t\t\t\"EncryptionScheme\": \"' + encryption_scheme + '\", \\\n\t\t\t\"EncryptionVersion\": \"' + \"1.0\" + '\", \\\n\t\t\t\"EncryptionKeyId\": \"' + encryptionkey_id + '\", \\\n\t\t\t\"IsPrimary\": \"' + is_primary + '\", \\\n\t\t\t\"MimeType\": \"video/mp4\", \\\n\t\t\t\"Name\": \"' + name + '\", \\\n\t\t\t\"ParentAssetId\": \"' + parent_asset_id + '\" \\\n\t\t}'\n else:\n body = '{ \\\n\t\t\t\"IsPrimary\": \"' + is_primary + '\", \\\n\t\t\t\"MimeType\": \"video/mp4\", \\\n\t\t\t\"Name\": \"' + name + '\", \\\n\t\t\t\"ParentAssetId\": \"' + parent_asset_id + '\" \\\n\t\t}'\n return do_ams_post(endpoint, path, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_sas_locator(access_token, asset_id, accesspolicy_id):\n '''Create Media Service SAS Locator.\n\n Args:\n access_token (str): A valid Azure authentication token.\n asset_id (str): Media Service Asset ID.\n accesspolicy_id (str): Media Service Access Policy ID.\n\n Returns:\n HTTP response. JSON body.\n '''\n path = '/Locators'\n endpoint = ''.join([ams_rest_endpoint, path])\n body = '{ \\\n\t\t\"AccessPolicyId\":\"' + accesspolicy_id + '\", \\\n\t\t\"AssetId\":\"' + asset_id + '\", \\\n\t\t\"Type\":1 \\\n\t}'\n return do_ams_post(endpoint, path, body, access_token)", "response": "Create Media Service SAS Locator."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_asset_delivery_policy(access_token, ams_account, key_delivery_url):\n '''Create Media Service Asset Delivery Policy.\n\n Args:\n access_token (str): A valid Azure authentication token.\n ams_account (str): Media Service Account.\n key_delivery_url (str): Key Delivery URL.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n path = '/AssetDeliveryPolicies'\n endpoint = ''.join([ams_rest_endpoint, path])\n body = '{ \\\n\t\t\"Name\":\"AssetDeliveryPolicy\", \\\n\t\t\"AssetDeliveryProtocol\":\"4\", \\\n\t\t\"AssetDeliveryPolicyType\":\"3\", \\\n\t\t\"AssetDeliveryConfiguration\":\"[{ \\\n\t\t\t\\\\\"Key\\\\\":\\\\\"2\\\\\", \\\n\t\t\t\\\\\"Value\\\\\":\\\\\"' + key_delivery_url + '\\\\\"}]\" \\\n\t}'\n return do_ams_post(endpoint, path, body, access_token)", "response": "Create Media Service Asset Delivery Policy."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate Media Service Content Key Authorization Policy.", "response": "def create_contentkey_authorization_policy(access_token, content):\n '''Create Media Service Content Key Authorization Policy.\n\n Args:\n access_token (str): A valid Azure authentication token.\n content (str): Content Payload.\n\n Returns:\n HTTP response. JSON body.\n '''\n path = '/ContentKeyAuthorizationPolicies'\n endpoint = ''.join([ams_rest_endpoint, path])\n body = content\n return do_ams_post(endpoint, path, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_contentkey_authorization_policy_options(access_token, key_delivery_type=\"2\", \\\nname=\"HLS Open Authorization Policy\", key_restriction_type=\"0\"):\n '''Create Media Service Content Key Authorization Policy Options.\n\n Args:\n access_token (str): A valid Azure authentication token.\n key_delivery_type (str): A Media Service Content Key Authorization Policy Delivery Type.\n name (str): A Media Service Content Key Authorization Policy Name.\n key_restriction_type (str): A Media Service Content Key Restriction Type.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n path = '/ContentKeyAuthorizationPolicyOptions'\n endpoint = ''.join([ams_rest_endpoint, path])\n body = '{ \\\n\t\t\"Name\":\"policy\",\\\n\t\t\"KeyDeliveryType\":\"' + key_delivery_type + '\", \\\n\t\t\"KeyDeliveryConfiguration\":\"\", \\\n\t\t\t\"Restrictions\":[{ \\\n\t\t\t\"Name\":\"' + name + '\", \\\n\t\t\t\"KeyRestrictionType\":\"' + key_restriction_type + '\", \\\n\t\t\t\"Requirements\":null \\\n\t\t}] \\\n\t}'\n return do_ams_post(endpoint, path, body, access_token, \"json_only\")", "response": "Create Media Service Content Key Authorization Policy Options."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_ondemand_streaming_locator(access_token, encoded_asset_id, pid, starttime=None):\n '''Create Media Service OnDemand Streaming Locator.\n\n Args:\n access_token (str): A valid Azure authentication token.\n encoded_asset_id (str): A Media Service Encoded Asset ID.\n pid (str): A Media Service Encoded PID.\n starttime (str): A Media Service Starttime.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n path = '/Locators'\n endpoint = ''.join([ams_rest_endpoint, path])\n if starttime is None:\n body = '{ \\\n\t\t\t\"AccessPolicyId\":\"' + pid + '\", \\\n\t\t\t\"AssetId\":\"' + encoded_asset_id + '\", \\\n\t\t\t\"Type\": \"2\" \\\n }'\n else:\n body = '{ \\\n\t\t\t\"AccessPolicyId\":\"' + pid + '\", \\\n\t\t\t\"AssetId\":\"' + encoded_asset_id + '\", \\\n\t\t\t\"StartTime\":\"' + str(starttime) + '\", \\\n\t\t\t\"Type\": \"2\" \\\n\t\t}'\n return do_ams_post(endpoint, path, body, access_token, \"json_only\")", "response": "Create Media Service OnDemand Streaming Locator."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_asset_accesspolicy(access_token, name, duration, permission=\"1\"):\n '''Create Media Service Asset Access Policy.\n\n Args:\n access_token (str): A valid Azure authentication token.\n name (str): A Media Service Asset Access Policy Name.\n duration (str): A Media Service duration.\n permission (str): A Media Service permission.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n path = '/AccessPolicies'\n endpoint = ''.join([ams_rest_endpoint, path])\n body = '{ \\\n\t\t\"Name\": \"' + str(name) + '\", \\\n\t\t\"DurationInMinutes\": \"' + duration + '\", \\\n\t\t\"Permissions\": \"' + permission + '\" \\\n\t}'\n return do_ams_post(endpoint, path, body, access_token)", "response": "Create Media Service Asset Access Policy."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new Media Service Streaming Endpoint.", "response": "def create_streaming_endpoint(access_token, name, description=\"New Streaming Endpoint\", \\\nscale_units=\"1\"):\n '''Create Media Service Streaming Endpoint.\n\n Args:\n access_token (str): A valid Azure authentication token.\n name (str): A Media Service Streaming Endpoint Name.\n description (str): A Media Service Streaming Endpoint Description.\n scale_units (str): A Media Service Scale Units Number.\n\n Returns:\n HTTP response. JSON body.\n '''\n path = '/StreamingEndpoints'\n endpoint = ''.join([ams_rest_endpoint, path])\n body = '{ \\\n\t\t\"Id\":null, \\\n\t\t\"Name\":\"' + name + '\", \\\n\t\t\"Description\":\"' + description + '\", \\\n\t\t\"Created\":\"0001-01-01T00:00:00\", \\\n\t\t\"LastModified\":\"0001-01-01T00:00:00\", \\\n\t\t\"State\":null, \\\n\t\t\"HostName\":null, \\\n\t\t\"ScaleUnits\":\"' + scale_units + '\", \\\n\t\t\"CrossSiteAccessPolicies\":{ \\\n\t\t\t\"ClientAccessPolicy\":\"\", \\\n\t\t\t\"CrossDomainPolicy\":\"\" \\\n\t\t} \\\n\t}'\n return do_ams_post(endpoint, path, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef scale_streaming_endpoint(access_token, streaming_endpoint_id, scale_units):\n '''Scale Media Service Streaming Endpoint.\n\n Args:\n access_token (str): A valid Azure authentication token.\n streaming_endpoint_id (str): A Media Service Streaming Endpoint ID.\n scale_units (str): A Media Service Scale Units Number.\n\n Returns:\n HTTP 
response. JSON body.\n '''\n path = '/StreamingEndpoints'\n full_path = ''.join([path, \"('\", streaming_endpoint_id, \"')\", \"/Scale\"])\n full_path_encoded = urllib.parse.quote(full_path, safe='')\n endpoint = ''.join([ams_rest_endpoint, full_path_encoded])\n body = '{\"scaleUnits\": \"' + str(scale_units) + '\"}'\n return do_ams_post(endpoint, full_path_encoded, body, access_token)", "response": "Scale Media Service Streaming Endpoint."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlinks Media Service Asset and Content Key.", "response": "def link_asset_content_key(access_token, asset_id, encryptionkey_id, ams_redirected_rest_endpoint):\n '''Link Media Service Asset and Content Key.\n\n Args:\n access_token (str): A valid Azure authentication token.\n asset_id (str): A Media Service Asset ID.\n encryptionkey_id (str): A Media Service Encryption Key ID.\n ams_redirected_rest_endpoint (str): A Media Service Redirected Endpoint.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n path = '/Assets'\n full_path = ''.join([path, \"('\", asset_id, \"')\", \"/$links/ContentKeys\"])\n full_path_encoded = urllib.parse.quote(full_path, safe='')\n endpoint = ''.join([ams_rest_endpoint, full_path_encoded])\n uri = ''.join([ams_redirected_rest_endpoint, 'ContentKeys', \"('\", encryptionkey_id, \"')\"])\n body = '{\"uri\": \"' + uri + '\"}'\n return do_ams_post(endpoint, full_path_encoded, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nlinks Media Service Content Key Authorization Policy.", "response": "def link_contentkey_authorization_policy(access_token, ckap_id, options_id, \\\nams_redirected_rest_endpoint):\n '''Link Media Service Content Key Authorization Policy.\n\n Args:\n access_token (str): A valid Azure authentication token.\n ckap_id (str): A Media Service Asset Content Key Authorization Policy ID.\n options_id (str): A Media Service Content Key Authorization Policy Options ID.\n ams_redirected_rest_endpoint (str): A Media Service Redirected Endpoint.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n path = '/ContentKeyAuthorizationPolicies'\n full_path = ''.join([path, \"('\", ckap_id, \"')\", \"/$links/Options\"])\n full_path_encoded = urllib.parse.quote(full_path, safe='')\n endpoint = ''.join([ams_rest_endpoint, full_path_encoded])\n uri = ''.join([ams_redirected_rest_endpoint, 'ContentKeyAuthorizationPolicyOptions', \\\n \"('\", options_id, \"')\"])\n body = '{\"uri\": \"' + uri + '\"}'\n return do_ams_post(endpoint, full_path_encoded, body, access_token, \"json_only\", \"1.0;NetFx\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_authorization_policy(access_token, ck_id, oid):\n '''Add Media Service Authorization Policy.\n\n Args:\n access_token (str): A valid Azure authentication token.\n ck_id (str): A Media Service Asset Content Key ID.\n oid (str): A Media Service OID.\n\n Returns:\n HTTP response. JSON body.\n '''\n path = '/ContentKeys'\n body = '{\"AuthorizationPolicyId\":\"' + oid + '\"}'\n return helper_add(access_token, ck_id, path, body)", "response": "Add Media Service Authorization Policy."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate Media Service Asset File.", "response": "def update_media_assetfile(access_token, parent_asset_id, asset_id, content_length, name):\n '''Update Media Service Asset File.\n\n Args:\n access_token (str): A valid Azure authentication token.\n parent_asset_id (str): A Media Service Asset Parent Asset ID.\n asset_id (str): A Media Service Asset Asset ID.\n content_length (str): A Media Service Asset Content Length.\n name (str): A Media Service Asset name.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n path = '/Files'\n full_path = ''.join([path, \"('\", asset_id, \"')\"])\n full_path_encoded = urllib.parse.quote(full_path, safe='')\n endpoint = ''.join([ams_rest_endpoint, full_path_encoded])\n body = '{ \\\n\t\t\"ContentFileSize\": \"' + str(content_length) + '\", \\\n\t\t\"Id\": \"' + asset_id + '\", \\\n\t\t\"MimeType\": \"video/mp4\", \\\n\t\t\"Name\": \"' + name + '\", \\\n\t\t\"ParentAssetId\": \"' + parent_asset_id + '\" \\\n\t}'\n return do_ams_patch(endpoint, full_path_encoded, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets Media Services Key Delivery URL.", "response": "def get_key_delivery_url(access_token, ck_id, key_type):\n '''Get Media Services Key Delivery URL.\n\n Args:\n access_token (str): A valid Azure authentication token.\n ck_id (str): A Media Service Content Key ID.\n key_type (str): A Media Service key Type.\n\n Returns:\n HTTP response. JSON body.\n '''\n path = '/ContentKeys'\n full_path = ''.join([path, \"('\", ck_id, \"')\", \"/GetKeyDeliveryUrl\"])\n endpoint = ''.join([ams_rest_endpoint, full_path])\n body = '{\"keyDeliveryType\": \"' + key_type + '\"}'\n return do_ams_post(endpoint, full_path, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget Media Service Encode Mezzanine Asset.", "response": "def encode_mezzanine_asset(access_token, processor_id, asset_id, output_assetname, json_profile):\n '''Get Media Service Encode Mezzanine Asset.\n\n Args:\n access_token (str): A valid Azure authentication token.\n processor_id (str): A Media Service Processor ID.\n asset_id (str): A Media Service Asset ID.\n output_assetname (str): A Media Service Asset Name.\n json_profile (str): A Media Service JSON Profile.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n path = '/Jobs'\n endpoint = ''.join([ams_rest_endpoint, path])\n assets_path = ''.join([\"/Assets\", \"('\", asset_id, \"')\"])\n assets_path_encoded = urllib.parse.quote(assets_path, safe='')\n endpoint_assets = ''.join([ams_rest_endpoint, assets_path_encoded])\n body = '{ \\\n \t\t\"Name\":\"' + output_assetname + '\", \\\n \t\t\"InputMediaAssets\":[{ \\\n \t \t\t\"__metadata\":{ \\\n \t \t\t\t\"uri\":\"' + endpoint_assets + '\" \\\n \t \t\t} \\\n \t \t}], \\\n \t\t\"Tasks\":[{ \\\n \t \t\t\"Configuration\":\\'' + json_profile + '\\', \\\n \t \t\t\"MediaProcessorId\":\"' + processor_id + '\", \\\n \t \t\t\"TaskBody\":\"JobInputAsset(0)JobOutputAsset(0)\" \\\n \t\t}] \\\n\t}'\n return do_ams_post(endpoint, path, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nlists all operations involved in a given deployment.", "response": "def list_deployment_operations(access_token, subscription_id, rg_name, deployment_name):\n '''List all operations involved in a given deployment.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rg_name (str): Azure resource group name.\n deployment_name (str): Deployment name.\n\n Returns:\n HTTP response. 
JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rg_name,\n '/providers/Microsoft.Resources/deployments/', deployment_name,\n '/operations',\n '?api-version=', BASE_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_lb_with_nat_pool(access_token, subscription_id, resource_group, lb_name, public_ip_id,\n fe_start_port, fe_end_port, backend_port, location):\n '''Create a load balancer with inbound NAT pools.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n lb_name (str): Name of the new load balancer.\n public_ip_id (str): Public IP address resource id.\n fe_start_port (int): Start of front-end port range.\n fe_end_port (int): End of front-end port range.\n backend_port (int): Back end port for VMs.\n location (str): Azure data center location. E.g. westus.\n\n Returns:\n HTTP response. 
Load Balancer JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/loadBalancers/', lb_name,\n '?api-version=', NETWORK_API])\n lb_body = {'location': location}\n frontendipcconfig = {'name': 'LoadBalancerFrontEnd'}\n fipc_properties = {'publicIPAddress': {'id': public_ip_id}}\n frontendipcconfig['properties'] = fipc_properties\n properties = {'frontendIPConfigurations': [frontendipcconfig]}\n properties['backendAddressPools'] = [{'name': 'bepool'}]\n inbound_natpool = {'name': 'natpool'}\n lbfe_id = '/subscriptions/' + subscription_id + '/resourceGroups/' + resource_group + \\\n '/providers/Microsoft.Network/loadBalancers/' + lb_name + \\\n '/frontendIPConfigurations/LoadBalancerFrontEnd'\n ibnp_properties = {'frontendIPConfiguration': {'id': lbfe_id}}\n ibnp_properties['protocol'] = 'tcp'\n ibnp_properties['frontendPortRangeStart'] = fe_start_port\n ibnp_properties['frontendPortRangeEnd'] = fe_end_port\n ibnp_properties['backendPort'] = backend_port\n inbound_natpool['properties'] = ibnp_properties\n properties['inboundNatPools'] = [inbound_natpool]\n lb_body['properties'] = properties\n body = json.dumps(lb_body)\n return do_put(endpoint, body, access_token)", "response": "Create a load balancer with inbound NAT pools."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_nic(access_token, subscription_id, resource_group, nic_name, public_ip_id, subnet_id,\n location, nsg_id=None):\n '''Create a network interface with an associated public ip address.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n nic_name (str): Name of the new NIC.\n public_ip_id (str): Public IP address resource id.\n subnet_id (str): Subnet resource id.\n location (str): Azure data center location. 
E.g. westus.\n nsg_id (str): Optional Network Security Group resource id.\n\n Returns:\n HTTP response. NIC JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/networkInterfaces/', nic_name,\n '?api-version=', NETWORK_API])\n nic_body = {'location': location}\n ipconfig = {'name': 'ipconfig1'}\n ipc_properties = {'privateIPAllocationMethod': 'Dynamic'}\n ipc_properties['publicIPAddress'] = {'id': public_ip_id}\n ipc_properties['subnet'] = {'id': subnet_id}\n ipconfig['properties'] = ipc_properties\n properties = {'ipConfigurations': [ipconfig]}\n if nsg_id is not None:\n properties['networkSecurityGroup'] = {'id': nsg_id}\n nic_body['properties'] = properties\n body = json.dumps(nic_body)\n return do_put(endpoint, body, access_token)", "response": "Create a network interface with an associated public ip address."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a network security group.", "response": "def create_nsg(access_token, subscription_id, resource_group, nsg_name, location):\n '''Create network security group (use create_nsg_rule() to add rules to it).\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n nsg_name (str): Name of the new NSG.\n location (str): Azure data center location. E.g. westus.\n\n Returns:\n HTTP response. 
NSG JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/networkSecurityGroups/', nsg_name,\n '?api-version=', NETWORK_API])\n nsg_body = {'location': location}\n body = json.dumps(nsg_body)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a network security group rule.", "response": "def create_nsg_rule(access_token, subscription_id, resource_group, nsg_name, nsg_rule_name,\n description, protocol='Tcp', source_range='*', destination_range='*',\n source_prefix='*', destination_prefix='*', access='Allow', priority=100,\n direction='Inbound'):\n '''Create network security group rule.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n nsg_name (str): Name of the Network Security Group.\n nsg_rule_name (str): Name of the new rule.\n description (str): Description.\n protocol (str): Optional protocol. Default Tcp.\n source_range (str): Optional source IP range. Default '*'.\n destination_range (str): Destination IP range. Default '*'.\n source_prefix (str): Source DNS prefix. Default '*'.\n destination_prefix (str): Destination prefix. Default '*'.\n access (str): Allow or deny rule. Default Allow.\n priority (int): Relative priority. Default 100.\n direction (str): Inbound or Outbound. Default Inbound.\n\n Returns:\n HTTP response. 
NSG JSON rule body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/networkSecurityGroups/', nsg_name,\n '/securityRules/', nsg_rule_name,\n '?api-version=', NETWORK_API])\n properties = {'description': description}\n properties['protocol'] = protocol\n properties['sourcePortRange'] = source_range\n properties['destinationPortRange'] = destination_range\n properties['sourceAddressPrefix'] = source_prefix\n properties['destinationAddressPrefix'] = destination_prefix\n properties['access'] = access\n properties['priority'] = priority\n properties['direction'] = direction\n ip_body = {'properties': properties}\n body = json.dumps(ip_body)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a public ip address.", "response": "def create_public_ip(access_token, subscription_id, resource_group, public_ip_name, dns_label,\n location):\n '''Create a public ip address.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n public_ip_name (str): Name of the new public ip address resource.\n dns_label (str): DNS label to apply to the IP address.\n location (str): Azure data center location. E.g. westus.\n\n Returns:\n HTTP response. 
Public IP address JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/publicIPAddresses/', public_ip_name,\n '?api-version=', NETWORK_API])\n ip_body = {'location': location}\n properties = {'publicIPAllocationMethod': 'Dynamic'}\n properties['dnsSettings'] = {'domainNameLabel': dns_label}\n ip_body['properties'] = properties\n body = json.dumps(ip_body)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new VNet with specified name and location.", "response": "def create_vnet(access_token, subscription_id, resource_group, name, location,\n address_prefix='10.0.0.0/16', subnet_prefix='10.0.0.0/16', nsg_id=None):\n '''Create a VNet with specified name and location. Optional subnet address prefix.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n name (str): Name of the new VNet.\n location (str): Azure data center location. E.g. westus.\n address_prefix (str): Optional VNet address prefix. Default '10.0.0.0/16'.\n subnet_prefix (str): Optional subnet address prefix. Default '10.0.0.0/16'.\n nsg_id (str): Optional Network Security Group resource Id. Default None.\n\n Returns:\n HTTP response. 
VNet JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/virtualNetworks/', name,\n '?api-version=', NETWORK_API])\n\n vnet_body = {'location': location}\n properties = {'addressSpace': {'addressPrefixes': [address_prefix]}}\n subnet = {'name': 'subnet'}\n subnet['properties'] = {'addressPrefix': subnet_prefix}\n if nsg_id is not None:\n subnet['properties']['networkSecurityGroup'] = {'id': nsg_id}\n properties['subnets'] = [subnet]\n vnet_body['properties'] = properties\n body = json.dumps(vnet_body)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndelete a load balancer.", "response": "def delete_load_balancer(access_token, subscription_id, resource_group, lb_name):\n '''Delete a load balancer.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n lb_name (str): Name of the load balancer.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/loadBalancers/', lb_name,\n '?api-version=', NETWORK_API])\n return do_delete(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndelete a network interface.", "response": "def delete_nic(access_token, subscription_id, resource_group, nic_name):\n '''Delete a network interface.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n nic_name (str): Name of the NIC.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n 
'/providers/Microsoft.Network/networkInterfaces/', nic_name,\n '?api-version=', NETWORK_API])\n return do_delete(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndeleting a network security group.", "response": "def delete_nsg(access_token, subscription_id, resource_group, nsg_name):\n '''Delete network security group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n nsg_name (str): Name of the NSG.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/networkSecurityGroups/', nsg_name,\n '?api-version=', NETWORK_API])\n return do_delete(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndeleting a network security group rule.", "response": "def delete_nsg_rule(access_token, subscription_id, resource_group, nsg_name, nsg_rule_name):\n '''Delete network security group rule.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n nsg_name (str): Name of the Network Security Group.\n nsg_rule_name (str): Name of the NSG rule.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/networkSecurityGroups/', nsg_name,\n '/securityRules/', nsg_rule_name,\n '?api-version=', NETWORK_API])\n return do_delete(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete_public_ip(access_token, subscription_id, resource_group, public_ip_name):\n '''Delete a public ip address associated with a 
resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n public_ip_name (str): Name of the public ip address resource.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/publicIPAddresses/', public_ip_name,\n '?api-version=', NETWORK_API])\n return do_delete(endpoint, access_token)", "response": "Delete a public ip address associated with a resource group."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndeletes a virtual network.", "response": "def delete_vnet(access_token, subscription_id, resource_group, name):\n '''Delete a virtual network.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n name (str): Name of the VNet.\n\n Returns:\n HTTP response. VNet JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/virtualNetworks/', name,\n '?api-version=', NETWORK_API])\n return do_delete(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_lb_nat_rule(access_token, subscription_id, resource_group, lb_name, rule_name):\n '''Get details about a load balancer inbound NAT rule.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n lb_name (str): Name of the load balancer.\n rule_name (str): Name of the NAT rule.\n\n Returns:\n HTTP response. 
JSON body of rule.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/loadBalancers/', lb_name,\n '/inboundNatRules/', rule_name,\n '?api-version=', NETWORK_API])\n return do_get(endpoint, access_token)", "response": "Get details about a load balancer inbound NAT rule."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlists network usage and limits for a location.", "response": "def get_network_usage(access_token, subscription_id, location):\n '''List network usage and limits for a location.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n location (str): Azure data center location. E.g. westus.\n\n Returns:\n HTTP response. JSON body of network usage.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Network/locations/', location,\n '/usages?api-version=', NETWORK_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_nic(access_token, subscription_id, resource_group, nic_name):\n '''Get details about a network interface.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n nic_name (str): Name of the NIC.\n\n Returns:\n HTTP response. 
NIC JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/networkInterfaces/', nic_name,\n '?api-version=', NETWORK_API])\n return do_get(endpoint, access_token)", "response": "Get details about a network interface."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting details about the named public ip address.", "response": "def get_public_ip(access_token, subscription_id, resource_group, ip_name):\n '''Get details about the named public ip address.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n public_ip_name (str): Name of the public ip address resource.\n\n Returns:\n HTTP response. Public IP address JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/',\n 'publicIPAddresses/', ip_name,\n '?api-version=', NETWORK_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_vnet(access_token, subscription_id, resource_group, vnet_name):\n '''Get details about the named virtual network.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vnet_name (str): Name of the VNet.\n\n Returns:\n HTTP response. 
VNet JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/virtualNetworks/', vnet_name,\n '?api-version=', NETWORK_API])\n return do_get(endpoint, access_token)", "response": "Get details about the named virtual network."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting details about the application security groups for a resource group.", "response": "def list_asgs(access_token, subscription_id, resource_group):\n '''Get details about the application security groups for a resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n\n Returns:\n HTTP response. ASG JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/applicationSecurityGroups/',\n '?api-version=', NETWORK_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget details about the application security groups in a subscription.", "response": "def list_asgs_all(access_token, subscription_id):\n '''Get details about the application security groups in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. 
ASG JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Network/applicationSecurityGroups/',\n '?api-version=', NETWORK_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef list_lb_nat_rules(access_token, subscription_id, resource_group, lb_name):\n '''List the inbound NAT rules for a load balancer.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n lb_name (str): Name of the load balancer.\n\n Returns:\n HTTP response. JSON body of load balancer NAT rules.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/loadBalancers/', lb_name,\n '/inboundNatRules?api-version=', NETWORK_API])\n return do_get(endpoint, access_token)", "response": "List the inbound NAT rules for a load balancer."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlist the load balancers in a subscription.", "response": "def list_load_balancers(access_token, subscription_id):\n '''List the load balancers in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. 
JSON body of load balancer list with properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Network/',\n '/loadBalancers?api-version=', NETWORK_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlists the network interfaces in a subscription.", "response": "def list_nics(access_token, subscription_id):\n '''List the network interfaces in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. JSON body of NICs list with properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Network/',\n '/networkInterfaces?api-version=', NETWORK_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nlist all network security groups in a subscription.", "response": "def list_nsgs_all(access_token, subscription_id):\n '''List all network security groups in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. JSON body of all network security groups in a subscription.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Network/',\n 'networkSecurityGroups?api-version=', NETWORK_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef list_vnets(access_token, subscription_id):\n '''List the VNETs in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. 
JSON body of VNets list with properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Network/',\n '/virtualNetworks?api-version=', NETWORK_API])\n return do_get(endpoint, access_token)", "response": "List the VNETs in a subscription\t."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate a load balancer model i. e. PUT an updated LB body.", "response": "def update_load_balancer(access_token, subscription_id, resource_group, lb_name, body):\n '''Updates a load balancer model, i.e. PUT an updated LB body.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n lb_name (str): Name of the new load balancer.\n body (str): JSON body of an updated load balancer.\n\n Returns:\n HTTP response. Load Balancer JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Network/loadBalancers/', lb_name,\n '?api-version=', NETWORK_API])\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprint the Compute usage quota for a specific region", "response": "def print_region_quota(access_token, sub_id, region):\n '''Print the Compute usage quota for a specific region'''\n print(region + ':')\n quota = azurerm.get_compute_usage(access_token, sub_id, region)\n if SUMMARY is False:\n print(json.dumps(quota, sort_keys=False, indent=2, separators=(',', ': ')))\n try:\n for resource in quota['value']:\n if resource['name']['value'] == 'cores':\n print(' Current: ' + str(resource['currentValue']) + ', limit: '\n + str(resource['limit']))\n break\n except KeyError:\n print('Invalid data for region: ' + region)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the 
documentation\ndef main():\n '''main routine'''\n\n # process arguments\n if len(sys.argv) < 4:\n usage()\n\n rgname = sys.argv[1]\n vmss_name = sys.argv[2]\n capacity = sys.argv[3]\n\n # Load Azure app defaults\n try:\n with open('azurermconfig.json') as config_file:\n config_data = json.load(config_file)\n except FileNotFoundError:\n print(\"Error: Expecting azurermconfig.json in current folder\")\n sys.exit()\n\n tenant_id = config_data['tenantId']\n app_id = config_data['appId']\n app_secret = config_data['appSecret']\n subscription_id = config_data['subscriptionId']\n\n access_token = azurerm.get_access_token(tenant_id, app_id, app_secret)\n\n scaleoutput = azurerm.scale_vmss(access_token, subscription_id, rgname, vmss_name, capacity)\n print(scaleoutput.text)", "response": "main routine: scales a VM Scale Set to the capacity given on the command line."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef main():\n '''main routine'''\n\n # process arguments\n if len(sys.argv) < 3:\n usage()\n\n rgname = sys.argv[1]\n vmss_name = sys.argv[2]\n\n # Load Azure app defaults\n try:\n with open('azurermconfig.json') as config_file:\n config_data = json.load(config_file)\n except FileNotFoundError:\n sys.exit('Error: Expecting azurermconfig.json in current folder')\n\n tenant_id = config_data['tenantId']\n app_id = config_data['appId']\n app_secret = config_data['appSecret']\n subscription_id = config_data['subscriptionId']\n\n access_token = azurerm.get_access_token(tenant_id, app_id, app_secret)\n\n instanceviewlist = azurerm.list_vmss_vm_instance_view(access_token, subscription_id, rgname,\n vmss_name)\n for vmi in instanceviewlist['value']:\n instance_id = vmi['instanceId']\n upgrade_domain = vmi['properties']['instanceView']['platformUpdateDomain']\n fault_domain = vmi['properties']['instanceView']['platformFaultDomain']\n print('Instance ID: ' + instance_id + ', UD: ' + str(upgrade_domain) + ', FD: '\n + str(fault_domain))", 
"response": "main routine: prints the instance ID, update domain and fault domain of each VM in a scale set."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_as(access_token, subscription_id, resource_group, as_name,\n update_domains, fault_domains, location):\n '''Create availability set.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n as_name (str): Name of the new availability set.\n update_domains (int): Number of update domains.\n fault_domains (int): Number of fault domains.\n location (str): Azure data center location. E.g. westus.\n\n Returns:\n HTTP response. JSON body of the availability set properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/availabilitySets/', as_name,\n '?api-version=', COMP_API])\n as_body = {'location': location}\n properties = {'platformUpdateDomainCount': update_domains}\n properties['platformFaultDomainCount'] = fault_domains\n as_body['properties'] = properties\n body = json.dumps(as_body)\n return do_put(endpoint, body, access_token)", "response": "Create an availability set."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_vm(access_token, subscription_id, resource_group, vm_name, vm_size, publisher, offer,\n sku, version, nic_id, location, storage_type='Standard_LRS', osdisk_name=None,\n username='azure', password=None, public_key=None):\n '''Create a new Azure virtual machine.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vm_name (str): Name of the new virtual machine.\n vm_size (str): Size of virtual machine, e.g. 
'Standard_D1_v2'.\n publisher (str): VM image publisher. E.g. 'MicrosoftWindowsServer'.\n offer (str): VM image offer. E.g. 'WindowsServer'.\n sku (str): VM image sku. E.g. '2016-Datacenter'.\n version (str): VM image version. E.g. 'latest'.\n nic_id (str): Resource id of a NIC.\n location (str): Azure data center location. E.g. westus.\n storage_type (str): Optional storage type. Default 'Standard_LRS'.\n osdisk_name (str): Optional OS disk name. Default is None.\n username (str): Optional user name. Default is 'azure'.\n password (str): Optional password. Default is None (not required if using public_key).\n public_key (str): Optional public key. Default is None (not required if using password,\n e.g. on Windows).\n\n Returns:\n HTTP response. JSON body of the virtual machine properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachines/', vm_name,\n '?api-version=', COMP_API])\n if osdisk_name is None:\n osdisk_name = vm_name + 'osdisk'\n vm_body = {'name': vm_name}\n vm_body['location'] = location\n properties = {'hardwareProfile': {'vmSize': vm_size}}\n image_reference = {'publisher': publisher,\n 'offer': offer, 'sku': sku, 'version': version}\n storage_profile = {'imageReference': image_reference}\n os_disk = {'name': osdisk_name}\n os_disk['managedDisk'] = {'storageAccountType': storage_type}\n os_disk['caching'] = 'ReadWrite'\n os_disk['createOption'] = 'fromImage'\n storage_profile['osDisk'] = os_disk\n properties['storageProfile'] = storage_profile\n os_profile = {'computerName': vm_name}\n os_profile['adminUsername'] = username\n if password is not None:\n os_profile['adminPassword'] = password\n if public_key is not None:\n if password is None:\n disable_pswd = True\n else:\n disable_pswd = False\n linux_config = {'disablePasswordAuthentication': disable_pswd}\n pub_key = {'path': '/home/' + username + '/.ssh/authorized_keys'}\n 
pub_key['keyData'] = public_key\n linux_config['ssh'] = {'publicKeys': [pub_key]}\n os_profile['linuxConfiguration'] = linux_config\n properties['osProfile'] = os_profile\n network_profile = {'networkInterfaces': [\n {'id': nic_id, 'properties': {'primary': True}}]}\n properties['networkProfile'] = network_profile\n vm_body['properties'] = properties\n body = json.dumps(vm_body)\n return do_put(endpoint, body, access_token)", "response": "Create a new Azure virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate virtual machine scale set.", "response": "def create_vmss(access_token, subscription_id, resource_group, vmss_name, vm_size, capacity,\n publisher, offer, sku, version, subnet_id, location, be_pool_id=None,\n lb_pool_id=None, storage_type='Standard_LRS', username='azure', password=None,\n public_key=None, overprovision=True, upgrade_policy='Manual',\n public_ip_per_vm=False):\n '''Create virtual machine scale set.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the new scale set.\n vm_size (str): Size of virtual machine, e.g. 'Standard_D1_v2'.\n capacity (int): Number of VMs in the scale set. 0-1000.\n publisher (str): VM image publisher. E.g. 'MicrosoftWindowsServer'.\n offer (str): VM image offer. E.g. 'WindowsServer'.\n sku (str): VM image sku. E.g. '2016-Datacenter'.\n version (str): VM image version. E.g. 'latest'.\n subnet_id (str): Resource id of a subnet.\n location (str): Azure data center location. E.g. westus.\n be_pool_id (str): Resource id of a backend NAT pool.\n lb_pool_id (str): Resource id of a load balancer pool.\n storage_type (str): Optional storage type. Default 'Standard_LRS'.\n username (str): Optional user name. Default is 'azure'.\n password (str): Optional password. 
Default is None (not required if using public_key).\n public_key (str): Optional public key. Default is None (not required if using password,\n e.g. on Windows).\n overprovision (bool): Optional. Enable overprovisioning of VMs. Default True.\n upgrade_policy (str): Optional. Set upgrade policy to Automatic, Manual or Rolling.\n Default 'Manual'.\n public_ip_per_vm (bool): Optional. Set public IP per VM. Default False.\n\n Returns:\n HTTP response. JSON body of the virtual machine scale set properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '?api-version=', COMP_API])\n\n vmss_body = {'location': location}\n vmss_sku = {'name': vm_size, 'tier': 'Standard', 'capacity': capacity}\n vmss_body['sku'] = vmss_sku\n properties = {'overprovision': overprovision}\n properties['upgradePolicy'] = {'mode': upgrade_policy}\n os_profile = {'computerNamePrefix': vmss_name}\n os_profile['adminUsername'] = username\n if password is not None:\n os_profile['adminPassword'] = password\n if public_key is not None:\n if password is None:\n disable_pswd = True\n else:\n disable_pswd = False\n linux_config = {'disablePasswordAuthentication': disable_pswd}\n pub_key = {'path': '/home/' + username + '/.ssh/authorized_keys'}\n pub_key['keyData'] = public_key\n linux_config['ssh'] = {'publicKeys': [pub_key]}\n os_profile['linuxConfiguration'] = linux_config\n vm_profile = {'osProfile': os_profile}\n os_disk = {'createOption': 'fromImage'}\n os_disk['managedDisk'] = {'storageAccountType': storage_type}\n os_disk['caching'] = 'ReadWrite'\n storage_profile = {'osDisk': os_disk}\n storage_profile['imageReference'] = \\\n {'publisher': publisher, 'offer': offer, 'sku': sku, 'version': version}\n vm_profile['storageProfile'] = storage_profile\n nic = {'name': vmss_name}\n ip_config = {'name': vmss_name}\n ip_properties = {'subnet': {'id': 
subnet_id}}\n if be_pool_id is not None:\n ip_properties['loadBalancerBackendAddressPools'] = [{'id': be_pool_id}]\n if lb_pool_id is not None:\n ip_properties['loadBalancerInboundNatPools'] = [{'id': lb_pool_id}]\n if public_ip_per_vm is True:\n ip_properties['publicIpAddressConfiguration'] = {\n 'name': 'pubip', 'properties': {'idleTimeoutInMinutes': 15}}\n ip_config['properties'] = ip_properties\n nic['properties'] = {'primary': True, 'ipConfigurations': [ip_config]}\n network_profile = {'networkInterfaceConfigurations': [nic]}\n vm_profile['networkProfile'] = network_profile\n properties['virtualMachineProfile'] = vm_profile\n vmss_body['properties'] = properties\n\n body = json.dumps(vmss_body)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndeletes an availability set.", "response": "def delete_as(access_token, subscription_id, resource_group, as_name):\n '''Delete availability set.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n as_name (str): Name of the availability set.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/availabilitySets/', as_name,\n '?api-version=', COMP_API])\n return do_delete(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeletes a virtual machine.", "response": "def delete_vm(access_token, subscription_id, resource_group, vm_name):\n '''Delete a virtual machine.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vm_name (str): Name of the virtual machine.\n\n Returns:\n HTTP response.\n '''\n endpoint = 
''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachines/', vm_name,\n '?api-version=', COMP_API])\n return do_delete(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeleting a virtual machine scale set.", "response": "def delete_vmss(access_token, subscription_id, resource_group, vmss_name):\n '''Delete a virtual machine scale set.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the virtual machine scale set.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '?api-version=', COMP_API])\n return do_delete(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete_vmss_vms(access_token, subscription_id, resource_group, vmss_name, vm_ids):\n '''Delete a VM in a VM Scale Set.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the virtual machine scale set.\n vm_ids (str): String representation of a JSON list of VM IDs. E.g. 
'[1,2]'.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '/delete?api-version=', COMP_API])\n body = '{\"instanceIds\" : ' + vm_ids + '}'\n return do_post(endpoint, body, access_token)", "response": "Delete a VM in a VM Scale Set."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlists compute usage and limits for a location.", "response": "def get_compute_usage(access_token, subscription_id, location):\n '''List compute usage and limits for a location.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n location (str): Azure data center location. E.g. westus.\n\n Returns:\n HTTP response. JSON body of Compute usage and limits data.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.compute/locations/', location,\n '/usages?api-version=', COMP_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting details of a virtual machine.", "response": "def get_vm(access_token, subscription_id, resource_group, vm_name):\n '''Get virtual machine details.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vm_name (str): Name of the virtual machine.\n\n Returns:\n HTTP response. 
JSON body of VM properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachines/', vm_name,\n '?api-version=', COMP_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets details about a VM extension.", "response": "def get_vm_extension(access_token, subscription_id, resource_group, vm_name, extension_name):\n '''Get details about a VM extension.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vm_name (str): Name of the virtual machine.\n extension_name (str): VM extension name.\n\n Returns:\n HTTP response. JSON body of VM extension properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachines/', vm_name,\n '/extensions/', extension_name,\n '?api-version=', COMP_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_vmss(access_token, subscription_id, resource_group, vmss_name):\n '''Get virtual machine scale set details.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the virtual machine scale set.\n\n Returns:\n HTTP response. 
JSON body of scale set properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '?api-version=', COMP_API])\n return do_get(endpoint, access_token)", "response": "Get details of a virtual machine scale set."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting individual VMSS VM model view.", "response": "def get_vmss_vm(access_token, subscription_id, resource_group, vmss_name, instance_id):\n '''Get individual VMSS VM details.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the virtual machine scale set.\n instance_id (int): VM ID of the scale set VM.\n\n Returns:\n HTTP response. JSON body of VMSS VM model view.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '/virtualMachines/', str(instance_id),\n '?api-version=', COMP_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the details of a new availability set.", "response": "def get_as(access_token, subscription_id, resource_group, as_name):\n '''Get availability set details.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n as_name (str): Name of the new availability set.\n\n Returns:\n HTTP response. 
JSON body of the availability set properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/availabilitySets/', as_name,\n '?api-version=', COMP_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlist availability sets in a subscription.", "response": "def list_as_sub(access_token, subscription_id):\n '''List availability sets in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. JSON body of the list of availability set properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Compute/availabilitySets',\n '?api-version=', COMP_API])\n return do_get_next(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_vm_images_sub(access_token, subscription_id):\n '''List VM images in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. JSON body of a list of VM images.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Compute/images',\n '?api-version=', COMP_API])\n return do_get_next(endpoint, access_token)", "response": "List VM images in a subscription."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nlisting VMs in a resource group.", "response": "def list_vms(access_token, subscription_id, resource_group):\n '''List VMs in a resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n\n Returns:\n HTTP response. 
JSON body of a list of VM model views.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachines',\n '?api-version=', COMP_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlist VMs in a subscription.", "response": "def list_vms_sub(access_token, subscription_id):\n '''List VMs in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. JSON body of a list of VM model views.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Compute/virtualMachines',\n '?api-version=', COMP_API])\n return do_get_next(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_vmss(access_token, subscription_id, resource_group):\n '''List VM Scale Sets in a resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n\n Returns:\n HTTP response. 
JSON body of a list of scale set model views.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets',\n '?api-version=', COMP_API])\n return do_get_next(endpoint, access_token)", "response": "List VM Scale Sets in a resource group."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef list_vmss_skus(access_token, subscription_id, resource_group, vmss_name):\n '''List the VM skus available for a VM Scale Set.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the virtual machine scale set.\n\n Returns:\n HTTP response. JSON body of VM skus.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '/skus',\n '?api-version=', COMP_API])\n return do_get_next(endpoint, access_token)", "response": "List the VM skus available for a VM Scale Set."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef list_vmss_sub(access_token, subscription_id):\n '''List VM Scale Sets in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. 
JSON body of VM scale sets.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Compute/virtualMachineScaleSets',\n '?api-version=', COMP_API])\n return do_get_next(endpoint, access_token)", "response": "List VM Scale Sets in a subscription."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns one page of a paginated list of VM instance views.", "response": "def list_vmss_vm_instance_view_pg(access_token, subscription_id, resource_group, vmss_name,\n link=None):\n '''Gets one page of a paginated list of scale set VM instance views.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the virtual machine scale set.\n link (str): Optional link to URI to get list (as part of a paginated API query).\n\n Returns:\n HTTP response. JSON body of list of VM instance views.\n '''\n if link is None:\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '/virtualMachines?$expand=instanceView&$select=instanceView',\n '&api-version=', COMP_API])\n else:\n endpoint = link\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef poweroff_vmss(access_token, subscription_id, resource_group, vmss_name):\n '''Power off all the VMs in a virtual machine scale set.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the virtual machine scale set.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n 
'/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '/powerOff?api-version=', COMP_API])\n body = '{\"instanceIds\" : [\"*\"]}'\n return do_post(endpoint, body, access_token)", "response": "Power off all the VMs in a virtual machine scale set."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef poweroff_vmss_vms(access_token, subscription_id, resource_group, vmss_name, instance_ids):\n '''Power off specified VMs in a virtual machine scale set.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the virtual machine scale set.\n instance_ids (str): String representation of a JSON list of VM IDs. E.g. '[1,2]'.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '/powerOff?api-version=', COMP_API])\n body = '{\"instanceIds\" : ' + instance_ids + '}'\n return do_post(endpoint, body, access_token)", "response": "Power off specified VMs in a virtual machine scale set."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nputting VMSS body. Can be used to create or update a scale set. E.g. call get_vmss(), make changes to the body, call put_vmss(). Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. resource_group (str): Azure resource group name. vmss_name (str): Name of the new scale set. vmss_body (dictionary): Body containing the scale set properties. Returns: HTTP response. 
JSON body of the virtual machine scale set properties.", "response": "def put_vmss(access_token, subscription_id, resource_group, vmss_name, vmss_body):\n '''Put VMSS body.\n\n Can be used to create or update a scale set.\n E.g. call get_vmss(), make changes to the body, call put_vmss().\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the new scale set.\n vmss_body (dictionary): Body containing the scale set properties.\n\n Returns:\n HTTP response. JSON body of the virtual machine scale set properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '?api-version=', COMP_API])\n body = json.dumps(vmss_body)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef put_vmss_vm(access_token, subscription_id, resource_group, vmss_name, vm_id, vm_body):\n '''Update a VMSS VM. E.g. add/remove a data disk from a specific VM in a scale set (preview).\n Note: Only currently enabled for Azure Canary regions.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the scale set.\n vm_id (int): VM ID of VM to update.\n vm_body (dictionary): Body containing the VM properties.\n\n Returns:\n HTTP response. 
JSON body of the virtual machine scale set VM properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '/virtualMachines/', str(vm_id),\n '?api-version=', COMP_API])\n body = json.dumps(vm_body)\n return do_put(endpoint, body, access_token)", "response": "Update a VMSS VM. E.g. add or remove a data disk from a specific VM in a scale set (preview)."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nchange the instance count of an existing VM Scale Set.", "response": "def scale_vmss(access_token, subscription_id, resource_group, vmss_name, capacity):\n '''Change the instance count of an existing VM Scale Set.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the virtual machine scale set.\n capacity (int): New number of VMs.\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '?api-version=', COMP_API])\n body = '{\"sku\":{\"capacity\":\"' + str(capacity) + '\"}}'\n return do_patch(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef start_vm(access_token, subscription_id, resource_group, vm_name):\n '''Start a virtual machine.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vm_name (str): Name of the virtual machine.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n 
'/providers/Microsoft.Compute/virtualMachines/',\n vm_name,\n '/start',\n '?api-version=', COMP_API])\n return do_post(endpoint, '', access_token)", "response": "Start a virtual machine."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_vm(access_token, subscription_id, resource_group, vm_name, body):\n '''Update a virtual machine with a new JSON body. E.g. do a GET, change something, call this.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vm_name (str): Name of the virtual machine.\n body (dict): JSON body of the VM.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachines/', vm_name,\n '?api-version=', COMP_API])\n return do_put(endpoint, body, access_token)", "response": "Update a virtual machine with a new JSON body. E.g. do a GET, change something, then call this."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nupdates a VMSS with a new JSON body. E.g. do a GET, change something, then call this.", "response": "def update_vmss(access_token, subscription_id, resource_group, vmss_name, body):\n '''Update a VMSS with a new JSON body. E.g. 
do a GET, change something, call this.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of the virtual machine scale set.\n body (dict): JSON body of the VM scale set.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,\n '?api-version=', COMP_API])\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_vm_ids_by_ud(access_token, subscription_id, resource_group, vmssname, updatedomain):\n '''Look at VMSS VM instance view to get VM IDs by UD.'''\n instance_viewlist = azurerm.list_vmss_vm_instance_view(access_token, subscription_id,\n resource_group, vmssname)\n # print(json.dumps(instance_viewlist, sort_keys=False, indent=2, separators=(',', ': ')))\n\n # loop through the instance view list, and build the vm id list of VMs in\n # the matching UD\n udinstancelist = []\n for instance_view in instance_viewlist['value']:\n vmud = instance_view['properties']['instance_view']['platformUpdateDomain']\n if vmud == updatedomain:\n udinstancelist.append(instance_view['instanceId'])\n udinstancelist.sort()\n return udinstancelist", "response": "Look at VMSS VM instance view to get VM IDs by UD."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_graph_token_from_msi():\n '''get a Microsoft Graph access token using Azure Cloud Shell's MSI_ENDPOINT.\n\n Notes: \n The auth token returned by this function is not an Azure auth token. 
Use it for querying\n the Microsoft Graph API.\n This function only works in an Azure cloud shell or virtual machine.\n\n Returns:\n A Microsoft Graph authentication token string.\n '''\n if 'ACC_CLOUD' in os.environ and 'MSI_ENDPOINT' in os.environ:\n endpoint = os.environ['MSI_ENDPOINT']\n else:\n return None\n \n headers = {'Metadata': 'true'}\n body = {\"resource\": 'https://' + GRAPH_RESOURCE_HOST + '/'}\n ret = requests.post(endpoint, headers=headers, data=body)\n return ret.json()['access_token']", "response": "Get a Microsoft Graph authentication token using Azure Cloud Shell's MSI_ENDPOINT."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the object ID for the Graph user who owns the access token.", "response": "def get_object_id_from_graph(access_token=None):\n '''Return the object ID for the Graph user who owns the access token.\n\n Args:\n access_token (str): A Microsoft Graph access token. (Not an Azure access token.)\n If not provided, attempt to get it from MSI_ENDPOINT.\n\n Returns:\n An object ID string for a user or service principal.\n '''\n if access_token is None:\n access_token = get_graph_token_from_msi()\n\n endpoint = 'https://' + GRAPH_RESOURCE_HOST + '/v1.0/me/'\n headers = {'Authorization': 'Bearer ' + access_token, 'Host': GRAPH_RESOURCE_HOST}\n ret = requests.get(endpoint, headers=headers)\n return ret.json()['id']"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_subscription_from_cli(name=None):\n '''Get the default, or named, subscription id from CLI's local cache.\n\n Args:\n name (str): Optional subscription name. If this is set, the subscription id of the named\n subscription is returned from the CLI cache if present. 
If not set, the subscription id\n of the default subscription is returned.\n\n Returns:\n Azure subscription ID string.\n\n Requirements:\n User has run 'az login' once, or is in Azure Cloud Shell.\n '''\n home = os.path.expanduser('~')\n azure_profile_path = home + os.sep + '.azure' + os.sep + 'azureProfile.json'\n if os.path.isfile(azure_profile_path) is False:\n print('Error from get_subscription_from_cli(): Cannot find ' +\n azure_profile_path)\n return None\n with io.open(azure_profile_path, 'r', encoding='utf-8-sig') as azure_profile_fd:\n azure_profile = json.load(azure_profile_fd)\n for subscription_info in azure_profile['subscriptions']:\n if (name is None and subscription_info['isDefault'] is True) or \\\n subscription_info['name'] == name:\n return subscription_info['id']\n return None", "response": "Get the default or named subscription id from CLI s local cache."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef list_locations(access_token, subscription_id):\n '''List available locations for a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. 
JSON list of locations.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/locations?api-version=', BASE_API])\n return do_get(endpoint, access_token)", "response": "List available locations for a subscription."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a new Cosmos DB account in the named resource group with the named location.", "response": "def create_cosmosdb_account(access_token, subscription_id, rgname, account_name, location,\n cosmosdb_kind):\n '''Create a new Cosmos DB account in the named resource group, with the named location.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n account_name (str): Name of the new Cosmos DB account.\n location (str): Azure data center location. E.g. westus.\n cosmosdb_kind (str): Database type. E.g. GlobalDocumentDB.\n\n Returns:\n HTTP response. JSON body of storage account properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '/providers/Microsoft.DocumentDB/databaseAccounts/', account_name,\n '?api-version=', COSMOSDB_API])\n\n cosmosdb_body = {'location': location,\n 'kind': cosmosdb_kind,\n 'properties': {'databaseAccountOfferType': 'Standard',\n 'locations': [{'failoverPriority': 0,\n 'locationName': location}]}}\n body = json.dumps(cosmosdb_body)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the access keys for the specified Cosmos DB account.", "response": "def get_cosmosdb_account_keys(access_token, subscription_id, rgname, account_name):\n '''Get the access keys for the specified Cosmos DB account.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n account_name 
(str): Name of the Cosmos DB account.\n\n Returns:\n HTTP response. JSON body of Cosmos DB account keys.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '/providers/Microsoft.DocumentDB/databaseAccounts/', account_name,\n '/listKeys',\n '?api-version=', COSMOSDB_API])\n return do_post(endpoint, '', access_token)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_keyvault(access_token, subscription_id, rgname, vault_name, location,\n template_deployment=True, tenant_id=None, object_id=None):\n '''Create a new key vault in the named resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n vault_name (str): Name of the new key vault.\n location (str): Azure data center location. E.g. westus2.\n template_deployment (boolean): Whether to allow deployment from template.\n tenant_id (str): Optionally specify a tenant ID (otherwise picks first response) from\n list_tenants().\n object_id (str): Optionally specify an object ID representing user or principal for the\n access policy.\n\n Returns:\n HTTP response. 
JSON body of key vault properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '/providers/Microsoft.KeyVault/vaults/', vault_name,\n '?api-version=', KEYVAULT_API])\n # get tenant ID if not specified\n if tenant_id is None:\n ret = list_tenants(access_token)\n tenant_id = ret['value'][0]['tenantId']\n # if object_id is None:\n access_policies = [{'tenantId': tenant_id, 'objectId': object_id,\n 'permissions': {\n 'keys': ['get', 'create', 'delete', 'list', 'update', 'import',\n 'backup', 'restore', 'recover'],\n 'secrets': ['get', 'list', 'set', 'delete', 'backup', 'restore',\n 'recover'],\n 'certificates': ['get', 'list', 'delete', 'create', 'import', 'update',\n 'managecontacts', 'getissuers', 'listissuers',\n 'setissuers', 'deleteissuers', 'manageissuers',\n 'recover'],\n 'storage': ['get', 'list', 'delete', 'set', 'update', 'regeneratekey',\n 'setsas', 'listsas', 'getsas', 'deletesas']\n }}]\n vault_properties = {'tenantId': tenant_id, 'sku': {'family': 'A', 'name': 'standard'},\n 'enabledForTemplateDeployment': template_deployment,\n 'accessPolicies': access_policies}\n vault_body = {'location': location, 'properties': vault_properties}\n body = json.dumps(vault_body)\n return do_put(endpoint, body, access_token)", "response": "Create a new key vault in the named resource group."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndelete a key vault in the named resource group.", "response": "def delete_keyvault(access_token, subscription_id, rgname, vault_name):\n '''Deletes a key vault in the named resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n vault_name (str): Name of the new key vault.\n\n Returns:\n HTTP response. 
200 OK.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '/providers/Microsoft.KeyVault/vaults/', vault_name,\n '?api-version=', KEYVAULT_API])\n return do_delete(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_keyvault(access_token, subscription_id, rgname, vault_name):\n '''Gets details about the named key vault.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n vault_name (str): Name of the key vault.\n\n Returns:\n HTTP response. JSON body of key vault properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '/providers/Microsoft.KeyVault/vaults/', vault_name,\n '?api-version=', KEYVAULT_API])\n return do_get(endpoint, access_token)", "response": "Gets details about the named key vault."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef list_keyvaults(access_token, subscription_id, rgname):\n '''Lists key vaults in the named resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n\n Returns:\n HTTP response. 
200 OK.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '/providers/Microsoft.KeyVault/vaults',\n '?api-version=', KEYVAULT_API])\n return do_get_next(endpoint, access_token)", "response": "Lists key vaults in the named resource group."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef list_keyvaults_sub(access_token, subscription_id):\n '''Lists key vaults belonging to this subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. 200 OK.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.KeyVault/vaults',\n '?api-version=', KEYVAULT_API])\n return do_get_next(endpoint, access_token)", "response": "Lists key vaults belonging to this subscription."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding a secret to a key vault using the key vault URI. Creates a new version if the secret already exists.", "response": "def set_keyvault_secret(access_token, vault_uri, secret_name, secret_value):\n '''Adds a secret to a key vault using the key vault URI. \n Creates a new version if the secret already exists.\n\n Args:\n access_token (str): A valid Azure authentication token.\n vault_uri (str): Vault URI e.g. https://myvault.vault.azure.net.\n secret_name (str): Name of the secret to add.\n secret_value (str): Value of the secret.\n\n Returns:\n HTTP response. 
200 OK.\n '''\n endpoint = ''.join([vault_uri,\n '/secrets/', secret_name,\n '?api-version=', '7.0'])\n current_time = datetime.datetime.now().isoformat()\n attributes = {'created': current_time,\n 'enabled': True,\n 'exp': None,\n 'nbf': None,\n 'recoveryLevel': 'Purgeable',\n 'updated': current_time}\n secret_body = {'attributes': attributes,\n 'contentType': None,\n 'kid': None,\n 'managed': None,\n 'tags': {'file-encoding': 'utf-8'},\n 'value': secret_value}\n body = json.dumps(secret_body)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_keyvault_secret(access_token, vault_uri, secret_name):\n '''Deletes a secret from a key vault using the key vault URI. \n\n Args:\n access_token (str): A valid Azure authentication token.\n vault_uri (str): Vault URI e.g. https://myvault.vault.azure.net.\n secret_name (str): Name of the secret to delete.\n\n Returns:\n HTTP response. 200 OK.\n '''\n endpoint = ''.join([vault_uri,\n '/secrets/', secret_name,\n '?api-version=', '7.0'])\n return do_delete(endpoint, access_token)", "response": "Deletes a secret from a key vault using the key vault URI."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_autoscale_rule(subscription_id, resource_group, vmss_name, metric_name, operator,\n threshold, direction, change_count, time_grain='PT1M',\n time_window='PT5M', cool_down='PT1M'):\n '''Create a new autoscale rule - pass the output in a list to create_autoscale_setting().\n\n Args:\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n vmss_name (str): Name of scale set to apply scale events to.\n metric_name (str): Name of metric being evaluated.\n operator (str): Operator to evaluate. E.g. \"GreaterThan\".\n threshold (str): Threshold to trigger action.\n direction (str): Direction of action. 
E.g. Increase.\n change_count (str): How many to increase or decrease by.\n time_grain (str): Optional. Measurement granularity. Default 'PT1M'.\n time_window (str): Optional. Range of time to collect data over. Default 'PT5M'.\n cool_down (str): Optional. Time to wait after last scaling action. ISO 8601 format.\n Default 'PT1M'.\n Returns:\n A rule dictionary. Pass a list of these to create_autoscale_setting().\n '''\n metric_trigger = {'metricName': metric_name}\n metric_trigger['metricNamespace'] = ''\n metric_trigger['metricResourceUri'] = '/subscriptions/' + subscription_id + \\\n '/resourceGroups/' + resource_group + \\\n '/providers/Microsoft.Compute/virtualMachineScaleSets/' + vmss_name\n metric_trigger['timeGrain'] = time_grain\n metric_trigger['statistic'] = 'Average'\n metric_trigger['timeWindow'] = time_window\n metric_trigger['timeAggregation'] = 'Average'\n metric_trigger['operator'] = operator\n metric_trigger['threshold'] = threshold\n scale_action = {'direction': direction}\n scale_action['type'] = 'ChangeCount'\n scale_action['value'] = str(change_count)\n scale_action['cooldown'] = cool_down\n new_rule = {'metricTrigger': metric_trigger}\n new_rule['scaleAction'] = scale_action\n return new_rule", "response": "Creates a new autoscale rule for the given resource group and metric."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a new autoscale setting for a resource group.", "response": "def create_autoscale_setting(access_token, subscription_id, resource_group, setting_name,\n vmss_name, location, minval, maxval, default, autoscale_rules,\n notify=None):\n '''Create a new autoscale setting for a scale set.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n setting_name (str): Name of the autoscale setting.\n vmss_name (str): Name of scale set to apply scale events to.\n location (str): Azure data center 
location. E.g. westus.\n minval (int): Minimum number of VMs.\n maxval (int): Maximum number of VMs.\n default (int): Default VM number when no data available.\n autoscale_rules (list): List of outputs from create_autoscale_rule().\n notify (str): Optional.\n Returns:\n HTTP response. JSON body of autoscale setting.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/microsoft.insights/autoscaleSettings/', setting_name,\n '?api-version=', INSIGHTS_API])\n autoscale_setting = {'location': location}\n profile = {'name': 'Profile1'}\n capacity = {'minimum': str(minval)}\n capacity['maximum'] = str(maxval)\n capacity['default'] = str(default)\n profile['capacity'] = capacity\n profile['rules'] = autoscale_rules\n profiles = [profile]\n properties = {'name': setting_name}\n properties['profiles'] = profiles\n properties['targetResourceUri'] = '/subscriptions/' + subscription_id + \\\n '/resourceGroups/' + resource_group + \\\n '/providers/Microsoft.Compute/virtualMachineScaleSets/' + vmss_name\n properties['enabled'] = True\n if notify is not None:\n notification = {'operation': 'Scale'}\n email = {'sendToSubscriptionAdministrator': False}\n email['sendToSubscriptionCoAdministrators'] = False\n email['customEmails'] = [notify]\n notification['email'] = email\n properties['notifications'] = [notification]\n autoscale_setting['properties'] = properties\n body = json.dumps(autoscale_setting)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef list_autoscale_settings(access_token, subscription_id):\n '''List the autoscale settings in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. 
JSON body of autoscale settings.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/microsoft.insights/',\n '/autoscaleSettings?api-version=', INSIGHTS_API])\n return do_get(endpoint, access_token)", "response": "List the autoscale settings in a subscription."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef list_insights_components(access_token, subscription_id, resource_group):\n '''List the Microsoft Insights components in a resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n\n Returns:\n HTTP response. JSON body of components.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/microsoft.insights/',\n '/components?api-version=', INSIGHTS_COMPONENTS_API])\n return do_get(endpoint, access_token)", "response": "List Microsoft Insights components in a resource group."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef list_metric_defs_for_resource(access_token, subscription_id, resource_group,\n resource_provider, resource_type, resource_name):\n '''List the monitoring metric definitions for a resource.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n resource_provider (str): Type of resource provider.\n resource_type (str): Type of resource.\n resource_name (str): Name of resource.\n\n Returns:\n HTTP response. 
JSON body of metric definitions.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/', resource_provider,\n '/', resource_type,\n '/', resource_name,\n '/providers/microsoft.insights',\n '/metricdefinitions?api-version=', INSIGHTS_METRICS_API])\n return do_get(endpoint, access_token)", "response": "List the monitoring metric definitions for a resource."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the monitoring metrics for a resource.", "response": "def get_metrics_for_resource(access_token, subscription_id, resource_group, resource_provider,\n resource_type, resource_name):\n '''Get the monitoring metrics for a resource.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n resource_provider (str): Type of resource provider.\n resource_type (str): Type of resource.\n resource_name (str): Name of resource.\n\n Returns:\n HTTP response. JSON body of resource metrics.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', resource_group,\n '/providers/', resource_provider,\n '/', resource_type,\n '/', resource_name,\n '/providers/microsoft.insights',\n '/metrics?api-version=', INSIGHTS_PREVIEW_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_events_for_subscription(access_token, subscription_id, start_timestamp):\n '''Get the insights events for a subscription since the specified timestamp.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n start_timestamp (str): timestamp to get events from. E.g. '2017-05-01T00:00:00.0000000Z'.\n Returns:\n HTTP response. 
JSON body of insights events.\n\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/microsoft.insights/eventtypes/management/values?api-version=',\n INSIGHTS_API, '&$filter=eventTimestamp ge \'', start_timestamp, '\''])\n return do_get(endpoint, access_token)", "response": "Get the insights events for a subscription since the specified timestamp."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef deploy_template(access_token, subscription_id, resource_group, deployment_name, template,\n parameters):\n '''Deploy a template referenced by a JSON string, with parameters as a JSON string.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n deployment_name (str): A name you give to the deployment.\n template (str): String representation of a JSON template body.\n parameters (str): String representation of a JSON template parameters body.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', resource_group,\n '/providers/Microsoft.Resources/deployments/', deployment_name,\n '?api-version=', DEPLOYMENTS_API])\n properties = {'template': template}\n properties['mode'] = 'Incremental'\n properties['parameters'] = parameters\n template_body = {'properties': properties}\n body = json.dumps(template_body)\n return do_put(endpoint, body, access_token)", "response": "Deploy a template referenced by a JSON string with parameters as a JSON string."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndeploys a template referenced by a URI with parameters as a JSON string.", "response": "def deploy_template_uri(access_token, subscription_id, resource_group, deployment_name,\n template_uri, parameters):\n '''Deploy a template referenced by a URI, with 
parameters as a JSON string.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n deployment_name (str): A name you give to the deployment.\n template_uri (str): URI which points to a JSON template (e.g. github raw location).\n parameters (str): String representation of a JSON template parameters body.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', resource_group,\n '/providers/Microsoft.Resources/deployments/', deployment_name,\n '?api-version=', DEPLOYMENTS_API])\n properties = {'templateLink': {'uri': template_uri}}\n properties['mode'] = 'Incremental'\n properties['parameters'] = parameters\n template_body = {'properties': properties}\n body = json.dumps(template_body)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef deploy_template_uri_param_uri(access_token, subscription_id, resource_group, deployment_name,\n template_uri, parameters_uri):\n '''Deploy a template with both template and parameters referenced by URIs.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n deployment_name (str): A name you give to the deployment.\n template_uri (str): URI which points to a JSON template (e.g. github raw location).\n parameters_uri (str): URI which points to a JSON parameters file (e.g. 
github raw location).\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', resource_group,\n '/providers/Microsoft.Resources/deployments/', deployment_name,\n '?api-version=', DEPLOYMENTS_API])\n properties = {'templateLink': {'uri': template_uri}}\n properties['mode'] = 'Incremental'\n properties['parametersLink'] = {'uri': parameters_uri}\n template_body = {'properties': properties}\n body = json.dumps(template_body)\n return do_put(endpoint, body, access_token)", "response": "Deploy a template with both template and parameters referenced by URIs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef attach_model(subscription, rgname, vmssvm_model, diskname, lun):\n '''Attach a data disk to a VMSS VM model'''\n disk_id = '/subscriptions/' + subscription + '/resourceGroups/' + rgname + \\\n '/providers/Microsoft.Compute/disks/' + diskname\n disk_model = {'lun': lun, 'createOption': 'Attach', 'caching': 'None',\n 'managedDisk': {'storageAccountType': 'Standard_LRS', 'id': disk_id}}\n vmssvm_model['properties']['storageProfile']['dataDisks'].append(\n disk_model)\n return vmssvm_model", "response": "Attach a data disk to a VMSS VM model"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndetaches a data disk from a VMSS VM model", "response": "def detach_model(vmssvm_model, lun):\n '''Detach a data disk from a VMSS VM model'''\n data_disks = vmssvm_model['properties']['storageProfile']['dataDisks']\n data_disks[:] = [disk for disk in data_disks if disk.get('lun') != lun]\n vmssvm_model['properties']['storageProfile']['dataDisks'] = data_disks\n return vmssvm_model"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_access_token(tenant_id, application_id, application_secret):\n '''get an Azure access token 
using the adal library.\n\n Args:\n tenant_id (str): Tenant id of the user's account.\n application_id (str): Application id of a Service Principal account.\n application_secret (str): Application secret (password) of the Service Principal account.\n\n Returns:\n An Azure authentication token string.\n '''\n context = adal.AuthenticationContext(\n get_auth_endpoint() + tenant_id, api_version=None)\n token_response = context.acquire_token_with_client_credentials(\n get_resource_endpoint(), application_id, application_secret)\n return token_response.get('accessToken')", "response": "get an Azure access token using the adal library."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget an Azure authentication token from the CLI's local cache.", "response": "def get_access_token_from_cli():\n '''Get an Azure authentication token from CLI's cache.\n\n Will only work if the CLI's local cache has an unexpired auth token (i.e. you ran 'az login'\n recently), or if you are running in Azure Cloud Shell (aka cloud console)\n\n Returns:\n An Azure authentication token string.\n '''\n\n # check if running in cloud shell, if so, pick up token from MSI_ENDPOINT\n if 'ACC_CLOUD' in os.environ and 'MSI_ENDPOINT' in os.environ:\n endpoint = os.environ['MSI_ENDPOINT']\n headers = {'Metadata': 'true'}\n body = {\"resource\": \"https://management.azure.com/\"}\n ret = requests.post(endpoint, headers=headers, data=body)\n return ret.json()['access_token']\n\n else: # not running cloud shell\n home = os.path.expanduser('~')\n sub_username = \"\"\n\n # 1st identify current subscription\n azure_profile_path = home + os.sep + '.azure' + os.sep + 'azureProfile.json'\n if os.path.isfile(azure_profile_path) is False:\n print('Error from get_access_token_from_cli(): Cannot find ' + azure_profile_path)\n return None\n with codecs.open(azure_profile_path, 'r', 'utf-8-sig') as azure_profile_fd:\n subs = json.load(azure_profile_fd)\n for sub in subs['subscriptions']:\n if 
sub['isDefault'] == True:\n sub_username = sub['user']['name']\n if sub_username == \"\":\n print('Error from get_access_token_from_cli(): Default subscription not found in ' + \\\n azure_profile_path)\n return None\n\n # look for access_token\n access_keys_path = home + os.sep + '.azure' + os.sep + 'accessTokens.json'\n if os.path.isfile(access_keys_path) is False:\n print('Error from get_access_token_from_cli(): Cannot find ' + access_keys_path)\n return None\n with open(access_keys_path, 'r') as access_keys_fd:\n keys = json.load(access_keys_fd)\n\n # loop through accessTokens.json until first unexpired entry found\n for key in keys:\n if key['userId'] == sub_username:\n # validate the current entry, not keys[0], before using it\n if 'accessToken' not in key:\n print('Error from get_access_token_from_cli(): accessToken not found in ' + \\\n access_keys_path)\n return None\n if 'tokenType' not in key:\n print('Error from get_access_token_from_cli(): tokenType not found in ' + \\\n access_keys_path)\n return None\n if 'expiresOn' not in key:\n print('Error from get_access_token_from_cli(): expiresOn not found in ' + \\\n access_keys_path)\n return None\n expiry_date_str = key['expiresOn']\n\n # check date and skip past expired entries\n if 'T' in expiry_date_str:\n exp_date = dt.strptime(key['expiresOn'], '%Y-%m-%dT%H:%M:%S.%fZ')\n else:\n exp_date = dt.strptime(key['expiresOn'], '%Y-%m-%d %H:%M:%S.%f')\n if exp_date < dt.now():\n continue\n else:\n return key['accessToken']\n\n # if dropped out of the loop, token expired\n print('Error from get_access_token_from_cli(): token expired. 
Run \\'az login\\'')\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlist available VM image offers from a publisher.", "response": "def list_offers(access_token, subscription_id, location, publisher):\n '''List available VM image offers from a publisher.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n location (str): Azure data center location. E.g. westus.\n publisher (str): Publisher name, e.g. Canonical.\n\n Returns:\n HTTP response with JSON list of image offers.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Compute/',\n 'locations/', location,\n '/publishers/', publisher,\n '/artifacttypes/vmimage/offers?api-version=', COMP_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlists available VM image skus for a publisher offer.", "response": "def list_skus(access_token, subscription_id, location, publisher, offer):\n '''List available VM image skus for a publisher offer.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n location (str): Azure data center location. E.g. westus.\n publisher (str): VM image publisher. E.g. MicrosoftWindowsServer.\n offer (str): VM image offer. E.g. 
WindowsServer.\n\n Returns:\n HTTP response with JSON list of skus.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Compute/',\n 'locations/', location,\n '/publishers/', publisher,\n '/artifacttypes/vmimage/offers/', offer,\n '/skus?api-version=', COMP_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nlist available versions for a given publisher's sku.", "response": "def list_sku_versions(access_token, subscription_id, location, publisher, offer, sku):\n '''List available versions for a given publisher's sku.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n location (str): Azure data center location. E.g. westus.\n publisher (str): VM image publisher. E.g. MicrosoftWindowsServer.\n offer (str): VM image offer. E.g. WindowsServer.\n sku (str): VM image sku. E.g. 2016-Datacenter.\n\n Returns:\n HTTP response with JSON list of versions.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.Compute/',\n 'locations/', location,\n '/publishers/', publisher,\n '/artifacttypes/vmimage/offers/', offer,\n '/skus/', sku,\n '/versions?api-version=', COMP_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_container_definition(container_name, image, port=80, cpu=1.0, memgb=1.5,\n environment=None):\n '''Makes a python dictionary of container properties.\n\n Args:\n container_name: The name of the container.\n image (str): Container image string. E.g. nginx.\n port (int): TCP port number. E.g. 8080.\n cpu (float): Amount of CPU to allocate to container. E.g. 1.0.\n memgb (float): Memory in GB to allocate to container. E.g. 
1.5.\n environment (list): A list of [{'name':'envname', 'value':'envvalue'}].\n Sets environment variables in the container.\n\n Returns:\n A Python dictionary of container properties, pass a list of these to\n create_container_group().\n '''\n container = {'name': container_name}\n container_properties = {'image': image}\n container_properties['ports'] = [{'port': port}]\n container_properties['resources'] = {\n 'requests': {'cpu': cpu, 'memoryInGB': memgb}}\n container['properties'] = container_properties\n\n if environment is not None:\n container_properties['environmentVariables'] = environment\n return container", "response": "Makes a python dictionary of container properties."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new container group with a list of containers specified by container_list.", "response": "def create_container_instance_group(access_token, subscription_id, resource_group,\n container_group_name, container_list, location,\n ostype='Linux', port=80, iptype='public'):\n '''Create a new container group with a list of containers specified by container_list.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n container_group_name (str): Name of container instance group.\n container_list (list): A list of container properties. Use create_container_definition to\n create each container property set.\n location (str): Azure data center location. E.g. westus.\n ostype (str): Container operating system type. Linux or Windows.\n port (int): TCP port number. E.g. 8080.\n iptype (str): Type of IP address. E.g. 
public.\n\n Returns:\n HTTP response with JSON body of container group.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', resource_group,\n '/providers/Microsoft.ContainerInstance/ContainerGroups/',\n container_group_name,\n '?api-version=', CONTAINER_API])\n container_group_body = {'location': location}\n properties = {'osType': ostype}\n properties['containers'] = container_list\n ipport = {'protocol': 'TCP'}\n ipport['port'] = port\n ipaddress = {'ports': [ipport]}\n ipaddress['type'] = iptype\n properties['ipAddress'] = ipaddress\n container_group_body['properties'] = properties\n body = json.dumps(container_group_body)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_container_instance_group(access_token, subscription_id, resource_group,\n container_group_name):\n '''Delete a container group from a resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n container_group_name (str): Name of container instance group.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', resource_group,\n '/providers/Microsoft.ContainerInstance/ContainerGroups/',\n container_group_name,\n '?api-version=', CONTAINER_API])\n return do_delete(endpoint, access_token)", "response": "Delete a container group from a resource group."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_container_instance_group(access_token, subscription_id, resource_group,\n container_group_name):\n '''Get the JSON definition of a container group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n 
resource_group (str): Azure resource group name.\n container_group_name (str): Name of container instance group.\n\n Returns:\n HTTP response. JSON body of container group.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', resource_group,\n '/providers/Microsoft.ContainerInstance/ContainerGroups/',\n container_group_name,\n '?api-version=', CONTAINER_API])\n return do_get(endpoint, access_token)", "response": "Get the JSON definition of a container group."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_container_instance_logs(access_token, subscription_id, resource_group, container_group_name,\n container_name=None):\n '''Get the container logs for containers in a container group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n container_group_name (str): Name of container instance group.\n container_name (str): Optional name of a container in the group.\n\n Returns:\n HTTP response. 
Container logs.\n '''\n if container_name is None:\n container_name = container_group_name\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', resource_group,\n '/providers/Microsoft.ContainerInstance/ContainerGroups/',\n container_group_name,\n '/containers/', container_name, '/logs?api-version=', CONTAINER_API])\n return do_get(endpoint, access_token)", "response": "Get the container logs for containers in a container group."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlists the container groups in a resource group.", "response": "def list_container_instance_groups(access_token, subscription_id, resource_group):\n '''List the container groups in a resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n resource_group (str): Azure resource group name.\n\n Returns:\n HTTP response. JSON list of container groups and their properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', resource_group,\n '/providers/Microsoft.ContainerInstance/ContainerGroups',\n '?api-version=', CONTAINER_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef list_container_instance_groups_sub(access_token, subscription_id):\n '''List the container groups in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response. 
JSON list of container groups and their properties.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/providers/Microsoft.ContainerInstance/ContainerGroups',\n '?api-version=', CONTAINER_API])\n return do_get(endpoint, access_token)", "response": "List the container groups in a subscription."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a resource group in the specified location.", "response": "def create_resource_group(access_token, subscription_id, rgname, location):\n '''Create a resource group in the specified location.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n location (str): Azure data center location. E.g. westus.\n\n Returns:\n HTTP response. JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '?api-version=', RESOURCE_API])\n rg_body = {'location': location}\n body = json.dumps(rg_body)\n return do_put(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete the named resource group.", "response": "def delete_resource_group(access_token, subscription_id, rgname):\n '''Delete the named resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '?api-version=', RESOURCE_API])\n return do_delete(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncaptures the specified resource group as a template.", "response": "def export_template(access_token, subscription_id, rgname):\n '''Capture the specified resource group as a 
template\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n\n Returns:\n HTTP response. JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourcegroups/', rgname,\n '/exportTemplate',\n '?api-version=', RESOURCE_API])\n rg_body = {'options':'IncludeParameterDefaultValue', 'resources':['*']}\n body = json.dumps(rg_body)\n return do_post(endpoint, body, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting details about the named resource group.", "response": "def get_resource_group(access_token, subscription_id, rgname):\n '''Get details about the named resource group.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n rgname (str): Azure resource group name.\n\n Returns:\n HTTP response. JSON body.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/', rgname,\n '?api-version=', RESOURCE_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nlist the resource groups in a subscription.", "response": "def list_resource_groups(access_token, subscription_id):\n '''List the resource groups in a subscription.\n\n Args:\n access_token (str): A valid Azure authentication token.\n subscription_id (str): Azure subscription id.\n\n Returns:\n HTTP response.\n '''\n endpoint = ''.join([get_rm_endpoint(),\n '/subscriptions/', subscription_id,\n '/resourceGroups/',\n '?api-version=', RESOURCE_API])\n return do_get(endpoint, access_token)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nwrites your forwards methods here.", "response": "def forwards(self, orm):\n \"Write your forwards methods here.\"\n # Note: Don't use \"from 
appname.models import ModelName\". \n # Use orm.ModelName to refer to models in this application,\n # and orm['appname.ModelName'] for models in other applications.\n for coupon in orm['coupons.Coupon'].objects.all():\n if coupon.user is not None or coupon.redeemed_at is not None:\n orm['coupons.CouponUser'].objects.create(\n coupon=coupon,\n user=coupon.user,\n redeemed_at=coupon.redeemed_at\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get(self, requirement):\n filename = self.generate_filename(requirement)\n for backend in list(self.backends):\n try:\n pathname = backend.get(filename)\n if pathname is not None:\n return pathname\n except CacheBackendDisabledError as e:\n logger.debug(\"Disabling %s because it requires configuration: %s\", backend, e)\n self.backends.remove(backend)\n except Exception as e:\n logger.exception(\"Disabling %s because it failed: %s\", backend, e)\n self.backends.remove(backend)", "response": "Get a distribution archive from any of the available caches."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstoring a distribution archive in all of the available caches.", "response": "def put(self, requirement, handle):\n \"\"\"\n Store a distribution archive in all of the available caches.\n\n :param requirement: A :class:`.Requirement` object.\n :param handle: A file-like object that provides access to the\n distribution archive.\n \"\"\"\n filename = self.generate_filename(requirement)\n for backend in list(self.backends):\n handle.seek(0)\n try:\n backend.put(filename, handle)\n except CacheBackendDisabledError as e:\n logger.debug(\"Disabling %s because it requires configuration: %s\", backend, e)\n self.backends.remove(backend)\n except Exception as e:\n logger.exception(\"Disabling %s because it failed: %s\", backend, e)\n self.backends.remove(backend)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in 
Python 3, explain what it does\ndef generate_filename(self, requirement):\n return FILENAME_PATTERN % (self.config.cache_format_revision,\n requirement.name, requirement.version,\n get_python_version())", "response": "Generates a distribution archive filename for a package."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ninstall missing dependencies for the given requirement.", "response": "def install_dependencies(self, requirement):\n \"\"\"\n Install missing dependencies for the given requirement.\n\n :param requirement: A :class:`.Requirement` object.\n :returns: :data:`True` when missing system packages were installed,\n :data:`False` otherwise.\n :raises: :exc:`.DependencyInstallationRefused` when automatic\n installation is disabled or refused by the operator.\n :raises: :exc:`.DependencyInstallationFailed` when the installation\n of missing system packages fails.\n\n If `pip-accel` fails to build a binary distribution, it will call this\n method as a last chance to install missing dependencies. 
If this\n function does not raise an exception, `pip-accel` will retry the build\n once.\n \"\"\"\n install_timer = Timer()\n missing_dependencies = self.find_missing_dependencies(requirement)\n if missing_dependencies:\n # Compose the command line for the install command.\n install_command = shlex.split(self.install_command) + missing_dependencies\n # Prepend `sudo' to the command line?\n if not WINDOWS and not is_root():\n # FIXME Ideally this should properly detect the presence of `sudo'.\n # Or maybe this should just be embedded in the *.ini files?\n install_command.insert(0, 'sudo')\n # Always suggest the installation command to the operator.\n logger.info(\"You seem to be missing %s: %s\",\n pluralize(len(missing_dependencies), \"dependency\", \"dependencies\"),\n concatenate(missing_dependencies))\n logger.info(\"You can install %s with this command: %s\",\n \"it\" if len(missing_dependencies) == 1 else \"them\", \" \".join(install_command))\n if self.config.auto_install is False:\n # Refuse automatic installation and don't prompt the operator when the configuration says no.\n self.installation_refused(requirement, missing_dependencies, \"automatic installation is disabled\")\n # Get the operator's permission to install the missing package(s).\n if self.config.auto_install:\n logger.info(\"Got permission to install %s (via auto_install option).\",\n pluralize(len(missing_dependencies), \"dependency\", \"dependencies\"))\n elif self.confirm_installation(requirement, missing_dependencies, install_command):\n logger.info(\"Got permission to install %s (via interactive prompt).\",\n pluralize(len(missing_dependencies), \"dependency\", \"dependencies\"))\n else:\n logger.error(\"Refused installation of missing %s!\",\n \"dependency\" if len(missing_dependencies) == 1 else \"dependencies\")\n self.installation_refused(requirement, missing_dependencies, \"manual installation was refused\")\n if subprocess.call(install_command) == 0:\n logger.info(\"Successfully 
installed %s in %s.\",\n pluralize(len(missing_dependencies), \"dependency\", \"dependencies\"),\n install_timer)\n return True\n else:\n logger.error(\"Failed to install %s.\",\n pluralize(len(missing_dependencies), \"dependency\", \"dependencies\"))\n msg = \"Failed to install %s required by Python package %s! (%s)\"\n raise DependencyInstallationFailed(msg % (pluralize(len(missing_dependencies),\n \"system package\", \"system packages\"),\n requirement.name,\n concatenate(missing_dependencies)))\n return False"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfinding missing dependencies of a Python package.", "response": "def find_missing_dependencies(self, requirement):\n \"\"\"\n Find missing dependencies of a Python package.\n\n :param requirement: A :class:`.Requirement` object.\n :returns: A list of strings with system package names.\n \"\"\"\n known_dependencies = self.find_known_dependencies(requirement)\n if known_dependencies:\n installed_packages = self.find_installed_packages()\n logger.debug(\"Checking for missing dependencies of %s ..\", requirement.name)\n missing_dependencies = sorted(set(known_dependencies).difference(installed_packages))\n if missing_dependencies:\n logger.debug(\"Found %s: %s\",\n pluralize(len(missing_dependencies), \"missing dependency\", \"missing dependencies\"),\n concatenate(missing_dependencies))\n else:\n logger.info(\"All known dependencies are already installed.\")\n return missing_dependencies"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfinding the known dependencies of a Python package.", "response": "def find_known_dependencies(self, requirement):\n \"\"\"\n Find the known dependencies of a Python package.\n\n :param requirement: A :class:`.Requirement` object.\n :returns: A list of strings with system package names.\n \"\"\"\n logger.info(\"Checking for known dependencies of %s ..\", requirement.name)\n known_dependencies = 
sorted(self.dependencies.get(requirement.name.lower(), []))\n if known_dependencies:\n logger.info(\"Found %s: %s\", pluralize(len(known_dependencies), \"known dependency\", \"known dependencies\"),\n concatenate(known_dependencies))\n else:\n logger.info(\"No known dependencies... Maybe you have a suggestion?\")\n return known_dependencies"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfinds the installed system packages.", "response": "def find_installed_packages(self):\n \"\"\"\n Find the installed system packages.\n\n :returns: A list of strings with system package names.\n :raises: :exc:`.SystemDependencyError` when the command to list the\n installed system packages fails.\n \"\"\"\n list_command = subprocess.Popen(self.list_command, shell=True, stdout=subprocess.PIPE)\n stdout, stderr = list_command.communicate()\n if list_command.returncode != 0:\n raise SystemDependencyError(\"The command to list the installed system packages failed! 
({command})\",\n command=self.list_command)\n installed_packages = sorted(stdout.decode().split())\n logger.debug(\"Found %i installed system package(s): %s\", len(installed_packages), installed_packages)\n return installed_packages"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nraises a DependencyInstallationRefused exception with a user friendly message.", "response": "def installation_refused(self, requirement, missing_dependencies, reason):\n \"\"\"\n Raise :exc:`.DependencyInstallationRefused` with a user friendly message.\n\n :param requirement: A :class:`.Requirement` object.\n :param missing_dependencies: A list of strings with missing dependencies.\n :param reason: The reason why installation was refused (a string).\n \"\"\"\n msg = \"Missing %s (%s) required by Python package %s (%s) but %s!\"\n raise DependencyInstallationRefused(\n msg % (pluralize(len(missing_dependencies), \"system package\", \"system packages\"),\n concatenate(missing_dependencies),\n requirement.name,\n requirement.version,\n reason)\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef confirm_installation(self, requirement, missing_dependencies, install_command):\n try:\n return prompt_for_confirmation(format(\n \"Do you want me to install %s %s?\",\n \"this\" if len(missing_dependencies) == 1 else \"these\",\n \"dependency\" if len(missing_dependencies) == 1 else \"dependencies\",\n ), default=True)\n except KeyboardInterrupt:\n # Control-C is a negative response but doesn't\n # otherwise interrupt the program flow.\n return False", "response": "Ask the operator s permission to install missing system packages."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef compact(text, **kw):\n return '\\n\\n'.join(' '.join(p.split()) for p in text.split('\\n\\n')).format(**kw)", "response": "Compact whitespace in a string 
and format any keyword arguments into a string."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef expand_path(pathname):\n # The following logic previously used regular expressions but that approach\n # turned out to be very error prone, hence the current contraption based on\n # direct string manipulation :-).\n home_directory = find_home_directory()\n separators = set([os.sep])\n if os.altsep is not None:\n separators.add(os.altsep)\n if len(pathname) >= 2 and pathname[0] == '~' and pathname[1] in separators:\n pathname = os.path.join(home_directory, pathname[2:])\n # Also expand environment variables.\n return parse_path(pathname)", "response": "Expand the home directory in a pathname based on the effective user id."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef find_home_directory():\n if WINDOWS:\n directory = os.environ.get('APPDATA')\n if not directory:\n directory = os.path.expanduser(r'~\\Application Data')\n else:\n # This module isn't available on Windows so we have to import it here.\n import pwd\n # Look up the home directory of the effective user id so we can\n # generate pathnames relative to the home directory.\n entry = pwd.getpwuid(os.getuid())\n directory = entry.pw_dir\n return directory", "response": "Look up the home directory of the effective user id."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef makedirs(path, mode=0o777):\n try:\n os.makedirs(path, mode)\n return True\n except OSError as e:\n if e.errno != errno.EEXIST:\n # We don't want to swallow errors other than EEXIST,\n # because we could be obscuring a real problem.\n raise\n return False", "response": "Create a directory if it doesn t already exist."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef same_directories(path1, path2):\n if all(os.path.isdir(p) 
for p in (path1, path2)):\n try:\n return os.path.samefile(path1, path2)\n except AttributeError:\n # On Windows and Python 2 os.path.samefile() is unavailable.\n return os.path.realpath(path1) == os.path.realpath(path2)\n else:\n return False", "response": "Check if two pathnames refer to the same directory."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncalculates the hexadecimal digest of one or more local files.", "response": "def hash_files(method, *files):\n \"\"\"\n Calculate the hexadecimal digest of one or more local files.\n\n :param method: The hash method (a string, given to :func:`hashlib.new()`).\n :param files: The pathname(s) of file(s) to hash (zero or more strings).\n :returns: The calculated hex digest (a string).\n \"\"\"\n context = hashlib.new(method)\n for filename in files:\n with open(filename, 'rb') as handle:\n while True:\n chunk = handle.read(4096)\n if not chunk:\n break\n context.update(chunk)\n return context.hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreplace a file in an atomic fashion when possible.", "response": "def replace_file(src, dst):\n \"\"\"\n Overwrite a file (in an atomic fashion when possible).\n\n :param src: The pathname of the source file (a string).\n :param dst: The pathname of the destination file (a string).\n \"\"\"\n # Try os.replace() which was introduced in Python 3.3\n # (this should work on POSIX as well as Windows systems).\n try:\n os.replace(src, dst)\n return\n except AttributeError:\n pass\n # Try os.rename() which is atomic on UNIX but refuses to overwrite existing\n # files on Windows.\n try:\n os.rename(src, dst)\n return\n except OSError as e:\n if e.errno != errno.EEXIST:\n raise\n # Finally we fall back to the dumb approach required only on Windows.\n # See https://bugs.python.org/issue8828 for a long winded discussion.\n os.remove(dst)\n os.rename(src, dst)"} {"SOURCE": "codesearchnet", "instruction": "Explain 
what the following Python 3 code does\ndef requirement_is_installed(expr):\n required_dist = next(parse_requirements(expr))\n try:\n installed_dist = get_distribution(required_dist.key)\n return installed_dist in required_dist\n except DistributionNotFound:\n return False", "response": "Checks whether a requirement is installed."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmatch a command line argument against a short and long option.", "response": "def match_option(argument, short_option, long_option):\n \"\"\"\n Match a command line argument against a short and long option.\n\n :param argument: The command line argument (a string).\n :param short_option: The short option (a string).\n :param long_option: The long option (a string).\n :returns: :data:`True` if the argument matches, :data:`False` otherwise.\n \"\"\"\n return short_option[1] in argument[1:] if is_short_option(argument) else argument == long_option"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks if a list of command line options contains an option with a value pair.", "response": "def match_option_with_value(arguments, option, value):\n \"\"\"\n Check if a list of command line options contains an option with a value.\n\n :param arguments: The command line arguments (a list of strings).\n :param option: The long option (a string).\n :param value: The expected value (a string).\n :returns: :data:`True` if the command line contains the option/value pair,\n :data:`False` otherwise.\n \"\"\"\n return ('%s=%s' % (option, value) in arguments or\n contains_sublist(arguments, [option, value]))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef contains_sublist(lst, sublst):\n n = len(sublst)\n return any((sublst == lst[i:i + n]) for i in range(len(lst) - n + 1))", "response": "Check if one list contains the items from another list."} {"SOURCE": "codesearchnet", 
"instruction": "Can you create a Python 3 function that\nwraps around np.fromfile to support any file-like object.", "response": "def fromfile(file, dtype, count, *args, **kwargs):\n \"\"\"Wrapper around np.fromfile to support any file-like object.\"\"\"\n try:\n return numpy.fromfile(file, dtype=dtype, count=count, *args, **kwargs)\n except (TypeError, IOError):\n return numpy.frombuffer(file.read(count * numpy.dtype(dtype).itemsize),\n dtype=dtype, count=count, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse(path, meta_data_only=False, compensate=False, channel_naming='$PnS',\n reformat_meta=False, data_set=0, dtype='float32', encoding=\"utf-8\"):\n \"\"\"Parse an fcs file at the location specified by the path.\n\n Parameters\n ----------\n path: str\n Path of .fcs file\n meta_data_only: bool\n If True, the parse_fcs only returns the meta_data (the TEXT segment of the FCS file)\n output_format: 'DataFrame' | 'ndarray'\n If set to 'DataFrame' the returned\n channel_naming: '$PnS' | '$PnN'\n Determines which meta data field is used for naming the channels.\n The default should be $PnS (even though it is not guaranteed to be unique)\n\n $PnN stands for the short name (guaranteed to be unique).\n Will look like 'FL1-H'\n $PnS stands for the actual name (not guaranteed to be unique).\n Will look like 'FSC-H' (Forward scatter)\n\n The chosen field will be used to populate self.channels\n\n Note: These names are not flipped in the implementation.\n It looks like they were swapped for some reason in the official FCS specification.\n reformat_meta: bool\n If true, the meta data is reformatted with the channel information organized\n into a DataFrame and moved into the '_channels_' key\n data_set: int\n Index of retrieved data set in the fcs file.\n This value specifies the data set being retrieved from an fcs file with multiple data sets.\n dtype: str | None\n If provided, will force 
convert all data into this dtype.\n This is set by default to auto-convert to float32 to deal with cases in which the original\n data has been stored using a smaller data type (e.g., unit8). This modifies the original\n data, but should make follow up analysis safer in basically all cases.\n encoding: str\n Provide encoding type of the text section.\n\n\n Returns\n -------\n if meta_data_only is True:\n meta_data: dict\n Contains a dictionary with the meta data information\n Otherwise:\n a 2-tuple with\n the first element the meta_data (dictionary)\n the second element the data (in either DataFrame or numpy format)\n\n Examples\n --------\n fname = '../tests/data/EY_2013-05-03_EID_214_PID_1120_Piperacillin_Well_B7.001.fcs'\n meta = parse_fcs(fname, meta_data_only=True)\n meta, data_pandas = parse_fcs(fname, meta_data_only=False)\n \"\"\"\n if compensate:\n raise ParserFeatureNotImplementedError(u'Compensation has not been implemented yet.')\n\n read_data = not meta_data_only\n\n fcs_parser = FCSParser(path, read_data=read_data, channel_naming=channel_naming,\n data_set=data_set, encoding=encoding)\n\n if reformat_meta:\n fcs_parser.reformat_meta()\n\n meta = fcs_parser.annotation\n\n if meta_data_only:\n return meta\n else: # Then include both meta and dataframe.\n df = fcs_parser.dataframe\n df = df.astype(dtype) if dtype else df\n return meta, df", "response": "Parse a FCS file at the specified location."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nload the requested parts of the file into memory.", "response": "def load_file(self, file_handle, data_set=0, read_data=True):\n \"\"\"Load the requested parts of the file into memory.\"\"\"\n file_handle.seek(0, 2)\n self._file_size = file_handle.tell()\n file_handle.seek(0)\n data_segments = 0\n # seek the correct data set in fcs\n nextdata_offset = 0\n while data_segments <= data_set:\n self.read_header(file_handle, nextdata_offset)\n self.read_text(file_handle)\n if 
'$NEXTDATA' in self.annotation:\n data_segments += 1\n nextdata_offset = self.annotation['$NEXTDATA']\n file_handle.seek(nextdata_offset)\n if nextdata_offset == 0 and data_segments < data_set:\n warnings.warn(\"File does not contain the number of data sets.\")\n break\n else:\n if data_segments != 0:\n warnings.warn('File does not contain $NEXTDATA information.')\n break\n if read_data:\n self.read_data(file_handle)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef from_data(cls, data):\n obj = cls()\n with contextlib.closing(BytesIO(data)) as file_handle:\n obj.load_file(file_handle)\n return obj", "response": "Load an FCS file from a bytes-like object."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreading the header of the FCS file.", "response": "def read_header(self, file_handle, nextdata_offset=0):\n \"\"\"Read the header of the FCS file.\n\n The header specifies where the annotation, data and analysis are located inside the binary\n file.\n\n Args:\n file_handle: buffer containing FCS file.\n nextdata_offset: byte offset of a set header from file start specified by $NEXTDATA\n \"\"\"\n header = {'FCS format': file_handle.read(6)}\n\n file_handle.read(4) # 4 space characters after the FCS format\n\n for field in ('text start', 'text end', 'data start', 'data end', 'analysis start',\n 'analysis end'):\n s = file_handle.read(8)\n try:\n field_value = int(s)\n except ValueError:\n field_value = 0\n header[field] = field_value + nextdata_offset\n\n # Checking that the location of the TEXT segment is specified\n for k in ('text start', 'text end'):\n if header[k] == 0:\n raise ValueError(u'The FCS file \"{}\" seems corrupted. (Parser cannot locate '\n u'information about the \"{}\" segment.)'.format(self.path, k))\n elif header[k] > self._file_size:\n raise ValueError(u'The FCS file \"{}\" is corrupted. 
\"{}\" segment '\n u'is larger than file size'.format(self.path, k))\n else:\n # All OK\n pass\n\n self._data_start = header['data start']\n self._data_end = header['data end']\n\n if header['analysis end'] - header['analysis start'] != 0:\n warnings.warn(u'There appears to be some information in the ANALYSIS segment of file '\n u'{0}. However, it might not be read correctly.'.format(self.path))\n\n self.annotation['__header__'] = header"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _extract_text_dict(self, raw_text):\n delimiter = raw_text[0]\n\n if raw_text[-1] != delimiter:\n raw_text = raw_text.strip()\n if raw_text[-1] != delimiter:\n msg = (u'The first two characters were:\\n {}. The last two characters were: {}\\n'\n u'Parser expects the same delimiter character in beginning '\n u'and end of TEXT segment'.format(raw_text[:2], raw_text[-2:]))\n raise ParserFeatureNotImplementedError(msg)\n\n # The delimiter is escaped by being repeated (two consecutive delimiters). 
This code splits\n # on the escaped delimiter first, so there is no need for extra logic to distinguish\n # actual delimiters from escaped delimiters.\n nested_split_list = [x.split(delimiter) for x in raw_text[1:-1].split(delimiter * 2)]\n # 1:-1 above removes the first and last characters which are reserved for the delimiter.\n\n # Flatten the nested list to a list of elements (alternating keys and values)\n raw_text_elements = nested_split_list[0]\n for partial_element_list in nested_split_list[1:]:\n # Rejoin two parts of an element that was split by an escaped delimiter (the end and\n # start of two successive sub-lists in nested_split_list)\n raw_text_elements[-1] += (delimiter + partial_element_list[0])\n raw_text_elements.extend(partial_element_list[1:])\n\n keys, values = raw_text_elements[0::2], raw_text_elements[1::2]\n return dict(zip(keys, values))", "response": "Parse the TEXT segment of the FCS file into a python dictionary."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread the TEXT segment of the FCS file and returns the entry in the object.", "response": "def read_text(self, file_handle):\n \"\"\"Parse the TEXT segment of the FCS file.\n\n The TEXT segment contains meta data associated with the FCS file.\n Converting all meta keywords to lower case.\n \"\"\"\n header = self.annotation['__header__'] # For convenience\n\n #####\n # Read in the TEXT segment of the FCS file\n # There are some differences in how the\n file_handle.seek(header['text start'], 0)\n raw_text = file_handle.read(header['text end'] - header['text start'] + 1)\n try:\n raw_text = raw_text.decode(self._encoding)\n except UnicodeDecodeError as e:\n # Catching the exception and logging it in this way kills the traceback, but\n # we can worry about this later.\n logger.warning(u'Encountered an illegal utf-8 byte in the header.\\n Illegal utf-8 '\n u'characters will be ignored.\\n{}'.format(e))\n raw_text = raw_text.decode(self._encoding, 
errors='ignore')\n\n text = self._extract_text_dict(raw_text)\n\n ##\n # Extract channel names and convert some of the channel properties\n # and other fields into numeric data types (from string)\n # Note: do not use regular expressions for manipulations here.\n # Regular expressions are too heavy in terms of computation time.\n pars = int(text['$PAR'])\n if '$P0B' in text.keys(): # Checking whether channel number count starts from 0 or from 1\n self.channel_numbers = range(0, pars) # Channel number count starts from 0\n else:\n self.channel_numbers = range(1, pars + 1) # Channel numbers start from 1\n\n # Extract parameter names\n try:\n names_n = tuple([text['$P{0}N'.format(i)] for i in self.channel_numbers])\n except KeyError:\n names_n = []\n\n try:\n names_s = tuple([text['$P{0}S'.format(i)] for i in self.channel_numbers])\n except KeyError:\n names_s = []\n\n self.channel_names_s = names_s\n self.channel_names_n = names_n\n\n # Convert some of the fields into integer values\n keys_encoding_bits = ['$P{0}B'.format(i) for i in self.channel_numbers]\n\n add_keys_to_convert_to_int = ['$NEXTDATA', '$PAR', '$TOT']\n\n keys_to_convert_to_int = keys_encoding_bits + add_keys_to_convert_to_int\n\n for key in keys_to_convert_to_int:\n value = text[key]\n text[key] = int(value)\n\n self.annotation.update(text)\n\n # Update data start segments if needed\n\n if self._data_start == 0:\n self._data_start = int(text['$BEGINDATA'])\n if self._data_end == 0:\n self._data_end = int(text['$ENDDATA'])"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read_analysis(self, file_handle):\n start = self.annotation['__header__']['analysis start']\n end = self.annotation['__header__']['analysis end']\n if start != 0 and end != 0:\n file_handle.seek(start, 0)\n self._analysis = file_handle.read(end - start)\n else:\n self._analysis = None", "response": "Reads the ANALYSIS segment of the FCS file and stores it in self. 
_analysis."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _verify_assumptions(self):\n text = self.annotation\n keys = text.keys()\n\n if '$MODE' not in text or text['$MODE'] != 'L':\n raise ParserFeatureNotImplementedError(u'Mode not implemented')\n\n if '$P0B' in keys:\n raise ParserFeatureNotImplementedError(u'Not expecting a parameter starting at 0')\n\n if text['$BYTEORD'] not in ['1,2,3,4', '4,3,2,1', '1,2', '2,1']:\n raise ParserFeatureNotImplementedError(u'$BYTEORD {} '\n u'not implemented'.format(text['$BYTEORD']))", "response": "Verify that all assumptions made by the parser hold."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_channel_names(self):\n names_s, names_n = self.channel_names_s, self.channel_names_n\n\n # Figure out which channel names to use\n if self._channel_naming == '$PnS':\n channel_names, channel_names_alternate = names_s, names_n\n else:\n channel_names, channel_names_alternate = names_n, names_s\n\n if len(channel_names) == 0:\n channel_names = channel_names_alternate\n\n if len(set(channel_names)) != len(channel_names):\n msg = (u'The default channel names (defined by the {} '\n u'parameter in the FCS file) were not unique. To avoid '\n u'problems in downstream analysis, the channel names '\n u'have been switched to the alternate channel names '\n u'defined in the FCS file. To avoid '\n u'seeing this warning message, explicitly instruct '\n u'the FCS parser to use the alternate channel names by '\n u'specifying the channel_naming parameter.')\n msg = msg.format(self._channel_naming)\n warnings.warn(msg)\n channel_names = channel_names_alternate\n\n return channel_names", "response": "Get list of channel names. 
Raises a warning if the names are not unique."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreading the DATA segment of the FCS file.", "response": "def read_data(self, file_handle):\n \"\"\"Read the DATA segment of the FCS file.\"\"\"\n self._verify_assumptions()\n text = self.annotation\n\n if (self._data_start > self._file_size) or (self._data_end > self._file_size):\n raise ValueError(u'The FCS file \"{}\" is corrupted. Part of the data segment '\n u'is missing.'.format(self.path))\n\n num_events = text['$TOT'] # Number of events recorded\n num_pars = text['$PAR'] # Number of parameters recorded\n\n if text['$BYTEORD'].strip() == '1,2,3,4' or text['$BYTEORD'].strip() == '1,2':\n endian = '<'\n elif text['$BYTEORD'].strip() == '4,3,2,1' or text['$BYTEORD'].strip() == '2,1':\n endian = '>'\n else:\n msg = 'Unrecognized byte order ({})'.format(text['$BYTEORD'])\n raise ParserFeatureNotImplementedError(msg)\n\n # dictionary to convert from FCS format to numpy convention\n conversion_dict = {'F': 'f', 'D': 'f', 'I': 'u'}\n\n if text['$DATATYPE'] not in conversion_dict.keys():\n raise ParserFeatureNotImplementedError('$DATATYPE = {0} is not yet '\n 'supported.'.format(text['$DATATYPE']))\n\n # Calculations to figure out data types of each of parameters\n # $PnB specifies the number of bits reserved for a measurement of parameter n\n bytes_per_par_list = [int(text['$P{0}B'.format(i)] / 8) for i in self.channel_numbers]\n\n par_numeric_type_list = [\n '{endian}{type}{size}'.format(endian=endian,\n type=conversion_dict[text['$DATATYPE']],\n size=bytes_per_par)\n for bytes_per_par in bytes_per_par_list\n ]\n\n # Parser for list mode. 
Here, the order is a list of tuples.\n # Each tuple stores event related information\n file_handle.seek(self._data_start, 0) # Go to the part of the file where data starts\n\n ##\n # Read in the data\n if len(set(par_numeric_type_list)) > 1:\n # This branch deals with files in which the different columns (channels)\n # were encoded with different types; i.e., a mixed data format.\n dtype = ','.join(par_numeric_type_list)\n data = fromfile(file_handle, dtype, num_events)\n\n # The dtypes in the numpy array `data` above are associated with both a name\n # and a type; i.e.,\n # https://docs.scipy.org/doc/numpy/reference/generated/numpy.recarray.html\n # The names are assigned automatically.\n # In order for this code to work correctly with the pandas DataFrame constructor,\n # we convert the *names* of the dtypes to the channel names we want to use.\n\n names = self.get_channel_names()\n\n if six.PY2:\n encoded_names = [name.encode('ascii', errors='replace') for name in names]\n else: # Assume that python3 or older then.\n encoded_names = [name for name in names]\n\n data.dtype.names = tuple(encoded_names)\n else:\n # values saved in a single data format\n dtype = par_numeric_type_list[0]\n data = fromfile(file_handle, dtype, num_events * num_pars)\n data = data.reshape((num_events, num_pars))\n ##\n # Convert to native byte order\n # This is needed for working with pandas data structures\n native_code = '<' if (sys.byteorder == 'little') else '>'\n if endian != native_code:\n # swaps the actual bytes and also the endianness\n data = data.byteswap().newbyteorder()\n\n self._data = data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef data(self):\n if self._data is None:\n with open(self.path, 'rb') as f:\n self.read_data(f)\n return self._data", "response": "Get parsed DATA segment of the FCS file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef 
analysis(self):\n if self._analysis is None:\n with open(self.path, 'rb') as f:\n self.read_analysis(f)\n return self._analysis", "response": "Get ANALYSIS segment of the FCS file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef reformat_meta(self):\n meta = self.annotation # For shorthand (passed by reference)\n channel_properties = []\n\n for key, value in meta.items():\n if key[:3] == '$P1':\n if key[3] not in string.digits:\n channel_properties.append(key[3:])\n\n # Capture all the channel information in a list of lists -- used to create a data frame\n channel_matrix = [\n [meta.get('$P{0}{1}'.format(ch, p)) for p in channel_properties]\n for ch in self.channel_numbers\n ]\n\n # Remove this information from the dictionary\n for ch in self.channel_numbers:\n for p in channel_properties:\n key = '$P{0}{1}'.format(ch, p)\n if key in meta:\n meta.pop(key)\n\n num_channels = meta['$PAR']\n column_names = ['$Pn{0}'.format(p) for p in channel_properties]\n\n df = pd.DataFrame(channel_matrix, columns=column_names,\n index=(1 + numpy.arange(num_channels)))\n\n if '$PnE' in column_names:\n df['$PnE'] = df['$PnE'].apply(lambda x: x.split(','))\n\n df.index.name = 'Channel Number'\n meta['_channels_'] = df\n meta['_channel_names_'] = self.get_channel_names()", "response": "Collect the meta data information in a more user friendly format."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_binary_dist(self, requirement):\n cache_file = self.cache.get(requirement)\n if cache_file:\n if self.needs_invalidation(requirement, cache_file):\n logger.info(\"Invalidating old %s binary (source has changed) ..\", requirement)\n cache_file = None\n else:\n logger.debug(\"%s hasn't been cached yet, doing so now.\", requirement)\n if not cache_file:\n # Build the binary distribution.\n try:\n raw_file = self.build_binary_dist(requirement)\n except BuildFailed:\n 
logger.warning(\"Build of %s failed, checking for missing dependencies ..\", requirement)\n if self.system_package_manager.install_dependencies(requirement):\n raw_file = self.build_binary_dist(requirement)\n else:\n raise\n # Transform the binary distribution archive into a form that we can re-use.\n fd, transformed_file = tempfile.mkstemp(prefix='pip-accel-bdist-', suffix='.tar.gz')\n try:\n archive = tarfile.open(transformed_file, 'w:gz')\n try:\n for member, from_handle in self.transform_binary_dist(raw_file):\n archive.addfile(member, from_handle)\n finally:\n archive.close()\n # Push the binary distribution archive to all available backends.\n with open(transformed_file, 'rb') as handle:\n self.cache.put(requirement, handle)\n finally:\n # Close file descriptor before removing the temporary file.\n # Without closing Windows is complaining that the file cannot\n # be removed because it is used by another process.\n os.close(fd)\n # Cleanup the temporary file.\n os.remove(transformed_file)\n # Get the absolute pathname of the file in the local cache.\n cache_file = self.cache.get(requirement)\n # Enable checksum based cache invalidation.\n self.persist_checksum(requirement, cache_file)\n archive = tarfile.open(cache_file, 'r:gz')\n try:\n for member in archive.getmembers():\n yield member, archive.extractfile(member.name)\n finally:\n archive.close()", "response": "Gets or creates a binary distribution archive for the given requirement."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef needs_invalidation(self, requirement, cache_file):\n if self.config.trust_mod_times:\n return requirement.last_modified > os.path.getmtime(cache_file)\n else:\n checksum = self.recall_checksum(cache_file)\n return checksum and checksum != requirement.checksum", "response": "Checks whether a cached binary distribution needs to be invalidated."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function 
in Python 3 that\ngets the checksum of the input used to generate a binary distribution archive.", "response": "def recall_checksum(self, cache_file):\n \"\"\"\n Get the checksum of the input used to generate a binary distribution archive.\n\n :param cache_file: The pathname of the binary distribution archive (a string).\n :returns: The checksum (a string) or :data:`None` (when no checksum is available).\n \"\"\"\n # EAFP instead of LBYL because of concurrency between pip-accel\n # processes (https://docs.python.org/2/glossary.html#term-lbyl).\n checksum_file = '%s.txt' % cache_file\n try:\n with open(checksum_file) as handle:\n contents = handle.read()\n return contents.strip()\n except IOError as e:\n if e.errno == errno.ENOENT:\n # Gracefully handle missing checksum files.\n return None\n else:\n # Don't swallow exceptions we don't expect!\n raise"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npersisting the checksum of the input used to generate a binary distribution.", "response": "def persist_checksum(self, requirement, cache_file):\n \"\"\"\n Persist the checksum of the input used to generate a binary distribution.\n\n :param requirement: A :class:`.Requirement` object.\n :param cache_file: The pathname of a cached binary distribution (a string).\n\n .. 
note:: The checksum is only calculated and persisted when\n :attr:`~.Config.trust_mod_times` is :data:`False`.\n \"\"\"\n if not self.config.trust_mod_times:\n checksum_file = '%s.txt' % cache_file\n with AtomicReplace(checksum_file) as temporary_file:\n with open(temporary_file, 'w') as handle:\n handle.write('%s\\n' % requirement.checksum)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nbuild a binary distribution archive from an unpacked source distribution.", "response": "def build_binary_dist(self, requirement):\n \"\"\"\n Build a binary distribution archive from an unpacked source distribution.\n\n :param requirement: A :class:`.Requirement` object.\n :returns: The pathname of a binary distribution archive (a string).\n :raises: :exc:`.BinaryDistributionError` when the original command\n and the fall back both fail to produce a binary distribution\n archive.\n\n This method uses the following command to build binary distributions:\n\n .. code-block:: sh\n\n $ python setup.py bdist_dumb --format=tar\n\n This command can fail for two main reasons:\n\n 1. The package is missing binary dependencies.\n 2. The ``setup.py`` script doesn't (properly) implement ``bdist_dumb``\n binary distribution format support.\n\n The first case is dealt with in :func:`get_binary_dist()`. To deal\n with the second case this method falls back to the following command:\n\n .. code-block:: sh\n\n $ python setup.py bdist\n\n This fall back is almost never needed, but there are Python packages\n out there which require this fall back (this method was added because\n the installation of ``Paver==1.2.3`` failed, see `issue 37`_ for\n details about that).\n\n .. 
_issue 37: https://github.com/paylogic/pip-accel/issues/37\n \"\"\"\n try:\n return self.build_binary_dist_helper(requirement, ['bdist_dumb', '--format=tar'])\n except (BuildFailed, NoBuildOutput):\n logger.warning(\"Build of %s failed, falling back to alternative method ..\", requirement)\n return self.build_binary_dist_helper(requirement, ['bdist', '--formats=gztar'])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nbuild a binary distribution from an unpacked source distribution.", "response": "def build_binary_dist_helper(self, requirement, setup_command):\n \"\"\"\n Convert an unpacked source distribution to a binary distribution.\n\n :param requirement: A :class:`.Requirement` object.\n :param setup_command: A list of strings with the arguments to\n ``setup.py``.\n :returns: The pathname of the resulting binary distribution (a string).\n :raises: :exc:`.BuildFailed` when the build reports an error (e.g.\n because of missing binary dependencies like system\n libraries).\n :raises: :exc:`.NoBuildOutput` when the build does not produce the\n expected binary distribution archive.\n \"\"\"\n build_timer = Timer()\n # Make sure the source distribution contains a setup script.\n setup_script = os.path.join(requirement.source_directory, 'setup.py')\n if not os.path.isfile(setup_script):\n msg = \"Directory %s (%s %s) doesn't contain a source distribution!\"\n raise InvalidSourceDistribution(msg % (requirement.source_directory, requirement.name, requirement.version))\n # Let the user know what's going on.\n build_text = \"Building %s binary distribution\" % requirement\n logger.info(\"%s ..\", build_text)\n # Cleanup previously generated distributions.\n dist_directory = os.path.join(requirement.source_directory, 'dist')\n if os.path.isdir(dist_directory):\n logger.debug(\"Cleaning up previously generated distributions in %s ..\", dist_directory)\n shutil.rmtree(dist_directory)\n # Let the user know (approximately) which command is 
being executed\n # (I don't think it's necessary to show them the nasty details :-).\n logger.debug(\"Executing external command: %s\",\n ' '.join(map(pipes.quote, [self.config.python_executable, 'setup.py'] + setup_command)))\n # Compose the command line needed to build the binary distribution.\n # This nasty command line forces the use of setuptools (instead of\n # distutils) just like pip does. This will cause the `*.egg-info'\n # metadata to be written to a directory instead of a file, which\n # (amongst other things) enables tracking of installed files.\n command_line = [\n self.config.python_executable, '-c',\n ';'.join([\n 'import setuptools',\n '__file__=%r' % setup_script,\n r\"exec(compile(open(__file__).read().replace('\\r\\n', '\\n'), __file__, 'exec'))\",\n ])\n ] + setup_command\n # Redirect all output of the build to a temporary file.\n fd, temporary_file = tempfile.mkstemp()\n try:\n # Start the build.\n build = subprocess.Popen(command_line, cwd=requirement.source_directory, stdout=fd, stderr=fd)\n # Wait for the build to finish and provide feedback to the user in the mean time.\n spinner = Spinner(label=build_text, timer=build_timer)\n while build.poll() is None:\n spinner.step()\n # Don't tax the CPU too much.\n time.sleep(0.2)\n spinner.clear()\n # Make sure the build succeeded and produced a binary distribution archive.\n try:\n # If the build reported an error we'll try to provide the user with\n # some hints about what went wrong.\n if build.returncode != 0:\n raise BuildFailed(\"Failed to build {name} ({version}) binary distribution!\",\n name=requirement.name, version=requirement.version)\n # Check if the build created the `dist' directory (the os.listdir()\n # call below will raise an exception if we don't check for this).\n if not os.path.isdir(dist_directory):\n raise NoBuildOutput(\"Build of {name} ({version}) did not produce a binary distribution archive!\",\n name=requirement.name, version=requirement.version)\n # Check if we can find 
the binary distribution archive.\n filenames = os.listdir(dist_directory)\n if len(filenames) != 1:\n variables = dict(name=requirement.name,\n version=requirement.version,\n filenames=concatenate(sorted(filenames)))\n raise NoBuildOutput(\"\"\"\n Build of {name} ({version}) produced more than one\n distribution archive! (matches: {filenames})\n \"\"\", **variables)\n except Exception as e:\n # Decorate the exception with the output of the failed build.\n with open(temporary_file) as handle:\n build_output = handle.read()\n enhanced_message = compact(\"\"\"\n {message}\n\n Please check the build output because it will probably\n provide a hint about what went wrong.\n\n Build output:\n\n {output}\n \"\"\", message=e.args[0], output=build_output.strip())\n e.args = (enhanced_message,)\n raise\n logger.info(\"Finished building %s in %s.\", requirement.name, build_timer)\n return os.path.join(dist_directory, filenames[0])\n finally:\n # Close file descriptor before removing the temporary file.\n # Without closing Windows is complaining that the file cannot\n # be removed because it is used by another process.\n os.close(fd)\n os.unlink(temporary_file)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntransforming a binary distribution archive into a form that can be cached for future use.", "response": "def transform_binary_dist(self, archive_path):\n \"\"\"\n Transform binary distributions into a form that can be cached for future use.\n\n :param archive_path: The pathname of the original binary distribution archive.\n :returns: An iterable of tuples with two values each:\n\n 1. A :class:`tarfile.TarInfo` object.\n 2. A file-like object.\n\n This method transforms a binary distribution archive created by\n :func:`build_binary_dist()` into a form that can be cached for future\n use. 
This comes down to making the pathnames inside the archive\n relative to the `prefix` that the binary distribution was built for.\n \"\"\"\n # Copy the tar archive file by file so we can rewrite the pathnames.\n logger.debug(\"Transforming binary distribution: %s.\", archive_path)\n archive = tarfile.open(archive_path, 'r')\n for member in archive.getmembers():\n # Some source distribution archives on PyPI that are distributed as ZIP\n # archives contain really weird permissions: the world readable bit is\n # missing. I've encountered this with the httplib2 (0.9) and\n # google-api-python-client (1.2) packages. I assume this is a bug of\n # some kind in the packaging process on \"their\" side.\n if member.mode & stat.S_IXUSR:\n # If the owner has execute permissions we'll give everyone read and\n # execute permissions (only the owner gets write permissions).\n member.mode = 0o755\n else:\n # If the owner doesn't have execute permissions we'll give everyone\n # read permissions (only the owner gets write permissions).\n member.mode = 0o644\n # In my testing the `dumb' tar files created with the `python\n # setup.py bdist' and `python setup.py bdist_dumb' commands contain\n # pathnames that are relative to `/' in one way or another:\n #\n # - In almost all cases the pathnames look like this:\n #\n # ./home/peter/.virtualenvs/pip-accel/lib/python2.7/site-packages/pip_accel/__init__.py\n #\n # - After working on pip-accel for several years I encountered\n # a pathname like this (Python 2.6 on Mac OS X 10.10.5):\n #\n # Users/peter/.virtualenvs/pip-accel/lib/python2.6/site-packages/pip_accel/__init__.py\n #\n # Both of the above pathnames are relative to `/' but in different\n # ways :-). 
The following normpath(join('/', ...))) pathname\n # manipulation logic is intended to handle both cases.\n original_pathname = member.name\n absolute_pathname = os.path.normpath(os.path.join('/', original_pathname))\n if member.isdev():\n logger.warn(\"Ignoring device file: %s.\", absolute_pathname)\n elif not member.isdir():\n modified_pathname = os.path.relpath(absolute_pathname, self.config.install_prefix)\n if os.path.isabs(modified_pathname):\n logger.warn(\"Failed to transform pathname in binary distribution\"\n \" to relative path! (original: %r, modified: %r)\",\n original_pathname, modified_pathname)\n else:\n # Rewrite /usr/local to /usr (same goes for all prefixes of course).\n modified_pathname = re.sub('^local/', '', modified_pathname)\n # Rewrite /dist-packages/ to /site-packages/. For details see\n # https://wiki.debian.org/Python#Deviations_from_upstream.\n if self.config.on_debian:\n modified_pathname = modified_pathname.replace('/dist-packages/', '/site-packages/')\n # Enable operators to debug the transformation process.\n logger.debug(\"Transformed %r -> %r.\", original_pathname, modified_pathname)\n # Get the file data from the input archive.\n handle = archive.extractfile(original_pathname)\n # Yield the modified metadata and a handle to the data.\n member.name = modified_pathname\n yield member, handle\n archive.close()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ninstall a binary distribution into the given prefix. :param members: An iterable of tuples with two values each: 1. A :class:`tarfile.TarInfo` object. 2. A file-like object. :param prefix: The \"prefix\" under which the requirements should be installed. This will be a pathname like ``/usr``, ``/usr/local`` or the pathname of a virtual environment. Defaults to :attr:`.Config.install_prefix`. :param python: The pathname of the Python executable to use in the shebang line of all executable Python scripts inside the binary distribution. 
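The `normpath(os.path.join('/', ...))` anchoring described in the comments above handles both flavors of `/`-relative member names before the path is re-expressed relative to the installation prefix. A standalone sketch (the helper name is hypothetical; `transform_binary_dist()` performs these steps inline):

```python
import os

def relativize_member_name(member_name, install_prefix):
    """Anchor a tar member name at '/' and re-express it relative to the
    installation prefix; returns None when os.path.relpath cannot produce
    a relative form (e.g. across Windows drives)."""
    absolute = os.path.normpath(os.path.join('/', member_name))
    relative = os.path.relpath(absolute, install_prefix)
    return None if os.path.isabs(relative) else relative
```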
Defaults to :attr:`.Config.python_executable`. :param virtualenv_compatible: Whether to enable workarounds to make the resulting filenames compatible with virtual environments (defaults to :data:`True`). :param track_installed_files: If this is :data:`True` (not the default for this method because of backwards compatibility) pip-accel will create ``installed-files.txt`` as required by pip to properly uninstall packages. This method installs a binary distribution created by :class:`build_binary_dist()` into the given prefix (a directory like ``/usr``, ``/usr/local`` or a virtual environment).", "response": "def install_binary_dist(self, members, virtualenv_compatible=True, prefix=None,\n python=None, track_installed_files=False):\n \"\"\"\n Install a binary distribution into the given prefix.\n\n :param members: An iterable of tuples with two values each:\n\n 1. A :class:`tarfile.TarInfo` object.\n 2. A file-like object.\n :param prefix: The \"prefix\" under which the requirements should be\n installed. This will be a pathname like ``/usr``,\n ``/usr/local`` or the pathname of a virtual environment.\n Defaults to :attr:`.Config.install_prefix`.\n :param python: The pathname of the Python executable to use in the shebang\n line of all executable Python scripts inside the binary\n distribution. Defaults to :attr:`.Config.python_executable`.\n :param virtualenv_compatible: Whether to enable workarounds to make the\n resulting filenames compatible with\n virtual environments (defaults to\n :data:`True`).\n :param track_installed_files: If this is :data:`True` (not the default for\n this method because of backwards\n compatibility) pip-accel will create\n ``installed-files.txt`` as required by\n pip to properly uninstall packages.\n\n This method installs a binary distribution created by\n :class:`build_binary_dist()` into the given prefix (a directory like\n ``/usr``, ``/usr/local`` or a virtual environment).\n \"\"\"\n # TODO This is quite slow for modules like Django. 
Speed it up! Two choices:\n # 1. Run the external tar program to unpack the archive. This will\n # slightly complicate the fixing up of hashbangs.\n # 2. Using links? The plan: We can maintain a \"seed\" environment under\n # $PIP_ACCEL_CACHE and use symbolic and/or hard links to populate other\n # places based on the \"seed\" environment.\n module_search_path = set(map(os.path.normpath, sys.path))\n prefix = os.path.normpath(prefix or self.config.install_prefix)\n python = os.path.normpath(python or self.config.python_executable)\n installed_files = []\n for member, from_handle in members:\n pathname = member.name\n if virtualenv_compatible:\n # Some binary distributions include C header files (see for example\n # the greenlet package) however the subdirectory of include/ in a\n # virtual environment is a symbolic link to a subdirectory of\n # /usr/include/ so we should never try to install C header files\n # inside the directory pointed to by the symbolic link. Instead we\n # implement the same workaround that pip uses to avoid this\n # problem.\n pathname = re.sub('^include/', 'include/site/', pathname)\n if self.config.on_debian and '/site-packages/' in pathname:\n # On Debian based system wide Python installs the /site-packages/\n # directory is not in Python's module search path while\n # /dist-packages/ is. 
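The `include/` workaround above is worth showing on its own: because `include/` inside a virtual environment is a symbolic link into `/usr/include/`, C header files are redirected into `include/site/`, the same workaround pip uses. A sketch (hypothetical helper name):

```python
import re

def adapt_include_path(pathname):
    """Redirect C header files into include/site/ so they are never
    installed through the include/ symlink of a virtual environment."""
    return re.sub('^include/', 'include/site/', pathname)
```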
We try to be compatible with this.\n match = re.match('^(.+?)/site-packages', pathname)\n if match:\n site_packages = os.path.normpath(os.path.join(prefix, match.group(0)))\n dist_packages = os.path.normpath(os.path.join(prefix, match.group(1), 'dist-packages'))\n if dist_packages in module_search_path and site_packages not in module_search_path:\n pathname = pathname.replace('/site-packages/', '/dist-packages/')\n pathname = os.path.join(prefix, pathname)\n if track_installed_files:\n # Track the installed file's absolute pathname.\n installed_files.append(pathname)\n directory = os.path.dirname(pathname)\n if not os.path.isdir(directory):\n logger.debug(\"Creating directory: %s ..\", directory)\n makedirs(directory)\n logger.debug(\"Creating file: %s ..\", pathname)\n with open(pathname, 'wb') as to_handle:\n contents = from_handle.read()\n if contents.startswith(b'#!/'):\n contents = self.fix_hashbang(contents, python)\n to_handle.write(contents)\n os.chmod(pathname, member.mode)\n if track_installed_files:\n self.update_installed_files(installed_files)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef fix_hashbang(self, contents, python):\n lines = contents.splitlines()\n if lines:\n hashbang = lines[0]\n # Get the base name of the command in the hashbang.\n executable = os.path.basename(hashbang)\n # Deal with hashbangs like `#!/usr/bin/env python'.\n executable = re.sub(b'^env ', b'', executable)\n # Only rewrite hashbangs that actually involve Python.\n if re.match(b'^python(\\\\d+(\\\\.\\\\d+)*)?$', executable):\n lines[0] = b'#!' 
+ python.encode('ascii')\n logger.debug(\"Rewriting hashbang %r to %r!\", hashbang, lines[0])\n contents = b'\\n'.join(lines)\n return contents", "response": "Rewrite hashbangs_ to use the correct Python executable."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates the list of installed files by pip.", "response": "def update_installed_files(self, installed_files):\n \"\"\"\n Track the files installed by a package so pip knows how to remove the package.\n\n This method is used by :func:`install_binary_dist()` (which collects\n the list of installed files for :func:`update_installed_files()`).\n\n :param installed_files: A list of absolute pathnames (strings) with the\n files that were just installed.\n \"\"\"\n # Find the *.egg-info directory where installed-files.txt should be created.\n pkg_info_files = [fn for fn in installed_files if fnmatch.fnmatch(fn, '*.egg-info/PKG-INFO')]\n # I'm not (yet) sure how reliable the above logic is, so for now\n # I'll err on the side of caution and only act when the results\n # seem to be reliable.\n if len(pkg_info_files) != 1:\n logger.warning(\"Not tracking installed files (couldn't reliably determine *.egg-info directory)\")\n else:\n egg_info_directory = os.path.dirname(pkg_info_files[0])\n installed_files_path = os.path.join(egg_info_directory, 'installed-files.txt')\n logger.debug(\"Tracking installed files in %s ..\", installed_files_path)\n with open(installed_files_path, 'w') as handle:\n for pathname in installed_files:\n handle.write('%s\\n' % os.path.relpath(pathname, egg_info_directory))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef load_configuration_file(self, configuration_file):\n configuration_file = parse_path(configuration_file)\n logger.debug(\"Loading configuration file: %s\", configuration_file)\n parser = configparser.RawConfigParser()\n files_loaded = parser.read(configuration_file)\n if 
len(files_loaded) != 1:\n msg = \"Failed to load configuration file! (%s)\"\n raise Exception(msg % configuration_file)\n elif not parser.has_section('pip-accel'):\n msg = \"Missing 'pip-accel' section in configuration file! (%s)\"\n raise Exception(msg % configuration_file)\n else:\n self.configuration.update(parser.items('pip-accel'))", "response": "Load the configuration from a configuration file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the value of a configuration option in the configuration file.", "response": "def get(self, property_name=None, environment_variable=None, configuration_option=None, default=None):\n \"\"\"\n Internal shortcut to get a configuration option's value.\n\n :param property_name: The name of the property that users can set on\n the :class:`Config` class (a string).\n :param environment_variable: The name of the environment variable (a\n string).\n :param configuration_option: The name of the option in the\n configuration file (a string).\n :param default: The default value.\n :returns: The value of the environment variable or configuration file\n option or the default value.\n \"\"\"\n if self.overrides.get(property_name) is not None:\n return self.overrides[property_name]\n elif environment_variable and self.environment.get(environment_variable):\n return self.environment[environment_variable]\n elif self.configuration.get(configuration_option) is not None:\n return self.configuration[configuration_option]\n else:\n return default"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef source_index(self):\n return self.get(property_name='source_index',\n default=os.path.join(self.data_directory, 'sources'))", "response": "The absolute pathname of pip - accel s source index directory."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the absolute pathname of the directory where pip - 
accel s data files are stored.", "response": "def data_directory(self):\n \"\"\"\n The absolute pathname of the directory where pip-accel's data files are stored (a string).\n\n - Environment variable: ``$PIP_ACCEL_CACHE``\n - Configuration option: ``data-directory``\n - Default: ``/var/cache/pip-accel`` if running as ``root``, ``~/.pip-accel`` otherwise\n \"\"\"\n return expand_path(self.get(property_name='data_directory',\n environment_variable='PIP_ACCEL_CACHE',\n configuration_option='data-directory',\n default='/var/cache/pip-accel' if is_root() else '~/.pip-accel'))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the absolute pathname of the installation prefix to use for this system.", "response": "def install_prefix(self):\n \"\"\"\n The absolute pathname of the installation prefix to use (a string).\n\n This property is based on :data:`sys.prefix` except that when\n :data:`sys.prefix` is ``/usr`` and we're running on a Debian derived\n system ``/usr/local`` is used instead.\n\n The reason for this is that on Debian derived systems only apt (dpkg)\n should be allowed to touch files in ``/usr/lib/pythonX.Y/dist-packages``\n and ``python setup.py install`` knows this (see the ``posix_local``\n installation scheme in ``/usr/lib/pythonX.Y/sysconfig.py`` on Debian\n derived systems). Because pip-accel replaces ``python setup.py\n install`` it has to replicate this logic. 
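Properties like `data_directory` above all funnel through the precedence implemented by `Config.get()` shown earlier: explicit overrides win, then the environment, then the configuration file, then the default. That lookup can be sketched with plain dictionaries (the `lookup` function is a stand-in for illustration, not pip-accel's API):

```python
def lookup(overrides, environment, configuration,
           property_name=None, environment_variable=None,
           configuration_option=None, default=None):
    """Resolve an option with the same precedence as Config.get():
    overrides, then environment, then configuration file, then default."""
    if overrides.get(property_name) is not None:
        return overrides[property_name]
    if environment_variable and environment.get(environment_variable):
        return environment[environment_variable]
    if configuration.get(configuration_option) is not None:
        return configuration[configuration_option]
    return default
```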
Inferring all of this from\n the :mod:`sysconfig` module would be nice but that module wasn't\n available in Python 2.6.\n \"\"\"\n return self.get(property_name='install_prefix',\n default='/usr/local' if sys.prefix == '/usr' and self.on_debian else sys.prefix)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef auto_install(self):\n value = self.get(property_name='auto_install',\n environment_variable='PIP_ACCEL_AUTO_INSTALL',\n configuration_option='auto-install')\n if value is not None:\n return coerce_boolean(value)", "response": "Returns a boolean value indicating whether automatic installation of missing system packages is enabled."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef trust_mod_times(self):\n on_appveyor = coerce_boolean(os.environ.get('APPVEYOR', 'False'))\n return coerce_boolean(self.get(property_name='trust_mod_times',\n environment_variable='PIP_ACCEL_TRUST_MOD_TIMES',\n configuration_option='trust-mod-times',\n default=(not on_appveyor)))", "response": "Determines whether to trust file modification times for cache invalidation."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef s3_cache_readonly(self):\n return coerce_boolean(self.get(property_name='s3_cache_readonly',\n environment_variable='PIP_ACCEL_S3_READONLY',\n configuration_option='s3-readonly',\n default=False))", "response": "Determines if the Amazon S3 bucket is considered read only."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the socket timeout in seconds for connections to Amazon S3.", "response": "def s3_cache_timeout(self):\n \"\"\"\n The socket timeout in seconds for connections to Amazon S3 (an integer).\n\n This value is injected into Boto's configuration to override the\n default socket timeout used for connections to Amazon 
S3.\n\n - Environment variable: ``$PIP_ACCEL_S3_TIMEOUT``\n - Configuration option: ``s3-timeout``\n - Default: ``60`` (`Boto's default`_)\n\n .. _Boto's default: http://boto.readthedocs.org/en/latest/boto_config_tut.html\n \"\"\"\n value = self.get(property_name='s3_cache_timeout',\n environment_variable='PIP_ACCEL_S3_TIMEOUT',\n configuration_option='s3-timeout')\n try:\n n = int(value)\n if n >= 0:\n return n\n except:\n return 60"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndownload a distribution archive from the configured Amazon S3 bucket.", "response": "def get(self, filename):\n \"\"\"\n Download a distribution archive from the configured Amazon S3 bucket.\n\n :param filename: The filename of the distribution archive (a string).\n :returns: The pathname of a distribution archive on the local file\n system or :data:`None`.\n :raises: :exc:`.CacheBackendError` when any underlying method fails.\n \"\"\"\n timer = Timer()\n self.check_prerequisites()\n with PatchedBotoConfig():\n # Check if the distribution archive is available.\n raw_key = self.get_cache_key(filename)\n logger.info(\"Checking if distribution archive is available in S3 bucket: %s\", raw_key)\n key = self.s3_bucket.get_key(raw_key)\n if key is None:\n logger.debug(\"Distribution archive is not available in S3 bucket.\")\n else:\n # Download the distribution archive to the local binary index.\n # TODO Shouldn't this use LocalCacheBackend.put() instead of\n # implementing the same steps manually?!\n logger.info(\"Downloading distribution archive from S3 bucket ..\")\n file_in_cache = os.path.join(self.config.binary_cache, filename)\n makedirs(os.path.dirname(file_in_cache))\n with AtomicReplace(file_in_cache) as temporary_file:\n key.get_contents_to_filename(temporary_file)\n logger.debug(\"Finished downloading distribution archive from S3 bucket in %s.\", timer)\n return file_in_cache"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in 
Python 3, explain what it does\ndef put(self, filename, handle):\n if self.config.s3_cache_readonly:\n logger.info('Skipping upload to S3 bucket (using S3 in read only mode).')\n else:\n timer = Timer()\n self.check_prerequisites()\n with PatchedBotoConfig():\n from boto.s3.key import Key\n raw_key = self.get_cache_key(filename)\n logger.info(\"Uploading distribution archive to S3 bucket: %s\", raw_key)\n key = Key(self.s3_bucket)\n key.key = raw_key\n try:\n key.set_contents_from_file(handle)\n except Exception as e:\n logger.info(\"Encountered error writing to S3 bucket, \"\n \"falling back to read only mode (exception: %s)\", e)\n self.config.s3_cache_readonly = True\n else:\n logger.info(\"Finished uploading distribution archive to S3 bucket in %s.\", timer)", "response": "Uploads a distribution archive to the configured Amazon S3 bucket."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef s3_bucket(self):\n if not hasattr(self, 'cached_bucket'):\n self.check_prerequisites()\n with PatchedBotoConfig():\n from boto.exception import BotoClientError, BotoServerError, S3ResponseError\n # The following try/except block translates unexpected exceptions\n # raised by Boto into a CacheBackendError exception.\n try:\n # The following try/except block handles the expected exception\n # raised by Boto when an Amazon S3 bucket does not exist.\n try:\n logger.debug(\"Connecting to Amazon S3 bucket: %s\", self.config.s3_cache_bucket)\n self.cached_bucket = self.s3_connection.get_bucket(self.config.s3_cache_bucket)\n except S3ResponseError as e:\n if e.status == 404 and self.config.s3_cache_create_bucket:\n logger.info(\"Amazon S3 bucket doesn't exist yet, creating it now: %s\",\n self.config.s3_cache_bucket)\n self.s3_connection.create_bucket(self.config.s3_cache_bucket)\n self.cached_bucket = self.s3_connection.get_bucket(self.config.s3_cache_bucket)\n else:\n # Don't swallow exceptions we can't handle.\n raise\n 
except (BotoClientError, BotoServerError):\n raise CacheBackendError(\"\"\"\n Failed to connect to the configured Amazon S3 bucket\n {bucket}! Are you sure the bucket exists and is accessible\n using the provided credentials? The Amazon S3 cache backend\n will be disabled for now.\n \"\"\", bucket=repr(self.config.s3_cache_bucket))\n return self.cached_bucket", "response": "Connect to the user defined Amazon S3 bucket."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconnecting to the Amazon S3 API.", "response": "def s3_connection(self):\n \"\"\"\n Connect to the Amazon S3 API.\n\n If the connection attempt fails because Boto can't find credentials the\n attempt is retried once with an anonymous connection.\n\n Called on demand by :attr:`s3_bucket`.\n\n :returns: A :class:`boto.s3.connection.S3Connection` object.\n :raises: :exc:`.CacheBackendError` when the connection to the Amazon\n S3 API fails.\n \"\"\"\n if not hasattr(self, 'cached_connection'):\n self.check_prerequisites()\n with PatchedBotoConfig():\n import boto\n from boto.exception import BotoClientError, BotoServerError, NoAuthHandlerFound\n from boto.s3.connection import S3Connection, SubdomainCallingFormat, OrdinaryCallingFormat\n try:\n # Configure the number of retries and the socket timeout used\n # by Boto. 
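Both `s3_bucket` above and `s3_connection` translate unexpected Boto errors into a single `CacheBackendError` so callers only ever handle one failure type. The pattern can be sketched without any boto dependency (the `guarded` helper is hypothetical; the stripped-down `CacheBackendError` here stands in for pip-accel's exception class):

```python
class CacheBackendError(Exception):
    """Raised when a cache backend fails for a reason callers can't fix."""

def guarded(operation, expected_errors, message):
    """Run *operation*, translating any of *expected_errors* (a tuple of
    exception types) into a single CacheBackendError with a stable message."""
    try:
        return operation()
    except expected_errors as e:
        raise CacheBackendError("%s (original error: %s)" % (message, e))
```

Expected, recoverable conditions (like the 404 handled above by creating the bucket) are caught *inside* the operation; only the rest is collapsed into the backend error.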
Based on the snippet given in the following email:\n # https://groups.google.com/d/msg/boto-users/0osmP0cUl5Y/X4NdlMGWKiEJ\n if not boto.config.has_section(BOTO_CONFIG_SECTION):\n boto.config.add_section(BOTO_CONFIG_SECTION)\n boto.config.set(BOTO_CONFIG_SECTION,\n BOTO_CONFIG_NUM_RETRIES_OPTION,\n str(self.config.s3_cache_retries))\n boto.config.set(BOTO_CONFIG_SECTION,\n BOTO_CONFIG_SOCKET_TIMEOUT_OPTION,\n str(self.config.s3_cache_timeout))\n logger.debug(\"Connecting to Amazon S3 API ..\")\n endpoint = urlparse(self.config.s3_cache_url)\n host, _, port = endpoint.netloc.partition(':')\n kw = dict(\n host=host,\n port=int(port) if port else None,\n is_secure=(endpoint.scheme == 'https'),\n calling_format=(SubdomainCallingFormat() if host == S3Connection.DefaultHost\n else OrdinaryCallingFormat()),\n )\n try:\n self.cached_connection = S3Connection(**kw)\n except NoAuthHandlerFound:\n logger.debug(\"Amazon S3 API credentials missing, retrying with anonymous connection ..\")\n self.cached_connection = S3Connection(anon=True, **kw)\n except (BotoClientError, BotoServerError):\n raise CacheBackendError(\"\"\"\n Failed to connect to the Amazon S3 API! Most likely your\n credentials are not correctly configured. The Amazon S3\n cache backend will be disabled for now.\n \"\"\")\n return self.cached_connection"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get(self, section, name, default=None, **kw):\n try:\n return self.unbound_method(self.instance, section, name, **kw)\n except Exception:\n return default", "response": "Replacement for boto. pyami. config. Config. get"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a list of the pathnames of the source distribution archives that this requirement is related to.", "response": "def related_archives(self):\n \"\"\"\n The pathnames of the source distribution(s) for this requirement (a list of strings).\n\n .. 
note:: This property is very new in pip-accel and its logic may need\n some time to mature. For now any misbehavior by this property\n shouldn't be too much of a problem because the pathnames\n reported by this property are only used for cache\n invalidation (see the :attr:`last_modified` and\n :attr:`checksum` properties).\n \"\"\"\n # Escape the requirement's name for use in a regular expression.\n name_pattern = escape_name(self.name)\n # Escape the requirement's version for in a regular expression.\n version_pattern = re.escape(self.version)\n # Create a regular expression that matches any of the known source\n # distribution archive extensions.\n extension_pattern = '|'.join(re.escape(ext) for ext in ARCHIVE_EXTENSIONS if ext != '.whl')\n # Compose the regular expression pattern to match filenames of source\n # distribution archives in the local source index directory.\n pattern = '^%s-%s(%s)$' % (name_pattern, version_pattern, extension_pattern)\n # Compile the regular expression for case insensitive matching.\n compiled_pattern = re.compile(pattern, re.IGNORECASE)\n # Find the matching source distribution archives.\n return [os.path.join(self.config.source_index, fn)\n for fn in os.listdir(self.config.source_index)\n if compiled_pattern.match(fn)]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef last_modified(self):\n mtimes = list(map(os.path.getmtime, self.related_archives))\n return max(mtimes) if mtimes else time.time()", "response": "Returns the last modified time of the binary containing this requirement s source distribution archive."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndetermines if the given requirement is a wheel.", "response": "def is_wheel(self):\n \"\"\"\n :data:`True` when the requirement is a wheel, :data:`False` otherwise.\n\n .. 
note:: To my surprise it seems to be non-trivial to determine\n whether a given :class:`pip.req.InstallRequirement` object\n produced by pip's internal Python API concerns a source\n distribution or a wheel distribution.\n\n There's a :class:`pip.req.InstallRequirement.is_wheel`\n property but I'm currently looking at a wheel distribution\n whose ``is_wheel`` property returns :data:`None`, apparently\n because the requirement's ``url`` property is also :data:`None`.\n\n Whether this is an obscure implementation detail of pip or\n caused by the way pip-accel invokes pip, I really can't tell\n (yet).\n \"\"\"\n probably_sdist = os.path.isfile(os.path.join(self.source_directory, 'setup.py'))\n probably_wheel = len(glob.glob(os.path.join(self.source_directory, '*.dist-info', 'WHEEL'))) > 0\n if probably_wheel and not probably_sdist:\n return True\n elif probably_sdist and not probably_wheel:\n return False\n elif probably_sdist and probably_wheel:\n variables = dict(requirement=self.setuptools_requirement,\n directory=self.source_directory)\n raise UnknownDistributionFormat(\"\"\"\n The unpacked distribution of {requirement} in {directory} looks\n like a source distribution and a wheel distribution, I'm\n confused!\n \"\"\", **variables)\n else:\n variables = dict(requirement=self.setuptools_requirement,\n directory=self.source_directory)\n raise UnknownDistributionFormat(\"\"\"\n The unpacked distribution of {requirement} in {directory}\n doesn't look like a source distribution and also doesn't look\n like a wheel distribution, I'm confused!\n \"\"\", **variables)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef wheel_metadata(self):\n if not self.is_wheel:\n raise TypeError(\"Requirement is not a wheel distribution!\")\n for distribution in find_distributions(self.source_directory):\n return distribution\n msg = \"pkg_resources didn't find a wheel distribution in %s!\"\n raise Exception(msg % 
self.source_directory)", "response": "Get the distribution metadata of an unpacked wheel distribution."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nvalidating that the virtual environment is correct.", "response": "def validate_environment(self):\n \"\"\"\n Make sure :data:`sys.prefix` matches ``$VIRTUAL_ENV`` (if defined).\n\n This may seem like a strange requirement to dictate but it avoids hairy\n issues like `documented here `_.\n\n The most sneaky thing is that ``pip`` doesn't have this problem\n (de-facto) because ``virtualenv`` copies ``pip`` wherever it goes...\n (``pip-accel`` on the other hand has to be installed by the user).\n \"\"\"\n environment = os.environ.get('VIRTUAL_ENV')\n if environment:\n if not same_directories(sys.prefix, environment):\n raise EnvironmentMismatchError(\"\"\"\n You are trying to install packages in environment #1 which\n is different from environment #2 where pip-accel is\n installed! Please install pip-accel under environment #1 to\n install packages there.\n\n Environment #1: {environment} (defined by $VIRTUAL_ENV)\n\n Environment #2: {prefix} (Python's installation prefix)\n \"\"\", environment=environment, prefix=sys.prefix)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef initialize_directories(self):\n makedirs(self.config.source_index)\n makedirs(self.config.eggs_cache)", "response": "Automatically create local directories required by pip - accel."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef clean_source_index(self):\n cleanup_timer = Timer()\n cleanup_counter = 0\n for entry in os.listdir(self.config.source_index):\n pathname = os.path.join(self.config.source_index, entry)\n if os.path.islink(pathname) and not os.path.exists(pathname):\n logger.warn(\"Cleaning up broken symbolic link: %s\", pathname)\n os.unlink(pathname)\n cleanup_counter += 1\n 
logger.debug(\"Cleaned up %i broken symbolic links from source index in %s.\", cleanup_counter, cleanup_timer)", "response": "This method cleans up broken symbolic links in the local source distribution index directory."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndownload unpack build and install the specified requirements.", "response": "def install_from_arguments(self, arguments, **kw):\n \"\"\"\n Download, unpack, build and install the specified requirements.\n\n This function is a simple wrapper for :func:`get_requirements()`,\n :func:`install_requirements()` and :func:`cleanup_temporary_directories()`\n that implements the default behavior of the pip accelerator. If you're\n extending or embedding pip-accel you may want to call the underlying\n methods instead.\n\n If the requirement set includes wheels and ``setuptools >= 0.8`` is not\n yet installed, it will be added to the requirement set and installed\n together with the other requirement(s) in order to enable the usage of\n distributions installed from wheels (their metadata is different).\n\n :param arguments: The command line arguments to ``pip install ..`` (a\n list of strings).\n :param kw: Any keyword arguments are passed on to\n :func:`install_requirements()`.\n :returns: The result of :func:`install_requirements()`.\n \"\"\"\n try:\n requirements = self.get_requirements(arguments, use_wheels=self.arguments_allow_wheels(arguments))\n have_wheels = any(req.is_wheel for req in requirements)\n if have_wheels and not self.setuptools_supports_wheels():\n logger.info(\"Preparing to upgrade to setuptools >= 0.8 to enable wheel support ..\")\n requirements.extend(self.get_requirements(['setuptools >= 0.8']))\n if requirements:\n if '--user' in arguments:\n from site import USER_BASE\n kw.setdefault('prefix', USER_BASE)\n return self.install_requirements(requirements, **kw)\n else:\n logger.info(\"Nothing to do! 
(requirements already installed)\")\n return 0\n finally:\n self.cleanup_temporary_directories()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_requirements(self, arguments, max_retries=None, use_wheels=False):\n arguments = self.decorate_arguments(arguments)\n # Demote hash sum mismatch log messages from CRITICAL to DEBUG (hiding\n # implementation details from users unless they want to see them).\n with DownloadLogFilter():\n with SetupRequiresPatch(self.config, self.eggs_links):\n # Use a new build directory for each run of get_requirements().\n self.create_build_directory()\n # Check whether -U or --upgrade was given.\n if any(match_option(a, '-U', '--upgrade') for a in arguments):\n logger.info(\"Checking index(es) for new version (-U or --upgrade was given) ..\")\n else:\n # If -U or --upgrade wasn't given and all requirements can be\n # satisfied using the archives in pip-accel's local source\n # index we don't need pip to connect to PyPI looking for new\n # versions (that will just slow us down).\n try:\n return self.unpack_source_dists(arguments, use_wheels=use_wheels)\n except DistributionNotFound:\n logger.info(\"We don't have all distribution archives yet!\")\n # Get the maximum number of retries from the configuration if the\n # caller didn't specify a preference.\n if max_retries is None:\n max_retries = self.config.max_retries\n # If not all requirements are available locally we use pip to\n # download the missing source distribution archives from PyPI (we\n # retry a couple of times in case pip reports recoverable\n # errors).\n for i in range(max_retries):\n try:\n return self.download_source_dists(arguments, use_wheels=use_wheels)\n except Exception as e:\n if i + 1 < max_retries:\n # On all but the last iteration we swallow exceptions\n # during downloading.\n logger.warning(\"pip raised exception while downloading distributions: %s\", e)\n else:\n # On the last iteration we 
don't swallow exceptions\n # during downloading because the error reported by pip\n # is the most sensible error for us to report.\n raise\n logger.info(\"Retrying after pip failed (%i/%i) ..\", i + 1, max_retries)", "response": "Get the requirements for the given list of source distribution archives."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchanges pathnames of local files into ``file://`` URLs with ``#md5=...`` fragments. :param arguments: The command line arguments to ``pip install ...`` (a list of strings). :returns: A copy of the command line arguments with pathnames of local files rewritten to ``file://`` URLs. When pip-accel calls pip to download missing distribution archives and the user specified the pathname of a local distribution archive on the command line, pip will (by default) *not* copy the archive into the download directory if an archive for the same package name and version is already present. This can lead to the confusing situation where the user specifies a local distribution archive to install, a different (older) archive for the same package and version is present in the download directory and `pip-accel` installs the older archive instead of the newer archive. To avoid this confusing behavior, the :func:`decorate_arguments()` method rewrites the command line arguments given to ``pip install`` so that pathnames of local archives are changed into ``file://`` URLs that include a fragment with the hash of the file's contents. 
Here's an example: - Local pathname: ``/tmp/pep8-1.6.3a0.tar.gz`` - File URL: ``file:///tmp/pep8-1.6.3a0.tar.gz#md5=19cbf0b633498ead63fb3c66e5f1caf6`` When pip fills the download directory and encounters a previously cached distribution archive it will check the hash, realize the contents have changed and replace the archive in the download directory.", "response": "def decorate_arguments(self, arguments):\n \"\"\"\n Change pathnames of local files into ``file://`` URLs with ``#md5=...`` fragments.\n\n :param arguments: The command line arguments to ``pip install ...`` (a\n list of strings).\n :returns: A copy of the command line arguments with pathnames of local\n files rewritten to ``file://`` URLs.\n\n When pip-accel calls pip to download missing distribution archives and\n the user specified the pathname of a local distribution archive on the\n command line, pip will (by default) *not* copy the archive into the\n download directory if an archive for the same package name and\n version is already present.\n\n This can lead to the confusing situation where the user specifies a\n local distribution archive to install, a different (older) archive for\n the same package and version is present in the download directory and\n `pip-accel` installs the older archive instead of the newer archive.\n\n To avoid this confusing behavior, the :func:`decorate_arguments()`\n method rewrites the command line arguments given to ``pip install`` so\n that pathnames of local archives are changed into ``file://`` URLs that\n include a fragment with the hash of the file's contents. 
Here's an\n example:\n\n - Local pathname: ``/tmp/pep8-1.6.3a0.tar.gz``\n - File URL: ``file:///tmp/pep8-1.6.3a0.tar.gz#md5=19cbf0b633498ead63fb3c66e5f1caf6``\n\n When pip fills the download directory and encounters a previously\n cached distribution archive it will check the hash, realize the\n contents have changed and replace the archive in the download\n directory.\n \"\"\"\n arguments = list(arguments)\n for i, value in enumerate(arguments):\n is_constraint_file = (i >= 1 and match_option(arguments[i - 1], '-c', '--constraint'))\n is_requirement_file = (i >= 1 and match_option(arguments[i - 1], '-r', '--requirement'))\n if not is_constraint_file and not is_requirement_file and os.path.isfile(value):\n arguments[i] = '%s#md5=%s' % (create_file_url(value), hash_files('md5', value))\n return arguments"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfinds and unpack local source distributions and discover their metadata.", "response": "def unpack_source_dists(self, arguments, use_wheels=False):\n \"\"\"\n Find and unpack local source distributions and discover their metadata.\n\n :param arguments: The command line arguments to ``pip install ...`` (a\n list of strings).\n :param use_wheels: Whether pip and pip-accel are allowed to use wheels_\n (:data:`False` by default for backwards compatibility\n with callers that use pip-accel as a Python API).\n :returns: A list of :class:`pip_accel.req.Requirement` objects.\n :raises: Any exceptions raised by pip, for example\n :exc:`pip.exceptions.DistributionNotFound` when not all\n requirements can be satisfied.\n\n This function checks whether there are local source distributions\n available for all requirements, unpacks the source distribution\n archives and finds the names and versions of the requirements. 
By using\n the ``pip install --download`` command we avoid reimplementing the\n following pip features:\n\n - Parsing of ``requirements.txt`` (including recursive parsing).\n - Resolution of possibly conflicting pinned requirements.\n - Unpacking source distributions in multiple formats.\n - Finding the name & version of a given source distribution.\n \"\"\"\n unpack_timer = Timer()\n logger.info(\"Unpacking distribution(s) ..\")\n with PatchedAttribute(pip_install_module, 'PackageFinder', CustomPackageFinder):\n requirements = self.get_pip_requirement_set(arguments, use_remote_index=False, use_wheels=use_wheels)\n logger.info(\"Finished unpacking %s in %s.\", pluralize(len(requirements), \"distribution\"), unpack_timer)\n return requirements"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef download_source_dists(self, arguments, use_wheels=False):\n download_timer = Timer()\n logger.info(\"Downloading missing distribution(s) ..\")\n requirements = self.get_pip_requirement_set(arguments, use_remote_index=True, use_wheels=use_wheels)\n logger.info(\"Finished downloading distribution(s) in %s.\", download_timer)\n return requirements", "response": "Download missing source distributions."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a pip. req. 
RequirementSet object for the given pip arguments.", "response": "def get_pip_requirement_set(self, arguments, use_remote_index, use_wheels=False):\n \"\"\"\n Get the unpacked requirement(s) specified by the caller by running pip.\n\n :param arguments: The command line arguments to ``pip install ...`` (a\n list of strings).\n :param use_remote_index: A boolean indicating whether pip is allowed to\n connect to the main package index\n (http://pypi.python.org by default).\n :param use_wheels: Whether pip and pip-accel are allowed to use wheels_\n (:data:`False` by default for backwards compatibility\n with callers that use pip-accel as a Python API).\n :returns: A :class:`pip.req.RequirementSet` object created by pip.\n :raises: Any exceptions raised by pip.\n \"\"\"\n # Compose the pip command line arguments. This is where a lot of the\n # core logic of pip-accel is hidden and it uses some esoteric features\n # of pip so this method is heavily commented.\n command_line = []\n # Use `--download' to instruct pip to download requirement(s) into\n # pip-accel's local source distribution index directory. This has the\n # following documented side effects (see `pip install --help'):\n # 1. It disables the installation of requirements (without using the\n # `--no-install' option which is deprecated and slated for removal\n # in pip 7.x).\n # 2. It ignores requirements that are already installed (because\n # pip-accel doesn't actually need to re-install requirements that\n # are already installed we will have work around this later, but\n # that seems fairly simple to do).\n command_line.append('--download=%s' % self.config.source_index)\n # Use `--find-links' to point pip at pip-accel's local source\n # distribution index directory. 
This ensures that source distribution\n # archives are never downloaded more than once (regardless of the HTTP\n # cache that was introduced in pip 6.x).\n command_line.append('--find-links=%s' % create_file_url(self.config.source_index))\n # Use `--no-binary=:all:' to ignore wheel distributions by default in\n # order to preserve backwards compatibility with callers that expect a\n # requirement set consisting only of source distributions that can be\n # converted to `dumb binary distributions'.\n if not use_wheels and self.arguments_allow_wheels(arguments):\n command_line.append('--no-binary=:all:')\n # Use `--no-index' to force pip to only consider source distribution\n # archives contained in pip-accel's local source distribution index\n # directory. This enables pip-accel to ask pip \"Can the local source\n # distribution index satisfy all requirements in the given requirement\n # set?\" which enables pip-accel to keep pip off the internet unless\n # absolutely necessary :-).\n if not use_remote_index:\n command_line.append('--no-index')\n # Use `--no-clean' to instruct pip to unpack the source distribution\n # archives and *not* clean up the unpacked source distributions\n # afterwards. This enables pip-accel to replace pip's installation\n # logic with cached binary distribution archives.\n command_line.append('--no-clean')\n # Use `--build-directory' to instruct pip to unpack the source\n # distribution archives to a temporary directory managed by pip-accel.\n # We will clean up the build directory when we're done using the\n # unpacked source distributions.\n command_line.append('--build-directory=%s' % self.build_directory)\n # Append the user's `pip install ...' 
arguments to the command line\n # that we just assembled.\n command_line.extend(arguments)\n logger.info(\"Executing command: pip install %s\", ' '.join(command_line))\n # Clear the build directory to prevent PreviousBuildDirError exceptions.\n self.clear_build_directory()\n # During the pip 6.x upgrade pip-accel switched to using `pip install\n # --download' which can produce an interactive prompt as described in\n # issue 51 [1]. The documented way [2] to get rid of this interactive\n # prompt is pip's --exists-action option, but due to what is most\n # likely a bug in pip this doesn't actually work. The environment\n # variable $PIP_EXISTS_ACTION does work however, so if the user didn't\n # set it we will set a reasonable default for them.\n # [1] https://github.com/paylogic/pip-accel/issues/51\n # [2] https://pip.pypa.io/en/latest/reference/pip.html#exists-action-option\n os.environ.setdefault('PIP_EXISTS_ACTION', 'w')\n # Initialize and run the `pip install' command.\n command = InstallCommand()\n opts, args = command.parse_args(command_line)\n if not opts.ignore_installed:\n # If the user didn't supply the -I, --ignore-installed option we\n # will forcefully disable the option. 
Refer to the documentation of\n # the AttributeOverrides class for further details.\n opts = AttributeOverrides(opts, ignore_installed=False)\n requirement_set = command.run(opts, args)\n # Make sure the output of pip and pip-accel are not intermingled.\n sys.stdout.flush()\n if requirement_set is None:\n raise NothingToDoError(\"\"\"\n pip didn't generate a requirement set, most likely you\n specified an empty requirements file?\n \"\"\")\n else:\n return self.transform_pip_requirement_set(requirement_set)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntransform pip s requirement set into one that pip - accel can work with.", "response": "def transform_pip_requirement_set(self, requirement_set):\n \"\"\"\n Transform pip's requirement set into one that `pip-accel` can work with.\n\n :param requirement_set: The :class:`pip.req.RequirementSet` object\n reported by pip.\n :returns: A list of :class:`pip_accel.req.Requirement` objects.\n\n This function converts the :class:`pip.req.RequirementSet` object\n reported by pip into a list of :class:`pip_accel.req.Requirement`\n objects.\n \"\"\"\n filtered_requirements = []\n for requirement in requirement_set.requirements.values():\n # The `satisfied_by' property is set by pip when a requirement is\n # already satisfied (i.e. a version of the package that satisfies\n # the requirement is already installed) and -I, --ignore-installed\n # is not used. 
We filter out these requirements because pip never\n # unpacks distributions for these requirements, so pip-accel can't\n # do anything useful with such requirements.\n if requirement.satisfied_by:\n continue\n # The `constraint' property marks requirement objects that\n # constrain the acceptable version(s) of another requirement but\n # don't define a requirement themselves, so we filter them out.\n if requirement.constraint:\n continue\n # All other requirements are reported to callers.\n filtered_requirements.append(requirement)\n self.reported_requirements.append(requirement)\n return sorted([Requirement(self.config, r) for r in filtered_requirements],\n key=lambda r: r.name.lower())"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef install_requirements(self, requirements, **kw):\n install_timer = Timer()\n install_types = []\n if any(not req.is_wheel for req in requirements):\n install_types.append('binary')\n if any(req.is_wheel for req in requirements):\n install_types.append('wheel')\n logger.info(\"Installing from %s distributions ..\", concatenate(install_types))\n # Track installed files by default (unless the caller specifically opted out).\n kw.setdefault('track_installed_files', True)\n num_installed = 0\n for requirement in requirements:\n # When installing setuptools we need to uninstall distribute,\n # otherwise distribute will shadow setuptools and all sorts of\n # strange issues can occur (e.g. 
upgrading to the latest\n # setuptools to gain wheel support and then having everything\n # blow up because distribute doesn't know about wheels).\n if requirement.name == 'setuptools' and is_installed('distribute'):\n uninstall('distribute')\n if requirement.is_editable:\n logger.debug(\"Installing %s in editable form using pip.\", requirement)\n with TransactionalUpdate(requirement):\n command = InstallCommand()\n opts, args = command.parse_args(['--no-deps', '--editable', requirement.source_directory])\n command.run(opts, args)\n elif requirement.is_wheel:\n logger.info(\"Installing %s wheel distribution using pip ..\", requirement)\n with TransactionalUpdate(requirement):\n wheel_version = pip_wheel_module.wheel_version(requirement.source_directory)\n pip_wheel_module.check_compatibility(wheel_version, requirement.name)\n requirement.pip_requirement.move_wheel_files(requirement.source_directory)\n else:\n logger.info(\"Installing %s binary distribution using pip-accel ..\", requirement)\n with TransactionalUpdate(requirement):\n binary_distribution = self.bdists.get_binary_dist(requirement)\n self.bdists.install_binary_dist(binary_distribution, **kw)\n num_installed += 1\n logger.info(\"Finished installing %s in %s.\",\n pluralize(num_installed, \"requirement\"),\n install_timer)\n return num_installed", "response": "Manually install a set of requirements from binary or wheel distributions."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef clear_build_directory(self):\n stat = os.stat(self.build_directory)\n shutil.rmtree(self.build_directory)\n os.makedirs(self.build_directory, stat.st_mode)", "response": "Clear the build directory where pip unpacks the source distribution archives."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cleanup_temporary_directories(self):\n while self.build_directories:\n shutil.rmtree(self.build_directories.pop())\n for 
requirement in self.reported_requirements:\n requirement.remove_temporary_source()\n while self.eggs_links:\n symbolic_link = self.eggs_links.pop()\n if os.path.islink(symbolic_link):\n os.unlink(symbolic_link)", "response": "Delete the build directories and any temporary directories created by pip."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchanges the severity of selected log records.", "response": "def filter(self, record):\n \"\"\"Change the severity of selected log records.\"\"\"\n if isinstance(record.msg, basestring):\n message = record.msg.lower()\n if all(kw in message for kw in self.KEYWORDS):\n record.levelname = 'DEBUG'\n record.levelno = logging.DEBUG\n return 1"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck if a distribution archive exists in the local cache.", "response": "def get(self, filename):\n \"\"\"\n Check if a distribution archive exists in the local cache.\n\n :param filename: The filename of the distribution archive (a string).\n :returns: The pathname of a distribution archive on the local file\n system or :data:`None`.\n \"\"\"\n pathname = os.path.join(self.config.binary_cache, filename)\n if os.path.isfile(pathname):\n logger.debug(\"Distribution archive exists in local cache (%s).\", pathname)\n return pathname\n else:\n logger.debug(\"Distribution archive doesn't exist in local cache (%s).\", pathname)\n return None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nstore a distribution archive in the local cache.", "response": "def put(self, filename, handle):\n \"\"\"\n Store a distribution archive in the local cache.\n\n :param filename: The filename of the distribution archive (a string).\n :param handle: A file-like object that provides access to the\n distribution archive.\n \"\"\"\n file_in_cache = os.path.join(self.config.binary_cache, filename)\n logger.debug(\"Storing distribution archive in 
local cache: %s\", file_in_cache)\n makedirs(os.path.dirname(file_in_cache))\n # Stream the contents of the distribution archive to a temporary file\n # to avoid race conditions (e.g. partial reads) between multiple\n # processes that are using the local cache at the same time.\n with AtomicReplace(file_in_cache) as temporary_file:\n with open(temporary_file, 'wb') as temporary_file_handle:\n shutil.copyfileobj(handle, temporary_file_handle)\n logger.debug(\"Finished caching distribution archive in local cache.\")"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef to_python(self, value):\n \"Normalize data to a list of strings.\"\n # Return None if no input was given.\n if not value:\n return []\n return [v.strip() for v in value.splitlines() if v != \"\"]", "response": "Normalize data to a list of strings."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef validate(self, value):\n \"Check if value consists only of valid emails.\"\n\n # Use the parent's handling of required fields, etc.\n super(MultiEmailField, self).validate(value)\n try:\n for email in value:\n validate_email(email)\n except ValidationError:\n raise ValidationError(self.message, code=self.code)", "response": "Check if value consists only of valid emails."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef prep_value(self, value):\n if value in MULTI_EMAIL_FIELD_EMPTY_VALUES:\n return \"\"\n elif isinstance(value, six.string_types):\n return value\n elif isinstance(value, list):\n return \"\\n\".join(value)\n raise ValidationError('Invalid format.')", "response": "Prepare value before effectively render widget"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\npauses the current process indefinitely.", "response": "def pause() -> None:\n \"\"\"\n Pauses the current process indefinitely -- it will 
require another process to `resume()` it. When this resumption\n happens, the process returns from this function.\n \"\"\"\n if _logger is not None:\n _log(INFO, \"Process\", local.name, \"pause\")\n Process.current().rsim()._gr.switch()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef advance(delay: float) -> None:\n if _logger is not None:\n _log(INFO, \"Process\", local.name, \"advance\", delay=delay)\n curr = Process.current()\n rsim = curr.rsim\n id_wakeup = rsim()._schedule(delay, curr.switch) # type: ignore\n\n try:\n rsim()._gr.switch() # type: ignore\n except Interrupt:\n rsim()._cancel(id_wakeup) # type: ignore\n raise", "response": "Pauses the current process for the given delay in simulated time."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef happens(intervals: Iterable[float], name: Optional[str] = None) -> Callable:\n def hook(event: Callable):\n def make_happen(*args_event: Any, **kwargs_event: Any) -> None:\n if name is not None:\n local.name = cast(str, name)\n for interval in intervals:\n advance(interval)\n add(event, *args_event, **kwargs_event)\n return make_happen\n return hook", "response": "Decorator that returns a function that runs a process at a given set of intervals."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef tagged(*tags: Tags) -> Callable:\n global GREENSIM_TAG_ATTRIBUTE\n \"\"\"\n Decorator for adding a label to the process.\n These labels are applied to any child Processes produced by event\n \"\"\"\n def hook(event: Callable):\n def wrapper(*args, **kwargs):\n event(*args, **kwargs)\n setattr(wrapper, GREENSIM_TAG_ATTRIBUTE, tags)\n return wrapper\n return hook", "response": "Decorator for adding a label to the process."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nallows the current process to 
wait for multiple concurrent signals.", "response": "def select(*signals: Signal, **kwargs) -> List[Signal]:\n \"\"\"\n Allows the current process to wait for multiple concurrent signals. Waits until one of the signals turns on, at\n which point this signal is returned.\n\n :param timeout:\n If this parameter is not ``None``, it is taken as a delay at the end of which the process times out, and\n stops waiting on the set of :py:class:`Signal`s. In such a situation, a :py:class:`Timeout` exception is raised\n on the process.\n \"\"\"\n class CleanUp(Interrupt):\n pass\n\n timeout = kwargs.get(\"timeout\", None)\n if not isinstance(timeout, (float, int, type(None))):\n raise ValueError(\"The timeout keyword parameter can be either None or a number.\")\n\n def wait_one(signal: Signal, common: Signal) -> None:\n try:\n signal.wait()\n common.turn_on()\n except CleanUp:\n pass\n\n # We simply sets up multiple sub-processes respectively waiting for one of the signals. Once one of them has fired,\n # the others will all run no-op eventually, so no need for any explicit clean-up.\n common = Signal(name=local.name + \"-selector\").turn_off()\n if _logger is not None:\n _log(INFO, \"select\", \"select\", \"select\", signals=[sig.name for sig in signals])\n procs = []\n for signal in signals:\n procs.append(add(wait_one, signal, common))\n\n try:\n common.wait(timeout)\n finally:\n for proc in procs: # Clean up the support processes.\n proc.interrupt(CleanUp())\n\n return [signal for signal in signals if signal.is_on]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\niterating over scheduled events.", "response": "def events(self) -> Iterable[Tuple[Optional[float], Callable, Sequence[Any], Mapping[str, Any]]]:\n \"\"\"\n Iterates over scheduled events. 
Each event is a 4-tuple composed of the moment (on the simulated clock) the\n event should execute, the function corresponding to the event, its positional parameters (as a tuple of\n arbitrary length), and its keyword parameters (as a dictionary).\n \"\"\"\n return (\n (event.timestamp, event.fn, event.args, event.kwargs)\n for event in self._events\n if not event.is_cancelled\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _schedule(self, delay: float, event: Callable, *args: Any, **kwargs: Any) -> int:\n if _logger is not None:\n self._log(\n DEBUG,\n \"schedule\",\n delay=delay,\n fn=event,\n args=args,\n kwargs=kwargs,\n counter=self._counter,\n __now=self.now()\n )\n delay = float(delay)\n if delay < 0.0:\n raise ValueError(\"Delay must be positive.\")\n\n # Use counter to strictly order events happening at the same simulated time. This gives a total order on events,\n # working around the heap queue not yielding a stable ordering.\n id_event = self._counter\n heappush(self._events, _Event(self._ts_now + delay, id_event, event, *args, **kwargs))\n self._counter += 1\n return id_event", "response": "Schedules a one - time event to be run along the simulation."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncancel a previously scheduled event.", "response": "def _cancel(self, id_cancel) -> None:\n \"\"\"\n Cancels a previously scheduled event. 
This method is private, and is meant for internal usage by the\n :py:class:`Simulator` and :py:class:`Process` classes, and helper functions of this module.\n \"\"\"\n if _logger is not None:\n self._log(DEBUG, \"cancel\", id=id_cancel)\n for event in self._events:\n if event.identifier == id_cancel:\n event.cancel()\n break"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add(self, fn_process: Callable, *args: Any, **kwargs: Any) -> 'Process':\n return self.add_in(0.0, fn_process, *args, **kwargs)", "response": "Adds a process to the simulation."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a process to the simulation which is made to start after the given delay in simulated time.", "response": "def add_in(self, delay: float, fn_process: Callable, *args: Any, **kwargs: Any) -> 'Process':\n \"\"\"\n Adds a process to the simulation, which is made to start after the given delay in simulated time.\n\n See method add() for more details.\n \"\"\"\n process = Process(self, fn_process, self._gr)\n if _logger is not None:\n self._log(INFO, \"add\", __now=self.now(), fn=fn_process, args=args, kwargs=kwargs)\n self._schedule(delay, process.switch, *args, **kwargs)\n return process"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd a process to the simulation which is made to start at the given time.", "response": "def add_at(self, moment: float, fn_process: Callable, *args: Any, **kwargs: Any) -> 'Process':\n \"\"\"\n Adds a process to the simulation, which is made to start at the given exact time on the simulated clock. 
Note\n that times in the past when compared to the current moment on the simulated clock are forbidden.\n\n See method add() for more details.\n \"\"\"\n delay = moment - self.now()\n if delay < 0.0:\n raise ValueError(\n f\"The given moment to start the process ({moment:f}) is in the past (now is {self.now():f}).\"\n )\n return self.add_in(delay, fn_process, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrunning the simulation until a stopping condition is met.", "response": "def run(self, duration: float = inf) -> None:\n \"\"\"\n Runs the simulation until a stopping condition is met (no more events, or an event invokes method stop()), or\n until the simulated clock hits the given duration.\n \"\"\"\n if _logger is not None:\n self._log(INFO, \"run\", __now=self.now(), duration=duration)\n counter_stop_event = None\n if duration != inf:\n counter_stop_event = self._counter\n self._schedule(duration, self.stop)\n\n self._is_running = True\n while self.is_running and len(self._events) > 0:\n event = heappop(self._events)\n self._ts_now = event.timestamp or self._ts_now\n event.execute(self)\n\n if len(self._events) == 0:\n if _logger is not None:\n self._log(DEBUG, \"out-of-events\", __now=self.now())\n self.stop()\n\n if counter_stop_event is not None:\n # Change the planned stop to a no-op. 
We would rather eliminate it, but this would force a re-sort of the\n # event queue.\n for (i, event) in enumerate(self._events):\n if event.identifier == counter_stop_event:\n if _logger is not None:\n self._log(DEBUG, \"cancel-stop\", counter=counter_stop_event)\n event.cancel()\n break"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef step(self) -> None:\n event = heappop(self._events)\n self._ts_now = event.timestamp or self._ts_now\n event.execute(self)", "response": "Runs a single event of the simulation."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nstop the running simulation once the current event is done executing.", "response": "def stop(self) -> None:\n \"\"\"\n Stops the running simulation once the current event is done executing.\n \"\"\"\n if self.is_running:\n if _logger is not None:\n self._log(INFO, \"stop\", __now=self.now())\n self._is_running = False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nclearing the internal state of the simulation and resets the simulated clock to 0. 0.", "response": "def _clear(self) -> None:\n \"\"\"\n Resets the internal state of the simulator, and sets the simulated clock back to 0.0. 
This discards all\n outstanding events and tears down hanging process instances.\n \"\"\"\n for _, event, _, _ in self.events():\n if hasattr(event, \"__self__\") and isinstance(event.__self__, Process): # type: ignore\n event.__self__.throw() # type: ignore\n self._events.clear()\n self._ts_now = 0.0"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrap around the _body method to catch Interrupt and finish the process.", "response": "def _run(self, *args: Any, **kwargs: Any) -> None:\n \"\"\"\n Wraps around the process body (the function that implements a process within the simulation) so as to catch the\n eventual Interrupt that may terminate the process.\n \"\"\"\n try:\n self._body(*args, **kwargs)\n if _logger is not None:\n _log(INFO, \"Process\", self.local.name, \"die-finish\")\n except Interrupt:\n if _logger is not None:\n _log(INFO, \"Process\", self.local.name, \"die-interrupt\")"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nbinds and call the constructor of a type directly and the TaggedObject s __init__ method.", "response": "def _bind_and_call_constructor(self, t: type, *args) -> None:\n \"\"\"\n Accesses the __init__ method of a type directly and calls it with *args\n\n This allows the constructors of both superclasses to be called, as described in get_binding.md\n\n This could be done using two calls to super() with a hack based on how Python searches __mro__:\n\n ```\n super().__init__(run, parent) # calls greenlet.greenlet.__init__\n super(greenlet.greenlet, self).__init__() # calls TaggedObject.__init__\n ```\n\n Python will always find greenlet.greenlet first since it is specified first, but will ignore it if it is\n the first argument to super, which is meant to indicate the subclass and thus is not meant to be called on\n\n See: https://docs.python.org/3.7/library/functions.html#super\n\n This is indirect, confusing, and not in following with the purpose of super(), so 
the direct method was used\n \"\"\"\n t.__init__.__get__(self)(*args)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the current process instance.", "response": "def current() -> 'Process':\n \"\"\"\n Returns the instance of the process that is executing at the current moment.\n \"\"\"\n curr = greenlet.getcurrent()\n if not isinstance(curr, Process):\n raise TypeError(\"Current greenlet does not correspond to a Process instance.\")\n return cast(Process, greenlet.getcurrent())"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef resume(self) -> None:\n if _logger is not None:\n _log(INFO, \"Process\", self.local.name, \"resume\")\n self.rsim()._schedule(0.0, self.switch)", "response": "Resumes a process that has been previously paused by invoking function resume."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef interrupt(self, inter: Optional[Interrupt] = None) -> None:\n if inter is None:\n inter = Interrupt()\n if _logger is not None:\n _log(INFO, \"Process\", self.local.name, \"interrupt\", type=type(inter).__name__)\n self.rsim()._schedule(0.0, self.throw, inter)", "response": "Interrupts a process that has been previously paused or made to advance or raise an Interrupt exception on it."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef join(self, timeout: Optional[float] = None):\n class CancelBalk(Interrupt):\n pass\n\n self._counter += 1\n if _logger is not None:\n self._log(INFO, \"join\")\n heappush(self._waiting, (self._get_order_token(self._counter), Process.current()))\n\n proc_balk = None\n if timeout is not None:\n def balk(proc):\n nonlocal proc_balk\n try:\n advance(cast(float, timeout))\n proc.interrupt(Timeout())\n except CancelBalk:\n pass\n finally:\n proc_balk = None\n\n # The balking process is started here.\n proc_balk = add(balk, 
Process.current())\n\n try:\n pause()\n except Interrupt:\n current = Process.current()\n for index in reversed([i for i, (_, proc) in enumerate(self._waiting) if proc is current]):\n del self._waiting[index]\n heapify(self._waiting)\n raise\n finally:\n # Three situations can prompt a process to exit a queue:\n #\n # 1. The process is pop()ped out of the queue by a peer.\n # 2. The process balk()s out after a timeout.\n # 3. The process leaves the queue because of a distinct interrupt (besides CancelBalk).\n #\n # In cases 1 and 3, the balking process has never exited and is still in the advance() call. In both these\n # cases, the balking process should itself be interrupted, otherwise it may prompt the balking of a future\n # queue traversal. However, if we exit the queue because of case no. 2, the balking process is finished.\n # Interrupting it would do no harm (it has been tested by accident), but we mean to be deliberate about when\n # this interruption is necessary. So we perform the interrupt of the balking process only in cases 1 and 3;\n # in case 2, the balk() function exits, thereby clearing the reference we have here to it. Do remark that\n # whenever a timeout is not set, proc_balk remains None all the way, reducing the situation to case 1.\n if proc_balk is not None:\n proc_balk.interrupt(CancelBalk())", "response": "This method is used to join the process queue."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef pop(self):\n if not self.is_empty():\n _, process = heappop(self._waiting)\n if _logger is not None:\n self._log(INFO, \"pop\", process=process.local.name)\n process.resume()", "response": "Removes the top process from the queue and resumes its execution."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nturns on the signal.", "response": "def turn_on(self) -> \"Signal\":\n \"\"\"\n Turns on the signal. 
If processes are waiting, they are all resumed. This may be invoked from any code.\n\n Remark that while processes are simultaneously resumed in simulated time, they are effectively resumed in the\n sequence corresponding to the queue discipline. Therefore, if one of the resumed processes turns the signal back\n off, remaining resumed processes join back the queue. If the queue discipline is not monotonic (for instance,\n if it bears a random component), then this toggling of the signal may reorder the processes.\n \"\"\"\n if _logger is not None:\n self._log(INFO, \"turn-on\")\n self._is_on = True\n while not self._queue.is_empty():\n self._queue.pop()\n return self"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nturn off the signal.", "response": "def turn_off(self) -> \"Signal\":\n \"\"\"\n Turns off the signal. This may be invoked from any code.\n \"\"\"\n if _logger is not None:\n self._log(INFO, \"turn-off\")\n self._is_on = False\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef wait(self, timeout: Optional[float] = None) -> None:\n if _logger is not None:\n self._log(INFO, \"wait\")\n while not self.is_on:\n self._queue.join(timeout)", "response": "Waits for the current process to exit."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef take(self, num_instances: int = 1, timeout: Optional[float] = None) -> None:\n\n if num_instances < 1:\n raise ValueError(f\"Process must request at least 1 instance; here requested {num_instances}.\")\n if num_instances > self.num_instances_total:\n raise ValueError(\n f\"Process must request at most {self.num_instances_total} instances; here requested {num_instances}.\"\n )\n if _logger is not None:\n self._log(INFO, \"take\", num_instances=num_instances, free=self.num_instances_free)\n proc = Process.current()\n if self._num_instances_free < 
num_instances:\n proc.local.__num_instances_required = num_instances\n try:\n self._waiting.join(timeout)\n finally:\n del proc.local.__num_instances_required\n self._num_instances_free -= num_instances\n if _logger is not None and proc in self._usage:\n self._log(WARNING, \"take-again\", already=self._usage[proc], more=num_instances)\n self._usage.setdefault(proc, 0)\n self._usage[proc] += num_instances", "response": "Take the specified number of resource instances from the queue."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreleases the specified number of instances.", "response": "def release(self, num_instances: int = 1) -> None:\n \"\"\"\n The current process releases instances it has previously taken. It may thus release less than it has taken.\n These released instances become free. If the total number of free instances then satisfy the request of the top\n process of the waiting queue, it is popped off the queue and resumed.\n \"\"\"\n proc = Process.current()\n error_format = \"Process %s holds %s instances, but requests to release more (%s)\"\n if self._usage.get(proc, 0) > 0:\n if num_instances > self._usage[proc]:\n raise ValueError(\n error_format % (proc.local.name, self._usage[proc], num_instances)\n )\n self._usage[proc] -= num_instances\n self._num_instances_free += num_instances\n if _logger is not None:\n self._log(\n INFO,\n \"release\",\n num_instances=num_instances,\n keeping=self._usage[proc],\n free=self.num_instances_free\n )\n if self._usage[proc] <= 0:\n del self._usage[proc]\n if not self._waiting.is_empty():\n num_instances_next = cast(int, self._waiting.peek().local.__num_instances_required)\n if num_instances_next <= self.num_instances_free:\n self._waiting.pop()\n elif _logger is not None:\n self._log(DEBUG, \"release-nopop\", next_requires=num_instances_next, free=self.num_instances_free)\n elif _logger is not None:\n self._log(DEBUG, \"release-queueempty\")\n else:\n raise 
RuntimeError(\n f\"Process {proc.local.name} tries to release {num_instances} instances, but is holding none.\"\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef using(self, num_instances: int = 1, timeout: Optional[float] = None):\n self.take(num_instances, timeout)\n yield self\n self.release(num_instances)", "response": "Context manager around resource reservation."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprogress capture that writes updated metrics to an interactive terminal.", "response": "def capture_print(file_dest_maybe: Optional[IO] = None):\n \"\"\"Progress capture that writes updated metrics to an interactive terminal.\"\"\"\n file_dest: IO = file_dest_maybe or sys.stderr\n\n def _print_progress(progress_min: float, rt_remaining: float, _mc: MeasureComparison) -> None:\n nonlocal file_dest\n percent_progress = progress_min * 100.0\n time_remaining, unit = _display_time(rt_remaining)\n print(\n f\"Progress: {percent_progress:.1f}% -- Time remaining: {time_remaining} {unit} \",\n end=\"\\r\",\n file=file_dest\n )\n\n return _print_progress"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntracking the progress of a given measure against a target.", "response": "def track_progress(\n measure: MeasureProgress,\n target: MetricProgress,\n interval_check: float,\n capture_maybe: Optional[CaptureProgress] = None\n) -> None:\n \"\"\"\n Tracks progress against a certain end condition of the simulation (for instance, a certain duration on the simulated\n clock), reporting this progress as the simulation chugs along. Stops the simulation once the target has been\n reached. 
By default, the progress is reported as printout on standard output, in a manner that works best for\n digital terminals.\n \"\"\"\n\n def measure_to_target() -> MeasureComparison:\n return list(zip(measure(), target))\n\n def is_finished(progress: MeasureComparison) -> bool:\n return all(p >= t for p, t in progress)\n\n capture = capture_maybe or capture_print()\n rt_started = now_real()\n while True:\n advance(interval_check)\n\n rt_elapsed = now_real() - rt_started\n progress = measure_to_target()\n ratio_progress_min = min(m / t for m, t in progress)\n if ratio_progress_min == 0.0:\n rt_total_projected = inf\n else:\n rt_total_projected = rt_elapsed / ratio_progress_min\n capture(ratio_progress_min, rt_total_projected - rt_elapsed, progress)\n\n if is_finished(progress):\n stop()\n break"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef calcTm(seq, mv_conc=50, dv_conc=0, dntp_conc=0.8, dna_conc=50,\n max_nn_length=60, tm_method='santalucia',\n salt_corrections_method='santalucia'):\n ''' Return the tm of `seq` as a float.\n '''\n tm_meth = _tm_methods.get(tm_method)\n if tm_meth is None:\n raise ValueError('{} is not a valid tm calculation method'.format(\n tm_method))\n salt_meth = _salt_corrections_methods.get(salt_corrections_method)\n if salt_meth is None:\n raise ValueError('{} is not a valid salt correction method'.format(\n salt_corrections_method))\n # For whatever reason mv_conc and dna_conc have to be ints\n args = [pjoin(PRIMER3_HOME, 'oligotm'),\n '-mv', str(mv_conc),\n '-dv', str(dv_conc),\n '-n', str(dntp_conc),\n '-d', str(dna_conc),\n '-tp', str(tm_meth),\n '-sc', str(salt_meth),\n seq]\n tm = subprocess.check_output(args, stderr=DEV_NULL,\n env=os.environ)\n return float(tm)", "response": "Calculate the tm of a sequence."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef calcThermo(seq1, seq2, calc_type='ANY', mv_conc=50, dv_conc=0,\n 
dntp_conc=0.8, dna_conc=50, temp_c=37, max_loop=30,\n temp_only=False):\n \"\"\" Main subprocess wrapper for calls to the ntthal executable.\n\n Returns a named tuple with tm, ds, dh, and dg values or None if no\n structure / complex could be computed.\n \"\"\"\n args = [pjoin(PRIMER3_HOME, 'ntthal'),\n '-a', str(calc_type),\n '-mv', str(mv_conc),\n '-dv', str(dv_conc),\n '-n', str(dntp_conc),\n '-d', str(dna_conc),\n '-t', str(temp_c),\n '-maxloop', str(max_loop),\n '-path', THERMO_PATH,\n '-s1', seq1,\n '-s2', seq2]\n if temp_only:\n args += ['-r']\n out = subprocess.check_output(args, stderr=DEV_NULL,\n env=os.environ)\n return _parse_ntthal(out)", "response": "This function calls the ntthal executable and returns the output of the ntthal executable."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef calcHairpin(seq, mv_conc=50, dv_conc=0, dntp_conc=0.8, dna_conc=50,\n temp_c=37, max_loop=30, temp_only=False):\n ''' Return a namedtuple of the dS, dH, dG, and Tm of any hairpin struct\n present.\n '''\n return calcThermo(seq, seq, 'HAIRPIN', mv_conc, dv_conc, dntp_conc,\n dna_conc, temp_c, max_loop, temp_only)", "response": "Calculates the thermo of the hairpin structure."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef calcHeterodimer(seq1, seq2, mv_conc=50, dv_conc=0, dntp_conc=0.8,\n dna_conc=50, temp_c=37, max_loop=30, temp_only=False):\n ''' Return a tuple of the dS, dH, dG, and Tm of any predicted heterodimer.\n '''\n return calcThermo(seq1, seq2, 'ANY', mv_conc, dv_conc, dntp_conc,\n dna_conc, temp_c, max_loop, temp_only)", "response": "Calculate the heterodimer of the two sequences."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nassessing the hairpin and homodimer characteristics of a sequence of thermodynamic characteristics.", "response": "def assessOligo(seq):\n '''\n Return the 
thermodynamic characteristics of hairpin/homodimer structures.\n\n Returns a tuple of namedtuples (hairpin data, homodimer data) in which each\n individual tuple is structured (dS, dH, dG, Tm).\n\n '''\n hairpin_out = calcHairpin(seq)\n homodimer_out = calcHomodimer(seq)\n return (hairpin_out, homodimer_out)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef designPrimers(p3_args, input_log=None, output_log=None, err_log=None):\n ''' Return the raw primer3_core output for the provided primer3 args.\n\n Returns an ordered dict of the boulderIO-format primer3 output file\n '''\n sp = subprocess.Popen([pjoin(PRIMER3_HOME, 'primer3_core')],\n stdout=subprocess.PIPE, stdin=subprocess.PIPE,\n stderr=subprocess.STDOUT)\n p3_args.setdefault('PRIMER_THERMODYNAMIC_PARAMETERS_PATH',\n pjoin(PRIMER3_HOME, 'primer3_config/'))\n in_str = _formatBoulderIO(p3_args)\n if input_log:\n input_log.write(in_str)\n input_log.flush()\n out_str, err_str = sp.communicate(input=in_str)\n if output_log:\n output_log.write(out_str)\n output_log.flush()\n if err_log and err_str is not None:\n err_log.write(err_str)\n err_log.flush()\n return _parseBoulderIO(out_str)", "response": "Return the raw primer3_core output for the provided primer3 args."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding the executable bit to the file at filepath fp", "response": "def makeExecutable(fp):\n ''' Adds the executable bit to the file at filepath `fp`\n '''\n mode = ((os.stat(fp).st_mode) | 0o555) & 0o7777\n setup_log.info(\"Adding executable bit to %s (mode is now %o)\", fp, mode)\n os.chmod(fp, mode)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef calcHairpin(seq, mv_conc=50.0, dv_conc=0.0, dntp_conc=0.8, dna_conc=50.0,\n temp_c=37, max_loop=30):\n ''' Calculate the hairpin formation thermodynamics of a DNA sequence.\n\n **Note that the 
maximum length of `seq` is 60 bp.** This is a cap suggested\n by the Primer3 team as the longest reasonable sequence length for which\n a two-state NN model produces reliable results (see primer3/src/libnano/thal.h:50).\n\n Args:\n seq (str): DNA sequence to analyze for hairpin formation\n\n mv_conc (float/int, optional): Monovalent cation conc. (mM)\n dv_conc (float/int, optional): Divalent cation conc. (mM)\n dntp_conc (float/int, optional): dNTP conc. (mM)\n dna_conc (float/int, optional): DNA conc. (nM)\n temp_c (int, optional): Simulation temperature for dG (Celsius)\n max_loop(int, optional): Maximum size of loops in the structure\n\n Returns:\n A `ThermoResult` object with thermodynamic characteristics of the\n hairpin formation.\n\n Raises:\n ``RuntimeError``\n\n '''\n _setThermoArgs(**locals())\n return _THERMO_ANALYSIS.calcHairpin(seq).checkExc()", "response": "Calculates the hairpin formation thermodynamics of a sequence."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef calcEndStability(seq1, seq2, mv_conc=50, dv_conc=0, dntp_conc=0.8,\n dna_conc=50, temp_c=37, max_loop=30):\n ''' Calculate the 3' end stability of DNA sequence `seq1` against DNA\n sequence `seq2`.\n\n **Note that at least one of the two sequences must by <60 bp in length.**\n This is a cap imposed by Primer3 as the longest reasonable sequence length\n for which a two-state NN model produces reliable results (see\n primer3/src/libnano/thal.h:50).\n\n Args:\n seq1 (str) : DNA sequence to analyze for 3' end\n hybridization against the target\n sequence\n seq2 (str) : Target DNA sequence to analyze for\n seq1 3' end hybridization\n\n mv_conc (float/int, optional) : Monovalent cation conc. (mM)\n dv_conc (float/int, optional) : Divalent cation conc. (mM)\n dntp_conc (float/int, optional) : dNTP conc. (mM)\n dna_conc (float/int, optional) : DNA conc. 
(nM)\n temp_c (int, optional) : Simulation temperature for dG (C)\n max_loop(int, optional) : Maximum size of loops in the\n structure\n\n Returns:\n A `ThermoResult` object with thermodynamic characteristics of the\n 3' hybridization interaction.\n\n Raises:\n ``RuntimeError``\n\n '''\n _setThermoArgs(**locals())\n return _THERMO_ANALYSIS.calcEndStability(seq1, seq2).checkExc()", "response": "Calculates the 3 end stability of seq1 against seq2."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncalculates the melting temperature of a sequence.", "response": "def calcTm(seq, mv_conc=50, dv_conc=0, dntp_conc=0.8, dna_conc=50,\n max_nn_length=60, tm_method='santalucia',\n salt_corrections_method='santalucia'):\n ''' Calculate the melting temperature (Tm) of a DNA sequence.\n\n Note that NN thermodynamics will be used to calculate the Tm of sequences\n up to 60 bp in length, after which point the following formula will be\n used::\n\n Tm = 81.5 + 16.6(log10([mv_conc])) + 0.41(%GC) - 600/length\n\n Args:\n seq (str) : DNA sequence\n mv_conc (float/int, optional) : Monovalent cation conc. (mM)\n dv_conc (float/int, optional) : Divalent cation conc. (mM)\n dntp_conc (float/int, optional) : dNTP conc. (mM)\n dna_conc (float/int, optional) : DNA conc. 
(nM)\n max_nn_length (int, optional) : Maximum length for\n nearest-neighbor calcs\n tm_method (str, optional) : Tm calculation method\n (breslauer or santalucia)\n salt_corrections_method (str, optional) : Salt correction method\n (schildkraut, owczarzy,\n santalucia)\n\n Returns:\n The melting temperature in degrees Celsius (float).\n\n '''\n _setThermoArgs(**locals())\n return _THERMO_ANALYSIS.calcTm(seq)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nrun the Primer3 design process.", "response": "def designPrimers(seq_args, global_args=None, misprime_lib=None,\n mishyb_lib=None, debug=False):\n ''' Run the Primer3 design process.\n\n If the global args have been previously set (either by a previous\n `designPrimers` call or by a `setGlobals` call), `designPrimers` may be\n called with seqArgs alone (as a means of optimization).\n\n Args:\n seq_args (dict) : Primer3 sequence/design args as per\n Primer3 docs\n\n global_args (dict, optional) : Primer3 global args as per Primer3 docs\n misprime_lib (dict, optional) : `Sequence name: sequence` dictionary\n for mispriming checks.\n mishyb_lib (dict, optional) : `Sequence name: sequence` dictionary\n for mishybridization checks.\n\n Returns:\n A dictionary of Primer3 results (should be identical to the expected\n BoulderIO output from primer3_main)\n\n '''\n if global_args:\n primerdesign.setGlobals(global_args, misprime_lib, mishyb_lib)\n primerdesign.setSeqArgs(seq_args)\n return primerdesign.runDesign(debug)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef unravel_sections(section_data):\n sections = []\n for type, subsection_list in section_data.items():\n for section in subsection_list:\n section['sectionType'] = type\n sections.append(section)\n return sections", "response": "Unravels section type dictionary into flat list of sections with sectionType set as an attribute."} {"SOURCE": "codesearchnet", "instruction": 
"Here you have a function in Python 3, explain what it does\ndef unravel_staff(staff_data):\n staff_list = []\n for role, staff_members in staff_data['data'].items():\n for member in staff_members:\n member['role'] = role\n staff_list.append(member)\n return staff_list", "response": "Unravels staff role dictionary into flat list of staff\n members with role set as an attribute."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the gradebook id for a given gradebook uuid.", "response": "def get_gradebook_id(self, gbuuid):\n \"\"\"Return gradebookid for a given gradebook uuid.\n\n Args:\n gbuuid (str): gradebook uuid, i.e. ``STELLAR:/project/gbngtest``\n\n Raises:\n PyLmodUnexpectedData: No gradebook id returned\n requests.RequestException: Exception connection error\n ValueError: Unable to decode response content\n\n Returns:\n str: value of gradebook id\n \"\"\"\n gradebook = self.get('gradebook', params={'uuid': gbuuid})\n if 'data' not in gradebook:\n failure_message = ('Error in get_gradebook_id '\n 'for {0} - no data'.format(\n gradebook\n ))\n log.error(failure_message)\n raise PyLmodUnexpectedData(failure_message)\n return gradebook['data']['gradebookId']"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_options(self, gradebook_id):\n end_point = 'gradebook/options/{gradebookId}'.format(\n gradebookId=gradebook_id or self.gradebook_id)\n options = self.get(end_point)\n return options['data']", "response": "Get options dictionary for a gradebook."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting assignments for a given gradebook.", "response": "def get_assignments(\n self,\n gradebook_id='',\n simple=False,\n max_points=True,\n avg_stats=False,\n grading_stats=False\n ):\n \"\"\"Get assignments for a gradebook.\n\n Return list of assignments for a given gradebook,\n specified by a py:attribute::gradebook_id. 
You can control\n if additional parameters are returned, but the response\n time with py:attribute::avg_stats and py:attribute::grading_stats\n enabled is significantly longer.\n\n Args:\n gradebook_id (str): unique identifier for gradebook, i.e. ``2314``\n simple (bool): return just assignment names, default= ``False``\n max_points (bool):\n Max points is a property of the grading scheme for the\n assignment rather than a property of the assignment itself,\n default= ``True``\n avg_stats (bool): return average grade, default= ``False``\n grading_stats (bool):\n return grading statistics, i.e. number of approved grades,\n unapproved grades, etc., default= ``False``\n\n Raises:\n requests.RequestException: Exception connection error\n ValueError: Unable to decode response content\n\n Returns:\n list: list of assignment dictionaries\n\n An example return value is:\n\n .. code-block:: python\n\n [\n {\n u'assignmentId': 2431240,\n u'categoryId': 1293820,\n u'description': u'',\n u'dueDate': 1372392000000,\n u'dueDateString': u'06-28-2013',\n u'gradebookId': 1293808,\n u'graderVisible': True,\n u'gradingSchemeId': 2431243,\n u'gradingSchemeType': u'NUMERIC',\n u'isComposite': False,\n u'isHomework': False,\n u'maxPointsTotal': 10.0,\n u'name': u'Homework 1',\n u'shortName': u'HW1',\n u'userDeleted': False,\n u'weight': 1.0\n },\n {\n u'assignmentId': 16708850,\n u'categoryId': 1293820,\n u'description': u'',\n u'dueDate': 1383541200000,\n u'dueDateString': u'11-04-2013',\n u'gradebookId': 1293808,\n u'graderVisible': False,\n u'gradingSchemeId': 16708851,\n u'gradingSchemeType': u'NUMERIC',\n u'isComposite': False,\n u'isHomework': False,\n u'maxPointsTotal': 100.0,\n u'name': u'midterm1',\n u'shortName': u'mid1',\n u'userDeleted': False,\n u'weight': 1.0\n },\n ]\n\n \"\"\"\n # These are parameters required for the remote API call, so\n # there aren't too many arguments\n # pylint: disable=too-many-arguments\n params = dict(\n 
includeMaxPoints=json.dumps(max_points),\n includeAvgStats=json.dumps(avg_stats),\n includeGradingStats=json.dumps(grading_stats)\n )\n\n assignments = self.get(\n 'assignments/{gradebookId}'.format(\n gradebookId=gradebook_id or self.gradebook_id\n ),\n params=params,\n )\n if simple:\n return [{'AssignmentName': x['name']}\n for x in assignments['data']]\n return assignments['data']"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_assignment_by_name(self, assignment_name, assignments=None):\n if assignments is None:\n assignments = self.get_assignments()\n for assignment in assignments:\n if assignment['name'] == assignment_name:\n return assignment['assignmentId'], assignment\n return None, None", "response": "Get an assignment by name. It works by retrieving all assignments from the service and returning the first matching assignment."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a new assignment.", "response": "def create_assignment( # pylint: disable=too-many-arguments\n self,\n name,\n short_name,\n weight,\n max_points,\n due_date_str,\n gradebook_id='',\n **kwargs\n ):\n \"\"\"Create a new assignment.\n\n Create a new assignment. By default, assignments are created\n under the `Uncategorized` category.\n\n Args:\n name (str): descriptive assignment name,\n i.e. ``new NUMERIC SIMPLE ASSIGNMENT``\n short_name (str): short name of assignment, one word of\n no more than 5 characters, i.e. ``SAnew``\n weight (str): floating point value for weight, i.e. ``1.0``\n max_points (str): floating point value for maximum point\n total, i.e. ``100.0``\n due_date_str (str): due date as string in ``mm-dd-yyyy``\n format, i.e. ``08-21-2011``\n gradebook_id (str): unique identifier for gradebook, i.e. ``2314``\n kwargs (dict): dictionary containing additional parameters,\n i.e. ``graderVisible``, ``totalAverage``, and ``categoryId``.\n\n For example:\n\n .. 
code-block:: python\n\n {\n u'graderVisible': True,\n u'totalAverage': None\n u'categoryId': 1007964,\n }\n\n Raises:\n requests.RequestException: Exception connection error\n ValueError: Unable to decode response content\n\n Returns:\n dict: dictionary containing ``data``, ``status`` and ``message``\n for example:\n\n .. code-block:: python\n\n {\n u'data':\n {\n u'assignmentId': 18490492,\n u'categoryId': 1293820,\n u'description': u'',\n u'dueDate': 1312171200000,\n u'dueDateString': u'08-01-2011',\n u'gradebookId': 1293808,\n u'graderVisible': False,\n u'gradingSchemeId': 18490493,\n u'gradingSchemeType': u'NUMERIC',\n u'isComposite': False,\n u'isHomework': False,\n u'maxPointsTotal': 100.0,\n u'name': u'new NUMERIC SIMPLE ASSIGNMENT',\n u'numStudentGradesToBeApproved': 0,\n u'numStudentsToBeGraded': 614,\n u'shortName': u'SAnew',\n u'userDeleted': False,\n u'weight': 1.0\n },\n u'message': u'assignment is created successfully',\n u'status': 1\n }\n\n \"\"\"\n data = {\n 'name': name,\n 'shortName': short_name,\n 'weight': weight,\n 'graderVisible': False,\n 'gradingSchemeType': 'NUMERIC',\n 'gradebookId': gradebook_id or self.gradebook_id,\n 'maxPointsTotal': max_points,\n 'dueDateString': due_date_str\n }\n data.update(kwargs)\n log.info(\"Creating assignment %s\", name)\n response = self.post('assignment', data)\n log.debug('Received response data: %s', response)\n return response"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_grade(\n self,\n assignment_id,\n student_id,\n grade_value,\n gradebook_id='',\n **kwargs\n ):\n \"\"\"Set numerical grade for student and assignment.\n\n Set a numerical grade for for a student and assignment. 
Additional\n options\n for grade ``mode`` are: OVERALL_GRADE = ``1``, REGULAR_GRADE = ``2``\n To set 'excused' as the grade, enter ``None`` for letter and\n numeric grade values,\n and pass ``x`` as the ``specialGradeValue``.\n ``ReturnAffectedValues`` flag determines whether or not to return\n student cumulative points and\n impacted assignment category grades (average and student grade).\n\n Args:\n assignment_id (str): numerical ID for assignment\n student_id (str): numerical ID for student\n grade_value (str): numerical grade value\n gradebook_id (str): unique identifier for gradebook, i.e. ``2314``\n kwargs (dict): dictionary of additional parameters\n\n .. code-block:: python\n\n {\n u'letterGradeValue':None,\n u'booleanGradeValue':None,\n u'specialGradeValue':None,\n u'mode':2,\n u'isGradeApproved':False,\n u'comment':None,\n u'returnAffectedValues': True,\n }\n\n Raises:\n requests.RequestException: Exception connection error\n ValueError: Unable to decode response content\n\n Returns:\n dict: dictionary containing response ``status`` and ``message``\n\n .. 
code-block:: python\n\n {\n u'message': u'grade saved successfully',\n u'status': 1\n }\n\n \"\"\"\n # pylint: disable=too-many-arguments\n\n # numericGradeValue stringified because 'x' is a possible\n # value for excused grades.\n grade_info = {\n 'studentId': student_id,\n 'assignmentId': assignment_id,\n 'mode': 2,\n 'comment': 'from MITx {0}'.format(time.ctime(time.time())),\n 'numericGradeValue': str(grade_value),\n 'isGradeApproved': False\n }\n grade_info.update(kwargs)\n log.info(\n \"student %s set_grade=%s for assignment %s\",\n student_id,\n grade_value,\n assignment_id)\n return self.post(\n 'grades/{gradebookId}'.format(\n gradebookId=gradebook_id or self.gradebook_id\n ),\n data=grade_info,\n )", "response": "Set a numerical grade for a student and assignment."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef multi_grade(self, grade_array, gradebook_id=''):\n log.info('Sending grades: %r', grade_array)\n return self.post(\n 'multiGrades/{gradebookId}'.format(\n gradebookId=gradebook_id or self.gradebook_id\n ),\n data=grade_array,\n )", "response": "Set multiple grades for students."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_sections(self, gradebook_id='', simple=False):\n params = dict(includeMembers='false')\n\n section_data = self.get(\n 'sections/{gradebookId}'.format(\n gradebookId=gradebook_id or self.gradebook_id\n ),\n params=params\n )\n\n if simple:\n sections = self.unravel_sections(section_data['data'])\n return [{'SectionName': x['name']} for x in sections]\n return section_data['data']", "response": "Get the sections for a given gradebook."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_section_by_name(self, section_name):\n sections = self.unravel_sections(self.get_sections())\n for section in sections:\n if section['name'] == 
section_name:\n return section['groupId'], section\n return None, None", "response": "Get a section by its name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_students(\n self,\n gradebook_id='',\n simple=False,\n section_name='',\n include_photo=False,\n include_grade_info=False,\n include_grade_history=False,\n include_makeup_grades=False\n ):\n \"\"\"Get students for a gradebook.\n\n Get a list of students for a given gradebook,\n specified by a gradebook id. Does not include grade data.\n\n Args:\n gradebook_id (str): unique identifier for gradebook, i.e. ``2314``\n simple (bool):\n if ``True``, just return dictionary with keys ``email``,\n ``name``, ``section``, default = ``False``\n section_name (str): section name\n include_photo (bool): include student photo, default= ``False``\n include_grade_info (bool):\n include student's grade info, default= ``False``\n include_grade_history (bool):\n include student's grade history, default= ``False``\n include_makeup_grades (bool):\n include student's makeup grades, default= ``False``\n\n Raises:\n requests.RequestException: Exception connection error\n ValueError: Unable to decode response content\n\n Returns:\n list: list of student dictionaries\n\n .. 
code-block:: python\n\n [{\n u'accountEmail': u'stellar.test2@gmail.com',\n u'displayName': u'Molly Parker',\n u'photoUrl': None,\n u'middleName': None,\n u'section': u'Unassigned',\n u'sectionId': 1293925,\n u'editable': False,\n u'overallGradeInformation': None,\n u'studentId': 1145,\n u'studentAssignmentInfo': None,\n u'sortableName': u'Parker, Molly',\n u'surname': u'Parker',\n u'givenName': u'Molly',\n u'nickName': u'Molly',\n u'email': u'stellar.test2@gmail.com'\n },]\n\n \"\"\"\n # These are parameters required for the remote API call, so\n # there aren't too many arguments, or too many variables\n # pylint: disable=too-many-arguments,too-many-locals\n\n # Set params by arguments\n params = dict(\n includePhoto=json.dumps(include_photo),\n includeGradeInfo=json.dumps(include_grade_info),\n includeGradeHistory=json.dumps(include_grade_history),\n includeMakeupGrades=json.dumps(include_makeup_grades),\n )\n\n url = 'students/{gradebookId}'\n if section_name:\n group_id, _ = self.get_section_by_name(section_name)\n if group_id is None:\n failure_message = (\n 'in get_students -- Error: '\n 'No such section %s' % section_name\n )\n log.critical(failure_message)\n raise PyLmodNoSuchSection(failure_message)\n url += '/section/{0}'.format(group_id)\n\n student_data = self.get(\n url.format(\n gradebookId=gradebook_id or self.gradebook_id\n ),\n params=params,\n )\n\n if simple:\n # just return dict with keys email, name, section\n student_map = dict(\n accountEmail='email',\n displayName='name',\n section='section'\n )\n\n def remap(students):\n \"\"\"Convert mit.edu domain to upper-case for student emails.\n\n The mit.edu domain for user email must be upper-case,\n i.e. 
MIT.EDU.\n\n Args:\n students (list): list of students\n\n Returns:\n dict: dictionary of updated student email domains\n \"\"\"\n newx = dict((student_map[k], students[k]) for k in student_map)\n # match certs\n newx['email'] = newx['email'].replace('@mit.edu', '@MIT.EDU')\n return newx\n\n return [remap(x) for x in student_data['data']]\n\n return student_data['data']", "response": "Get a list of students for a given gradebook."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting a student based on an email address.", "response": "def get_student_by_email(self, email, students=None):\n \"\"\"Get a student based on an email address.\n\n Calls ``self.get_students()`` to get list of all students,\n if not passed as the ``students`` parameter.\n\n Args:\n email (str): student email\n students (list): dictionary of students to search, default: None\n When ``students`` is unspecified, all students in gradebook\n are retrieved.\n\n Raises:\n requests.RequestException: Exception connection error\n ValueError: Unable to decode response content\n\n Returns:\n tuple: tuple of student id and student dictionary.\n \"\"\"\n if students is None:\n students = self.get_students()\n\n email = email.lower()\n for student in students:\n if student['accountEmail'].lower() == email:\n return student['studentId'], student\n return None, None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ntransfer grades from spreadsheet to array. Helper function that transfer grades from spreadsheet using ``multi_grade()`` (multiple students at a time). We do this by creating a large array containing all grades to transfer, then make one call to the Gradebook API. Args: csv_reader (csv.DictReader): list of rows in CSV file email_field (str): The name of the email field non_assignment_fields (list): list of column names in CSV file which should not be treated as assignment names approve_grades (bool): Should grades be auto approved? 
use_max_points_column (bool): If true, read the max points and normalize values from the CSV and use the max points value in place of the default if normalized is False. max_points_column (str): The name of the max_pts column. All rows contain the same number, the max points for the assignment. normalize_column (str): The name of the normalize column which indicates whether to use the max points value. Raises: PyLmodFailedAssignmentCreation: Failed to create assignment requests.RequestException: Exception connection error ValueError: Unable to decode response content Returns: tuple: tuple of dictionary containing response ``status`` and ``message``, and duration of operation", "response": "def _spreadsheet2gradebook_multi(\n self,\n csv_reader,\n email_field,\n non_assignment_fields,\n approve_grades=False,\n use_max_points_column=False,\n max_points_column=None,\n normalize_column=None\n ):\n \"\"\"Transfer grades from spreadsheet to array.\n\n Helper function that transfer grades from spreadsheet using\n ``multi_grade()`` (multiple students at a time). We do this by\n creating a large array containing all grades to transfer, then\n make one call to the Gradebook API.\n\n Args:\n csv_reader (csv.DictReader): list of rows in CSV file\n email_field (str): The name of the email field\n non_assignment_fields (list): list of column names in CSV file\n which should not be treated as assignment names\n approve_grades (bool): Should grades be auto approved?\n use_max_points_column (bool): If true, read the max points\n and normalize values from the CSV and use the max points value\n in place of the default if normalized is False.\n max_points_column (str): The name of the max_pts column. 
All\n rows contain the same number, the max points for\n the assignment.\n normalize_column (str): The name of the normalize column which\n indicates whether to use the max points value.\n\n Raises:\n PyLmodFailedAssignmentCreation: Failed to create assignment\n requests.RequestException: Exception connection error\n ValueError: Unable to decode response content\n\n Returns:\n tuple: tuple of dictionary containing response ``status``\n and ``message``, and duration of operation\n\n \"\"\"\n # pylint: disable=too-many-locals\n if use_max_points_column:\n if max_points_column is None:\n raise ValueError(\n \"max_points_column must be set \"\n \"if use_max_points_column is set\"\n )\n if normalize_column is None:\n raise ValueError(\n \"normalize_column must be set \"\n \"if use_max_points_column is set\"\n )\n\n assignments = self.get_assignments()\n students = self.get_students()\n assignment2id = {}\n grade_array = []\n for row in csv_reader:\n email = row[email_field]\n sid, _ = self.get_student_by_email(email, students)\n if sid is None:\n log.warning(\n 'Error in spreadsheet2gradebook: cannot find '\n 'student id for email=\"%s\"\\n', email\n )\n continue\n for field in row.keys():\n if field in non_assignment_fields:\n continue\n if field not in assignment2id:\n assignment_id, _ = self.get_assignment_by_name(\n field, assignments=assignments\n )\n # If no assignment found, try creating it.\n if assignment_id is None:\n name = field\n shortname = field[0:3] + field[-2:]\n log.info('calling create_assignment from multi')\n\n # If the max_pts and normalize columns are present,\n # and use_max_points_column is True,\n # replace the default value for max points.\n max_points = DEFAULT_MAX_POINTS\n if use_max_points_column:\n # This value means it was already normalized, and\n # we should use the default max points\n # instead of the one in the CSV.\n normalize_value = True\n normalize_value_str = row.get(normalize_column)\n if normalize_value_str is not None:\n 
try:\n normalize_value = bool(\n int(normalize_value_str)\n )\n except ValueError as ex:\n # Value is already normalized\n log.warning(\n 'Bool conversion error '\n ' in normalize column for '\n 'value: %s, exception: %s',\n normalize_value_str,\n ex\n )\n\n if not normalize_value:\n max_points_str = row.get(max_points_column)\n if max_points_str is not None:\n try:\n max_points = float(max_points_str)\n except ValueError as ex:\n log.warning(\n 'Floating point conversion error '\n 'in max points column for '\n 'value: %s, exception: %s',\n max_points_str,\n ex\n )\n\n response = self.create_assignment(\n name, shortname, 1.0, max_points, '12-15-2013'\n )\n if (\n not response.get('data', '') or\n 'assignmentId' not in response.get('data')\n ):\n failure_message = (\n \"Error! Failed to create assignment {0}\"\n \", got {1}\".format(\n name, response\n )\n )\n log.critical(failure_message)\n raise PyLmodFailedAssignmentCreation(\n failure_message\n )\n assignment_id = response['data']['assignmentId']\n log.info(\"Assignment %s has Id=%s\", field, assignment_id)\n assignment2id[field] = assignment_id\n\n assignment_id = assignment2id[field]\n successful = True\n try:\n # Try to convert to numeric, but grade the\n # rest anyway if any particular grade isn't a\n # number\n gradeval = float(row[field]) * 1.0\n log.debug(\n 'Received grade value %s(converted to %s) for '\n 'student %s on assignment %s',\n row[field],\n gradeval,\n sid,\n assignment_id,\n )\n except ValueError as err:\n log.exception(\n \"Failed in converting grade for student %s\"\n \", row=%s, err=%s\", sid, row, err\n )\n successful = False\n if successful:\n grade_array.append({\n \"studentId\": sid,\n \"assignmentId\": assignment_id,\n \"numericGradeValue\": gradeval,\n \"mode\": 2,\n \"isGradeApproved\": approve_grades\n })\n # Everything is setup to post, do the post and track the time\n # it takes.\n log.info(\n 'Data read from file, doing multiGrades API '\n 'call (%d grades)', 
len(grade_array)\n )\n tstart = time.time()\n response = self.multi_grade(grade_array)\n duration = time.time() - tstart\n log.info(\n 'multiGrades API call done (%d bytes returned) '\n 'dt=%6.2f seconds.', len(json.dumps(response)), duration\n )\n return response, duration"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupload grades from CSV format spreadsheet file into Learning Modules gradebook.", "response": "def spreadsheet2gradebook(\n self,\n csv_file,\n email_field=None,\n approve_grades=False,\n use_max_points_column=False,\n max_points_column=None,\n normalize_column=None\n ):\n \"\"\"Upload grade spreadsheet to gradebook.\n\n Upload grades from CSV format spreadsheet file into the\n Learning Modules gradebook. The spreadsheet must have a column\n named ``External email`` which is used as the student's email\n address (for looking up and matching studentId).\n\n These columns are disregarded: ``ID``, ``Username``,\n ``Full Name``, ``edX email``, ``External email``,\n as well as the strings passed in ``max_points_column``\n and ``normalize_column``, if any.\n All other columns are taken as assignments.\n\n If ``email_field`` is specified, then that field name is taken as\n the student's email.\n\n .. code-block:: none\n\n External email,AB Assignment 01,AB Assignment 02\n jeannechiang@gmail.com,1.0,0.9\n stellar.test2@gmail.com,0.2,0.4\n stellar.test1@gmail.com,0.93,0.77\n\n Args:\n csv_reader (str): filename of csv data, or readable file object\n email_field (str): student's email\n approve_grades (bool): Should grades be auto approved?\n use_max_points_column (bool):\n If ``True``, read the max points and normalize values\n from the CSV and use the max points value in place of\n the default if normalized is ``False``.\n max_points_column (str): The name of the max_pts column. 
All\n rows contain the same number, the max points for\n the assignment.\n normalize_column (str): The name of the normalize column which\n indicates whether to use the max points value.\n\n Raises:\n PyLmodFailedAssignmentCreation: Failed to create assignment\n requests.RequestException: Exception connection error\n ValueError: Unable to decode response content\n\n Returns:\n tuple: tuple of dictionary containing response ``status``\n and ``message``, and duration of operation\n\n \"\"\"\n non_assignment_fields = [\n 'ID', 'Username', 'Full Name', 'edX email', 'External email'\n ]\n if max_points_column is not None:\n non_assignment_fields.append(max_points_column)\n if normalize_column is not None:\n non_assignment_fields.append(normalize_column)\n\n if email_field is not None:\n non_assignment_fields.append(email_field)\n else:\n email_field = 'External email'\n\n if not hasattr(csv_file, 'read'):\n file_pointer = open(csv_file)\n else:\n file_pointer = csv_file\n csv_reader = csv.DictReader(file_pointer, dialect='excel')\n\n response = self._spreadsheet2gradebook_multi(\n csv_reader,\n email_field,\n non_assignment_fields,\n approve_grades=approve_grades,\n use_max_points_column=use_max_points_column,\n max_points_column=max_points_column,\n normalize_column=normalize_column\n )\n return response"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a list of dictionaries one for each member of the gradebook and one for each role.", "response": "def get_staff(self, gradebook_id, simple=False):\n \"\"\"Get staff list for gradebook.\n\n Get staff list for the gradebook specified. Optionally, return\n a less detailed list by specifying ``simple = True``.\n\n If simple=True, return a list of dictionaries, one dictionary\n for each member. The dictionary contains a member's ``email``,\n ``displayName``, and ``role``. 
Members with multiple roles will\n appear in the list once for each role.\n\n Args:\n gradebook_id (str): unique identifier for gradebook, i.e. ``2314``\n simple (bool): Return a staff list with less detail. Default\n is ``False``.\n\n Returns:\n\n An example return value is:\n\n .. code-block:: python\n\n {\n u'data': {\n u'COURSE_ADMIN': [\n {\n u'accountEmail': u'benfranklin@mit.edu',\n u'displayName': u'Benjamin Franklin',\n u'editable': False,\n u'email': u'benfranklin@mit.edu',\n u'givenName': u'Benjamin',\n u'middleName': None,\n u'mitId': u'921344431',\n u'nickName': u'Benjamin',\n u'personId': 10710616,\n u'sortableName': u'Franklin, Benjamin',\n u'surname': u'Franklin',\n u'year': None\n },\n ],\n u'COURSE_PROF': [\n {\n u'accountEmail': u'dduck@mit.edu',\n u'displayName': u'Donald Duck',\n u'editable': False,\n u'email': u'dduck@mit.edu',\n u'givenName': u'Donald',\n u'middleName': None,\n u'mitId': u'916144889',\n u'nickName': u'Donald',\n u'personId': 8117160,\n u'sortableName': u'Duck, Donald',\n u'surname': u'Duck',\n u'year': None\n },\n ],\n u'COURSE_TA': [\n {\n u'accountEmail': u'hduck@mit.edu',\n u'displayName': u'Huey Duck',\n u'editable': False,\n u'email': u'hduck@mit.edu',\n u'givenName': u'Huey',\n u'middleName': None,\n u'mitId': u'920445024',\n u'nickName': u'Huey',\n u'personId': 1299059,\n u'sortableName': u'Duck, Huey',\n u'surname': u'Duck',\n u'year': None\n },\n ]\n },\n }\n\n\n \"\"\"\n staff_data = self.get(\n 'staff/{gradebookId}'.format(\n gradebookId=gradebook_id or self.gradebook_id\n ),\n params=None,\n )\n if simple:\n simple_list = []\n unraveled_list = self.unravel_staff(staff_data)\n for member in unraveled_list.__iter__():\n simple_list.append({\n 'accountEmail': member['accountEmail'],\n 'displayName': member['displayName'],\n 'role': member['role'],\n })\n return simple_list\n return staff_data['data']"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_group(self, 
uuid=None):\n if uuid is None:\n uuid = self.uuid\n group_data = self.get('group', params={'uuid': uuid})\n return group_data", "response": "Get group data based on uuid."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_group_id(self, uuid=None):\n group_data = self.get_group(uuid)\n try:\n return group_data['response']['docs'][0]['id']\n except (KeyError, IndexError):\n failure_message = ('Error in get_group response data - '\n 'got {0}'.format(group_data))\n log.exception(failure_message)\n raise PyLmodUnexpectedData(failure_message)", "response": "Get group id based on uuid."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_membership(self, uuid=None):\n group_id = self.get_group_id(uuid=uuid)\n uri = 'group/{group_id}/member'\n mbr_data = self.get(uri.format(group_id=group_id), params=None)\n return mbr_data", "response": "Get the membership data based on uuid."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning True if an email is associated with a role.", "response": "def email_has_role(self, email, role_name, uuid=None):\n \"\"\"Determine if an email is associated with a role.\n\n Args:\n email (str): user email\n role_name (str): user role\n uuid (str): optional uuid. 
defaults to self.cuuid\n\n Raises:\n PyLmodUnexpectedData: Unexpected data was returned.\n requests.RequestException: Exception connection error\n\n Returns:\n bool: True or False if email has role_name\n\n \"\"\"\n mbr_data = self.get_membership(uuid=uuid)\n docs = []\n try:\n docs = mbr_data['response']['docs']\n except KeyError:\n failure_message = ('KeyError in membership data - '\n 'got {0}'.format(mbr_data))\n log.exception(failure_message)\n raise PyLmodUnexpectedData(failure_message)\n if len(docs) == 0:\n return False\n has_role = any(\n (x.get('email') == email and x.get('roleType') == role_name)\n for x in docs\n )\n if has_role:\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets course id based on uuid.", "response": "def get_course_id(self, course_uuid):\n \"\"\"Get course id based on uuid.\n\n Args:\n uuid (str): course uuid, i.e. /project/mitxdemosite\n\n Raises:\n PyLmodUnexpectedData: No course data was returned.\n requests.RequestException: Exception connection error\n\n Returns:\n int: numeric course id\n\n \"\"\"\n course_data = self.get(\n 'courseguide/course?uuid={uuid}'.format(\n uuid=course_uuid or self.course_id\n ),\n params=None\n )\n try:\n return course_data['response']['docs'][0]['id']\n except KeyError:\n failure_message = ('KeyError in get_course_id - '\n 'got {0}'.format(course_data))\n log.exception(failure_message)\n raise PyLmodUnexpectedData(failure_message)\n except TypeError:\n failure_message = ('TypeError in get_course_id - '\n 'got {0}'.format(course_data))\n log.exception(failure_message)\n raise PyLmodUnexpectedData(failure_message)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets a list of staff roster members for a given course.", "response": "def get_course_guide_staff(self, course_id=''):\n \"\"\"Get the staff roster for a course.\n\n Get a list of staff members for a given course,\n specified 
by a course id.\n\n Args:\n course_id (int): unique identifier for course, i.e. ``2314``\n\n Raises:\n requests.RequestException: Exception connection error\n ValueError: Unable to decode response content\n\n Returns:\n list: list of dictionaries containing staff data\n\n An example return value is:\n\n .. code-block:: python\n\n [\n {\n u'displayName': u'Huey Duck',\n u'role': u'TA',\n u'sortableDisplayName': u'Duck, Huey'\n },\n {\n u'displayName': u'Louie Duck',\n u'role': u'CourseAdmin',\n u'sortableDisplayName': u'Duck, Louie'\n },\n {\n u'displayName': u'Benjamin Franklin',\n u'role': u'CourseAdmin',\n u'sortableDisplayName': u'Franklin, Benjamin'\n },\n {\n u'displayName': u'George Washington',\n u'role': u'Instructor',\n u'sortableDisplayName': u'Washington, George'\n },\n ]\n \"\"\"\n staff_data = self.get(\n 'courseguide/course/{courseId}/staff'.format(\n courseId=course_id or self.course_id\n ),\n params=None\n )\n return staff_data['response']['docs']"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _data_to_json(data):\n if type(data) not in [str, unicode]:\n data = json.dumps(data)\n return data", "response": "Convert to json if it isn t already a string."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ngenerate URL from urlbase and service.", "response": "def _url_format(self, service):\n \"\"\"Generate URL from urlbase and service.\n\n Args:\n service (str): The endpoint service to use, i.e. 
gradebook\n Returns:\n str: URL to where the request should be made\n \"\"\"\n base_service_url = '{base}{service}'.format(\n base=self.urlbase,\n service=service\n )\n return base_service_url"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef post(self, service, data):\n url = self._url_format(service)\n data = Base._data_to_json(data)\n # Add content-type for body in POST.\n headers = {'content-type': 'application/json'}\n return self.rest_action(self._session.post, url,\n data=data, headers=headers)", "response": "Generic POST operation for sending data to Learning Modules API."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_token(secret, data, options=None):\n if not isinstance(secret, basestring):\n raise ValueError(\"firebase_token_generator.create_token: secret must be a string.\")\n if not options and not data:\n raise ValueError(\"firebase_token_generator.create_token: data is empty and no options are set.
This token will have no effect on Firebase.\");\n if not options:\n options = {}\n is_admin_token = ('admin' in options and options['admin'] == True)\n _validate_data(data, is_admin_token)\n claims = _create_options_claims(options)\n claims['v'] = TOKEN_VERSION\n claims['iat'] = int(time.time())\n claims['d'] = data\n\n token = _encode_token(secret, claims)\n if len(token) > 1024:\n raise RuntimeError(\"firebase_token_generator.create_token: generated token is too long.\")\n return token", "response": "Generates a secure authentication token."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse_html(fileobj, encoding):\n parser = HTMLParser(encoding=encoding, remove_blank_text=True)\n return parse(fileobj, parser)", "response": "Given a file object and an encoding return an ElementTree instance."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfetch sequence by seq_id optionally with start end bounds", "response": "def fetch(self, seq_id, start=None, end=None):\n \"\"\"fetch sequence by seq_id, optionally with start, end bounds\n\n \"\"\"\n rec = self._db.execute(\"\"\"select * from seqinfo where seq_id = ? order by added desc\"\"\", [seq_id]).fetchone()\n\n if rec is None:\n raise KeyError(seq_id)\n\n if self._writing and self._writing[\"relpath\"] == rec[\"relpath\"]:\n logger.warning(\"\"\"Fetching from file opened for writing;\n closing first ({})\"\"\".format(rec[\"relpath\"]))\n self.commit()\n\n path = os.path.join(self._root_dir, rec[\"relpath\"])\n fabgz = self._open_for_reading(path)\n return fabgz.fetch(seq_id, start, end)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstores a sequence with key seq_id in the sqlite3 database and return the seq_id.", "response": "def store(self, seq_id, seq):\n \"\"\"store a sequence with key seq_id. 
The sequence itself is stored in\n a fasta file and a reference to it in the sqlite3 database.\n\n \"\"\"\n\n if not self._writeable:\n raise RuntimeError(\"Cannot write -- opened read-only\")\n\n # open a file for writing if necessary\n # path: //\n # <---- relpath ---->\n # <------ dir_ ----->\n # <----------- path ----------->\n if self._writing is None:\n reldir = datetime.datetime.utcnow().strftime(\"%Y/%m%d/%H%M\")\n basename = str(time.time()) + \".fa.bgz\"\n relpath = os.path.join(reldir, basename)\n\n dir_ = os.path.join(self._root_dir, reldir)\n path = os.path.join(self._root_dir, reldir, basename)\n makedirs(dir_, exist_ok=True)\n fabgz = FabgzWriter(path)\n self._writing = {\"relpath\": relpath, \"fabgz\": fabgz}\n logger.info(\"Opened for writing: \" + path)\n\n self._writing[\"fabgz\"].store(seq_id, seq)\n alpha = \"\".join(sorted(set(seq)))\n self._db.execute(\"\"\"insert into seqinfo (seq_id, len, alpha, relpath)\n values (?, ?, ?,?)\"\"\", (seq_id, len(seq), alpha, self._writing[\"relpath\"]))\n return seq_id"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef copy_to(source, dest, engine_or_conn, **flags):\n dialect = postgresql.dialect()\n statement = getattr(source, 'statement', source)\n compiled = statement.compile(dialect=dialect)\n conn, autoclose = raw_connection_from(engine_or_conn)\n cursor = conn.cursor()\n query = cursor.mogrify(compiled.string, compiled.params).decode()\n formatted_flags = '({})'.format(format_flags(flags)) if flags else ''\n copy = 'COPY ({}) TO STDOUT {}'.format(query, formatted_flags)\n cursor.copy_expert(copy, dest)\n if autoclose:\n conn.close()", "response": "Export a SQLAlchemy query or select to a file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncopying a table from a source file to a destination file.", "response": "def copy_from(source, dest, engine_or_conn, columns=(), **flags):\n \"\"\"Import a table from 
a file. For flags, see the PostgreSQL documentation\n at http://www.postgresql.org/docs/9.5/static/sql-copy.html.\n\n Examples: ::\n with open('/path/to/file.tsv') as fp:\n copy_from(fp, MyTable, conn)\n\n with open('/path/to/file.csv') as fp:\n copy_from(fp, MyModel, engine, format='csv')\n\n :param source: Source file pointer, in read mode\n :param dest: SQLAlchemy model or table\n :param engine_or_conn: SQLAlchemy engine, connection, or raw_connection\n :param columns: Optional tuple of columns\n :param **flags: Options passed through to COPY\n\n If an existing connection is passed to `engine_or_conn`, it is the caller's\n responsibility to commit and close.\n\n The `columns` flag can be set to a tuple of strings to specify the column\n order. Passing `header` alone will not handle out of order columns, it simply tells\n postgres to ignore the first line of `source`.\n \"\"\"\n tbl = dest.__table__ if is_model(dest) else dest\n conn, autoclose = raw_connection_from(engine_or_conn)\n cursor = conn.cursor()\n relation = '.'.join('\"{}\"'.format(part) for part in (tbl.schema, tbl.name) if part)\n formatted_columns = '({})'.format(','.join(columns)) if columns else ''\n formatted_flags = '({})'.format(format_flags(flags)) if flags else ''\n copy = 'COPY {} {} FROM STDIN {}'.format(\n relation,\n formatted_columns,\n formatted_flags,\n )\n cursor.copy_expert(copy, source)\n if autoclose:\n conn.commit()\n conn.close()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nextracts a raw_connection and determine if it should be automatically closed.", "response": "def raw_connection_from(engine_or_conn):\n \"\"\"Extract a raw_connection and determine if it should be automatically closed.\n\n Only connections opened by this package will be closed automatically.\n \"\"\"\n if hasattr(engine_or_conn, 'cursor'):\n return engine_or_conn, False\n if hasattr(engine_or_conn, 'connection'):\n return engine_or_conn.connection, False\n 
return engine_or_conn.raw_connection(), True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef fetch_in_thread(sr, nsa):\n\n def fetch_seq(q, nsa):\n pid, ppid = os.getpid(), os.getppid()\n q.put((pid, ppid, sr[nsa]))\n \n q = Queue()\n p = Process(target=fetch_seq, args=(q, nsa))\n p.start()\n pid, ppid, seq = q.get()\n p.join()\n\n assert pid != ppid, \"sequence was not fetched from thread\"\n return pid, ppid, seq", "response": "fetch a sequence in a thread"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndownloading and install RPM Python binding.", "response": "def download_and_install(self):\n \"\"\"Download and install RPM Python binding.\"\"\"\n if self.is_installed_from_bin:\n try:\n self.installer.install_from_rpm_py_package()\n return\n except RpmPyPackageNotFoundError as e:\n Log.warn('RPM Py Package not found. reason: {0}'.format(e))\n\n # Pass to try to install from the source.\n pass\n\n # Download and install from the source.\n top_dir_name = self.downloader.download_and_expand()\n rpm_py_dir = os.path.join(top_dir_name, 'python')\n\n setup_py_in_found = False\n with Cmd.pushd(rpm_py_dir):\n if self.installer.setup_py.exists_in_path():\n setup_py_in_found = True\n self.installer.run()\n\n if not setup_py_in_found:\n self.installer.install_from_rpm_py_package()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_patchs_to_build_without_pkg_config(self, lib_dir, include_dir):\n additional_patches = [\n {\n 'src': r\"pkgconfig\\('--libs-only-L'\\)\",\n 'dest': \"['{0}']\".format(lib_dir),\n },\n # Considering -libs-only-l and -libs-only-L\n # https://github.com/rpm-software-management/rpm/pull/327\n {\n 'src': r\"pkgconfig\\('--libs(-only-l)?'\\)\",\n 'dest': \"['rpm', 'rpmio']\",\n 'required': True,\n },\n {\n 'src': r\"pkgconfig\\('--cflags'\\)\",\n 'dest': \"['{0}']\".format(include_dir),\n 
'required': True,\n },\n ]\n self.patches.extend(additional_patches)", "response": "Add patches to remove the pkg-config command and rpm.pc part."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\napplies replaced words and patches and saves the setup.py file.", "response": "def apply_and_save(self):\n \"\"\"Apply replaced words and patches, and save setup.py file.\"\"\"\n patches = self.patches\n\n content = None\n with open(self.IN_PATH) as f_in:\n # As setup.py.in file size is 2.4 KByte.\n # it's fine to read entire content.\n content = f_in.read()\n\n # Replace words.\n for key in self.replaced_word_dict:\n content = content.replace(key, self.replaced_word_dict[key])\n\n # Apply patches.\n out_patches = []\n for patch in patches:\n pattern = re.compile(patch['src'], re.MULTILINE)\n (content, subs_num) = re.subn(pattern, patch['dest'],\n content)\n if subs_num > 0:\n patch['applied'] = True\n out_patches.append(patch)\n\n for patch in out_patches:\n if patch.get('required') and not patch.get('applied'):\n Log.warn('Patch not applied {0}'.format(patch['src']))\n\n with open(self.OUT_PATH, 'w') as f_out:\n f_out.write(content)\n\n self.pathces = out_patches\n # Release content data to make it released by GC quickly.\n content = None"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef download_and_expand(self):\n top_dir_name = None\n if self.git_branch:\n # Download a source by git clone.\n top_dir_name = self._download_and_expand_by_git()\n else:\n # Download a source from the archive URL.\n # Downloading the compressed archive is better than \"git clone\",\n # because it is faster.\n # If download failed due to URL not found, try \"git clone\".\n try:\n top_dir_name = self._download_and_expand_from_archive_url()\n except RemoteFileNotFoundError:\n Log.info('Try to download by git clone.')\n top_dir_name = self._download_and_expand_by_git()\n return top_dir_name",
"response": "Download and expand RPM Python binding."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrun the install main logic.", "response": "def run(self):\n \"\"\"Run install main logic.\"\"\"\n self._make_lib_file_symbolic_links()\n self._copy_each_include_files_to_include_dir()\n self._make_dep_lib_file_sym_links_and_copy_include_files()\n self.setup_py.add_patchs_to_build_without_pkg_config(\n self.rpm.lib_dir, self.rpm.include_dir\n )\n self.setup_py.apply_and_save()\n self._build_and_install()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmakes symbolic links for lib files.", "response": "def _make_lib_file_symbolic_links(self):\n \"\"\"Make symbolic links for lib files.\n\n Make symbolic links from system library files or downloaded lib files\n to downloaded source library files.\n\n For example, case: Fedora x86_64\n Make symbolic links\n from\n a. /usr/lib64/librpmio.so* (one of them)\n b. /usr/lib64/librpm.so* (one of them)\n c. If rpm-build-libs package is installed,\n /usr/lib64/librpmbuild.so* (one of them)\n otherwise, downloaded and extracted rpm-build-libs.\n ./usr/lib64/librpmbuild.so* (one of them)\n c. If rpm-build-libs package is installed,\n /usr/lib64/librpmsign.so* (one of them)\n otherwise, downloaded and extracted rpm-build-libs.\n ./usr/lib64/librpmsign.so* (one of them)\n to\n a. rpm/rpmio/.libs/librpmio.so\n b. rpm/lib/.libs/librpm.so\n c. rpm/build/.libs/librpmbuild.so\n d. 
rpm/sign/.libs/librpmsign.so\n .\n This is a status after running \"make\" on actual rpm build process.\n \"\"\"\n so_file_dict = {\n 'rpmio': {\n 'sym_src_dir': self.rpm.lib_dir,\n 'sym_dst_dir': 'rpmio/.libs',\n 'require': True,\n },\n 'rpm': {\n 'sym_src_dir': self.rpm.lib_dir,\n 'sym_dst_dir': 'lib/.libs',\n 'require': True,\n },\n 'rpmbuild': {\n 'sym_src_dir': self.rpm.lib_dir,\n 'sym_dst_dir': 'build/.libs',\n 'require': True,\n },\n 'rpmsign': {\n 'sym_src_dir': self.rpm.lib_dir,\n 'sym_dst_dir': 'sign/.libs',\n },\n }\n\n self._update_sym_src_dirs_conditionally(so_file_dict)\n\n for name in so_file_dict:\n so_dict = so_file_dict[name]\n pattern = 'lib{0}.so*'.format(name)\n so_files = Cmd.find(so_dict['sym_src_dir'], pattern)\n if not so_files:\n is_required = so_dict.get('require', False)\n if not is_required:\n message_format = (\n \"Skip creating symbolic link of \"\n \"not existing so file '{0}'\"\n )\n Log.debug(message_format.format(name))\n continue\n\n message = 'so file pattern {0} not found at {1}'.format(\n pattern, so_dict['sym_src_dir']\n )\n raise InstallError(message)\n sym_dst_dir = os.path.abspath('../{0}'.format(\n so_dict['sym_dst_dir']))\n if not os.path.isdir(sym_dst_dir):\n Cmd.mkdir_p(sym_dst_dir)\n\n cmd = 'ln -sf {0} {1}/lib{2}.so'.format(so_files[0],\n sym_dst_dir,\n name)\n Cmd.sh_e(cmd)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _copy_each_include_files_to_include_dir(self):\n src_header_dirs = [\n 'rpmio',\n 'lib',\n 'build',\n 'sign',\n ]\n with Cmd.pushd('..'):\n src_include_dir = os.path.abspath('./include')\n for header_dir in src_header_dirs:\n if not os.path.isdir(header_dir):\n message_format = \"Skip not existing header directory '{0}'\"\n Log.debug(message_format.format(header_dir))\n continue\n header_files = Cmd.find(header_dir, '*.h')\n for header_file in header_files:\n pattern = '^{0}/'.format(header_dir)\n (dst_header_file, subs_num) = re.subn(pattern,\n '', 
header_file)\n if subs_num == 0:\n message = 'Failed to replace header_file: {0}'.format(\n header_file)\n raise ValueError(message)\n dst_header_file = os.path.abspath(\n os.path.join(src_include_dir, 'rpm', dst_header_file)\n )\n dst_dir = os.path.dirname(dst_header_file)\n if not os.path.isdir(dst_dir):\n Cmd.mkdir_p(dst_dir)\n shutil.copyfile(header_file, dst_header_file)", "response": "Copy each header file from the source directory to the destination directory."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _make_dep_lib_file_sym_links_and_copy_include_files(self):\n if not self._rpm_py_has_popt_devel_dep():\n message = (\n 'The RPM Python binding does not have popt-devel dependency'\n )\n Log.debug(message)\n return\n if self._is_popt_devel_installed():\n message = '{0} package is installed.'.format(\n self.pacakge_popt_devel_name)\n Log.debug(message)\n return\n if not self._is_package_downloadable():\n message = '''\nInstall a {0} download plugin or\ninstall the {0} pacakge [{1}].\n'''.format(self.package_sys_name, self.pacakge_popt_devel_name)\n raise InstallError(message)\n if not self._is_popt_installed():\n message = '''\nRequired {0} not installed: [{1}],\nInstall the {0} package.\n'''.format(self.package_sys_name, self.pacakge_popt_name)\n raise InstallError(message)\n\n self._download_and_extract_popt_devel()\n\n # Copy libpopt.so to rpm_root/lib/.libs/.\n popt_lib_dirs = [\n self.rpm.lib_dir,\n # /lib64/libpopt.so* installed at popt-1.13-7.el6.x86_64.\n '/lib64',\n # /lib/*/libpopt.so* installed at libpopt0-1.16-8ubuntu1\n '/lib',\n ]\n pattern = 'libpopt.so*'\n popt_so_file = None\n for popt_lib_dir in popt_lib_dirs:\n so_files = Cmd.find(popt_lib_dir, pattern)\n if so_files:\n popt_so_file = so_files[0]\n break\n\n if not popt_so_file:\n message = 'so file pattern {0} not found at {1}'.format(\n pattern, str(popt_lib_dirs)\n )\n raise InstallError(message)\n\n cmd = 'ln -sf {0} 
../lib/.libs/libpopt.so'.format(\n popt_so_file)\n Cmd.sh_e(cmd)\n\n # Copy popt.h to rpm_root/include\n shutil.copy('./usr/include/popt.h', '../include')", "response": "Make symbolic links for lib files and copy include files."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking if the RPM Python binding has a dependency to popt - devel.", "response": "def _rpm_py_has_popt_devel_dep(self):\n \"\"\"Check if the RPM Python binding has a dependency to popt-devel.\n\n Search include header files in the source code to check it.\n \"\"\"\n found = False\n with open('../include/rpm/rpmlib.h') as f_in:\n for line in f_in:\n if re.match(r'^#include .*popt.h.*$', line):\n found = True\n break\n return found"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run(self):\n try:\n if not self._is_rpm_all_lib_include_files_installed():\n self._make_lib_file_symbolic_links()\n self._copy_each_include_files_to_include_dir()\n self._make_dep_lib_file_sym_links_and_copy_include_files()\n self.setup_py.add_patchs_to_build_without_pkg_config(\n self.rpm.lib_dir, self.rpm.include_dir\n )\n self.setup_py.apply_and_save()\n self._build_and_install()\n except InstallError as e:\n if not self._is_rpm_all_lib_include_files_installed():\n org_message = str(e)\n message = '''\nInstall failed without the rpm-devel package for the reason below.\nCan you install the RPM package and run this installer again?\n'''\n message += org_message\n raise InstallError(message)\n else:\n raise e", "response": "Run the main logic."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef install_from_rpm_py_package(self):\n self._download_and_extract_rpm_py_package()\n\n # Find ./usr/lib64/pythonN.N/site-packages/rpm directory.\n # A binary built with the same Python version as the one in use is\n # the target for a safe installation.\n\n if self.rpm.has_set_up_py_in():\n # If RPM has setup.py.in, this strict 
check is okay.\n # Because we can still install from the source.\n py_dir_name = 'python{0}.{1}'.format(\n sys.version_info[0], sys.version_info[1])\n else:\n # If RPM does not have setup.py.in such as CentOS6,\n # Only way to install is by different Python's RPM package.\n py_dir_name = '*'\n\n python_lib_dir_pattern = os.path.join(\n 'usr', '*', py_dir_name, 'site-packages')\n rpm_dir_pattern = os.path.join(python_lib_dir_pattern, 'rpm')\n downloaded_rpm_dirs = glob.glob(rpm_dir_pattern)\n if not downloaded_rpm_dirs:\n message = 'Directory with a pattern: {0} not found.'.format(\n rpm_dir_pattern)\n raise RpmPyPackageNotFoundError(message)\n src_rpm_dir = downloaded_rpm_dirs[0]\n\n # Remove rpm directory for the possible installed directories.\n for rpm_dir in self.python.python_lib_rpm_dirs:\n if os.path.isdir(rpm_dir):\n Log.debug(\"Remove existing rpm directory {0}\".format(rpm_dir))\n shutil.rmtree(rpm_dir)\n\n dst_rpm_dir = self.python.python_lib_rpm_dir\n Log.debug(\"Copy directory from '{0}' to '{1}'\".format(\n src_rpm_dir, dst_rpm_dir))\n shutil.copytree(src_rpm_dir, dst_rpm_dir)\n\n file_name_pattern = 'rpm-*.egg-info'\n rpm_egg_info_pattern = os.path.join(\n python_lib_dir_pattern, file_name_pattern)\n downloaded_rpm_egg_infos = glob.glob(rpm_egg_info_pattern)\n if downloaded_rpm_egg_infos:\n existing_rpm_egg_info_pattern = os.path.join(\n self.python.python_lib_dir, file_name_pattern)\n existing_rpm_egg_infos = glob.glob(existing_rpm_egg_info_pattern)\n for existing_rpm_egg_info in existing_rpm_egg_infos:\n Log.debug(\"Remove existing rpm egg info file '{0}'\".format(\n existing_rpm_egg_info))\n os.remove(existing_rpm_egg_info)\n\n Log.debug(\"Copy file from '{0}' to '{1}'\".format(\n downloaded_rpm_egg_infos[0], self.python.python_lib_dir))\n shutil.copy2(downloaded_rpm_egg_infos[0],\n self.python.python_lib_dir)", "response": "Run install from RPM Python binding RPM package."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function 
in Python 3 to\nverify dependency RPM package status.", "response": "def verify_package_status(self):\n \"\"\"Verify dependency RPM package status.\"\"\"\n # rpm-libs is required for /usr/lib64/librpm*.so\n self.rpm.verify_packages_installed(['rpm-libs'])\n\n # Check RPM so files to build the Python binding.\n message_format = '''\nRPM: {0} or\nRPM download tool (dnf-plugins-core (dnf) or yum-utils (yum)) required.\nInstall any of those.\n'''\n if self.rpm.has_composed_rpm_bulid_libs():\n if (not self.rpm.is_package_installed('rpm-build-libs')\n and not self.rpm.is_downloadable()):\n raise InstallError(message_format.format('rpm-build-libs'))\n else:\n # All the needed so files are included in rpm-libs package.\n pass"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_installer(self, rpm_py_version, **kwargs):\n return FedoraInstaller(rpm_py_version, self.python, self.rpm, **kwargs)", "response": "Create an Installer object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef python_lib_rpm_dirs(self):\n libs = [self.python_lib_arch_dir, self.python_lib_non_arch_dir]\n\n def append_rpm(path):\n return os.path.join(path, 'rpm')\n\n return map(append_rpm, libs)", "response": "Both arch and non - arch site - packages directories."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef is_python_binding_installed(self):\n is_installed = False\n is_install_error = False\n\n try:\n is_installed = self.is_python_binding_installed_on_pip()\n except InstallError:\n # Consider a case of pip is not installed in old Python (<= 2.6).\n is_install_error = True\n if not is_installed or is_install_error:\n for rpm_dir in self.python_lib_rpm_dirs:\n init_py = os.path.join(rpm_dir, '__init__.py')\n if os.path.isfile(init_py):\n is_installed = True\n break\n\n return is_installed", 
"response": "Check if the Python binding is already installed."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck if the Python binding has already been installed on pip.", "response": "def is_python_binding_installed_on_pip(self):\n \"\"\"Check if the Python binding has already been installed.\"\"\"\n pip_version = self._get_pip_version()\n Log.debug('Pip version: {0}'.format(pip_version))\n pip_major_version = int(pip_version.split('.')[0])\n\n installed = False\n # --format is from pip v9.0.0\n # https://pip.pypa.io/en/stable/news/\n if pip_major_version >= 9:\n json_obj = self._get_pip_list_json_obj()\n\n for package in json_obj:\n Log.debug('pip list: {0}'.format(package))\n if package['name'] in ('rpm-python', 'rpm'):\n installed = True\n Log.debug('Package installed: {0}, {1}'.format(\n package['name'], package['version']))\n break\n else:\n # Implementation for old pip versions.\n # It will be removed in the future.\n lines = self._get_pip_list_lines()\n for line in lines:\n if re.match('^rpm(-python)? 
', line):\n installed = True\n Log.debug('Package installed.')\n break\n\n return installed"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the version of the RPM.", "response": "def version(self):\n \"\"\"RPM version string.\"\"\"\n stdout = Cmd.sh_e_out('{0} --version'.format(self.rpm_path))\n rpm_version = stdout.split()[2]\n return rpm_version"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck if the RPM is the system RPM.", "response": "def is_system_rpm(self):\n \"\"\"Check if the RPM is the system RPM.\"\"\"\n sys_rpm_paths = [\n '/usr/bin/rpm',\n # On CentOS6, system RPM is installed in this directory.\n '/bin/rpm',\n ]\n matched = False\n for sys_rpm_path in sys_rpm_paths:\n if self.rpm_path.startswith(sys_rpm_path):\n matched = True\n break\n return matched"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchecks if the RPM package is installed.", "response": "def is_package_installed(self, package_name):\n \"\"\"Check if the RPM package is installed.\"\"\"\n if not package_name:\n raise ValueError('package_name required.')\n\n installed = True\n try:\n Cmd.sh_e('{0} --query {1} --quiet'.format(self.rpm_path,\n package_name))\n except InstallError:\n installed = False\n return installed"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck if the RPM packages are installed. 
Raise InstallError if any of the packages are not installed.", "response": "def verify_packages_installed(self, package_names):\n \"\"\"Check if the RPM packages are installed.\n\n Raise InstallError if any of the packages is not installed.\n \"\"\"\n if not package_names:\n raise ValueError('package_names required.')\n\n missing_packages = []\n for package_name in package_names:\n if not self.is_package_installed(package_name):\n missing_packages.append(package_name)\n\n if missing_packages:\n comma_packages = ', '.join(missing_packages)\n message = '''\nRequired RPM not installed: [{0}].\nInstall the RPM package.\n'''.format(comma_packages)\n raise InstallError(message)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef lib_dir(self):\n if not self._lib_dir:\n rpm_lib_dir = None\n cmd = '{0} -ql rpm-libs'.format(self.rpm_path)\n out = Cmd.sh_e_out(cmd)\n lines = out.split('\\n')\n for line in lines:\n if 'librpm.so' in line:\n rpm_lib_dir = os.path.dirname(line)\n break\n self._lib_dir = rpm_lib_dir\n return self._lib_dir", "response": "Return the path to the standard library directory used by RPM libs."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning if rpm is downloadable by the package command.", "response": "def is_downloadable(self):\n \"\"\"Return if rpm is downloadable by the package command.\n\n Check if dnf or yum plugin package exists.\n \"\"\"\n is_plugin_avaiable = False\n if self.is_dnf:\n is_plugin_avaiable = self.is_package_installed(\n 'dnf-plugins-core')\n else:\n \"\"\" yum environment.\n Make sure\n # yum -y --downloadonly --downloaddir=. 
install package_name\n is only available for root user.\n\n yumdownloader in yum-utils is available for normal user.\n https://access.redhat.com/solutions/10154\n \"\"\"\n is_plugin_avaiable = self.is_package_installed(\n 'yum-utils')\n return is_plugin_avaiable"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef lib_dir(self):\n if not self._lib_dir:\n lib_files = glob.glob(\"/usr/lib/*/librpm.so*\")\n if not lib_files:\n raise InstallError(\"Can not find lib directory.\")\n self._lib_dir = os.path.dirname(lib_files[0])\n return self._lib_dir", "response": "Return standard library directory path used by RPM libs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef sh_e(cls, cmd, **kwargs):\n Log.debug('CMD: {0}'.format(cmd))\n cmd_kwargs = {\n 'shell': True,\n }\n cmd_kwargs.update(kwargs)\n\n env = os.environ.copy()\n # Better to parse English output\n env['LC_ALL'] = 'en_US.utf-8'\n if 'env' in kwargs:\n env.update(kwargs['env'])\n cmd_kwargs['env'] = env\n # Capture stderr to show it on error message.\n cmd_kwargs['stderr'] = subprocess.PIPE\n\n proc = None\n try:\n proc = subprocess.Popen(cmd, **cmd_kwargs)\n stdout, stderr = proc.communicate()\n returncode = proc.returncode\n message_format = (\n 'CMD Return Code: [{0}], Stdout: [{1}], Stderr: [{2}]'\n )\n Log.debug(message_format.format(returncode, stdout, stderr))\n\n if stdout is not None:\n stdout = stdout.decode('utf-8')\n if stderr is not None:\n stderr = stderr.decode('utf-8')\n\n if returncode != 0:\n message = 'CMD: [{0}], Return Code: [{1}] at [{2}]'.format(\n cmd, returncode, os.getcwd())\n if stderr is not None:\n message += ' Stderr: [{0}]'.format(stderr)\n ie = CmdError(message)\n ie.stdout = stdout\n ie.stderr = stderr\n raise ie\n\n return (stdout, stderr)\n except Exception as e:\n try:\n proc.kill()\n except Exception:\n pass\n raise e", "response": "Runs the 
command and returns stdout and stderr."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrunning the command and returns the stdout.", "response": "def sh_e_out(cls, cmd, **kwargs):\n \"\"\"Run the command. and returns the stdout.\"\"\"\n cmd_kwargs = {\n 'stdout': subprocess.PIPE,\n }\n cmd_kwargs.update(kwargs)\n return cls.sh_e(cmd, **cmd_kwargs)[0]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nchange directory. It behaves like cd directory", "response": "def cd(cls, directory):\n \"\"\"Change directory. It behaves like \"cd directory\".\"\"\"\n Log.debug('CMD: cd {0}'.format(directory))\n os.chdir(directory)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchanging directory and back to previous directory.", "response": "def pushd(cls, new_dir):\n \"\"\"Change directory, and back to previous directory.\n\n It behaves like \"pushd directory; something; popd\".\n \"\"\"\n previous_dir = os.getcwd()\n try:\n new_ab_dir = None\n if os.path.isabs(new_dir):\n new_ab_dir = new_dir\n else:\n new_ab_dir = os.path.join(previous_dir, new_dir)\n # Use absolute path to show it on FileNotFoundError message.\n cls.cd(new_ab_dir)\n yield\n finally:\n cls.cd(previous_dir)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn an absolute path of the command. 
It behaves like which command.", "response": "def which(cls, cmd):\n \"\"\"Return an absolute path of the command.\n\n It behaves like \"which command\".\n \"\"\"\n abs_path_cmd = None\n if sys.version_info >= (3, 3):\n abs_path_cmd = shutil.which(cmd)\n else:\n abs_path_cmd = find_executable(cmd)\n return abs_path_cmd"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef curl_remote_name(cls, file_url):\n tar_gz_file_name = file_url.split('/')[-1]\n\n if sys.version_info >= (3, 2):\n from urllib.request import urlopen\n from urllib.error import HTTPError\n else:\n from urllib2 import urlopen\n from urllib2 import HTTPError\n\n response = None\n try:\n response = urlopen(file_url)\n except HTTPError as e:\n message = 'Download failed: URL: {0}, reason: {1}'.format(\n file_url, e)\n if 'HTTP Error 404' in str(e):\n raise RemoteFileNotFoundError(message)\n else:\n raise InstallError(message)\n\n tar_gz_file_obj = io.BytesIO(response.read())\n with open(tar_gz_file_name, 'wb') as f_out:\n f_out.write(tar_gz_file_obj.read())\n return tar_gz_file_name", "response": "Download file_url and save as a file name of the URL. It behaves like curl - O or -- remote - name."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nextracting tar. gz or tar. bz2 file.", "response": "def tar_extract(cls, tar_comp_file_path):\n \"\"\"Extract tar.gz or tar bz2 file.\n\n It behaves like\n - tar xzf tar_gz_file_path\n - tar xjf tar_bz2_file_path\n It raises tarfile.ReadError if the file is broken.\n \"\"\"\n try:\n with contextlib.closing(tarfile.open(tar_comp_file_path)) as tar:\n tar.extractall()\n except tarfile.ReadError as e:\n message_format = (\n 'Extract failed: '\n 'tar_comp_file_path: {0}, reason: {1}'\n )\n raise InstallError(message_format.format(tar_comp_file_path, e))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfinds matched files. 
It does not include symbolic file in the result.", "response": "def find(cls, searched_dir, pattern):\n \"\"\"Find matched files.\n\n It does not include symbolic file in the result.\n \"\"\"\n Log.debug('find {0} with pattern: {1}'.format(searched_dir, pattern))\n matched_files = []\n for root_dir, dir_names, file_names in os.walk(searched_dir,\n followlinks=False):\n for file_name in file_names:\n if fnmatch.fnmatch(file_name, pattern):\n file_path = os.path.join(root_dir, file_name)\n if not os.path.islink(file_path):\n matched_files.append(file_path)\n matched_files.sort()\n return matched_files"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef version_str2tuple(cls, version_str):\n if not isinstance(version_str, str):\n ValueError('version_str invalid instance.')\n version_info_list = re.findall(r'[0-9a-zA-Z]+', version_str)\n\n def convert_to_int(string):\n value = None\n if re.match(r'^\\d+$', string):\n value = int(string)\n else:\n value = string\n return value\n\n version_info_list = [convert_to_int(s) for s in version_info_list]\n\n return tuple(version_info_list)", "response": "Convert version string to tuple object. ex. 
4 14 0 rc1."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef which(file):\n for path in os.environ[\"PATH\"].split(os.pathsep):\n if os.path.exists(os.path.join(path, file)):\n return os.path.join(path, file)\n return None", "response": "Return the path to a file in the system."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef install_rpm_py():\n python_path = sys.executable\n cmd = '{0} install.py'.format(python_path)\n exit_status = os.system(cmd)\n if exit_status != 0:\n raise Exception('Command failed: {0}'.format(cmd))", "response": "Install RPM Python binding."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn bgzip version as string", "response": "def _get_bgzip_version(exe):\n \"\"\"return bgzip version as string\"\"\"\n p = subprocess.Popen([exe, \"-h\"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)\n output = p.communicate()\n version_line = output[0].splitlines()[1]\n version = re.match(r\"(?:Version:|bgzip \\(htslib\\))\\s+(\\d+\\.\\d+(\\.\\d+)?)\", version_line).group(1)\n return version"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfinding the bgzip executable and check if it meets version requirements", "response": "def _find_bgzip():\n \"\"\"return path to bgzip if found and meets version requirements, else exception\"\"\"\n missing_file_exception = OSError if six.PY2 else FileNotFoundError\n min_bgzip_version = \".\".join(map(str, min_bgzip_version_info))\n exe = os.environ.get(\"SEQREPO_BGZIP_PATH\", which(\"bgzip\") or \"/usr/bin/bgzip\")\n\n try:\n bgzip_version = _get_bgzip_version(exe)\n except AttributeError:\n raise RuntimeError(\"Didn't find version string in bgzip executable ({exe})\".format(exe=exe))\n except missing_file_exception:\n raise RuntimeError(\"{exe} doesn't exist; you need to install htslib (See 
https://github.com/biocommons/biocommons.seqrepo#requirements)\".format(exe=exe))\n except Exception:\n raise RuntimeError(\"Unknown error while executing {exe}\".format(exe=exe))\n bgzip_version_info = tuple(map(int, bgzip_version.split(\".\")))\n if bgzip_version_info < min_bgzip_version_info:\n raise RuntimeError(\"bgzip ({exe}) {ev} is too old; >= {rv} is required; please upgrade\".format(\n exe=exe, ev=bgzip_version, rv=min_bgzip_version))\n logger.info(\"Using bgzip {ev} ({exe})\".format(ev=bgzip_version, exe=exe))\n return exe"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef fetch_uri(self, uri, start=None, end=None):\n\n namespace, alias = uri_re.match(uri).groups()\n return self.fetch(alias=alias, namespace=namespace, start=start, end=end)", "response": "fetch sequence for URI"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef store(self, seq, nsaliases):\n if not self._writeable:\n raise RuntimeError(\"Cannot write -- opened read-only\")\n\n if self._upcase:\n seq = seq.upper()\n\n try:\n seqhash = bioutils.digests.seq_seqhash(seq)\n except Exception as e:\n import pprint\n _logger.critical(\"Exception raised for \" + pprint.pformat(nsaliases))\n raise\n seq_id = seqhash\n\n # add sequence if not present\n n_seqs_added = n_aliases_added = 0\n msg = \"sh{nsa_sep}{seq_id:.10s}... 
({l} residues; {na} aliases {aliases})\".format(\n seq_id=seq_id,\n l=len(seq),\n na=len(nsaliases),\n nsa_sep=nsa_sep,\n aliases=\", \".join(\"{nsa[namespace]}:{nsa[alias]}\".format(nsa=nsa) for nsa in nsaliases))\n if seq_id not in self.sequences:\n _logger.info(\"Storing \" + msg)\n if len(seq) > ct_n_residues: # pragma: no cover\n _logger.debug(\"Precommit for large sequence\")\n self.commit()\n self.sequences.store(seq_id, seq)\n n_seqs_added += 1\n self._pending_sequences += 1\n self._pending_sequences_len += len(seq)\n self._pending_aliases += self._update_digest_aliases(seq_id, seq)\n else:\n _logger.debug(\"Sequence exists: \" + msg)\n\n # add/update external aliases for new and existing sequences\n # updating is optimized to load only new tuples\n existing_aliases = self.aliases.fetch_aliases(seq_id)\n ea_tuples = [(r[\"seq_id\"], r[\"namespace\"], r[\"alias\"]) for r in existing_aliases]\n new_tuples = [(seq_id, r[\"namespace\"], r[\"alias\"]) for r in nsaliases]\n upd_tuples = set(new_tuples) - set(ea_tuples)\n if upd_tuples:\n _logger.info(\"{} new aliases for {}\".format(len(upd_tuples), msg))\n for _, namespace, alias in upd_tuples:\n self.aliases.store_alias(seq_id=seq_id, namespace=namespace, alias=alias)\n self._pending_aliases += len(upd_tuples)\n n_aliases_added += len(upd_tuples)\n if (self._pending_sequences > ct_n_seqs or self._pending_aliases > ct_n_aliases\n or self._pending_sequences_len > ct_n_residues): # pragma: no cover\n _logger.info(\"Hit commit thresholds ({self._pending_sequences} sequences, \"\n \"{self._pending_aliases} aliases, {self._pending_sequences_len} residues)\".format(self=self))\n self.commit()\n return n_seqs_added, n_aliases_added", "response": "Store a sequence into the internal cache."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngive an alias and optional namespace return a list of all other aliases for same sequence", "response": "def translate_alias(self, alias, namespace=None, 
target_namespaces=None, translate_ncbi_namespace=None):\n \"\"\"given an alias and optional namespace, return a list of all other\n aliases for same sequence\n\n \"\"\"\n\n if translate_ncbi_namespace is None:\n translate_ncbi_namespace = self.translate_ncbi_namespace\n seq_id = self._get_unique_seqid(alias=alias, namespace=namespace)\n aliases = self.aliases.fetch_aliases(seq_id=seq_id,\n translate_ncbi_namespace=translate_ncbi_namespace)\n if target_namespaces:\n aliases = [a for a in aliases if a[\"namespace\"] in target_namespaces]\n return aliases"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngive a string identifier return a list of aliases that refer to the same sequence.", "response": "def translate_identifier(self, identifier, target_namespaces=None, translate_ncbi_namespace=None):\n \"\"\"Given a string identifier, return a list of aliases (as\n identifiers) that refer to the same sequence.\n\n \"\"\"\n namespace, alias = identifier.split(nsa_sep) if nsa_sep in identifier else (None, identifier)\n aliases = self.translate_alias(alias=alias,\n namespace=namespace,\n target_namespaces=target_namespaces,\n translate_ncbi_namespace=translate_ncbi_namespace)\n return [nsa_sep.join((a[\"namespace\"], a[\"alias\"])) for a in aliases]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_unique_seqid(self, alias, namespace):\n\n recs = self.aliases.find_aliases(alias=alias, namespace=namespace)\n seq_ids = set(r[\"seq_id\"] for r in recs)\n if len(seq_ids) == 0:\n raise KeyError(\"Alias {} (namespace: {})\".format(alias, namespace))\n if len(seq_ids) > 1:\n # This should only happen when namespace is None\n raise KeyError(\"Alias {} (namespace: {}): not unique\".format(alias, namespace))\n return seq_ids.pop()", "response": "given alias and namespace return unique sequence id"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncomputing 
digest aliases for seq and update", "response": "def _update_digest_aliases(self, seq_id, seq):\n\n \"\"\"compute digest aliases for seq and update; returns number of digest\n aliases (some of which may have already existed)\n\n For the moment, sha512 is computed for seq_id separately from\n the sha512 here. We should fix that.\n\n \"\"\"\n\n ir = bioutils.digests.seq_vmc_identifier(seq)\n seq_aliases = [\n {\n \"namespace\": ir[\"namespace\"],\n \"alias\": ir[\"accession\"],\n },\n {\n \"namespace\": \"SHA1\",\n \"alias\": bioutils.digests.seq_sha1(seq)\n },\n {\n \"namespace\": \"MD5\",\n \"alias\": bioutils.digests.seq_md5(seq)\n },\n {\n \"namespace\": \"SEGUID\",\n \"alias\": bioutils.digests.seq_seguid(seq)\n },\n ]\n for sa in seq_aliases:\n self.aliases.store_alias(seq_id=seq_id, **sa)\n return len(seq_aliases)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef fetch_aliases(self, seq_id, current_only=True, translate_ncbi_namespace=None):\n return [dict(r) for r in self.find_aliases(seq_id=seq_id,\n current_only=current_only,\n translate_ncbi_namespace=translate_ncbi_namespace)]", "response": "return list of alias annotation records for a given sequence id"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn iterator over alias annotation records that match criteria", "response": "def find_aliases(self, seq_id=None, namespace=None, alias=None, current_only=True, translate_ncbi_namespace=None):\n \"\"\"returns iterator over alias annotation records that match criteria\n \n The arguments, all optional, restrict the records that are\n returned. Without arguments, all aliases are returned.\n\n If arguments contain %, the `like` comparison operator is\n used. 
Otherwise arguments must match exactly.\n\n \"\"\"\n clauses = []\n params = []\n\n def eq_or_like(s):\n return \"like\" if \"%\" in s else \"=\"\n\n if translate_ncbi_namespace is None:\n translate_ncbi_namespace = self.translate_ncbi_namespace\n if alias is not None:\n clauses += [\"alias {} ?\".format(eq_or_like(alias))]\n params += [alias]\n if namespace is not None:\n # Switch to using RefSeq for RefSeq accessions\n # issue #38: translate \"RefSeq\" to \"NCBI\" to enable RefSeq lookups\n # issue #31: later breaking change, translate database\n if namespace == \"RefSeq\":\n namespace = \"NCBI\"\n clauses += [\"namespace {} ?\".format(eq_or_like(namespace))]\n params += [namespace]\n if seq_id is not None:\n clauses += [\"seq_id {} ?\".format(eq_or_like(seq_id))]\n params += [seq_id]\n if current_only:\n clauses += [\"is_current = 1\"]\n\n cols = [\"seqalias_id\", \"seq_id\", \"alias\", \"added\", \"is_current\"]\n if translate_ncbi_namespace:\n cols += [\"case namespace when 'NCBI' then 'RefSeq' else namespace end as namespace\"]\n else:\n cols += [\"namespace\"]\n sql = \"select {cols} from seqalias\".format(cols=\", \".join(cols))\n if clauses:\n sql += \" where \" + \" and \".join(\"(\" + c + \")\" for c in clauses)\n sql += \" order by seq_id, namespace, alias\"\n\n _logger.debug(\"Executing: \" + sql)\n return self._db.execute(sql, params)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef store_alias(self, seq_id, namespace, alias):\n\n if not self._writeable:\n raise RuntimeError(\"Cannot write -- opened read-only\")\n\n log_pfx = \"store({q},{n},{a})\".format(n=namespace, a=alias, q=seq_id)\n try:\n c = self._db.execute(\"insert into seqalias (seq_id, namespace, alias) values (?, ?, ?)\", (seq_id, namespace,\n alias))\n # success => new record\n return c.lastrowid\n except sqlite3.IntegrityError:\n pass\n\n # IntegrityError fall-through\n\n # existing record is guaranteed to exist 
uniquely; fetchone() should always succeed\n current_rec = self.find_aliases(namespace=namespace, alias=alias).fetchone()\n\n # if seq_id matches current record, it's a duplicate (seq_id, namespace, alias) tuple\n # and we return current record\n if current_rec[\"seq_id\"] == seq_id:\n _logger.debug(log_pfx + \": duplicate record\")\n return current_rec[\"seqalias_id\"]\n\n # otherwise, we're reassigning; deprecate old record, then retry\n _logger.debug(log_pfx + \": collision; deprecating {s1}\".format(s1=current_rec[\"seq_id\"]))\n self._db.execute(\"update seqalias set is_current = 0 where seqalias_id = ?\", [current_rec[\"seqalias_id\"]])\n return self.store_alias(seq_id, namespace, alias)", "response": "associate a namespaced alias with a sequence"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupgrading db using scripts for specified schema version", "response": "def _upgrade_db(self):\n \"\"\"upgrade db using scripts for specified (current) schema version\"\"\"\n migration_path = \"_data/migrations\"\n sqlite3.connect(self._db_path).close() # ensure that it exists\n db_url = \"sqlite:///\" + self._db_path\n backend = yoyo.get_backend(db_url)\n migration_dir = pkg_resources.resource_filename(__package__, migration_path)\n migrations = yoyo.read_migrations(migration_dir)\n assert len(migrations) > 0, \"no migration scripts found -- wrong migration path for \" + __package__\n migrations_to_apply = backend.to_apply(migrations)\n backend.apply_migrations(migrations_to_apply)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd the names of the assembled sequences to the existing sequences", "response": "def add_assembly_names(opts):\n \"\"\"add assembly names as aliases to existing sequences\n\n Specifically, associate aliases like GRCh37.p9:1 with existing\n refseq accessions\n\n ```\n [{'aliases': ['chr19'],\n 'assembly_unit': 'Primary Assembly',\n 'genbank_ac': 'CM000681.2',\n 'length': 
58617616,\n 'name': '19',\n 'refseq_ac': 'NC_000019.10',\n 'relationship': '=',\n 'sequence_role': 'assembled-molecule'}]\n ```\n\n For the above sample record, this function adds the following aliases:\n * genbank:CM000681.2\n * GRCh38:19\n * GRCh38:chr19\n to the sequence referred to by refseq:NC_000019.10.\n\n \"\"\"\n seqrepo_dir = os.path.join(opts.root_directory, opts.instance_name)\n sr = SeqRepo(seqrepo_dir, writeable=True)\n\n assemblies = bioutils.assemblies.get_assemblies()\n if opts.reload_all:\n assemblies_to_load = sorted(assemblies)\n else:\n namespaces = [r[\"namespace\"] for r in sr.aliases._db.execute(\"select distinct namespace from seqalias\")]\n assemblies_to_load = sorted(k for k in assemblies if k not in namespaces)\n _logger.info(\"{} assemblies to load\".format(len(assemblies_to_load)))\n\n ncbi_alias_map = {r[\"alias\"]: r[\"seq_id\"] for r in sr.aliases.find_aliases(namespace=\"NCBI\", current_only=False)}\n\n for assy_name in tqdm.tqdm(assemblies_to_load, unit=\"assembly\"):\n _logger.debug(\"loading \" + assy_name)\n sequences = assemblies[assy_name][\"sequences\"]\n eq_sequences = [s for s in sequences if s[\"relationship\"] in (\"=\", \"<>\")]\n if not eq_sequences:\n _logger.info(\"No '=' sequences to load for {an}; skipping\".format(an=assy_name))\n continue\n\n # all assembled-molecules (1..22, X, Y, MT) have ncbi aliases in seqrepo\n not_in_seqrepo = [s[\"refseq_ac\"] for s in eq_sequences if s[\"refseq_ac\"] not in ncbi_alias_map]\n if not_in_seqrepo:\n _logger.warning(\"Assembly {an} references {n} accessions not in SeqRepo instance {opts.instance_name} (e.g., {acs})\".format(\n an=assy_name, n=len(not_in_seqrepo), opts=opts, acs=\", \".join(not_in_seqrepo[:5]+[\"...\"]), seqrepo_dir=seqrepo_dir))\n if not opts.partial_load:\n _logger.warning(\"Skipping {an} (-p to enable partial loading)\".format(an=assy_name))\n continue\n \n eq_sequences = [es for es in eq_sequences if es[\"refseq_ac\"] in ncbi_alias_map]\n 
_logger.info(\"Loading {n} new accessions for assembly {an}\".format(an=assy_name, n=len(eq_sequences)))\n\n for s in eq_sequences:\n seq_id = ncbi_alias_map[s[\"refseq_ac\"]]\n aliases = [{\"namespace\": assy_name, \"alias\": a} for a in [s[\"name\"]] + s[\"aliases\"]]\n if \"genbank_ac\" in s and s[\"genbank_ac\"]:\n aliases += [{\"namespace\": \"genbank\", \"alias\": s[\"genbank_ac\"]}]\n for alias in aliases:\n sr.aliases.store_alias(seq_id=seq_id, **alias)\n _logger.debug(\"Added assembly alias {a[namespace]}:{a[alias]} for {seq_id}\".format(a=alias, seq_id=seq_id))\n sr.commit()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nblocking until a job with the given job_id has updated its state.", "response": "def wait_for_job_update(self, job_id, timeout=None):\n \"\"\"\n Blocks until a job given by job_id has updated its state (canceled, completed, progress updated, etc.)\n if timeout is not None, then this function raises iceqube.exceptions.TimeoutError.\n\n :param job_id: the job's job_id to monitor for changes.\n :param timeout: if None, wait forever for a job update. 
If given, wait until timeout seconds, and then raise\n iceqube.exceptions.TimeoutError.\n :return: the Job object corresponding to job_id.\n \"\"\"\n # internally, we register an Event object for each entry in this function.\n # when self.notify_of_job_update() is called, we call Event.set() on all events\n # registered for that job, thereby releasing any threads waiting for that specific job.\n\n event = JOB_EVENT_MAPPING[job_id]\n event.clear()\n result = event.wait(timeout=timeout)\n job = self.get_job(job_id)\n\n if result:\n return job\n else:\n raise TimeoutError(\"Job {} has not received any updates.\".format(job_id))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nimports a string that represents a module and function and return the same function.", "response": "def import_stringified_func(funcstring):\n \"\"\"\n Import a string that represents a module and function, e.g. {module}.{funcname}.\n\n Given a function f, import_stringified_func(stringify_func(f)) will return the same function.\n :param funcstring: String to try to import\n :return: callable\n \"\"\"\n assert isinstance(funcstring, str)\n\n modulestring, funcname = funcstring.rsplit('.', 1)\n\n mod = importlib.import_module(modulestring)\n\n func = getattr(mod, funcname)\n return func"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef main_loop(self, timeout=None):\n if self.trigger_event.wait(timeout):\n try:\n self.func()\n except Exception as e:\n self.logger.warning(\"Got an exception running {func}: {e}\".format(func=self.func, e=str(e)))\n finally:\n self.trigger_event.clear()", "response": "Main loop for the main event loop."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_sqlite_pragmas(self):\n\n def _pragmas_on_connect(dbapi_con, con_record):\n dbapi_con.execute(\"PRAGMA journal_mode = WAL;\")\n\n 
event.listen(self.engine, \"connect\", _pragmas_on_connect)", "response": "Sets the connection PRAGMAs for the sqlalchemy engine stored in self. engine."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding the job given by j to the job queue.", "response": "def schedule_job(self, j):\n \"\"\"\n Add the job given by j to the job queue.\n\n Note: Does not actually run the job.\n \"\"\"\n job_id = uuid.uuid4().hex\n j.job_id = job_id\n\n session = self.sessionmaker()\n orm_job = ORMJob(\n id=job_id,\n state=j.state,\n app=self.app,\n namespace=self.namespace,\n obj=j)\n session.add(orm_job)\n try:\n session.commit()\n except Exception as e:\n logging.error(\n \"Got an error running session.commit(): {}\".format(e))\n\n return job_id"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmarking the job as being canceled.", "response": "def mark_job_as_canceling(self, job_id):\n \"\"\"\n Mark the job as requested for canceling. Does not actually try to cancel a running job.\n\n :param job_id: the job to be marked as canceling.\n :return: the job object\n \"\"\"\n\n job, _ = self._update_job_state(job_id, State.CANCELING)\n return job"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nclear the queue and the job data.", "response": "def clear(self, job_id=None, force=False):\n \"\"\"\n Clear the queue and the job data. If job_id is not given, clear out all\n jobs marked COMPLETED. If job_id is given, clear out the given job's\n data. This function won't do anything if the job's state is not COMPLETED or FAILED.\n :type job_id: NoneType or str\n :param job_id: the job_id to clear. 
If None, clear all jobs.\n :type force: bool\n :param force: If True, clear the job (or jobs), even if it hasn't completed or failed.\n \"\"\"\n s = self.sessionmaker()\n q = self._ns_query(s)\n if job_id:\n q = q.filter_by(id=job_id)\n\n # filter only by the finished jobs, if we are not specified to force\n if not force:\n q = q.filter(\n or_(ORMJob.state == State.COMPLETED, ORMJob.state == State.FAILED))\n\n q.delete(synchronize_session=False)\n s.commit()\n s.close()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupdating the job s progress info.", "response": "def update_job_progress(self, job_id, progress, total_progress):\n \"\"\"\n Update the job given by job_id's progress info.\n :type total_progress: int\n :type progress: int\n :type job_id: str\n :param job_id: The id of the job to update\n :param progress: The current progress achieved by the job\n :param total_progress: The total progress achievable by the job.\n :return: the job_id\n \"\"\"\n\n session = self.sessionmaker()\n job, orm_job = self._update_job_state(\n job_id, state=State.RUNNING, session=session)\n\n # Note (aron): looks like SQLAlchemy doesn't automatically\n # save any pickletype fields even if we re-set (orm_job.obj = job) that\n # field. My hunch is that it's tracking the id of the object,\n # and if that doesn't change, then SQLAlchemy doesn't repickle the object\n # and save to the DB.\n # Our hack here is to just copy the job object, and then set the specific\n # field we want to edit, in this case the job.state. 
That forces\n # SQLAlchemy to re-pickle the object, thus setting it to the correct state.\n\n job = copy(job)\n\n job.progress = progress\n job.total_progress = total_progress\n orm_job.obj = job\n\n session.add(orm_job)\n session.commit()\n session.close()\n return job_id"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef mark_job_as_failed(self, job_id, exception, traceback):\n session = self.sessionmaker()\n job, orm_job = self._update_job_state(\n job_id, State.FAILED, session=session)\n\n # Note (aron): looks like SQLAlchemy doesn't automatically\n # save any pickletype fields even if we re-set (orm_job.obj = job) that\n # field. My hunch is that it's tracking the id of the object,\n # and if that doesn't change, then SQLAlchemy doesn't repickle the object\n # and save to the DB.\n # Our hack here is to just copy the job object, and then set the specific\n # field we want to edit, in this case the job.state. That forces\n # SQLAlchemy to re-pickle the object, thus setting it to the correct state.\n job = copy(job)\n\n job.exception = exception\n job.traceback = traceback\n orm_job.obj = job\n\n session.add(orm_job)\n session.commit()\n session.close()", "response": "Mark the job as failed and record the traceback and exception."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _ns_query(self, session):\n return session.query(ORMJob).filter(ORMJob.app == self.app,\n ORMJob.namespace == self.namespace)", "response": "Returns a SQLAlchemy query that is already namespaced by the app and namespace given to this backend\n during initialization."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nstarting up the message processor thread that continuously reads the messages sent to self. 
incoming_message_mailbox and starts or cancels jobs based on the message received.", "response": "def start_message_processing(self):\n \"\"\"\n Starts up the message processor thread, that continuously reads\n messages sent to self.incoming_message_mailbox, and starts or cancels jobs based on the message received.\n Returns: the Thread object.\n\n \"\"\"\n t = InfiniteLoopThread(self.process_messages, thread_name=\"MESSAGEPROCESSOR\", wait_between_runs=0.5)\n t.start()\n return t"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads from the incoming_message_mailbox and process the messages.", "response": "def process_messages(self):\n \"\"\"\n Read from the incoming_message_mailbox and report to the storage backend\n based on the first message found there.\n Returns: None\n \"\"\"\n try:\n msg = self.msgbackend.pop(self.incoming_message_mailbox)\n self.handle_incoming_message(msg)\n except queue.Empty:\n logger.debug(\"Worker message queue currently empty.\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nhandling incoming messages from the queue.", "response": "def handle_incoming_message(self, msg):\n \"\"\"\n Start or cancel a job, based on the msg.\n\n If msg.type == MessageType.START_JOB, then start the job given by msg.job.\n\n If msg.type == MessageType.CANCEL_JOB, then try to cancel the job given by msg.job.job_id.\n\n Args:\n msg (barbequeue.messaging.classes.Message):\n\n Returns: None\n\n \"\"\"\n if msg.type == MessageType.START_JOB:\n job = msg.message['job']\n self.schedule_job(job)\n elif msg.type == MessageType.CANCEL_JOB:\n job_id = msg.message['job_id']\n self.cancel(job_id)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _reraise_with_traceback(f):\n\n def wrap(*args, **kwargs):\n try:\n return f(*args, **kwargs)\n except Exception as e:\n traceback_str = traceback.format_exc()\n e.traceback = 
traceback_str\n raise e\n\n return wrap", "response": "A decorator that wraps a function with a traceback attribute."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef schedule_job(self, job):\n l = _reraise_with_traceback(job.get_lambda_to_execute())\n\n future = self.workers.submit(l, update_progress_func=self.update_progress, cancel_job_func=self._check_for_cancel)\n\n # assign the futures to a dict, mapping them to a job\n self.job_future_mapping[future] = job\n self.future_job_mapping[job.job_id] = future\n\n # callback for when the future is now!\n future.add_done_callback(self.handle_finished_future)\n # add the job to our cancel notifications data structure, with False at first\n self.cancel_notifications[job.job_id] = False\n\n return future", "response": "schedule a job to the type of workers spawned by self. start_workers."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef cancel(self, job_id):\n future = self.future_job_mapping[job_id]\n is_future_cancelled = future.cancel()\n\n if is_future_cancelled: # success!\n return True\n else:\n if future.running():\n # Already running, but let's mark the future as cancelled\n # anyway, to make sure that calling future.result() will raise an error.\n # Our cancelling callback will then check this variable to see its state,\n # and exit if it's cancelled.\n from concurrent.futures._base import CANCELLED\n future._state = CANCELLED\n return False\n else: # probably finished already, too late to cancel!\n return False", "response": "Cancels a job from the executor pool."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking if a job has been requested to be cancelled.", "response": "def _check_for_cancel(self, job_id, current_stage=\"\"):\n \"\"\"\n Check if a job has been requested to be cancelled. 
When called, the calling function can\n optionally give the stage it is currently in, so the user has information on where the job\n was before it was cancelled.\n\n :param job_id: The job_id to check\n :param current_stage: Where the job currently is\n\n :return: raises a CancelledError if we find out that we were cancelled.\n \"\"\"\n\n future = self.future_job_mapping[job_id]\n is_cancelled = future._state in [CANCELLED, CANCELLED_AND_NOTIFIED]\n\n if is_cancelled:\n raise UserCancelledError(last_stage=current_stage)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nstart the scheduler thread.", "response": "def start_scheduler(self):\n \"\"\"\n Start the scheduler thread. This thread reads the queue of jobs to be\n scheduled and sends them to the workers.\n Returns: None\n\n \"\"\"\n t = InfiniteLoopThread(\n func=self.schedule_next_job,\n thread_name=\"SCHEDULER\",\n wait_between_runs=0.5)\n t.start()\n return t"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nstart the worker message handler thread.", "response": "def start_worker_message_handler(self):\n \"\"\"\n Start the worker message handler thread, that loops over messages from workers\n (job progress updates, failures and successes etc.) 
and then updates the job's status.\n Returns: None\n\n \"\"\"\n t = InfiniteLoopThread(\n func=lambda: self.handle_worker_messages(timeout=2),\n thread_name=\"WORKERMESSAGEHANDLER\",\n wait_between_runs=0.5)\n t.start()\n return t"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef shutdown(self, wait=True):\n self.scheduler_thread.stop()\n self.worker_message_handler_thread.stop()\n\n if wait:\n self.scheduler_thread.join()\n self.worker_message_handler_thread.join()", "response": "Shut down the worker message handler and scheduler threads."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsend a message to the workers to cancel the job with job_id.", "response": "def request_job_cancel(self, job_id):\n \"\"\"\n Send a message to the workers to cancel the job with job_id. We then mark the job in the storage\n as being canceled.\n\n :param job_id: the job to cancel\n :return: None\n \"\"\"\n msg = CancelMessage(job_id)\n self.messaging_backend.send(self.worker_mailbox, msg)\n self.storage_backend.mark_job_as_canceling(job_id)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef schedule_next_job(self):\n next_job = self.storage_backend.get_next_scheduled_job()\n # TODO: don't loop over if workers are already all running\n\n if not next_job:\n logging.debug(\"No job to schedule right now.\")\n return\n\n try:\n self.messaging_backend.send(self.worker_mailbox,\n Message(\n type=MessageType.START_JOB,\n message={'job': next_job}))\n self.storage_backend.mark_job_as_queued(next_job.job_id)\n except Full:\n logging.debug(\n \"Worker queue full; skipping scheduling of job {} for now.\".format(next_job.job_id)\n )\n return", "response": "Schedules the next job in the queue."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef handle_worker_messages(self, timeout):\n 
msgs = self.messaging_backend.popn(self.incoming_mailbox, n=20)\n\n for msg in msgs:\n self.handle_single_message(msg)", "response": "Read messages that are placed in self. incoming_mailbox and update the job states corresponding to each message."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef handle_single_message(self, msg):\n job_id = msg.message['job_id']\n actual_msg = msg.message\n if msg.type == MessageType.JOB_UPDATED:\n progress = actual_msg['progress']\n total_progress = actual_msg['total_progress']\n self.storage_backend.update_job_progress(job_id, progress,\n total_progress)\n elif msg.type == MessageType.JOB_COMPLETED:\n self.storage_backend.complete_job(job_id)\n elif msg.type == MessageType.JOB_FAILED:\n exc = actual_msg['exception']\n trace = actual_msg['traceback']\n self.storage_backend.mark_job_as_failed(job_id, exc, trace)\n elif msg.type == MessageType.JOB_CANCELED:\n self.storage_backend.mark_job_as_canceled(job_id)\n else:\n self.logger.error(\"Unknown message type: {}\".format(msg.type))", "response": "Handle a single message and modify the job storage appropriately."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a function that executes the function assigned to this job.", "response": "def get_lambda_to_execute(self):\n \"\"\"\n return a function that executes the function assigned to this job.\n\n If job.track_progress is None (the default), the returned function accepts no argument\n and simply needs to be called. 
If job.track_progress is True, an update_progress function\n is passed in that can be used by the function to provide feedback progress back to the\n job scheduling system.\n\n :return: a function that executes the original function assigned to this job.\n \"\"\"\n def y(update_progress_func, cancel_job_func):\n \"\"\"\n Call the function stored in self.func, and passing in update_progress_func\n or cancel_job_func depending if self.track_progress or self.cancellable is defined,\n respectively.\n :param update_progress_func: The callback for when the job updates its progress.\n :param cancel_job_func: The function that the function has to call occasionally to see\n if the user wants to cancel the currently running job.\n :return: Any\n \"\"\"\n\n func = import_stringified_func(self.func)\n extrafunckwargs = {}\n\n args, kwargs = copy.copy(self.args), copy.copy(self.kwargs)\n\n if self.track_progress:\n extrafunckwargs[\"update_progress\"] = partial(update_progress_func, self.job_id)\n\n if self.cancellable:\n extrafunckwargs[\"check_for_cancel\"] = partial(cancel_job_func, self.job_id)\n\n kwargs.update(extrafunckwargs)\n\n return func(*args, **kwargs)\n\n return y"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a float between 0 and 1 representing the current job s progress in its task.", "response": "def percentage_progress(self):\n \"\"\"\n Returns a float between 0 and 1, representing the current job's progress in its task.\n If total_progress is not given or 0, just return self.progress.\n\n :return: float corresponding to the total percentage progress of the job.\n \"\"\"\n\n if self.total_progress != 0:\n return float(self.progress) / self.total_progress\n else:\n return self.progress"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nschedules a function func for execution.", "response": "def schedule(self, func, *args, **kwargs):\n \"\"\"\n Schedules a 
function func for execution.\n\n One special parameter is track_progress. If passed in and not None, the func will be passed in a\n keyword parameter called update_progress:\n\n def update_progress(progress, total_progress, stage=\"\"):\n\n The running function can call the update_progress function to notify interested parties of the function's\n current progress.\n\n Another special parameter is the \"cancellable\" keyword parameter. When passed in and not None, a special\n \"check_for_cancel\" parameter is passed in. When called, it raises an error when the user has requested a job\n to be cancelled.\n\n The caller can also pass in any pickleable object into the \"extra_metadata\" parameter. This data is stored\n within the job and can be retrieved when the job status is queried.\n\n All other parameters are directly passed to the function when it starts running.\n\n :type func: callable or str\n :param func: A callable object that will be scheduled for running.\n :return: a string representing the job_id.\n \"\"\"\n\n # if the func is already a job object, just schedule that directly.\n if isinstance(func, Job):\n job = func\n # else, turn it into a job first.\n else:\n job = Job(func, *args, **kwargs)\n\n job.track_progress = kwargs.pop('track_progress', False)\n job.cancellable = kwargs.pop('cancellable', False)\n job.extra_metadata = kwargs.pop('extra_metadata', {})\n job_id = self.storage.schedule_job(job)\n return job_id"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nwaiting until the job with the given id has a new update.", "response": "def wait(self, job_id, timeout=None):\n \"\"\"\n Wait until the job given by job_id has a new update.\n\n :param job_id: the id of the job to wait for.\n :param timeout: how long to wait for a job state change before timing out.\n :return: Job object corresponding to job_id\n \"\"\"\n return self.storage.wait_for_job_update(job_id, timeout=timeout)"} {"SOURCE": "codesearchnet", "instruction": 
"Explain what the following Python 3 code does\ndef wait_for_completion(self, job_id, timeout=None):\n while 1:\n job = self.wait(job_id, timeout=timeout)\n if job.state in [State.COMPLETED, State.FAILED, State.CANCELED]:\n return job\n else:\n continue", "response": "Wait for the job given by job_id to change to COMPLETED or CANCELED."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef shutdown(self):\n self._storage.clear()\n self._scheduler.shutdown(wait=False)\n self._workers.shutdown(wait=False)", "response": "Shutdown the client and all of its managed resources"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning dict with info about nodes in cluster and current version { 'nodes': [ 'IP:port', 'IP:port', ], 'version': '1.4.4' }", "response": "def get_cluster_info(host, port, ignore_cluster_errors=False):\n \"\"\"\n return dict with info about nodes in cluster and current version\n {\n 'nodes': [\n 'IP:port',\n 'IP:port',\n ],\n 'version': '1.4.4'\n }\n \"\"\"\n client = Telnet(host, int(port))\n client.write(b'version\\n')\n res = client.read_until(b'\\r\\n').strip()\n version_list = res.split(b' ')\n if len(version_list) not in [2, 3] or version_list[0] != b'VERSION':\n raise WrongProtocolData('version', res)\n version = version_list[1]\n if StrictVersion(smart_text(version)) >= StrictVersion('1.4.14'):\n cmd = b'config get cluster\\n'\n else:\n cmd = b'get AmazonElastiCache:cluster\\n'\n client.write(cmd)\n regex_index, match_object, res = client.expect([\n re.compile(b'\\n\\r\\nEND\\r\\n'),\n re.compile(b'ERROR\\r\\n')\n ])\n client.close()\n\n if res == b'ERROR\\r\\n' and ignore_cluster_errors:\n return {\n 'version': version,\n 'nodes': [\n '{0}:{1}'.format(smart_text(host),\n smart_text(port))\n ]\n }\n\n ls = list(filter(None, re.compile(br'\\r?\\n').split(res)))\n if len(ls) != 4:\n raise WrongProtocolData(cmd, res)\n\n try:\n version = int(ls[1])\n except 
ValueError:\n raise WrongProtocolData(cmd, res)\n nodes = []\n try:\n for node in ls[2].split(b' '):\n host, ip, port = node.split(b'|')\n nodes.append('{0}:{1}'.format(smart_text(ip or host),\n smart_text(port)))\n except ValueError:\n raise WrongProtocolData(cmd, res)\n return {\n 'version': version,\n 'nodes': nodes\n }"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef invalidate_cache_after_error(f):\n @wraps(f)\n def wrapper(self, *args, **kwds):\n try:\n return f(self, *args, **kwds)\n except Exception:\n self.clear_cluster_nodes_cache()\n raise\n return wrapper", "response": "Decorator to invalidate internal cache after an error."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate connection params to maximize performance", "response": "def update_params(self, params):\n \"\"\"\n update connection params to maximize performance\n \"\"\"\n if not params.get('BINARY', True):\n raise Warning('To increase performance please use ElastiCache'\n ' in binary mode')\n else:\n params['BINARY'] = True # patch params, set binary mode\n if 'OPTIONS' not in params:\n # set special 'behaviors' pylibmc attributes\n params['OPTIONS'] = {\n 'tcp_nodelay': True,\n 'ketama': True\n }"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting all nodes in cluster", "response": "def get_cluster_nodes(self):\n \"\"\"\n return list with all nodes in cluster\n \"\"\"\n if not hasattr(self, '_cluster_nodes_cache'):\n server, port = self._servers[0].split(':')\n try:\n self._cluster_nodes_cache = (\n get_cluster_info(server, port,\n self._ignore_cluster_errors)['nodes'])\n except (socket.gaierror, socket.timeout) as err:\n raise Exception('Cannot connect to cluster {0} ({1})'.format(\n self._servers[0], err\n ))\n return self._cluster_nodes_cache"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function 
doing\ndef humanize_placeholders(msgid):\n return re.sub(\n r'%(?:\\((\\w+)\\))?([sd])',\n lambda match: r'__{0}__'.format(\n match.group(1).lower() if match.group(1) else 'number' if match.group(2) == 'd' else 'item'),\n msgid)", "response": "Convert the placeholders to the google translate service friendly form."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef restore_placeholders(msgid, translation):\n placehoders = re.findall(r'(\\s*)(%(?:\\(\\w+\\))?[sd])(\\s*)', msgid)\n return re.sub(\n r'(\\s*)(__[\\w]+?__)(\\s*)',\n lambda matches: '{0}{1}{2}'.format(placehoders[0][0], placehoders[0][1], placehoders.pop(0)[2]),\n translation)", "response": "Restore placeholders in the translated message."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_strings_to_translate(self, po):\n strings = []\n for index, entry in enumerate(po):\n if not self.need_translate(entry):\n continue\n strings.append(humanize_placeholders(entry.msgid))\n if entry.msgid_plural:\n strings.append(humanize_placeholders(entry.msgid_plural))\n return strings", "response": "Return list of strings to translate from po file."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nupdates the translations in the entries.", "response": "def update_translations(self, entries, translated_strings):\n \"\"\"Update translations in entries.\n\n The order and number of translations should match to get_strings_to_translate() result.\n\n :param entries: list of entries to translate\n :type entries: collections.Iterable[polib.POEntry] | polib.POFile\n :param translated_strings: list of translations\n :type translated_strings: collections.Iterable[six.text_type]\n \"\"\"\n translations = iter(translated_strings)\n for entry in entries:\n if not self.need_translate(entry):\n continue\n\n if entry.msgid_plural:\n # fill the first plural 
form with the entry.msgid translation\n translation = next(translations)\n translation = fix_translation(entry.msgid, translation)\n entry.msgstr_plural[0] = translation\n\n # fill the rest of plural forms with the entry.msgid_plural translation\n translation = next(translations)\n translation = fix_translation(entry.msgid_plural, translation)\n for k, v in entry.msgstr_plural.items():\n if k != 0:\n entry.msgstr_plural[k] = translation\n else:\n translation = next(translations)\n translation = fix_translation(entry.msgid, translation)\n entry.msgstr = translation\n\n # Set the 'fuzzy' flag on translation\n if self.set_fuzzy and 'fuzzy' not in entry.flags:\n entry.flags.append('fuzzy')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef load_ply(fileobj):\n def nextline():\n \"\"\"Read next line, skip comments\"\"\"\n while True:\n line = fileobj.readline()\n assert line != '' # eof\n if not line.startswith('comment'):\n return line.strip()\n\n assert nextline() == 'ply'\n assert nextline() == 'format ascii 1.0'\n line = nextline()\n assert line.startswith('element vertex')\n nverts = int(line.split()[2])\n # print 'nverts : ', nverts\n assert nextline() == 'property float x'\n assert nextline() == 'property float y'\n assert nextline() == 'property float z'\n line = nextline()\n\n assert line.startswith('element face')\n nfaces = int(line.split()[2])\n # print 'nfaces : ', nfaces\n assert nextline() == 'property list uchar int vertex_indices'\n line = nextline()\n has_texcoords = line == 'property list uchar float texcoord'\n if has_texcoords:\n assert nextline() == 'end_header'\n else:\n assert line == 'end_header'\n\n # Verts\n verts = np.zeros((nverts, 3))\n for i in range(nverts):\n vals = nextline().split()\n verts[i, :] = [float(v) for v in vals[:3]]\n # Faces\n faces = []\n faces_uv = []\n for i in range(nfaces):\n vals = nextline().split()\n assert int(vals[0]) == 3\n faces.append([int(v) for v in 
vals[1:4]])\n if has_texcoords:\n assert len(vals) == 11\n assert int(vals[4]) == 6\n faces_uv.append([(float(vals[5]), float(vals[6])),\n (float(vals[7]), float(vals[8])),\n (float(vals[9]), float(vals[10]))])\n # faces_uv.append([float(v) for v in vals[5:]])\n else:\n assert len(vals) == 4\n return verts, faces, faces_uv", "response": "This function loads a PLY file into a new tree structure."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef read_ssh_config(path):\n with open(path, \"r\") as fh_:\n lines = fh_.read().splitlines()\n return SshConfig(lines)", "response": "Read SSH config file and return parsed SshConfig"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchanging key into correct casing if we know the parameter", "response": "def _remap_key(key):\n \"\"\" Change key into correct casing if we know the parameter \"\"\"\n if key in KNOWN_PARAMS:\n return key\n if key.lower() in known_params:\n return KNOWN_PARAMS[known_params.index(key.lower())]\n return key"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nparse lines from ssh config file", "response": "def parse(self, lines):\n \"\"\"Parse lines from ssh config file\"\"\"\n cur_entry = None\n for line in lines:\n kv_ = _key_value(line)\n if len(kv_) > 1:\n key, value = kv_\n if key.lower() == \"host\":\n cur_entry = value\n self.hosts_.add(value)\n self.lines_.append(ConfigLine(line=line, host=cur_entry, key=key, value=value))\n else:\n self.lines_.append(ConfigLine(line=line))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef host(self, host):\n if host in self.hosts_:\n vals = defaultdict(list)\n for k, value in [(x.key.lower(), x.value) for x in self.lines_\n if x.host == host and x.key.lower() != \"host\"]:\n vals[k].append(value)\n flatten = lambda x: x[0] if len(x) == 1 else x\n return {k: flatten(v) for k, v in 
vals.items()}\n return {}", "response": "Return the configuration of a specific host as a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set(self, host, **kwargs):\n self.__check_host_args(host, kwargs)\n\n def update_line(key, value):\n \"\"\"Produce new config line\"\"\"\n return \" %s %s\" % (key, value)\n\n for key, values in kwargs.items():\n if type(values) not in [list, tuple]: # pylint: disable=unidiomatic-typecheck\n values = [values]\n\n lower_key = key.lower()\n update_idx = [idx for idx, x in enumerate(self.lines_)\n if x.host == host and x.key.lower() == lower_key]\n extra_remove = []\n for idx in update_idx:\n if values: # values available, update the line\n value = values.pop()\n self.lines_[idx].line = update_line(self.lines_[idx].key, value)\n self.lines_[idx].value = value\n else: # no more values available, remove the line\n extra_remove.append(idx)\n\n for idx in reversed(sorted(extra_remove)):\n del self.lines_[idx]\n\n if values:\n mapped_key = _remap_key(key)\n max_idx = max([idx for idx, line in enumerate(self.lines_) if line.host == host])\n for value in values:\n self.lines_.insert(max_idx + 1, ConfigLine(line=update_line(mapped_key, value),\n host=host, key=mapped_key,\n value=value))", "response": "Set the configuration values for an existing host."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef unset(self, host, *args):\n self.__check_host_args(host, args)\n remove_idx = [idx for idx, x in enumerate(self.lines_)\n if x.host == host and x.key.lower() in args]\n for idx in reversed(sorted(remove_idx)):\n del self.lines_[idx]", "response": "Removes settings for a host."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef __check_host_args(self, host, keys):\n if host not in self.hosts_:\n raise ValueError(\"Host %s: not found\" % host)\n\n if \"host\" in [x.lower() for x in keys]:\n 
raise ValueError(\"Cannot modify Host value\")", "response": "Checks the parameters of the host attribute"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef rename(self, old_host, new_host):\n if new_host in self.hosts_:\n raise ValueError(\"Host %s: already exists.\" % new_host)\n for line in self.lines_: # update lines\n if line.host == old_host:\n line.host = new_host\n if line.key.lower() == \"host\":\n line.value = new_host\n line.line = \"Host %s\" % new_host\n self.hosts_.remove(old_host) # update host cache\n self.hosts_.add(new_host)", "response": "Renames a host configuration."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add(self, host, **kwargs):\n if host in self.hosts_:\n raise ValueError(\"Host %s: exists (use update).\" % host)\n self.hosts_.add(host)\n self.lines_.append(ConfigLine(line=\"\", host=None))\n self.lines_.append(ConfigLine(line=\"Host %s\" % host, host=host, key=\"Host\", value=host))\n for k, v in kwargs.items():\n if type(v) not in [list, tuple]:\n v = [v]\n mapped_k = _remap_key(k)\n for value in v:\n self.lines_.append(ConfigLine(line=\" %s %s\" % (mapped_k, str(value)), host=host, key=mapped_k, value=value))\n self.lines_.append(ConfigLine(line=\"\", host=None))", "response": "Add another host to the SSH configuration."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef remove(self, host):\n if host not in self.hosts_:\n raise ValueError(\"Host %s: not found.\" % host)\n self.hosts_.remove(host)\n # remove lines, including comments inside the host lines\n host_lines = [ idx for idx, x in enumerate(self.lines_) if x.host == host ]\n remove_range = reversed(range(min(host_lines), max(host_lines) + 1))\n for idx in remove_range:\n del self.lines_[idx]", "response": "Removes a host from the SSH configuration."} {"SOURCE": "codesearchnet", 
"instruction": "Create a Python 3 function to\nwrite the config file to the specified path.", "response": "def write(self, path):\n \"\"\"\n Writes ssh config file\n\n Parameters\n ----------\n path : The file to write to\n \"\"\"\n with open(path, \"w\") as fh_:\n fh_.write(self.config())"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef orthogonal_vector(v):\n if v[1] != 0 or v[2] != 0:\n c = (1, 0, 0)\n else:\n c = (0, 1, 0)\n return np.cross(v, c)", "response": "Return an arbitrary vector that is orthogonal to v"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef show_plane(orig, n, scale=1.0, **kwargs):\n b1 = orthogonal_vector(n)\n b1 /= la.norm(b1)\n b2 = np.cross(b1, n)\n b2 /= la.norm(b2)\n verts = [orig + scale*(-b1 - b2),\n orig + scale*(b1 - b2),\n orig + scale*(b1 + b2),\n orig + scale*(-b1 + b2)]\n faces = [(0, 1, 2), (0, 2, 3)]\n trimesh3d(np.array(verts), faces, **kwargs)", "response": "Show a plane with the given origin and normal. 
scale gives its size"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef slice_triangle_plane(verts, tri, plane_orig, plane_norm):\n dists = [point_to_plane_dist(p, plane_orig, plane_norm)\n for p in verts[tri]]\n\n if np.sign(dists[0]) == np.sign(dists[1]) \\\n and np.sign(dists[1]) == np.sign(dists[2]):\n # Triangle is on one side of the plane\n return []\n\n # Iterate through the edges, cutting the ones that intersect\n intersect_points = []\n for fi in range(3):\n v1 = verts[tri[fi]]\n d1 = dists[fi]\n v2 = verts[tri[(fi + 1) % 3]]\n d2 = dists[(fi + 1) % 3]\n\n if d1 * d2 < 0:\n # intersection factor (between 0 and 1)\n # here is a nice drawing :\n # https://ravehgonen.files.wordpress.com/2013/02/slide8.png\n s = d1 / (d1 - d2)\n vdir = v2 - v1\n intersect_points.append(v1 + vdir * s)\n elif np.fabs(d1) < 1e-5:\n # point on plane\n intersect_points.append(v1)\n\n return intersect_points", "response": "Slice a triangle with a plane, returning the list of intersection points."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn true if the given triangle is cut by the plane. This will return false if a single vertex of the triangle lies on the plane.", "response": "def triangle_intersects_plane(mesh, tid, plane):\n \"\"\"\n Returns true if the given triangle is cut by the plane. 
This will return\n false if a single vertex of the triangle lies on the plane\n \"\"\"\n dists = [point_to_plane_dist(mesh.verts[vid], plane)\n for vid in mesh.tris[tid]]\n side = np.sign(dists)\n return not (side[0] == side[1] == side[2])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef compute_triangle_plane_intersections(mesh, tid, plane, dist_tol=1e-8):\n # TODO: Use a distance cache\n dists = {vid: point_to_plane_dist(mesh.verts[vid], plane)\n for vid in mesh.tris[tid]}\n # TODO: Use an edge intersection cache (we currently compute each edge\n # intersection twice : once for each tri)\n\n # This is to avoid registering the same vertex intersection twice\n # from two different edges\n vert_intersect = {vid: False for vid in dists.keys()}\n\n # Iterate through the edges, cutting the ones that intersect\n intersections = []\n for e in mesh.edges_for_triangle(tid):\n v1 = mesh.verts[e[0]]\n d1 = dists[e[0]]\n v2 = mesh.verts[e[1]]\n d2 = dists[e[1]]\n\n if np.fabs(d1) < dist_tol:\n # Avoid creating the vertex intersection twice\n if not vert_intersect[e[0]]:\n # point on plane\n intersections.append((INTERSECT_VERTEX, v1, e[0]))\n vert_intersect[e[0]] = True\n if np.fabs(d2) < dist_tol:\n if not vert_intersect[e[1]]:\n # point on plane\n intersections.append((INTERSECT_VERTEX, v2, e[1]))\n vert_intersect[e[1]] = True\n\n # If vertices are on opposite sides of the plane, we have an edge\n # intersection\n if d1 * d2 < 0:\n # Due to numerical accuracy, we could have both a vertex intersect\n # and an edge intersect on the same vertex, which is impossible\n if not vert_intersect[e[0]] and not vert_intersect[e[1]]:\n # intersection factor (between 0 and 1)\n # here is a nice drawing :\n # https://ravehgonen.files.wordpress.com/2013/02/slide8.png\n # keep in mind d1, d2 are *signed* distances (=> d1 - d2)\n s = d1 / (d1 - d2)\n vdir = v2 - v1\n ipos = v1 + vdir * s\n 
intersections.append((INTERSECT_EDGE, ipos, e))\n\n return intersections", "response": "Compute the intersection between a triangle and a plane"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the next triangle to visit given the intersection and T", "response": "def get_next_triangle(mesh, T, plane, intersection, dist_tol):\n \"\"\"\n Returns the next triangle to visit given the intersection and\n the list of unvisited triangles (T)\n\n We look for a triangle that is cut by the plane (2 intersections) as\n opposed to one that only touch the plane (1 vertex intersection)\n \"\"\"\n if intersection[0] == INTERSECT_EDGE:\n tris = mesh.triangles_for_edge(intersection[2])\n elif intersection[0] == INTERSECT_VERTEX:\n tris = mesh.triangles_for_vert(intersection[2])\n else:\n assert False, 'Invalid intersection[0] value : %d' % intersection[0]\n\n # Knowing where we come from is not enough. If an edge of the triangle\n # lies exactly on the plane, i.e. 
:\n #\n # /t1\\\n # -v1---v2-\n # \\t2/\n #\n # With v1, v2 being the vertices and t1, t2 being the triangles, then\n # if you just try to go to the next connected triangle that intersect,\n # you can visit v1 -> t1 -> v2 -> t2 -> v1 .\n # Therefore, we need to limit the new candidates to the set of unvisited\n # triangles and once we've visited a triangle and decided on a next one,\n # remove all the neighbors of the visited triangle so we don't come\n # back to it\n\n T = set(T)\n for tid in tris:\n if tid in T:\n intersections = compute_triangle_plane_intersections(\n mesh, tid, plane, dist_tol)\n if len(intersections) == 2:\n T = T.difference(tris)\n return tid, intersections, T\n return None, [], T"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _walk_polyline(tid, intersect, T, mesh, plane, dist_tol):\n T = set(T)\n p = []\n # Loop until we have explored all the triangles for the current\n # polyline\n while True:\n p.append(intersect[1])\n\n tid, intersections, T = get_next_triangle(mesh, T, plane,\n intersect, dist_tol)\n if tid is None:\n break\n\n # get_next_triangle returns triangles that our plane actually\n # intersects (as opposed to touching only a single vertex),\n # hence the assert\n assert len(intersections) == 2\n\n # Of the two returned intersections, one should have the\n # intersection point equal to p[-1]\n if la.norm(intersections[0][1] - p[-1]) < dist_tol:\n intersect = intersections[1]\n else:\n assert la.norm(intersections[1][1] - p[-1]) < dist_tol, \\\n '%s not close to %s' % (str(p[-1]), str(intersections))\n intersect = intersections[0]\n\n return p, T", "response": "Starting from a given intersection, walk through the mesh, computing each triangle's intersections with the cut plane and appending them to the polyline."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncomputes the cross-section polylines of a mesh cut by a given plane.", 
"response": "def cross_section_mesh(mesh, plane, dist_tol=1e-8):\n \"\"\"\n Args:\n mesh: A geom.TriangleMesh instance\n plane: The cut plane : geom.Plane instance\n dist_tol: If two points are closer than dist_tol, they are considered\n the same\n \"\"\"\n # Set of all triangles\n T = set(range(len(mesh.tris)))\n # List of all cross-section polylines\n P = []\n\n while len(T) > 0:\n tid = T.pop()\n\n intersections = compute_triangle_plane_intersections(\n mesh, tid, plane, dist_tol)\n\n if len(intersections) == 2:\n for intersection in intersections:\n p, T = _walk_polyline(tid, intersection, T, mesh, plane,\n dist_tol)\n if len(p) > 1:\n P.append(np.array(p))\n return P"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef cross_section(verts, tris, plane_orig, plane_normal, **kwargs):\n mesh = TriangleMesh(verts, tris)\n plane = Plane(plane_orig, plane_normal)\n return cross_section_mesh(mesh, plane, **kwargs)", "response": "Compute the planar cross section of a mesh. 
This is a wrapper around the cross_section_mesh function that creates a polyline that is connected to the mesh."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncomputing spatial distance using pure numpy", "response": "def pdist_squareformed_numpy(a):\n \"\"\"\n Compute spatial distance using pure numpy\n (similar to scipy.spatial.distance.cdist())\n\n Thanks to Divakar Roy (@droyed) at stackoverflow.com\n\n Note this needs at least np.float64 precision!\n\n Returns: dist\n \"\"\"\n a = np.array(a, dtype=np.float64)\n a_sumrows = np.einsum('ij,ij->i', a, a)\n dist = a_sumrows[:, None] + a_sumrows - 2 * np.dot(a, a.T)\n np.fill_diagonal(dist, 0)\n return dist"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting signed hexadecimal to floating value.", "response": "def signed_to_float(hex: str) -> float:\n \"\"\"Convert signed hexadecimal to floating value.\"\"\"\n if int(hex, 16) & 0x8000:\n return -(int(hex, 16) & 0x7FFF) / 10\n else:\n return int(hex, 16) / 10"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef decode_packet(packet: str) -> dict:\n node_id, _, protocol, attrs = packet.split(DELIM, 3)\n\n data = cast(Dict[str, Any], {\n 'node': PacketHeader(node_id).name,\n })\n\n # make exception for version response\n data['protocol'] = UNKNOWN\n if '=' in protocol:\n attrs = protocol + DELIM + attrs\n\n # no attributes but instead the welcome banner\n elif 'RFLink Gateway' in protocol:\n data.update(parse_banner(protocol))\n\n elif protocol == 'PONG':\n data['ping'] = protocol.lower()\n\n # debug response\n elif protocol == 'DEBUG':\n data['protocol'] = protocol.lower()\n data['tm'] = packet[3:5]\n\n # failure response\n elif protocol == 'CMD UNKNOWN':\n data['response'] = 'command_unknown'\n data['ok'] = False\n\n # ok response\n elif protocol == 'OK':\n data['ok'] = True\n\n # its a regular packet\n else:\n data['protocol'] = protocol.lower()\n\n # 
convert key=value pairs where needed\n for attr in filter(None, attrs.strip(DELIM).split(DELIM)):\n key, value = attr.lower().split('=')\n if key in VALUE_TRANSLATION:\n value = VALUE_TRANSLATION.get(key)(value)\n name = PACKET_FIELDS.get(key, key)\n data[name] = value\n unit = UNITS.get(key, None)\n\n if unit:\n data[name + '_unit'] = unit\n\n # correct KaKu device address\n if data.get('protocol', '') == 'kaku' and len(data['id']) != 6:\n data['id'] = '0000' + data['id']\n\n return data", "response": "Decode a packet into a dict of data structures."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef encode_packet(packet: dict) -> str:\n if packet['protocol'] == 'rfdebug':\n return '10;RFDEBUG=' + packet['command'] + ';'\n elif packet['protocol'] == 'rfudebug':\n return '10;RFDEBUG=' + packet['command'] + ';'\n else:\n return SWITCH_COMMAND_TEMPLATE.format(\n node=PacketHeader.master.value,\n **packet\n )", "response": "Construct packet string from packet dictionary."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef serialize_packet_id(packet: dict) -> str:\n # translate protocol in something reversable\n protocol = protocol_translations[packet['protocol']]\n\n if protocol == UNKNOWN:\n protocol = 'rflink'\n\n return '_'.join(filter(None, [\n protocol,\n packet.get('id', None),\n packet.get('switch', None),\n ]))", "response": "Serialize packet identifiers into one reversable string."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef deserialize_packet_id(packet_id: str) -> dict:\n if packet_id == 'rflink':\n return {'protocol': UNKNOWN}\n\n protocol, *id_switch = packet_id.split(PACKET_ID_SEP)\n assert len(id_switch) < 3\n\n packet_identifiers = {\n # lookup the reverse translation of the protocol in the translation\n # table, fallback to protocol. 
If this is an unserializable protocol\n # name, it has not been serialized before and is not in the\n # translate_protocols table, this will result in an invalid command.\n 'protocol': protocol_translations.get(protocol, protocol),\n }\n if id_switch:\n packet_identifiers['id'] = id_switch[0]\n if len(id_switch) > 1:\n packet_identifiers['switch'] = id_switch[1]\n\n return packet_identifiers", "response": "Deserialize a packet id into a dictionary of packet identifiers."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning list of all events in the packet.", "response": "def packet_events(packet: dict) -> Generator:\n \"\"\"Return list of all events in the packet.\n\n >>> x = list(packet_events({\n ... 'protocol': 'alecto v1',\n ... 'id': 'ec02',\n ... 'temperature': 1.0,\n ... 'temperature_unit': '\u00b0C',\n ... 'humidity': 10,\n ... 'humidity_unit': '%',\n ... }))\n >>> assert {\n ... 'id': 'alectov1_ec02_temp',\n ... 'sensor': 'temperature',\n ... 'value': 1.0,\n ... 'unit': '\u00b0C',\n ... } in x\n >>> assert {\n ... 'id': 'alectov1_ec02_hum',\n ... 'sensor': 'humidity',\n ... 'value': 10,\n ... 'unit': '%',\n ... } in x\n >>> y = list(packet_events({\n ... 'protocol': 'newkaku',\n ... 'id': '000001',\n ... 'switch': '01',\n ... 'command': 'on',\n ... 
}))\n >>> assert {'id': 'newkaku_000001_01', 'command': 'on'} in y\n \"\"\"\n field_abbrev = {v: k for k, v in PACKET_FIELDS.items()}\n\n packet_id = serialize_packet_id(packet)\n events = {f: v for f, v in packet.items() if f in field_abbrev}\n if 'command' in events or 'version' in events:\n # switch events only have one event in each packet\n yield dict(id=packet_id, **events)\n else:\n if packet_id == 'debug':\n yield {\n 'id': 'raw',\n 'value': packet.get('pulses(usec)'),\n 'tm': packet.get('tm'),\n 'pulses': packet.get('pulses'),\n }\n else:\n # sensors can have multiple\n for sensor, value in events.items():\n unit = packet.get(sensor + '_unit', None)\n yield {\n 'id': packet_id + PACKET_ID_SEP + field_abbrev[sensor],\n 'sensor': sensor,\n 'value': value,\n 'unit': unit,\n }\n\n if packet_id != 'rflink':\n yield {\n 'id': packet_id + PACKET_ID_SEP + 'update_time',\n 'sensor': 'update_time',\n 'value': round(time.time()),\n 'unit': 's',\n }"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndecoding a single packet into a dict of data structures.", "response": "def decode_tx_packet(packet: str) -> dict:\n \"\"\"Break packet down into primitives, and do basic interpretation.\n\n >>> decode_packet('10;Kaku;ID=41;SWITCH=1;CMD=ON;') == {\n ... 'node': 'gateway',\n ... 'protocol': 'kaku',\n ... 'id': '000041',\n ... 'switch': '1',\n ... 'command': 'on',\n ... 
}\n True\n \"\"\"\n node_id, protocol, attrs = packet.split(DELIM, 2)\n\n data = cast(Dict[str, Any], {\n 'node': PacketHeader(node_id).name,\n })\n\n data['protocol'] = protocol.lower()\n\n for i, attr in enumerate(filter(None, attrs.strip(DELIM).split(DELIM))):\n if i == 0:\n data['id'] = attr\n if i == 1:\n data['switch'] = attr\n if i == 2:\n data['command'] = attr\n\n # correct KaKu device address\n if data.get('protocol', '') == 'kaku' and len(data['id']) != 6:\n data['id'] = '0000' + data['id']\n\n return data"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses argument and setup main program loop.", "response": "def main(argv=sys.argv[1:], loop=None):\n \"\"\"Parse argument and setup main program loop.\"\"\"\n args = docopt(__doc__, argv=argv,\n version=pkg_resources.require('rflink')[0].version)\n\n level = logging.ERROR\n if args['-v']:\n level = logging.INFO\n if args['-v'] == 2:\n level = logging.DEBUG\n logging.basicConfig(level=level)\n\n if not loop:\n loop = asyncio.get_event_loop()\n\n host = args['--host']\n port = args['--port']\n baud = args['--baud']\n listenport = args['--listenport']\n\n proxy = RFLinkProxy(port=port, host=host, baud=baud, loop=loop)\n\n server_coro = asyncio.start_server(\n proxy.client_connected_callback,\n host=\"\",\n port=listenport,\n loop=loop,\n )\n\n server = loop.run_until_complete(server_coro)\n addr = server.sockets[0].getsockname()\n log.info('Serving on %s', addr)\n\n conn_coro = proxy.connect()\n loop.run_until_complete(conn_coro)\n\n proxy.closing = False\n try:\n loop.run_forever()\n except KeyboardInterrupt:\n proxy.closing = True\n\n # cleanup server\n server.close()\n loop.run_until_complete(server.wait_closed())\n\n # cleanup server connections\n writers = [i[1] for i in list(clients)]\n for writer in writers:\n writer.close()\n if sys.version_info >= (3, 7):\n loop.run_until_complete(writer.wait_closed())\n\n # cleanup RFLink connection\n 
proxy.transport.close()\n\n finally:\n loop.close()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing a raw packet into a dict.", "response": "def handle_raw_packet(self, raw_packet):\n \"\"\"Parse raw packet string into packet dict.\"\"\"\n log.debug('got packet: %s', raw_packet)\n packet = None\n try:\n packet = decode_packet(raw_packet)\n except:\n log.exception('failed to parse packet: %s', packet)\n\n log.debug('decoded packet: %s', packet)\n\n if packet:\n if 'ok' in packet:\n # handle response packets internally\n log.debug('command response: %s', packet)\n self._last_ack = packet\n self._command_ack.set()\n elif self.raw_callback:\n self.raw_callback(raw_packet)\n else:\n log.warning('no valid packet')"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses a raw packet into a dict.", "response": "def handle_raw_tx_packet(self, writer, raw_packet):\n \"\"\"Parse raw packet string into packet dict.\"\"\"\n peer = writer.get_extra_info('peername')\n log.debug(' %s:%s: processing data: %s', peer[0], peer[1], raw_packet)\n packet = None\n try:\n packet = decode_tx_packet(raw_packet)\n except:\n log.exception(' %s:%s: failed to parse packet: %s',\n peer[0], peer[1], packet)\n\n log.debug(' %s:%s: decoded packet: %s', peer[0], peer[1], packet)\n if self.protocol and packet:\n if not ';PING;' in raw_packet:\n log.info(' %s:%s: forwarding packet %s to RFLink', peer[0], peer[1], raw_packet)\n else:\n log.debug(' %s:%s: forwarding packet %s to RFLink', peer[0], peer[1], raw_packet)\n yield from self.forward_packet(writer, packet, raw_packet)\n else:\n log.warning(' %s:%s: no valid packet %s', peer[0], peer[1], packet)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nforwards packet from client to RFLink.", "response": "def forward_packet(self, writer, packet, raw_packet):\n \"\"\"Forward packet from client to RFLink.\"\"\"\n peer = 
writer.get_extra_info('peername')\n log.debug(' %s:%s: forwarding data: %s', peer[0], peer[1], packet)\n if 'command' in packet:\n packet_id = serialize_packet_id(packet)\n command = packet['command']\n ack = yield from self.protocol.send_command_ack(\n packet_id, command)\n if ack:\n writer.write(\"20;00;OK;\".encode() + CRLF)\n for _ in range(DEFAULT_SIGNAL_REPETITIONS-1):\n yield from self.protocol.send_command_ack(\n packet_id, command)\n else:\n self.protocol.send_raw_packet(raw_packet)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef client_connected_callback(self, reader, writer):\n peer = writer.get_extra_info('peername')\n clients.append((reader, writer, peer))\n log.info(\"Incoming connection from: %s:%s\", peer[0], peer[1])\n try:\n while True:\n data = yield from reader.readline()\n if not data:\n break\n try:\n line = data.decode().strip()\n except UnicodeDecodeError:\n line = '\\x00'\n\n # Workaround for domoticz issue #2816\n if line[-1] != DELIM:\n line = line + DELIM\n\n if valid_packet(line):\n yield from self.handle_raw_tx_packet(writer, line)\n else:\n log.warning(\" %s:%s: dropping invalid data: '%s'\", peer[0], peer[1], line)\n pass\n except ConnectionResetError:\n pass\n except Exception as e:\n log.exception(e)\n\n log.info(\"Disconnected from: %s:%s\", peer[0], peer[1])\n writer.close()\n clients.remove((reader, writer, peer))", "response": "Handle a connection from a domoticz server."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsends data to all connected clients.", "response": "def raw_callback(self, raw_packet):\n \"\"\"Send data to all connected clients.\"\"\"\n if not ';PONG;' in raw_packet:\n log.info('forwarding packet %s to clients', raw_packet)\n else:\n log.debug('forwarding packet %s to clients', raw_packet)\n writers = [i[1] for i in list(clients)]\n for writer in writers:\n writer.write(str(raw_packet).encode() + CRLF)"} {"SOURCE": 
"codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef reconnect(self, exc=None):\n # Reset protocol binding before starting reconnect\n self.protocol = None\n\n if not self.closing:\n log.warning('disconnected from Rflink, reconnecting')\n self.loop.create_task(self.connect())", "response": "Schedule reconnect after connection has been unexpectedly lost."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset up connection and hook it into HA for reconnect/shutdown.", "response": "async def connect(self):\n \"\"\"Set up connection and hook it into HA for reconnect/shutdown.\"\"\"\n import serial\n\n log.info('Initiating Rflink connection')\n\n # Rflink create_rflink_connection decides based on the value of host\n # (string or None) if serial or tcp mode should be used\n\n # Setup protocol\n protocol = partial(\n ProxyProtocol,\n disconnect_callback=self.reconnect,\n raw_callback=self.raw_callback,\n loop=self.loop,\n )\n\n # Initiate serial/tcp connection to Rflink gateway\n if self.host:\n connection = self.loop.create_connection(protocol, self.host, self.port)\n else:\n connection = create_serial_connection(self.loop, protocol, self.port, self.baud)\n\n try:\n with async_timeout.timeout(CONNECTION_TIMEOUT,\n loop=self.loop):\n self.transport, self.protocol = await connection\n\n except (serial.serialutil.SerialException, ConnectionRefusedError,\n TimeoutError, OSError, asyncio.TimeoutError) as exc:\n reconnect_interval = DEFAULT_RECONNECT_INTERVAL\n log.error(\n \"Error connecting to Rflink, reconnecting in %s\",\n reconnect_interval)\n\n self.loop.call_later(reconnect_interval, self.reconnect, exc)\n return\n\n log.info('Connected to Rflink')"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_rflink_connection(port=None, host=None, baud=57600, protocol=RflinkProtocol,\n packet_callback=None, event_callback=None,\n 
disconnect_callback=None, ignore=None, loop=None):\n \"\"\"Create Rflink manager class, returns transport coroutine.\"\"\"\n # use default protocol if not specified\n protocol = partial(\n protocol,\n loop=loop if loop else asyncio.get_event_loop(),\n packet_callback=packet_callback,\n event_callback=event_callback,\n disconnect_callback=disconnect_callback,\n ignore=ignore if ignore else [],\n )\n\n # setup serial connection if no transport specified\n if host:\n conn = loop.create_connection(protocol, host, port)\n else:\n baud = baud\n conn = create_serial_connection(loop, protocol, port, baud)\n\n return conn", "response": "Create Rflink manager class, returns transport coroutine."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds incoming data to buffer.", "response": "def data_received(self, data):\n \"\"\"Add incoming data to buffer.\"\"\"\n data = data.decode()\n log.debug('received data: %s', data.strip())\n self.buffer += data\n self.handle_lines()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef handle_lines(self):\n while \"\\r\\n\" in self.buffer:\n line, self.buffer = self.buffer.split(\"\\r\\n\", 1)\n if valid_packet(line):\n self.handle_raw_packet(line)\n else:\n log.warning('dropping invalid data: %s', line)", "response": "Assemble incoming data into per - line packets."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef send_raw_packet(self, packet: str):\n data = packet + '\\r\\n'\n log.debug('writing data: %s', repr(data))\n self.transport.write(data.encode())", "response": "Encode and put packet string onto write buffer."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlogs all data received from RFLink to file.", "response": "def log_all(self, file):\n \"\"\"Log all data received from RFLink to file.\"\"\"\n global rflink_log\n if 
file == None:\n rflink_log = None\n else:\n log.debug('logging to: %s', file)\n rflink_log = open(file, 'a')"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlog when connection is lost.", "response": "def connection_lost(self, exc):\n \"\"\"Log when connection is closed, if needed call callback.\"\"\"\n if exc:\n log.exception('disconnected due to exception')\n else:\n log.info('disconnected because of close/abort.')\n if self.disconnect_callback:\n self.disconnect_callback(exc)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef handle_raw_packet(self, raw_packet):\n log.debug('got packet: %s', raw_packet)\n if rflink_log:\n print(raw_packet, file=rflink_log)\n rflink_log.flush()\n packet = None\n try:\n packet = decode_packet(raw_packet)\n except:\n log.exception('failed to parse packet: %s', packet)\n\n log.debug('decoded packet: %s', packet)\n\n if packet:\n if 'ok' in packet:\n # handle response packets internally\n log.debug('command response: %s', packet)\n self._last_ack = packet\n self._command_ack.set()\n else:\n self.handle_packet(packet)\n else:\n log.warning('no valid packet')", "response": "Parse raw packet into dict."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef handle_packet(self, packet):\n if self.packet_callback:\n # forward to callback\n self.packet_callback(packet)\n else:\n print('packet', packet)", "response": "Process incoming packet dict and optionally call callback."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsends a command to rflink gateway.", "response": "def send_command(self, device_id, action):\n \"\"\"Send device command to rflink gateway.\"\"\"\n command = deserialize_packet_id(device_id)\n command['command'] = action\n log.debug('sending command: %s', command)\n self.send_packet(command)"} {"SOURCE": "codesearchnet", 
"instruction": "Create a Python 3 function for\nsending command to device and wait for gateway to respond with acknowledgement.", "response": "def send_command_ack(self, device_id, action):\n \"\"\"Send command, wait for gateway to respond with acknowledgment.\"\"\"\n # serialize commands\n yield from self._ready_to_send.acquire()\n acknowledgement = None\n try:\n self._command_ack.clear()\n self.send_command(device_id, action)\n\n log.debug('waiting for acknowledgement')\n try:\n yield from asyncio.wait_for(self._command_ack.wait(),\n TIMEOUT.seconds, loop=self.loop)\n log.debug('packet acknowledged')\n except concurrent.futures._base.TimeoutError:\n acknowledgement = {'ok': False, 'message': 'timeout'}\n log.warning('acknowledge timeout')\n else:\n acknowledgement = self._last_ack.get('ok', False)\n finally:\n # allow next command\n self._ready_to_send.release()\n\n return acknowledgement"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _handle_packet(self, packet):\n events = packet_events(packet)\n\n for event in events:\n if self.ignore_event(event['id']):\n log.debug('ignoring event with id: %s', event)\n continue\n log.debug('got event: %s', event)\n if self.event_callback:\n self.event_callback(event)\n else:\n self.handle_event(event)", "response": "Handle a single packet, dispatching each of its events to the event callback unless the event is ignored."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef handle_event(self, event):\n string = '{id:<32} '\n if 'command' in event:\n string += '{command}'\n elif 'version' in event:\n if 'hardware' in event:\n string += '{hardware} {firmware} '\n string += 'V{version} R{revision}'\n else:\n string += '{value}'\n if event.get('unit'):\n string += ' {unit}'\n\n print(string.format(**event))", "response": "Default handling of incoming event (print)."} {"SOURCE": "codesearchnet", "instruction": "Explain what the 
following Python 3 code does\ndef ignore_event(self, event_id):\n for ignore in self.ignore:\n if (ignore == event_id or\n (ignore.endswith('*') and event_id.startswith(ignore[:-1]))):\n return True\n return False", "response": "Verify event id against list of events to ignore."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef handle_event(self, event):\n if event.get('command'):\n if event['command'] == 'on':\n cmd = 'off'\n else:\n cmd = 'on'\n\n task = self.send_command_ack(event['id'], cmd)\n self.loop.create_task(task)", "response": "Handle incoming packet from rflink gateway."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nhandle incoming packet from rflink gateway.", "response": "def handle_event(self, packet):\n \"\"\"Handle incoming packet from rflink gateway.\"\"\"\n if packet.get('command'):\n task = self.send_command_ack(packet['id'], packet['command'])\n self.loop.create_task(task)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses argument and setup main program loop.", "response": "def main(argv=sys.argv[1:], loop=None):\n \"\"\"Parse argument and setup main program loop.\"\"\"\n args = docopt(__doc__, argv=argv,\n version=pkg_resources.require('rflink')[0].version)\n\n level = logging.ERROR\n if args['-v']:\n level = logging.INFO\n if args['-v'] == 2:\n level = logging.DEBUG\n logging.basicConfig(level=level)\n\n if not loop:\n loop = asyncio.get_event_loop()\n\n if args['--ignore']:\n ignore = args['--ignore'].split(',')\n else:\n ignore = []\n\n command = next((c for c in ALL_COMMANDS if args[c] is True), None)\n\n if command:\n protocol = PROTOCOLS['command']\n else:\n protocol = PROTOCOLS[args['-m']]\n\n conn = create_rflink_connection(\n protocol=protocol,\n host=args['--host'],\n port=args['--port'],\n baud=args['--baud'],\n loop=loop,\n ignore=ignore,\n )\n\n transport, protocol = 
loop.run_until_complete(conn)\n\n try:\n if command:\n for _ in range(int(args['--repeat'])):\n loop.run_until_complete(\n protocol.send_command_ack(\n args[''], command))\n else:\n loop.run_forever()\n except KeyboardInterrupt:\n # cleanup connection\n transport.close()\n loop.run_forever()\n finally:\n loop.close()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a random initial population of floating point values.", "response": "def _initial_population_gsa(population_size, solution_size, lower_bounds,\n upper_bounds):\n \"\"\"Create a random initial population of floating point values.\n\n Args:\n population_size: an integer representing the number of solutions in the population.\n solution_size: the number of values in each solution.\n lower_bounds: a list, each value is a lower bound for the corresponding\n part of the solution.\n upper_bounds: a list, each value is an upper bound for the corresponding\n part of the solution.\n\n Returns:\n list; A list of random solutions.\n \"\"\"\n if len(lower_bounds) != solution_size or len(upper_bounds) != solution_size:\n raise ValueError(\n \"Lower and upper bounds must have a length equal to the problem size.\"\n )\n\n return common.make_population(population_size, common.random_real_solution,\n solution_size, lower_bounds, upper_bounds)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _new_population_gsa(population, fitnesses, velocities, lower_bounds,\n upper_bounds, grav_initial, grav_reduction_rate,\n iteration, max_iterations):\n \"\"\"Generate a new population as given by GSA algorithm.\n\n In GSA paper, grav_initial is G_i\n \"\"\"\n # Update the gravitational constant, and the best and worst of the population\n # Calculate the mass and acceleration for each solution\n # Update the velocity and position of each solution\n population_size = len(population)\n solution_size = len(population[0])\n\n # In GSA paper, grav is G\n 
grav = _next_grav_gsa(grav_initial, grav_reduction_rate, iteration,\n max_iterations)\n masses = _get_masses(fitnesses)\n\n # Create bundled solution with position and mass for the K best calculation\n # Also store index to later check if two solutions are the same\n # Sorted by solution fitness (mass)\n solutions = [{\n 'pos': pos,\n 'mass': mass,\n 'index': i\n } for i, (pos, mass) in enumerate(zip(population, masses))]\n solutions.sort(key=lambda x: x['mass'], reverse=True)\n\n # Get the force on each solution\n # Only the best K solutions apply force\n # K linearly decreases to 1\n num_best = int(population_size - (population_size - 1) *\n (iteration / float(max_iterations)))\n forces = []\n for i in range(population_size):\n force_vectors = []\n for j in range(num_best):\n # If it is not the same solution\n if i != solutions[j]['index']:\n force_vectors.append(\n _gsa_force(grav, masses[i], solutions[j]['mass'],\n population[i], solutions[j]['pos']))\n forces.append(_gsa_total_force(force_vectors, solution_size))\n\n # Get the acceleration of each solution\n accelerations = []\n for i in range(population_size):\n accelerations.append(_gsa_acceleration(forces[i], masses[i]))\n\n # Update the velocity of each solution\n new_velocities = []\n for i in range(population_size):\n new_velocities.append(\n _gsa_update_velocity(velocities[i], accelerations[i]))\n\n # Create the new population\n new_population = []\n for i in range(population_size):\n new_position = _gsa_update_position(population[i], new_velocities[i])\n # Constrain to bounds\n new_position = list(\n numpy.clip(new_position, lower_bounds, upper_bounds))\n\n new_population.append(new_position)\n\n return new_population, new_velocities", "response": "Generate a new population as given by GSA algorithm."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _next_grav_gsa(grav_initial, grav_reduction_rate, iteration,\n max_iterations):\n 
\"\"\"Calculate G as given by GSA algorithm.\n\n In GSA paper, grav is G\n \"\"\"\n return grav_initial * math.exp(\n -grav_reduction_rate * iteration / float(max_iterations))", "response": "Calculate G as given by GSA algorithm."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_masses(fitnesses):\n # Obtain constants\n best_fitness = max(fitnesses)\n worst_fitness = min(fitnesses)\n fitness_range = best_fitness - worst_fitness\n\n # Calculate raw masses for each solution\n raw_masses = []\n for fitness in fitnesses:\n # Epsilon is added to prevent divide by zero errors\n raw_masses.append((fitness - worst_fitness) / (fitness_range + EPSILON)\n + EPSILON)\n\n # Normalize to obtain final mass for each solution\n total_mass = sum(raw_masses)\n masses = []\n for mass in raw_masses:\n masses.append(mass / total_mass)\n\n return masses", "response": "Convert fitnesses into masses as given by GSA algorithm."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngives the force of solution i on solution j on solution i.", "response": "def _gsa_force(grav, mass_i, mass_j, position_i, position_j):\n \"\"\"Gives the force of solution j on solution i.\n\n Variable name in GSA paper given in ()\n\n args:\n grav: The gravitational constant. (G)\n mass_i: The mass of solution i (derived from fitness). (M_i)\n mass_j: The mass of solution j (derived from fitness). (M_j)\n position_i: The position of solution i. (x_i)\n position_j: The position of solution j. 
(x_j)\n\n returns:\n numpy.array; The force vector of solution j on solution i.\n \"\"\"\n\n position_diff = numpy.subtract(position_j, position_i)\n distance = numpy.linalg.norm(position_diff)\n\n # The first 3 terms give the magnitude of the force\n # The last term is a vector that provides the direction\n # Epsilon prevents divide by zero errors\n return grav * (mass_i * mass_j) / (distance + EPSILON) * position_diff"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _gsa_total_force(force_vectors, vector_length):\n if len(force_vectors) == 0:\n return [0.0] * vector_length\n # The GSA algorithm specifies that the total force in each dimension\n # is a random sum of the individual forces in that dimension.\n # For this reason we sum the dimensions individually instead of simply\n # using vec_a+vec_b\n total_force = [0.0] * vector_length\n for force_vec in force_vectors:\n for i in range(vector_length):\n total_force[i] += random.uniform(0.0, 1.0) * force_vec[i]\n return total_force", "response": "Returns a randomly weighted sum of the force vectors."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _new_population_genalg(population,\n fitnesses,\n mutation_chance=0.02,\n crossover_chance=0.7,\n selection_function=gaoperators.tournament_selection,\n crossover_function=gaoperators.one_point_crossover):\n \"\"\"Perform all genetic algorithm operations on a population, and return a new population.\n\n population must have an even number of chromosomes.\n\n Args:\n population: A list of binary lists, ex. [[0,1,1,0], [1,0,1,0]]\n fitness: A list of fitnesses that correspond with chromosomes in the population,\n ex. 
[1.2, 10.8]\n mutation_chance: the chance that a bit will be flipped during mutation\n crossover_chance: the chance that two parents will be crossed during crossover\n selection_function: A function that will select parents for crossover and mutation\n crossover_function: A function that will cross two parents\n\n Returns:\n list; A new population of chromosomes, that should be more fit.\n \"\"\"\n # Selection\n # Create the population of parents that will be crossed and mutated.\n intermediate_population = selection_function(population, fitnesses)\n\n # Crossover\n new_population = _crossover(intermediate_population, crossover_chance,\n crossover_function)\n\n # Mutation\n # Mutates chromosomes in place\n gaoperators.random_flip_mutate(new_population, mutation_chance)\n\n # Return new population\n return new_population", "response": "Perform all genetic algorithm operations on a population and return a new population."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _crossover(population, crossover_chance, crossover_operator):\n new_population = []\n for i in range(0, len(population), 2): # For every other index\n # Take parents from every set of 2 in the population\n # Wrap index if out of range\n try:\n parents = (population[i], population[i + 1])\n except IndexError:\n parents = (population[i], population[0])\n\n # If crossover takes place\n if random.uniform(0.0, 1.0) <= crossover_chance:\n # Add children to the new population\n new_population.extend(crossover_operator(parents))\n else:\n new_population.extend(parents)\n\n return new_population", "response": "Perform crossover on a population."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmake a list of random real numbers between lower and upper bounds.", "response": "def random_real_solution(solution_size, lower_bounds, upper_bounds):\n \"\"\"Make a list of random real numbers between lower and upper 
bounds.\"\"\"\n return [\n random.uniform(lower_bounds[i], upper_bounds[i])\n for i in range(solution_size)\n ]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmake a population with the supplied generator.", "response": "def make_population(population_size, solution_generator, *args, **kwargs):\n \"\"\"Make a population with the supplied generator.\"\"\"\n return [\n solution_generator(*args, **kwargs) for _ in range(population_size)\n ]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a list of parents with tournament winners.", "response": "def tournament_selection(population,\n fitnesses,\n num_competitors=2,\n diversity_weight=0.0):\n \"\"\"Create a list of parents with tournament selection.\n\n Args:\n population: A list of solutions.\n fitnesses: A list of fitness values corresponding to solutions in population.\n num_competitors: Number of solutions to compare every round.\n Best solution among competitors is selected.\n diversity_weight: Weight of diversity metric.\n Determines how frequently diversity is used to select tournament winners.\n Note that fitness is given a weight of 1.0.\n diversity_weight == 1.0 gives equal weight to diversity and fitness.\n \"\"\"\n # Optimization if diversity factor is disabled\n if diversity_weight <= 0.0:\n fitness_pop = zip(fitnesses,\n population) # Zip for easy fitness comparison\n\n # Get num_competitors random chromosomes, then add best to result,\n # by taking max fitness and getting chromosome from tuple.\n # Repeat until full.\n return [\n max(random.sample(fitness_pop, num_competitors))[1]\n for _ in range(len(population))\n ]\n else:\n indices = range(len(population))\n\n # Select tournament winners by either max fitness or diversity.\n # The metric to check is randomly selected, weighted by diversity_weight.\n # diversity_metric is calculated between the given solution,\n # and the list of all currently selected solutions.\n 
selected_solutions = []\n # Select as many solutions are there are in population\n for _ in range(len(population)):\n competitor_indices = random.sample(indices, num_competitors)\n\n # Select by either fitness or diversity,\n # Selected by weighted random selection\n # NOTE: We assume fitness has a weight of 1.0\n if random.uniform(0.0, 1.0) < (1.0 / (1.0 + diversity_weight)):\n # Fitness\n selected_solutions.append(\n max(\n zip([fitnesses[i] for i in competitor_indices],\n [population[i] for i in competitor_indices]))[-1])\n else:\n # Diversity\n # Break ties by fitness\n selected_solutions.append(\n max(\n zip([\n _diversity_metric(population[i], selected_solutions\n ) for i in competitor_indices\n ], [fitnesses[i] for i in competitor_indices],\n [population[i] for i in competitor_indices]))[-1])\n\n return selected_solutions"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef stochastic_selection(population, fitnesses):\n pop_size = len(population)\n probabilities = _fitnesses_to_probabilities(fitnesses)\n\n # Create selection list (for stochastic universal sampling)\n selection_list = []\n selection_spacing = 1.0 / pop_size\n selection_start = random.uniform(0.0, selection_spacing)\n for i in range(pop_size):\n selection_list.append(selection_start + selection_spacing * i)\n\n # Select intermediate population according to selection list\n intermediate_population = []\n for selection in selection_list:\n for (i, probability) in enumerate(probabilities):\n if probability >= selection:\n intermediate_population.append(population[i])\n break\n random.shuffle(intermediate_population)\n\n return intermediate_population", "response": "Create a list of parents with stochastic universal sampling."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a list of parents with roulette selection.", "response": "def roulette_selection(population, fitnesses):\n \"\"\"Create a list of parents with roulette 
selection.\"\"\"\n probabilities = _fitnesses_to_probabilities(fitnesses)\n\n intermediate_population = []\n for _ in range(len(population)):\n # Choose a random individual\n selection = random.uniform(0.0, 1.0)\n # Iterate over probabilities list\n for i, probability in enumerate(probabilities):\n if probability >= selection: # First probability that is greater\n intermediate_population.append(population[i])\n break\n\n return intermediate_population"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _rescale(vector):\n # Subtract min, making smallest value 0\n min_val = min(vector)\n vector = [v - min_val for v in vector]\n\n # Divide by max, making largest value 1\n max_val = float(max(vector))\n try:\n return [v / max_val for v in vector]\n except ZeroDivisionError: # All values are the same\n return [1.0] * len(vector)", "response": "Scale values in vector to the range [ 0 1 )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _diversity_metric(solution, population):\n # Edge case for empty population\n # If there are no other solutions, the given solution has maximum diversity\n if population == []:\n return 1.0\n\n return (\n sum([_manhattan_distance(solution, other) for other in population])\n # Normalize (assuming each value in solution is in range [0.0, 1.0])\n # NOTE: len(solution) is maximum manhattan distance\n / (len(population) * len(solution)))", "response": "Return diversity value for solution compared to given population."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _manhattan_distance(vec_a, vec_b):\n if len(vec_a) != len(vec_b):\n raise ValueError('len(vec_a) must equal len(vec_b)')\n return sum(map(lambda a, b: abs(a - b), vec_a, vec_b))", "response": "Return manhattan distance between two lists of numbers."} {"SOURCE": "codesearchnet", "instruction": "Explain what 
the following Python 3 code does\ndef _fitnesses_to_probabilities(fitnesses):\n # Do not allow negative fitness values\n min_fitness = min(fitnesses)\n if min_fitness < 0.0:\n # Make smallest fitness value 0\n fitnesses = map(lambda f: f - min_fitness, fitnesses)\n\n fitness_sum = sum(fitnesses)\n\n # Generate probabilities\n # Creates a list of increasing values.\n # The greater the gap between two values, the greater the probability.\n # Ex. [0.1, 0.23, 0.56, 1.0]\n prob_sum = 0.0\n probabilities = []\n for fitness in fitnesses:\n if fitness < 0:\n raise ValueError(\n \"Fitness cannot be negative, fitness = {}.\".format(fitness))\n prob_sum += (fitness / fitness_sum)\n probabilities.append(prob_sum)\n probabilities[-1] += 0.0001 # to compensate for rounding errors\n\n return probabilities", "response": "Return a list of probabilities proportional to fitnesses."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef one_point_crossover(parents):\n # The point that the chromosomes will be crossed at (see Ex. above)\n crossover_point = random.randint(1, len(parents[0]) - 1)\n\n return (_one_parent_crossover(parents[0], parents[1], crossover_point),\n _one_parent_crossover(parents[1], parents[0], crossover_point))", "response": "Perform one point crossover on two parent chromosomes."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nperform uniform crossover on two parent chromosomes.", "response": "def uniform_crossover(parents):\n \"\"\"Perform uniform crossover on two parent chromosomes.\n\n Randomly take genes from one parent or the other.\n Ex. 
p1 = xxxxx, p2 = yyyyy, child = xyxxy\n \"\"\"\n chromosome_length = len(parents[0])\n\n children = [[], []]\n\n for i in range(chromosome_length):\n selected_parent = random.randint(0, 1)\n\n # Take from the selected parent, and add it to child 1\n # Take from the other parent, and add it to child 2\n children[0].append(parents[selected_parent][i])\n children[1].append(parents[1 - selected_parent][i])\n\n return children"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmutates every chromosome in a population by randomly flipping bits.", "response": "def random_flip_mutate(population, mutation_chance):\n \"\"\"Mutate every chromosome in a population, list is modified in place.\n\n Mutation occurs by randomly flipping bits (genes).\n \"\"\"\n for chromosome in population: # For every chromosome in the population\n for i in range(len(chromosome)): # For every bit in the chromosome\n # If mutation takes place\n if random.uniform(0.0, 1.0) <= mutation_chance:\n chromosome[i] = 1 - chromosome[i]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn dict mapping item to indices.", "response": "def _duplicates(list_):\n \"\"\"Return dict mapping item -> indices.\"\"\"\n item_indices = {}\n for i, item in enumerate(list_):\n try:\n item_indices[item].append(i)\n except KeyError: # First time seen\n item_indices[item] = [i]\n return item_indices"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _parse_parameter_locks(optimizer, meta_parameters, parameter_locks):\n # WARNING: meta_parameters is modified inline\n\n locked_values = {}\n if parameter_locks:\n for name in parameter_locks:\n # Store the current optimizer value\n # and remove from our dictionary of parameters to optimize\n locked_values[name] = getattr(optimizer, name)\n meta_parameters.pop(name)\n\n return locked_values", "response": "Synchronize meta_parameters and 
locked_values."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndetermine size of binary encoding of parameters.", "response": "def _get_hyperparameter_solution_size(meta_parameters):\n \"\"\"Determine size of binary encoding of parameters.\n\n Also adds binary size information for each parameter.\n \"\"\"\n # WARNING: meta_parameters is modified inline\n\n solution_size = 0\n for _, parameters in meta_parameters.iteritems():\n if parameters['type'] == 'discrete':\n # Binary encoding of discrete values -> log_2 N\n num_values = len(parameters['values'])\n binary_size = helpers.binary_size(num_values)\n elif parameters['type'] == 'int':\n # Use enough bits to cover range from min to max\n # + 1 to include max in range\n int_range = parameters['max'] - parameters['min'] + 1\n binary_size = helpers.binary_size(int_range)\n elif parameters['type'] == 'float':\n # Use enough bits to provide fine steps between min and max\n float_range = parameters['max'] - parameters['min']\n # * 1000 provides 1000 values between each natural number\n binary_size = helpers.binary_size(float_range * 1000)\n else:\n raise ValueError('Parameter type \"{}\" does not match known values'.\n format(parameters['type']))\n\n # Store binary size with parameters for use in decode function\n parameters['binary_size'] = binary_size\n\n solution_size += binary_size\n\n return solution_size"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a function that converts the binary solution to parameters.", "response": "def _make_hyperparameter_decode_func(locked_values, meta_parameters):\n \"\"\"Create a function that converts the binary solution to parameters.\"\"\"\n\n # Locked parameters are also returned by decode function, but are not\n # based on solution\n\n def decode(solution):\n \"\"\"Convert solution into dict of hyperparameters.\"\"\"\n # Start with out stationary (locked) paramaters\n hyperparameters = 
copy.deepcopy(locked_values)\n\n # Obtain moving hyperparameters from binary solution\n index = 0\n for name, parameters in meta_parameters.iteritems():\n # Obtain binary for this hyperparameter\n binary_size = parameters['binary_size']\n binary = solution[index:index + binary_size]\n index += binary_size # Just index to start of next hyperparameter\n\n # Decode binary\n if parameters['type'] == 'discrete':\n i = helpers.binary_to_int(\n binary, upper_bound=len(parameters['values']) - 1)\n value = parameters['values'][i]\n elif parameters['type'] == 'int':\n value = helpers.binary_to_int(\n binary,\n lower_bound=parameters['min'],\n upper_bound=parameters['max'])\n elif parameters['type'] == 'float':\n value = helpers.binary_to_float(\n binary,\n lower_bound=parameters['min'],\n upper_bound=parameters['max'])\n else:\n raise ValueError(\n 'Parameter type \"{}\" does not match known values'.format(\n parameters['type']))\n\n # Store value\n hyperparameters[name] = value\n\n return hyperparameters\n\n return decode"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _meta_fitness_func(parameters,\n _optimizer,\n _problems,\n _master_fitness_dict,\n _runs=20):\n \"\"\"Test a metaheuristic with parameters encoded in solution.\n\n Our goal is to minimize number of evaluation runs until a solution is found,\n while maximizing chance of finding solution to the underlying problem\n NOTE: while meta optimization requires a 'known' solution, this solution\n can be an estimate to provide the meta optimizer with a sense of progress.\n \"\"\"\n # Create the optimizer with parameters encoded in solution\n optimizer = copy.deepcopy(_optimizer)\n optimizer._set_hyperparameters(parameters)\n optimizer.logging = False\n\n # Preload fitness dictionary from master, and disable clearing dict\n # NOTE: master_fitness_dict will be modified inline, and therefore,\n # we do not need to take additional steps to update it\n if 
_master_fitness_dict != None: # None means low memory mode\n optimizer.clear_cache = False\n optimizer._Optimizer__encoded_cache = _master_fitness_dict\n\n # Because metaheuristics are stochastic, we run the optimizer multiple times,\n # to obtain an average of performance\n all_evaluation_runs = []\n solutions_found = []\n for _ in range(_runs):\n for problem in _problems:\n # Get performance for problem\n optimizer.optimize(problem)\n all_evaluation_runs.append(optimizer.fitness_runs)\n if optimizer.solution_found:\n solutions_found.append(1.0)\n else:\n solutions_found.append(0.0)\n\n # Our main goal is to minimize time the optimizer takes\n fitness = 1.0 / helpers.avg(all_evaluation_runs)\n\n # Optimizer is heavily penalized for missing solutions\n # To avoid 0 fitness\n fitness = fitness * helpers.avg(solutions_found)**2 + 1e-19\n\n return fitness", "response": "Test a metaheuristic with parameters encoded in solution."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef copy(self,\n fitness_function=None,\n decode_function=None,\n fitness_args=None,\n decode_args=None,\n fitness_kwargs=None,\n decode_kwargs=None):\n \"\"\"Return a copy of this problem.\n\n Optionally replace this problems arguments with those passed in.\n \"\"\"\n if fitness_function is None:\n fitness_function = self._fitness_function\n if decode_function is None:\n decode_function = self._decode_function\n if fitness_args is None:\n fitness_args = self._fitness_args\n if decode_args is None:\n decode_args = self._decode_args\n if fitness_kwargs is None:\n fitness_kwargs = self._fitness_kwargs\n if decode_kwargs is None:\n decode_kwargs = self._decode_kwargs\n\n return Problem(\n fitness_function,\n decode_function=decode_function,\n fitness_args=fitness_args,\n decode_args=decode_args,\n fitness_kwargs=fitness_kwargs,\n decode_kwargs=decode_kwargs)", "response": "Returns a copy of this problem."} {"SOURCE": "codesearchnet", 
"instruction": "Given the following Python 3 function, write the documentation\ndef get_fitness(self, solution):\n return self._fitness_function(solution, *self._fitness_args,\n **self._fitness_kwargs)", "response": "Return fitness for the given solution."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef decode_solution(self, encoded_solution):\n return self._decode_function(encoded_solution, *self._decode_args,\n **self._decode_kwargs)", "response": "Return solution from an encoded representation."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef optimize(self, problem, max_iterations=100, max_seconds=float('inf'),\n cache_encoded=True, cache_solution=False, clear_cache=True,\n logging_func=_print_fitnesses,\n n_processes=0):\n \"\"\"Find the optimal inputs for a given fitness function.\n\n Args:\n problem: An instance of Problem. The problem to solve.\n max_iterations: The number of iterations to optimize before stopping.\n max_seconds: Maximum number of seconds to optimize for, before stopping.\n Note that condition is only checked one per iteration,\n meaning optimization can take more than max_seconds,\n especially if fitnesses take a long time to calculate.\n cache_encoded: bool; Whether or not to cache fitness of encoded strings.\n Encoded strings are produced directly by the optimizer.\n If an encoded string is found in cache, it will not be decoded.\n cache_solution: bool; Whether or not to cache fitness of decoded solutions.\n Decoded solution is provided by problems decode function.\n If problem does not provide a hash solution function,\n Various naive hashing methods will be attempted, including:\n tuple, tuple(sorted(dict.items)), str.\n clear_cache: bool; Whether or not to reset cache after optimization.\n Disable if you want to run optimize multiple times on the same problem.\n logging_func: func/None; Function taking:\n iteration, population, 
solutions, fitnesses, best_solution, best_fitness\n Called after every iteration.\n Use for custom logging, or set to None to disable logging.\n Note that best_solution and best_fitness are the best of all iterations so far.\n n_processes: int; Number of processes to use for multiprocessing.\n If <= 0, do not use multiprocessing.\n\n Returns:\n object; The best solution, after decoding.\n \"\"\"\n if not isinstance(problem, Problem):\n raise TypeError('problem must be an instance of Problem class')\n\n # Prepare pool for multiprocessing\n if n_processes > 0:\n try:\n pool = multiprocessing.Pool(processes=n_processes)\n except NameError:\n raise ImportError(\n 'pickle, dill, or multiprocessing library is not available.'\n )\n else:\n pool = None\n\n # Set first, incase optimizer uses _max_iterations in initialization\n self.__max_iterations = max_iterations\n\n # Initialize algorithm\n self._reset()\n\n best_solution = {'solution': None, 'fitness': None}\n population = self.initial_population()\n try:\n # Begin optimization loop\n start = time.clock()\n for self.iteration in itertools.count(1): # Infinite sequence of iterations\n # Evaluate potential solutions\n solutions, fitnesses, finished = self._get_fitnesses(\n problem,\n population,\n cache_encoded=cache_encoded,\n cache_solution=cache_solution,\n pool=pool)\n\n # If the best fitness from this iteration is better than\n # the global best\n best_index, best_fitness = max(\n enumerate(fitnesses), key=operator.itemgetter(1))\n if best_fitness > best_solution['fitness']:\n # Store the new best solution\n best_solution['fitness'] = best_fitness\n best_solution['solution'] = solutions[best_index]\n\n if logging_func:\n logging_func(self.iteration, population, solutions,\n fitnesses, best_solution['solution'],\n best_solution['fitness'])\n\n # Break if solution found\n if finished:\n self.solution_found = True\n break\n\n # Break if out of time\n if time.clock() - start >= max_seconds:\n break\n\n # Break if out of 
iterations\n if self.iteration >= max_iterations:\n break\n\n # Continue optimizing\n population = self.next_population(population, fitnesses)\n\n # Store best internally, before returning\n self.best_solution = best_solution['solution']\n self.best_fitness = best_solution['fitness']\n\n finally:\n # Clear caches\n if clear_cache:\n # Clear caches from memory\n self.__encoded_cache = {}\n self.__solution_cache = {}\n\n # Reset encoded, and decoded key functions\n self._get_encoded_key = self._get_encoded_key_type\n self._get_solution_key = self._get_solution_key_type\n\n # Clean up multiprocesses\n try:\n pool.terminate() # Kill outstanding work\n pool.close() # Close child processes\n except AttributeError:\n # No pool\n assert pool is None\n\n return self.best_solution", "response": "This function will find the optimal inputs for a given fitness function."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _reset_bookkeeping(self):\n self.iteration = 0\n self.fitness_runs = 0\n self.best_solution = None\n self.best_fitness = None\n self.solution_found = False", "response": "Reset the bookkeeping parameters to initial values."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the fitness for every solution in a population.", "response": "def _get_fitnesses(self,\n problem,\n population,\n cache_encoded=True,\n cache_solution=False,\n pool=None):\n \"\"\"Get the fitness for every solution in a population.\n\n Args:\n problem: Problem; The problem that defines fitness.\n population: list; List of potential solutions.\n pool: None/multiprocessing.Pool; Pool of processes for parallel\n decoding and evaluation.\n \"\"\"\n fitnesses = [None] * len(population)\n\n #############################\n # Decoding\n #############################\n if cache_encoded:\n try:\n encoded_keys = map(self._get_encoded_key, population)\n\n # Get all fitnesses from encoded_solution cache\n 
to_decode_indices = []\n for i, encoded_key in enumerate(encoded_keys):\n try:\n fitnesses[i] = self.__encoded_cache[encoded_key]\n # Note that this fitness will never be better than the current best\n # because we have already evaluted it,\n # Therefore, we do not need to worry about decoding the solution\n except KeyError: # Cache miss\n to_decode_indices.append(i)\n\n except UnhashableError: # Cannot hash encoded solution\n encoded_keys = None\n to_decode_indices = range(len(population))\n else:\n encoded_keys = None\n to_decode_indices = range(len(population))\n\n # Decode all that need to be decoded, and combine back into list the same length\n # as population\n if encoded_keys is None:\n to_decode_keys = None\n else:\n to_decode_keys = [encoded_keys[i] for i in to_decode_indices]\n\n solutions = [None] * len(population)\n for i, solution in zip(to_decode_indices,\n self._pmap(\n problem.decode_solution,\n [population[i] for i in to_decode_indices],\n to_decode_keys,\n pool)):\n solutions[i] = solution\n\n #############################\n # Evaluating\n #############################\n if cache_solution:\n try:\n # Try to make solutions hashable\n # Use user provided hash function if available\n if problem.hash_solution:\n hash_solution_func = problem.hash_solution\n else:\n # Otherwise, default to built in \"smart\" hash function\n hash_solution_func = self._get_solution_key\n\n solution_keys = [\n hash_solution_func(solution)\n # None corresponds to encoded_solutions found in cache\n if solution is not None else None for solution in solutions\n ]\n\n # Get all fitnesses from solution cache\n to_eval_indices = []\n for i, solution_key in enumerate(solution_keys):\n if solution_key is not None: # Otherwise, fitness already found in encoded cache\n try:\n fitnesses[i] = self.__solution_cache[solution_key]\n except KeyError: # Cache miss\n to_eval_indices.append(i)\n\n except UnhashableError: # Cannot hash solution\n solution_keys = None\n to_eval_indices = 
to_decode_indices[:]\n else:\n solution_keys = None\n to_eval_indices = to_decode_indices[:]\n\n # Evaluate all that need to be evaluated, and combine back into fitnesses list\n if solution_keys is None:\n if encoded_keys is None:\n # No way to detect duplicates\n to_eval_keys = None\n else:\n # Cannot use decoded keys, default to encoded keys\n to_eval_keys = [encoded_keys[i] for i in to_eval_indices]\n else:\n to_eval_keys = [solution_keys[i] for i in to_eval_indices]\n\n finished = False\n eval_bookkeeping = {}\n for i, fitness_finished in zip(to_eval_indices,\n self._pmap(\n problem.get_fitness,\n [solutions[i] for i in to_eval_indices],\n to_eval_keys,\n pool,\n bookkeeping_dict=eval_bookkeeping)):\n # Unpack fitness_finished tuple\n try:\n fitness, maybe_finished = fitness_finished\n if maybe_finished:\n finished = True\n except TypeError: # Not (fitness, finished) tuple\n fitness = fitness_finished\n\n fitnesses[i] = fitness\n\n #############################\n # Finishing\n #############################\n # Bookkeeping\n # keep track of how many times fitness is evaluated\n self.fitness_runs += len(eval_bookkeeping['key_indices']) # Evaled once for each unique key\n\n # Add evaluated fitnesses to caches (both of them)\n if cache_encoded and encoded_keys is not None:\n for i in to_decode_indices: # Encoded cache misses\n self.__encoded_cache[encoded_keys[i]] = fitnesses[i]\n if cache_solution and solution_keys is not None:\n for i in to_eval_indices: # Decoded cache misses\n self.__solution_cache[solution_keys[i]] = fitnesses[i]\n\n # Return\n # assert None not in fitnesses # Un-comment for debugging\n return solutions, fitnesses, finished"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _pmap(self, func, items, keys, pool, bookkeeping_dict=None):\n if keys is not None: # Otherwise, cannot hash items\n # Remove duplicates first (use keys)\n # Create mapping (dict) of key to list of indices\n 
key_indices = _duplicates(keys).values()\n else: # Cannot hash items\n # Assume no duplicates\n key_indices = [[i] for i in range(len(items))]\n\n # Use only the first of duplicate indices in decoding\n if pool is not None:\n # Parallel map\n results = pool.map(\n functools.partial(_unpickle_run, pickle.dumps(func)),\n [items[i[0]] for i in key_indices])\n else:\n results = map(func, [items[i[0]] for i in key_indices])\n\n # Add bookkeeping\n if bookkeeping_dict is not None:\n bookkeeping_dict['key_indices'] = key_indices\n\n # Combine duplicates back into list\n all_results = [None] * len(items)\n for indices, result in zip(key_indices, results):\n for j, i in enumerate(indices):\n # Avoid duplicate result objects in list,\n # in case they are used in functions with side effects\n if j > 0:\n result = copy.deepcopy(result)\n all_results[i] = result\n\n return all_results", "response": "Efficiently map func over all items."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _set_hyperparameters(self, parameters):\n for name, value in parameters.items():\n try:\n getattr(self, name)\n except AttributeError:\n raise ValueError(\n 'Each parameter in parameters must be an attribute. 
'\n '{} is not.'.format(name))\n setattr(self, name, value)", "response": "Set internal optimization parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_hyperparameters(self):\n hyperparameters = {}\n for key in self._hyperparameters:\n hyperparameters[key] = getattr(self, key)\n return hyperparameters", "response": "Get internal optimization parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef optimize_hyperparameters(self,\n problems,\n parameter_locks=None,\n smoothing=20,\n max_iterations=100,\n _meta_optimizer=None,\n _low_memory=True):\n \"\"\"Optimize hyperparameters for a given problem.\n\n Args:\n parameter_locks: a list of strings, each corresponding to a hyperparameter\n that should not be optimized.\n problems: Either a single problem, or a list of problem instances,\n allowing optimization based on multiple similar problems.\n smoothing: int; number of runs to average over for each set of hyperparameters.\n max_iterations: The number of iterations to optimize before stopping.\n _low_memory: disable performance enhancements to save memory\n (they use a lot of memory otherwise).\n \"\"\"\n if smoothing <= 0:\n raise ValueError('smoothing must be > 0')\n\n # problems supports either one or many problem instances\n if isinstance(problems, collections.Iterable):\n for problem in problems:\n if not isinstance(problem, Problem):\n raise TypeError(\n 'problem must be Problem instance or list of Problem instances'\n )\n elif isinstance(problems, Problem):\n problems = [problems]\n else:\n raise TypeError(\n 'problem must be Problem instance or list of Problem instances'\n )\n\n # Copy to avoid permanent modification\n meta_parameters = copy.deepcopy(self._hyperparameters)\n\n # First, handle parameter locks, since it will modify our\n # meta_parameters dict\n locked_values = 
_parse_parameter_locks(self, meta_parameters,\n parameter_locks)\n\n # We need to know the size of our chromosome,\n # based on the hyperparameters to optimize\n solution_size = _get_hyperparameter_solution_size(meta_parameters)\n\n # We also need to create a decode function to transform the binary solution\n # into parameters for the metaheuristic\n decode = _make_hyperparameter_decode_func(locked_values,\n meta_parameters)\n\n # A master fitness dictionary can be stored for use between calls\n # to meta_fitness\n if _low_memory:\n master_fitness_dict = None\n else:\n master_fitness_dict = {}\n\n additional_parameters = {\n '_optimizer': self,\n '_problems': problems,\n '_runs': smoothing,\n '_master_fitness_dict': master_fitness_dict,\n }\n META_FITNESS = Problem(\n _meta_fitness_func,\n decode_function=decode,\n fitness_kwargs=additional_parameters)\n if _meta_optimizer is None:\n # Initialize default meta optimizer\n # GenAlg is used because it supports both discrete and continuous\n # attributes\n from optimal import GenAlg\n\n # Create metaheuristic with computed decode function and solution\n # size\n _meta_optimizer = GenAlg(solution_size)\n else:\n # Adjust supplied metaheuristic for this problem\n _meta_optimizer._solution_size = solution_size\n\n # Determine the best hyperparameters with a metaheuristic\n best_parameters = _meta_optimizer.optimize(\n META_FITNESS, max_iterations=max_iterations)\n\n # Set the hyperparameters inline\n self._set_hyperparameters(best_parameters)\n\n # And return\n return best_parameters", "response": "Optimize hyperparameters for a given problem."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncompare a set of optimizers and problems.", "response": "def compare(optimizers, problems, runs=20, all_kwargs={}):\n \"\"\"Compare a set of optimizers.\n\n Args:\n optimizers: list/Optimizer; Either a list of optimizers to compare,\n or a single optimizer to test on each problem.\n problems: 
list/Problem; Either a problem instance or a list of problem instances,\n one for each optimizer.\n all_kwargs: dict/list; Either the Optimizer.optimize keyword arguments\n for all optimizers, or a list of keyword arguments, one for each optimizer.\n runs: int; How many times to run each optimizer (smoothness)\n\n Returns:\n dict; mapping optimizer identifier to stats.\n \"\"\"\n if not (isinstance(optimizers, collections.Iterable)\n or isinstance(problems, collections.Iterable)):\n raise TypeError('optimizers or problems must be iterable')\n\n # If optimizers is not a list, repeat into list for each problem\n if not isinstance(optimizers, collections.Iterable):\n optimizers = [copy.deepcopy(optimizers) for _ in range(len(problems))]\n\n # If problems is not a list, repeat into list for each optimizer\n if not isinstance(problems, collections.Iterable):\n problems = [copy.deepcopy(problems) for _ in range(len(optimizers))]\n\n # If all_kwargs is not a list, repeat it into a list\n if isinstance(all_kwargs, dict):\n all_kwargs = [all_kwargs] * len(optimizers)\n elif not isinstance(all_kwargs, collections.Iterable):\n raise TypeError('all_kwargs must be dict or list of dict')\n\n stats = {}\n key_counts = {}\n for optimizer, problem, kwargs in zip(optimizers, problems, all_kwargs):\n # For nice human readable dictionaries, extract useful names from\n # optimizer\n class_name = optimizer.__class__.__name__\n fitness_func_name = problem._fitness_function.__name__\n key_name = '{} {}'.format(class_name, fitness_func_name)\n\n # Keep track of how many optimizers of each class / fitness func\n # for better keys in stats dict\n try:\n key_counts[key_name] += 1\n except KeyError:\n key_counts[key_name] = 1\n\n # Foo 1, Foo 2, Bar 1, etc.\n key = '{} {}'.format(key_name, key_counts[key_name])\n\n print(key + ': ', end='')\n\n # Finally, get the actual stats\n stats[key] = benchmark(optimizer, problem, runs=runs, **kwargs)\n\n print()\n\n return stats"} {"SOURCE": "codesearchnet", 
"instruction": "Can you tell what is the following Python 3 function doing\ndef benchmark(optimizer, problem, runs=20, **kwargs):\n stats = {'runs': []}\n\n # Disable logging, to avoid spamming the user\n # TODO: Maybe we shouldn't disable by default?\n kwargs = copy.copy(kwargs)\n kwargs['logging_func'] = None\n\n # Determine effectiveness of metaheuristic over many runs\n # The stochastic nature of metaheuristics makes this necessary\n # for an accurate evaluation\n for _ in range(runs):\n optimizer.optimize(problem, **kwargs)\n\n # Convert bool to number for mean and standard deviation calculations\n if optimizer.solution_found:\n finished_num = 1.0\n else:\n finished_num = 0.0\n\n stats_ = {\n 'fitness': optimizer.best_fitness,\n 'fitness_runs': optimizer.fitness_runs,\n 'solution_found': finished_num\n }\n stats['runs'].append(stats_)\n\n # Little progress 'bar'\n print('.', end='')\n\n # Mean gives a good overall idea of the metaheuristic's effectiveness\n # Standard deviation (SD) shows consistency of performance\n _add_mean_sd_to_stats(stats)\n\n return stats", "response": "Run an optimizer through a problem multiple times and return a dictionary of various statistics."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncombine stats for multiple optimizers to obtain one mean and sd.", "response": "def aggregate(all_stats):\n \"\"\"Combine stats for multiple optimizers to obtain one mean and sd.\n\n Useful for combining stats for the same optimizer class and multiple problems.\n\n Args:\n all_stats: dict; output from compare.\n \"\"\"\n aggregate_stats = {'means': [], 'standard_deviations': []}\n for optimizer_key in all_stats:\n # runs is the mean, for add_mean_sd function\n mean_stats = copy.deepcopy(all_stats[optimizer_key]['mean'])\n mean_stats['name'] = optimizer_key\n aggregate_stats['means'].append(mean_stats)\n\n # also keep track of standard deviations\n sd_stats = copy.deepcopy(\n 
all_stats[optimizer_key]['standard_deviation'])\n sd_stats['name'] = optimizer_key\n aggregate_stats['standard_deviations'].append(sd_stats)\n\n _add_mean_sd_to_stats(aggregate_stats, 'means')\n\n return aggregate_stats"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nobtains the mean of stats.", "response": "def _mean_of_runs(stats, key='runs'):\n \"\"\"Obtain the mean of stats.\n\n Args:\n stats: dict; A set of stats, structured as above.\n key: str; Optional key to determine where list of runs is found in stats\n \"\"\"\n\n num_runs = len(stats[key])\n first = stats[key][0]\n\n mean = {}\n for stat_key in first:\n # Skip non-numeric attributes\n if isinstance(first[stat_key], numbers.Number):\n mean[stat_key] = sum(run[stat_key]\n for run in stats[key]) / float(num_runs)\n\n return mean"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _sd_of_runs(stats, mean, key='runs'):\n\n num_runs = len(stats[key])\n first = stats[key][0]\n\n standard_deviation = {}\n for stat_key in first:\n # Skip non-numeric attributes\n if isinstance(first[stat_key], numbers.Number):\n standard_deviation[stat_key] = math.sqrt(\n sum((run[stat_key] - mean[stat_key])**2\n for run in stats[key]) / float(num_runs))\n\n return standard_deviation", "response": "Obtain the standard deviation of stats."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns random binary string with given probabilities.", "response": "def _sample(probability_vec):\n \"\"\"Return random binary string, with given probabilities.\"\"\"\n return list(map(int,\n numpy.random.random(probability_vec.size) <= probability_vec))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadjusting probabilities towards the best solution.", "response": "def _adjust_probability_vec_best(population, fitnesses, probability_vec,\n 
adjust_rate):\n \"\"\"Shift probabilities towards the best solution.\"\"\"\n best_solution = max(zip(fitnesses, population))[1]\n\n # Shift probabilities towards best solution\n return _adjust(probability_vec, best_solution, adjust_rate)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _mutate_probability_vec(probability_vec, mutation_chance, mutation_adjust_rate):\n bits_to_mutate = numpy.random.random(probability_vec.size) <= mutation_chance\n probability_vec[bits_to_mutate] = _adjust(\n probability_vec[bits_to_mutate],\n numpy.random.random(numpy.sum(bits_to_mutate)), mutation_adjust_rate)", "response": "Randomly adjusts the probability_vec."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmaking a new population after each optimization iteration.", "response": "def next_population(self, population, fitnesses):\n \"\"\"Make a new population after each optimization iteration.\n\n Args:\n population: The current population of solutions.\n fitnesses: The fitness associated with each solution in the population\n Returns:\n list; a list of solutions.\n \"\"\"\n # Update probability vector\n self._probability_vec = _adjust_probability_vec_best(\n population, fitnesses, self._probability_vec, self._adjust_rate)\n\n # Mutate probability vector\n _mutate_probability_vec(self._probability_vec, self._mutation_chance,\n self._mutation_adjust_rate)\n\n # Return new samples\n return [\n _sample(self._probability_vec)\n for _ in range(self._population_size)\n ]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef benchmark_multi(optimizer):\n # Get our benchmark stats\n all_stats = benchmark.compare(optimizer, PROBLEMS, runs=100)\n return benchmark.aggregate(all_stats)", "response": "Benchmark an optimizer configuration on multiple functions."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where 
it\nreturns a random population drawn with regard to a set of probabilities", "response": "def _sample(probabilities, population_size):\n \"\"\"Return a random population, drawn with regard to a set of probabilities\"\"\"\n population = []\n for _ in range(population_size):\n solution = []\n for probability in probabilities:\n # probability of 1.0: always 1\n # probability of 0.0: always 0\n if random.uniform(0.0, 1.0) < probability:\n solution.append(1)\n else:\n solution.append(0)\n population.append(solution)\n return population"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _chance(solution, pdf):\n # 1.0 - abs(bit - p) gives probability of bit given p\n return _prod([1.0 - abs(bit - p) for bit, p in zip(solution, pdf)])", "response": "Return the chance of obtaining a solution from a pdf."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngiving the value of a pdf.", "response": "def _pdf_value(pdf, population, fitnesses, fitness_threshold):\n \"\"\"Give the value of a pdf.\n\n This represents the likelihood of a pdf generating solutions\n that exceed the threshold.\n \"\"\"\n # Add the chance of obtaining a solution from the pdf\n # when the fitness for that solution exceeds a threshold\n value = 0.0\n for solution, fitness in zip(population, fitnesses):\n if fitness >= fitness_threshold:\n # 1.0 + chance to avoid issues with chance of 0\n value += math.log(1.0 + _chance(solution, pdf))\n\n # The official equation states that value is now divided by len(fitnesses)\n # however, this is unnecessary when we are only obtaining the best pdf,\n # because every solution is of the same size\n return value"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _update_pdf(population, fitnesses, pdfs, quantile):\n # First we determine a fitness threshold based on a quantile of fitnesses\n fitness_threshold = 
_get_quantile_cutoff(fitnesses, quantile)\n\n # Then check all of our possible pdfs with a stochastic program\n return _best_pdf(pdfs, population, fitnesses, fitness_threshold)", "response": "Find a better pdf based on fitnesses."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a floating point number between lower and upper bounds, from binary. Args: binary_list: list; List of 0s and 1s. The number of bits in this list determine the number of possible values between lower and upper bound. Increase the size of binary_list for more precise floating points. lower_bound: Minimum value for output, inclusive. A binary list of 0s will have this value. upper_bound: Maximum value for output, inclusive. A binary list of 1s will have this value. Returns: float; A floating point number.", "response": "def binary_to_float(binary_list, lower_bound, upper_bound):\n \"\"\"Return a floating point number between lower and upper bounds, from binary.\n\n Args:\n binary_list: list; List of 0s and 1s.\n The number of bits in this list determine the number of possible\n values between lower and upper bound.\n Increase the size of binary_list for more precise floating points.\n lower_bound: Minimum value for output, inclusive.\n A binary list of 0s will have this value.\n upper_bound: Maximum value for output, inclusive.\n A binary list of 1s will have this value.\n\n Returns:\n float; A floating point number.\n \"\"\"\n # Edge case for empty binary_list\n if binary_list == []:\n # With 0 bits, only one value can be represented,\n # and we default to lower_bound\n return lower_bound\n\n\n # A little bit of math gets us a floating point\n # number between upper and lower bound\n # We look at the relative position of\n # the integer corresponding to our binary list\n # between the upper and lower bound,\n # and offset that by lower bound\n return ((\n # Range between lower and upper bound\n float(upper_bound - lower_bound)\n # Divided by the maximum possible 
integer\n / (2**len(binary_list) - 1)\n # Times the integer represented by the given binary\n * binary_to_int(binary_list))\n # Plus the lower bound\n + lower_bound)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef binary_to_int(binary_list, lower_bound=0, upper_bound=None):\n # Edge case for empty binary_list\n if binary_list == []:\n # With 0 bits, only one value can be represented,\n # and we default to lower_bound\n return lower_bound\n else:\n # The builtin int construction can take a base argument,\n # but it requires a string,\n # so we convert our binary list to a string\n integer = int(''.join([str(bit) for bit in binary_list]), 2)\n\n # Trim if over upper_bound\n if (upper_bound is not None) and integer + lower_bound > upper_bound:\n # Bounce back. Ex. w/ upper_bound = 2: [0, 1, 2, 2, 1, 0]\n return upper_bound - (integer % (upper_bound - lower_bound + 1))\n else:\n # Not over upper_bound\n return integer + lower_bound", "response": "Return the base 10 integer corresponding to a binary list."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn bit list representation of integer.", "response": "def _int_to_binary(integer, size=None):\n \"\"\"Return bit list representation of integer.\n\n If size is given, binary string is padded with 0s, or clipped to the size.\n \"\"\"\n # Materialize as a list so len() and slicing below work in Python 3\n binary_list = list(map(int, format(integer, 'b')))\n\n if size is None:\n return binary_list\n else:\n if len(binary_list) > size:\n # Too long, take only last n\n return binary_list[len(binary_list)-size:]\n elif size > len(binary_list):\n # Too short, pad\n return [0]*(size-len(binary_list)) + binary_list\n else:\n # Just right\n return binary_list"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef next_population(self, population, fitnesses):\n return common.make_population(self._population_size,\n self._generate_solution)", "response": "Make a 
new population after each optimization iteration."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _generate_solution(self):\n return common.random_real_solution(\n self._solution_size, self._lower_bounds, self._upper_bounds)", "response": "Return a single random solution."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmakes a new population after each optimization iteration.", "response": "def next_population(self, population, fitnesses):\n \"\"\"Make a new population after each optimization iteration.\n\n Args:\n population: The current population of solutions.\n fitnesses: The fitness associated with each solution in the population\n Returns:\n list; a list of solutions.\n \"\"\"\n return [self._next_solution() for _ in range(self._population_size)]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _build_tree(self):\n\n groups = self._groups or self.get_children_paths(self.root_path)\n for group in groups:\n node = Node(name=group, parent=self.root)\n self.root.children.append(node)\n self._init_sub_groups(node)", "response": "Build a full or partial tree based on the groups and sub-groups specified."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ninitialise sub-groups and create any that do not already exist.", "response": "def _init_sub_groups(self, parent):\n \"\"\"\n Initialise sub-groups, and create any that do not already exist.\n \"\"\"\n\n if self._sub_groups:\n for sub_group in self._sub_groups:\n for component in split_path_components(sub_group):\n fp = os.path.join(parent.full_path, component)\n if os.path.exists(fp):\n node = Node(name=component, parent=parent)\n parent.children.append(node)\n else:\n node = parent.create_cgroup(component)\n parent = node\n self._init_children(node)\n else:\n self._init_children(parent)"} {"SOURCE": "codesearchnet", 
"instruction": "Explain what the following Python 3 code does\ndef _init_children(self, parent):\n\n for dir_name in self.get_children_paths(parent.full_path):\n child = Node(name=dir_name, parent=parent)\n parent.children.append(child)\n self._init_children(child)", "response": "Initialize each node's children - essentially build the tree."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef full_path(self):\n\n if self.parent:\n return os.path.join(self.parent.full_path, self.name)\n return self.name", "response": "Absolute system path to the node"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_node_type(self):\n\n if self.parent is None:\n return self.NODE_ROOT\n elif self.parent.node_type == self.NODE_ROOT:\n return self.NODE_CONTROLLER_ROOT\n elif b\".slice\" in self.name or b'.partition' in self.name:\n return self.NODE_SLICE\n elif b\".scope\" in self.name:\n return self.NODE_SCOPE\n else:\n return self.NODE_CGROUP", "response": "Returns the current node's type"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_controller_type(self):\n\n if self.node_type == self.NODE_CONTROLLER_ROOT and self.name in self.CONTROLLERS:\n return self.name\n elif self.parent:\n return self.parent.controller_type\n else:\n return None", "response": "Returns the current node's controller type"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a cgroup by name and attaches it under this node.", "response": "def create_cgroup(self, name):\n \"\"\"\n Create a cgroup by name and attach it under this node.\n \"\"\"\n node = Node(name, parent=self)\n if node in self.children:\n raise RuntimeError('Node {} already exists under {}'.format(name, self.path))\n\n name = name.encode()\n fp = os.path.join(self.full_path, name)\n os.mkdir(fp)\n 
self.children.append(node)\n return node"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_cgroup(self, name):\n name = name.encode()\n fp = os.path.join(self.full_path, name)\n if os.path.exists(fp):\n os.rmdir(fp)\n node = Node(name, parent=self)\n try:\n self.children.remove(node)\n except ValueError:\n return", "response": "Delete a cgroup by name and detach it from this node. Raises OSError if the cgroup is not empty."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete_empty_children(self):\n for child in self.children:\n child.delete_empty_children()\n try:\n if os.path.exists(child.full_path):\n os.rmdir(child.full_path)\n except OSError: pass\n else: self.children.remove(child)", "response": "Delete any empty children of this node."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a node to the group.", "response": "def add_node(self, node):\n \"\"\"\n Add a Node object to the group. Only one node per cgroup is supported\n \"\"\"\n if self.controllers.get(node.controller_type, None):\n raise RuntimeError(\"Cannot add node {} to the node group. 
A node for {} group is already assigned\".format(\n node,\n node.controller_type\n ))\n self.nodes.append(node)\n if node.controller:\n self.controllers[node.controller_type] = node.controller\n setattr(self, node.controller_type, node.controller)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef group_tasks(self):\n tasks = set()\n for node in walk_tree(self):\n for ctrl in node.controllers.values():\n tasks.update(ctrl.tasks)\n return tasks", "response": "All tasks in the hierarchy affected by this group."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a set of all the tasks in this exact group.", "response": "def tasks(self):\n \"\"\"Tasks in this exact group\"\"\"\n tasks = set()\n for ctrl in self.controllers.values():\n tasks.update(ctrl.tasks)\n return tasks"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef filepath(self, filename):\n\n return os.path.join(self.node.full_path, filename)", "response": "The full path to a file"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_property(self, filename):\n\n with open(self.filepath(filename)) as f:\n return f.read().strip()", "response": "Opens the file and reads the value"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nopens the file and writes the value", "response": "def set_property(self, filename, value):\n \"\"\"Opens the file and writes the value\"\"\"\n\n with open(self.filepath(filename), \"w\") as f:\n return f.write(str(value))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef walk_tree(root):\n yield root\n\n for child in root.children:\n for el in walk_tree(child):\n yield el", "response": "Walks the tree and returns an iterator over the elements."} {"SOURCE": 
"codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef walk_up_tree(root):\n for child in root.children:\n for el in walk_up_tree(child):\n yield el\n\n yield root", "response": "Recursively walks up the tree and returns an iterator over the elements in the tree."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_device_major_minor(dev_path):\n stat = os.lstat(dev_path)\n return os.major(stat.st_rdev), os.minor(stat.st_rdev)", "response": "Returns the device major and minor tuple for simplicity\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nvalidate that this instance matches its schema.", "response": "def validate(self):\n \"\"\"\n Validate that this instance matches its schema.\n\n \"\"\"\n schema = Schema(self.__class__.SCHEMA)\n resolver = RefResolver.from_schema(\n schema,\n store=REGISTRY,\n )\n validate(self, schema, resolver=resolver)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef dumps(self):\n with closing(StringIO()) as fileobj:\n self.dump(fileobj)\n return fileobj.getvalue()", "response": "Dump this instance as YAML."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef loads(cls, s):\n with closing(StringIO(s)) as fileobj:\n return cls.load(fileobj)", "response": "Load an instance of this class from a string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef property_schema(self, key):\n schema = self.__class__.SCHEMA\n\n # first try plain properties\n plain_schema = schema.get(\"properties\", {}).get(key)\n if plain_schema is not None:\n return plain_schema\n # then try pattern properties\n pattern_properties = schema.get(\"patternProperties\", {})\n for pattern, pattern_schema in pattern_properties.items():\n if 
match(pattern, key):\n return pattern_schema\n # finally try additional properties (defaults to true per JSON Schema)\n return schema.get(\"additionalProperties\", True)", "response": "Lookup the schema for a specific property."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make(class_name, base, schema):\n return type(class_name, (base,), dict(SCHEMA=schema))", "response": "Create a new schema aware type."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef make_definition(name, base, schema):\n class_name = make_class_name(name)\n cls = register(make(class_name, base, schema))\n globals()[class_name] = cls", "response": "Create a new definition."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef register(cls):\n definition_name = make_definition_name(cls.__name__)\n REGISTRY[definition_name] = cls\n return cls", "response": "Register a class as a new node in the registry."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef lookup(schema):\n if not isinstance(schema, dict) or \"$ref\" not in schema:\n return None\n\n ref = schema[\"$ref\"]\n\n return REGISTRY.get(ref)", "response": "Lookup a class by property schema."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef connect(self):\n if self._connected or self.is_alive():\n raise ConnectionExistsError(\"Already connected\")\n # After starting the thread we wait for it to post us\n # an event signifying that connection is established. 
This\n # ensures that the caller only resumes when we are fully connected.\n self.start()\n with self._lock:\n self._connect_cond.wait_for(lambda: self._connected)", "response": "Connects to the lutron controller."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsending the specified command to the lutron controller.", "response": "def _send_locked(self, cmd):\n \"\"\"Sends the specified command to the lutron controller.\n\n Assumes self._lock is held.\n \"\"\"\n _LOGGER.debug(\"Sending: %s\" % cmd)\n try:\n self._telnet.write(cmd.encode('ascii') + b'\\r\\n')\n except BrokenPipeError:\n self._disconnect_locked()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _do_login_locked(self):\n self._telnet = telnetlib.Telnet(self._host)\n self._telnet.read_until(LutronConnection.USER_PROMPT)\n self._telnet.write(self._user + b'\\r\\n')\n self._telnet.read_until(LutronConnection.PW_PROMPT)\n self._telnet.write(self._password + b'\\r\\n')\n self._telnet.read_until(LutronConnection.PROMPT)\n\n self._send_locked(\"#MONITORING,12,2\")\n self._send_locked(\"#MONITORING,255,2\")\n self._send_locked(\"#MONITORING,3,1\")\n self._send_locked(\"#MONITORING,4,1\")\n self._send_locked(\"#MONITORING,5,1\")\n self._send_locked(\"#MONITORING,6,1\")\n self._send_locked(\"#MONITORING,8,1\")", "response": "Executes the login procedure and sets up some\n connection defaults like turning off the prompt etc."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncloses the current connection.", "response": "def _disconnect_locked(self):\n \"\"\"Closes the current connection. 
Assume self._lock is held.\"\"\"\n self._connected = False\n self._connect_cond.notify_all()\n self._telnet = None\n _LOGGER.warning(\"Disconnected\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _maybe_reconnect(self):\n with self._lock:\n if not self._connected: \n _LOGGER.info(\"Connecting\")\n self._do_login_locked()\n self._connected = True\n self._connect_cond.notify_all()\n _LOGGER.info(\"Connected\")", "response": "Reconnects to the controller if we have previously disconnected."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run(self):\n _LOGGER.info(\"Started\")\n while True:\n self._maybe_reconnect()\n line = ''\n try:\n # If someone is sending a command, we can lose our connection so grab a\n # copy beforehand. We don't need the lock because if the connection is\n # open, we are the only ones that will read from telnet (the reconnect\n # code runs synchronously in this loop).\n t = self._telnet\n if t is not None:\n line = t.read_until(b\"\\n\")\n except EOFError:\n try:\n self._lock.acquire()\n self._disconnect_locked()\n continue\n finally:\n self._lock.release()\n self._recv_cb(line.decode('ascii').rstrip())", "response": "Main thread function to maintain connection and receive remote status."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse(self):\n import xml.etree.ElementTree as ET\n\n root = ET.fromstring(self._xml_db_str)\n # The structure is something like this:\n # \n # \n # \n # \n # \n # \n # \n # \n\n # First area is useless, it's the top-level project area that defines the\n # \"house\". 
It contains the real nested Areas tree, which is the one we want.\n top_area = root.find('Areas').find('Area')\n self.project_name = top_area.get('Name')\n areas = top_area.find('Areas')\n for area_xml in areas.getiterator('Area'):\n area = self._parse_area(area_xml)\n self.areas.append(area) \n return True", "response": "Main entry point into the parser. It interprets and creates all the\n relevant Lutron objects and stuffs them into the appropriate hierarchy."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _parse_area(self, area_xml):\n area = Area(self._lutron,\n name=area_xml.get('Name'),\n integration_id=int(area_xml.get('IntegrationID')),\n occupancy_group_id=area_xml.get('OccupancyGroupAssignedToID'))\n for output_xml in area_xml.find('Outputs'):\n output = self._parse_output(output_xml)\n area.add_output(output)\n # device group in our case means keypad\n # device_group.get('Name') is the location of the keypad\n for device_group in area_xml.find('DeviceGroups'):\n if device_group.tag == 'DeviceGroup':\n devs = device_group.find('Devices')\n elif device_group.tag == 'Device':\n devs = [device_group]\n else:\n _LOGGER.info(\"Unknown tag in DeviceGroups child %s\" % device_group.tag)\n devs = []\n for device_xml in devs:\n if device_xml.tag != 'Device':\n continue\n if device_xml.get('DeviceType') in (\n 'SEETOUCH_KEYPAD',\n 'SEETOUCH_TABLETOP_KEYPAD',\n 'PICO_KEYPAD',\n 'HYBRID_SEETOUCH_KEYPAD',\n 'MAIN_REPEATER'):\n keypad = self._parse_keypad(device_xml)\n area.add_keypad(keypad)\n elif device_xml.get('DeviceType') == 'MOTION_SENSOR':\n motion_sensor = self._parse_motion_sensor(device_xml)\n area.add_sensor(motion_sensor)\n #elif device_xml.get('DeviceType') == 'VISOR_CONTROL_RECEIVER':\n return area", "response": "Parses an Area tag, which is effectively a room, depending on how the Lutron controller programming was done."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the 
following Python 3 function does\ndef _parse_output(self, output_xml):\n output = Output(self._lutron,\n name=output_xml.get('Name'),\n watts=int(output_xml.get('Wattage')),\n output_type=output_xml.get('OutputType'),\n integration_id=int(output_xml.get('IntegrationID')))\n return output", "response": "Parses an output from an XML response."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nparse a keypad device.", "response": "def _parse_keypad(self, keypad_xml):\n \"\"\"Parses a keypad device (the Visor receiver is technically a keypad too).\"\"\"\n keypad = Keypad(self._lutron,\n name=keypad_xml.get('Name'),\n integration_id=int(keypad_xml.get('IntegrationID')))\n components = keypad_xml.find('Components')\n if not components:\n return keypad\n for comp in components:\n if comp.tag != 'Component':\n continue\n comp_type = comp.get('ComponentType')\n if comp_type == 'BUTTON':\n button = self._parse_button(keypad, comp)\n keypad.add_button(button)\n elif comp_type == 'LED':\n led = self._parse_led(keypad, comp)\n keypad.add_led(led)\n return keypad"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _parse_button(self, keypad, component_xml):\n button_xml = component_xml.find('Button')\n name = button_xml.get('Engraving')\n button_type = button_xml.get('ButtonType')\n direction = button_xml.get('Direction')\n # Hybrid keypads have dimmer buttons which have no engravings.\n if button_type == 'SingleSceneRaiseLower':\n name = 'Dimmer ' + direction\n if not name:\n name = \"Unknown Button\"\n button = Button(self._lutron, keypad,\n name=name,\n num=int(component_xml.get('ComponentNumber')),\n button_type=button_type,\n direction=direction)\n return button", "response": "Parses a button device that is part of a keypad."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nparses an LED device that is part of a keypad.", "response": "def 
_parse_led(self, keypad, component_xml):\n \"\"\"Parses an LED device that is part of a keypad.\"\"\"\n component_num = int(component_xml.get('ComponentNumber'))\n led_num = component_num - 80\n led = Led(self._lutron, keypad,\n name=('LED %d' % led_num),\n led_num=led_num,\n component_num=component_num)\n return led"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses a motion sensor object.", "response": "def _parse_motion_sensor(self, sensor_xml):\n \"\"\"Parses a motion sensor object.\n\n TODO: We don't actually do anything with these yet. There's a lot of info\n that needs to be managed to do this right. We'd have to manage the occupancy\n groups, what's assigned to them, and when they go (un)occupied. We'll handle\n this later.\n \"\"\"\n return MotionSensor(self._lutron,\n name=sensor_xml.get('Name'),\n integration_id=int(sensor_xml.get('IntegrationID')))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef subscribe(self, obj, handler):\n if not isinstance(obj, LutronEntity):\n raise InvalidSubscription(\"Subscription target not a LutronEntity\")\n _LOGGER.warning(\"DEPRECATED: Subscribing via Lutron.subscribe is obsolete. 
\"\n \"Please use LutronEntity.subscribe\")\n if obj not in self._legacy_subscribers:\n self._legacy_subscribers[obj] = handler\n obj.subscribe(self._dispatch_legacy_subscriber, None)", "response": "Subscribes to status updates of the requested object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef register_id(self, cmd_type, obj):\n ids = self._ids.setdefault(cmd_type, {})\n if obj.id in ids:\n raise IntegrationIdExistsError\n self._ids[cmd_type][obj.id] = obj", "response": "Registers an object through its integration id to receive update\n notifications."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _dispatch_legacy_subscriber(self, obj, *args, **kwargs):\n if obj in self._legacy_subscribers:\n self._legacy_subscribers[obj](obj)", "response": "This dispatches the registered callback for obj. This is only used by legacy subscribers since new users should register with the target\n object directly."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _recv(self, line):\n if line == '':\n return\n # Only handle query response messages, which are also sent on remote status\n # updates (e.g. 
user manually pressed a keypad button)\n if line[0] != Lutron.OP_RESPONSE:\n _LOGGER.debug(\"ignoring %s\" % line)\n return\n parts = line[1:].split(',')\n cmd_type = parts[0]\n integration_id = int(parts[1])\n args = parts[2:]\n if cmd_type not in self._ids:\n _LOGGER.info(\"Unknown cmd %s (%s)\" % (cmd_type, line))\n return\n ids = self._ids[cmd_type]\n if integration_id not in ids:\n _LOGGER.warning(\"Unknown id %d (%s)\" % (integration_id, line))\n return\n obj = ids[integration_id]\n handled = obj.handle_update(args)", "response": "Invoked by the connection manager to process incoming data."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nformats and sends the requested command to the Lutron controller.", "response": "def send(self, op, cmd, integration_id, *args):\n \"\"\"Formats and sends the requested command to the Lutron controller.\"\"\"\n out_cmd = \",\".join(\n (cmd, str(integration_id)) + tuple((str(x) for x in args)))\n self._conn.send(op + out_cmd)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load_xml_db(self):\n\n import urllib.request\n xmlfile = urllib.request.urlopen('http://' + self._host + '/DbXmlInfo.xml')\n xml_db = xmlfile.read()\n xmlfile.close()\n _LOGGER.info(\"Loaded xml db\")\n\n parser = LutronXmlDbParser(lutron=self, xml_db_str=xml_db)\n assert(parser.parse()) # throw our own exception\n self._areas = parser.areas\n self._name = parser.project_name\n\n _LOGGER.info('Found Lutron project: %s, %d areas' % (\n self._name, len(self.areas)))\n\n return True", "response": "Load the Lutron database from the server."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef request(self, action):\n ev = threading.Event()\n first = False\n with self.__lock:\n if len(self.__events) == 0:\n first = True\n self.__events.append(ev)\n if first:\n action()\n return ev", "response": 
"Request an action to be performed in case one is not already in progress."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _dispatch_event(self, event: LutronEvent, params: Dict):\n for handler, context in self._subscribers:\n handler(self, context, event, params)", "response": "Dispatches the specified event to all the subscribers."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsubscribe to events from this entity.", "response": "def subscribe(self, handler: LutronEventHandler, context):\n \"\"\"Subscribes to events from this entity.\n\n handler: A callable object that takes the following arguments (in order)\n obj: the LutronEntity object that generated the event\n context: user-supplied (to subscribe()) context object\n event: the LutronEvent that was generated.\n params: a dict of event-specific parameters\n\n context: User-supplied, opaque object that will be passed to handler.\n \"\"\"\n self._subscribers.append((handler, context))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nhandle an update event for this object.", "response": "def handle_update(self, args):\n \"\"\"Handles an event update for this object, e.g. 
dimmer level change.\"\"\"\n _LOGGER.debug(\"handle_update %d -- %s\" % (self._integration_id, args))\n state = int(args[0])\n if state != Output._ACTION_ZONE_LEVEL:\n return False\n level = float(args[1])\n _LOGGER.debug(\"Updating %d(%s): s=%d l=%f\" % (\n self._integration_id, self._name, state, level))\n self._level = level\n self._query_waiters.notify()\n self._dispatch_event(Output.Event.LEVEL_CHANGED, {'level': self._level})\n return True"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the current output level by querying the remote controller.", "response": "def level(self):\n \"\"\"Returns the current output level by querying the remote controller.\"\"\"\n ev = self._query_waiters.request(self.__do_query_level)\n ev.wait(1.0)\n return self._level"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the new output level.", "response": "def level(self, new_level):\n \"\"\"Sets the new output level.\"\"\"\n if self._level == new_level:\n return\n self._lutron.send(Lutron.OP_EXECUTE, Output._CMD_TYPE, self._integration_id,\n Output._ACTION_ZONE_LEVEL, \"%.2f\" % new_level)\n self._level = new_level"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nhandling the specified action on this component.", "response": "def handle_update(self, action, params):\n \"\"\"Handle the specified action on this component.\"\"\"\n _LOGGER.debug('Keypad: \"%s\" Handling \"%s\" Action: %s Params: %s\"' % (\n self._keypad.name, self.name, action, params))\n return False"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef press(self):\n self._lutron.send(Lutron.OP_EXECUTE, Keypad._CMD_TYPE, self._keypad.id,\n self.component_number, Button._ACTION_PRESS)", "response": "Triggers a simulated button press to the Keypad."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in 
Python 3 where it\nhandles the specified action on this keypad.", "response": "def handle_update(self, action, params):\n \"\"\"Handle the specified action on this component.\"\"\"\n _LOGGER.debug('Keypad: \"%s\" %s Action: %s Params: %s\"' % (\n self._keypad.name, self, action, params))\n ev_map = {\n Button._ACTION_PRESS: Button.Event.PRESSED,\n Button._ACTION_RELEASE: Button.Event.RELEASED\n }\n if action not in ev_map:\n _LOGGER.debug(\"Unknown action %d for button %d in keypad %d\" % (\n action, self.number, self.keypad.name))\n return False\n self._dispatch_event(ev_map[action], {})\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef __do_query_state(self):\n self._lutron.send(Lutron.OP_QUERY, Keypad._CMD_TYPE, self._keypad.id,\n self.component_number, Led._ACTION_LED_STATE)", "response": "Helper method to perform the actual query for the current LED state."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the current LED state by querying the remote controller.", "response": "def state(self):\n \"\"\"Returns the current LED state by querying the remote controller.\"\"\"\n ev = self._query_waiters.request(self.__do_query_state)\n ev.wait(1.0)\n return self._state"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the new led state.", "response": "def state(self, new_state: bool):\n \"\"\"Sets the new led state.\n\n new_state: bool\n \"\"\"\n self._lutron.send(Lutron.OP_EXECUTE, Keypad._CMD_TYPE, self._keypad.id,\n self.component_number, Led._ACTION_LED_STATE,\n int(new_state))\n self._state = new_state"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef handle_update(self, action, params):\n _LOGGER.debug('Keypad: \"%s\" %s Action: %s Params: %s\"' % (\n self._keypad.name, self, action, params))\n if action != Led._ACTION_LED_STATE:\n 
_LOGGER.debug(\"Unknown action %d for led %d in keypad %d\" % (\n action, self.number, self.keypad.name))\n return False\n elif len(params) < 1:\n _LOGGER.debug(\"Unknown params %s (action %d on led %d in keypad %d)\" % (\n params, action, self.number, self.keypad.name))\n return False\n self._state = bool(params[0])\n self._query_waiters.notify()\n self._dispatch_event(Led.Event.STATE_CHANGED, {'state': self._state})\n return True", "response": "Handle the specified action on this component."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_button(self, button):\n self._buttons.append(button)\n self._components[button.component_number] = button", "response": "Adds a button that's part of this keypad. We'll use this to\n dispatch button events."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd an LED that's part of this keypad.", "response": "def add_led(self, led):\n \"\"\"Add an LED that's part of this keypad.\"\"\"\n self._leds.append(led)\n self._components[led.component_number] = led"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nvalidate a pattern against the STIX Pattern grammar.", "response": "def run_validator(pattern):\n \"\"\"\n Validates a pattern against the STIX Pattern grammar. Error messages are\n returned in a list. The test passes if the returned list is empty.\n \"\"\"\n\n start = ''\n if isinstance(pattern, six.string_types):\n start = pattern[:2]\n pattern = InputStream(pattern)\n\n if not start:\n start = pattern.readline()[:2]\n pattern.seek(0)\n\n parseErrListener = STIXPatternErrorListener()\n\n lexer = STIXPatternLexer(pattern)\n # it always adds a console listener by default... remove it.\n lexer.removeErrorListeners()\n\n stream = CommonTokenStream(lexer)\n\n parser = STIXPatternParser(stream)\n parser.buildParseTrees = False\n # it always adds a console listener by default... 
remove it.\n parser.removeErrorListeners()\n parser.addErrorListener(parseErrListener)\n\n # To improve error messages, replace \"\" in the literal\n # names with symbolic names. This is a hack, but seemed like\n # the simplest workaround.\n for i, lit_name in enumerate(parser.literalNames):\n if lit_name == u\"\":\n parser.literalNames[i] = parser.symbolicNames[i]\n\n parser.pattern()\n\n # replace with easier-to-understand error message\n if not (start[0] == '[' or start == '(['):\n parseErrListener.err_strings[0] = \"FAIL: Error found at line 1:0. \" \\\n \"input is missing square brackets\"\n\n return parseErrListener.err_strings"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nvalidate a user input string.", "response": "def validate(user_input, ret_errs=False, print_errs=False):\n \"\"\"\n Wrapper for run_validator function that returns True if the user_input\n contains a valid STIX pattern or False otherwise. The error messages may\n also be returned or printed based upon the ret_errs and print_errs arg\n values.\n \"\"\"\n\n errs = run_validator(user_input)\n passed = len(errs) == 0\n\n if print_errs:\n for err in errs:\n print(err)\n\n if ret_errs:\n return passed, errs\n\n return passed"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncontinues to validate patterns until it encounters EOF within a pattern file or Ctrl-C is pressed by the user.", "response": "def main():\n \"\"\"\n Continues to validate patterns until it encounters EOF within a pattern\n file or Ctrl-C is pressed by the user.\n \"\"\"\n parser = argparse.ArgumentParser(description='Validate STIX Patterns.')\n parser.add_argument('-f', '--file',\n help=\"Specify this arg to read patterns from a file.\",\n type=argparse.FileType(\"r\"))\n args = parser.parse_args()\n\n pass_count = fail_count = 0\n\n # I tried using a generator (where each iteration would run raw_input()),\n # but raw_input()'s behavior 
seems to change when called from within a\n # generator: I only get one line, then the generator completes! I don't\n # know why behavior changes...\n import functools\n if args.file:\n nextpattern = args.file.readline\n else:\n nextpattern = functools.partial(six.moves.input, \"Enter a pattern to validate: \")\n\n try:\n while True:\n pattern = nextpattern()\n if not pattern:\n break\n tests_passed, err_strings = validate(pattern, True)\n\n if tests_passed:\n print(\"\\nPASS: %s\" % pattern)\n pass_count += 1\n\n else:\n for err in err_strings:\n print(err, '\\n')\n fail_count += 1\n except (EOFError, KeyboardInterrupt):\n pass\n finally:\n if args.file:\n args.file.close()\n\n print(\"\\nPASSED:\", pass_count, \" patterns\")\n print(\"FAILED:\", fail_count, \" patterns\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a StringLiteral token to a plain string.", "response": "def _string_literal_to_string(string_literal_token):\n \"\"\"Converts the StringLiteral token to a plain string: get text content,\n removes quote characters, and unescapes it.\n\n :param string_literal_token: The string literal\n :return:\n \"\"\"\n token_text = string_literal_token.getText()\n return token_text[1:-1].replace(u\"\\\\'\", u\"'\"). 
\\\n replace(u\"\\\\\\\\\", u\"\\\\\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get(self, request, *args, **kwargs):\n if 'html' in request.GET:\n # Output HTML\n content = self.render_html(*args, **kwargs)\n return HttpResponse(content)\n\n else:\n # Output PDF\n content = self.render_pdf(*args, **kwargs)\n\n response = HttpResponse(content, content_type='application/pdf')\n\n if (not self.inline or 'download' in request.GET) and 'inline' not in request.GET:\n response['Content-Disposition'] = 'attachment; filename=%s' % self.get_filename()\n\n response['Content-Length'] = len(content)\n\n return response", "response": "Returns a HTTPResponse either of a PDF file HTML or a PDF file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef render_pdf(self, *args, **kwargs):\n html = self.render_html(*args, **kwargs)\n\n options = self.get_pdfkit_options()\n if 'debug' in self.request.GET and settings.DEBUG:\n options['debug-javascript'] = 1\n\n kwargs = {}\n wkhtmltopdf_bin = os.environ.get('WKHTMLTOPDF_BIN')\n if wkhtmltopdf_bin:\n kwargs['configuration'] = pdfkit.configuration(wkhtmltopdf=wkhtmltopdf_bin)\n\n pdf = pdfkit.from_string(html, False, options, **kwargs)\n\n return pdf", "response": "Render the PDF and returns as bytes."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_filename(self):\n if self.filename is None:\n name = splitext(basename(self.template_name))[0]\n return '{}.pdf'.format(name)\n\n return self.filename", "response": "Return the filename of the current object."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrender the template. 
:rtype: str", "response": "def render_html(self, *args, **kwargs):\n \"\"\"\n Renders the template.\n\n :rtype: str\n \"\"\"\n static_url = '%s://%s%s' % (self.request.scheme, self.request.get_host(), settings.STATIC_URL)\n media_url = '%s://%s%s' % (self.request.scheme, self.request.get_host(), settings.MEDIA_URL)\n\n with override_settings(STATIC_URL=static_url, MEDIA_URL=media_url):\n template = loader.get_template(self.template_name)\n context = self.get_context_data(*args, **kwargs)\n html = template.render(context)\n return html"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ninspect a pattern. This gives information regarding the sorts of operations content etc in use in the pattern.", "response": "def inspect(self):\n \"\"\"\n Inspect a pattern. This gives information regarding the sorts of\n operations, content, etc in use in the pattern.\n\n :return: Pattern information\n \"\"\"\n\n inspector = stix2patterns.inspector.InspectionListener()\n self.walk(inspector)\n\n return inspector.pattern_data()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwalk the parse tree using the given listener.", "response": "def walk(self, listener):\n \"\"\"Walk the parse tree, using the given listener. 
The listener\n should be a\n stix2patterns.grammars.STIXPatternListener.STIXPatternListener (or\n subclass) instance.\"\"\"\n antlr4.ParseTreeWalker.DEFAULT.walk(listener, self.__parse_tree)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef __do_parse(self, pattern_str):\n in_ = antlr4.InputStream(pattern_str)\n lexer = STIXPatternLexer(in_)\n lexer.removeErrorListeners() # remove the default \"console\" listener\n token_stream = antlr4.CommonTokenStream(lexer)\n\n parser = STIXPatternParser(token_stream)\n parser.removeErrorListeners() # remove the default \"console\" listener\n error_listener = ParserErrorListener()\n parser.addErrorListener(error_listener)\n\n # I found no public API for this...\n # The default error handler tries to keep parsing, and I don't\n # think that's appropriate here. (These error handlers are only for\n # handling the built-in RecognitionException errors.)\n parser._errHandler = antlr4.BailErrorStrategy()\n\n # To improve error messages, replace \"\" in the literal\n # names with symbolic names. This is a hack, but seemed like\n # the simplest workaround.\n for i, lit_name in enumerate(parser.literalNames):\n if lit_name == u\"\":\n parser.literalNames[i] = parser.symbolicNames[i]\n\n # parser.setTrace(True)\n\n try:\n tree = parser.pattern()\n # print(tree.toStringTree(recog=parser))\n\n return tree\n except antlr4.error.Errors.ParseCancellationException as e:\n # The cancellation exception wraps the real RecognitionException\n # which caused the parser to bail.\n real_exc = e.args[0]\n\n # I want to bail when the first error is hit. But I also want\n # a decent error message. When an error is encountered in\n # Parser.match(), the BailErrorStrategy produces the\n # ParseCancellationException. 
It is not a subclass of\n # RecognitionException, so none of the 'except' clauses which would\n # normally report an error are invoked.\n #\n # Error message creation is buried in the ErrorStrategy, and I can\n # (ab)use the API to get a message: register an error listener with\n # the parser, force an error report, then get the message out of the\n # listener. Error listener registration is above; now we force its\n # invocation. Wish this could be cleaner...\n parser._errHandler.reportError(parser, real_exc)\n\n # should probably chain exceptions if we can...\n # Should I report the cancellation or recognition exception as the\n # cause...?\n six.raise_from(ParseException(error_listener.error_message),\n real_exc)", "response": "Parses the given pattern and returns the parse tree."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef stdout(self):\n streams = self._payload.get('streams', None)\n return streams[0] if streams is not None and len(streams) >= 1 else ''", "response": "Returns the stdout of the job or None if the job is not running."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef stderr(self):\n streams = self._payload.get('streams', None)\n return streams[1] if streams is not None and len(streams) >= 2 else ''", "response": "Returns the stderr of the job"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning true if the job result exists", "response": "def exists(self):\n \"\"\"\n Returns true if the job is still running or zero-os still knows about this job ID\n\n After a job is finished, a job remains on zero-os for max of 5min where you still can read the job result\n after the 5 min is gone, the job result is no more fetchable\n :return: bool\n \"\"\"\n r = self._client._redis\n flag = '{}:flag'.format(self._queue)\n return bool(r.exists(flag))"} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 script for\nreturning True if the job is in running state False otherwise.", "response": "def running(self):\n \"\"\"\n Returns true if job still in running state\n :return:\n \"\"\"\n r = self._client._redis\n flag = '{}:flag'.format(self._queue)\n if bool(r.exists(flag)):\n return r.ttl(flag) is None\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nstreams the log messages from the process.", "response": "def stream(self, callback=None):\n \"\"\"\n Runtime copy of job messages. This required the 'stream` flag to be set to True otherwise it will\n not be able to copy any output, while it will block until the process exits.\n\n :note: This function will block until it reaches end of stream or the process is no longer running.\n\n :param callback: callback method that will get called for each received message\n callback accepts 3 arguments\n - level int: the log message levels, refer to the docs for available levels\n and their meanings\n - message str: the actual output message\n - flags int: flags associated with this message\n - 0x2 means EOF with success exit status\n - 0x4 means EOF with error\n\n for example (eof = flag & 0x6) eof will be true for last message u will ever\n receive on this callback.\n\n Note: if callback is none, a default callback will be used that prints output on stdout/stderr\n based on level.\n :return: None\n \"\"\"\n if callback is None:\n callback = Response.__default\n\n if not callable(callback):\n raise Exception('callback must be callable')\n\n queue = 'stream:%s' % self.id\n r = self._client._redis\n\n # we can terminate quickly by checking if the process is not running and it has no queued output.\n # if not self.running and r.llen(queue) == 0:\n # return\n\n while True:\n data = r.blpop(queue, 10)\n if data is None:\n if not self.running:\n break\n continue\n _, body = data\n payload = json.loads(body.decode())\n message = payload['message']\n line = 
message['message']\n meta = message['meta']\n callback(meta >> 16, line, meta & 0xff)\n\n if meta & 0x6 != 0:\n break"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwaits for a job to finish and return the result.", "response": "def get(self, timeout=None):\n \"\"\"\n Waits for a job to finish (max of given timeout seconds) and return job results. When a job exits get() will\n keep returning the same result until zero-os doesn't remember the job anymore (self.exists == False)\n\n :notes: the timeout here is a client side timeout, it's different than the timeout given to the job on start\n (like in system method) witch will cause the job to be killed if it exceeded this timeout.\n\n :param timeout: max time to wait for the job to finish in seconds\n :return: Return object\n \"\"\"\n if timeout is None:\n timeout = self._client.timeout\n r = self._client._redis\n start = time.time()\n maxwait = timeout\n while maxwait > 0:\n if not self.exists:\n raise JobNotFoundError(self.id)\n v = r.brpoplpush(self._queue, self._queue, min(maxwait, 10))\n if v is not None:\n payload = json.loads(v.decode())\n r = Return(payload)\n logger.debug('%s << %s, stdout=\"%s\", stderr=\"%s\", data=\"%s\"',\n self._id, r.state, r.stdout, r.stderr, r.data[:1000])\n return r\n logger.debug('%s still waiting (%ss)', self._id, int(time.time() - start))\n maxwait -= 10\n raise TimeoutError()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget response as json", "response": "def get(self, timeout=None):\n \"\"\"\n Get response as json, will fail if the job doesn't return a valid json response\n\n :param timeout: client side timeout in seconds\n :return: int\n \"\"\"\n result = super().get(timeout)\n if result.state != 'SUCCESS':\n raise ResultError(result.data, result.code)\n if result.level != 20:\n raise ResultError('not a json response: %d' % result.level, 406)\n\n return json.loads(result.data)"} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 script for\nlisting all running jobs", "response": "def list(self, id=None):\n \"\"\"\n List all running jobs\n\n :param id: optional ID for the job to list\n \"\"\"\n args = {'id': id}\n self._job_chk.check(args)\n return self._client.json('job.list', args)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef kill(self, id, signal=signal.SIGTERM):\n args = {\n 'id': id,\n 'signal': int(signal),\n }\n self._kill_chk.check(args)\n return self._client.json('job.kill', args)", "response": "Kill a job with given id"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef list(self, id=None):\n args = {'pid': id}\n self._process_chk.check(args)\n return self._client.json('process.list', args)", "response": "List all running processes"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef open(self, file, mode='r', perm=0o0644):\n args = {\n 'file': file,\n 'mode': mode,\n 'perm': perm,\n }\n\n return self._client.json('filesystem.open', args)", "response": "Opens a file on the node with the given path."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmove a path to a destination", "response": "def move(self, path, destination):\n \"\"\"\n Move a path to destination\n\n :param path: source\n :param destination: destination\n :return:\n \"\"\"\n args = {\n 'path': path,\n 'destination': destination,\n }\n\n return self._client.json('filesystem.move', args)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef chmod(self, path, mode, recursive=False):\n args = {\n 'path': path,\n 'mode': mode,\n 'recursive': recursive,\n }\n\n return self._client.json('filesystem.chmod', args)", "response": "Change file permissions of a file or directory."} {"SOURCE": "codesearchnet", 
"instruction": "Implement a Python 3 function for\nchanging file owner of path with user and group.", "response": "def chown(self, path, user, group, recursive=False):\n \"\"\"\n Change file/dir owner\n\n :param path: path of file/dir\n :param user: user name\n :param group: group name\n :param recursive: apply chown recursively\n :return:\n \"\"\"\n args = {\n 'path': path,\n 'user': user,\n 'group': group,\n 'recursive': recursive,\n }\n\n return self._client.json('filesystem.chown', args)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef read(self, fd):\n args = {\n 'fd': fd,\n }\n\n data = self._client.json('filesystem.read', args)\n return base64.decodebytes(data.encode())", "response": "Reads a block from the given file descriptor."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwriting a block of bytes to an open file descriptor.", "response": "def write(self, fd, bytes):\n \"\"\"\n Write a block of bytes to an open file descriptor (that is open with one of the writing modes\n\n :param fd: file descriptor\n :param bytes: bytes block to write\n :return:\n\n :note: don't overkill the node with large byte chunks, also for large file upload check the upload method.\n \"\"\"\n args = {\n 'fd': fd,\n 'block': base64.encodebytes(bytes).decode(),\n }\n\n return self._client.json('filesystem.write', args)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef upload(self, remote, reader):\n\n fd = self.open(remote, 'w')\n while True:\n chunk = reader.read(512 * 1024)\n if chunk == b'':\n break\n self.write(fd, chunk)\n self.close(fd)", "response": "Uploads a file to the local cache."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef download(self, remote, writer):\n\n fd = self.open(remote)\n while True:\n chunk = self.read(fd)\n if chunk == b'':\n break\n writer.write(chunk)\n self.close(fd)", "response": 
"Downloads a file from the node into a local writer."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef upload_file(self, remote, local):\n file = open(local, 'rb')\n try:\n self.upload(remote, file)\n finally:\n file.close()", "response": "Uploads a local file to the node."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef download_file(self, remote, local):\n file = open(local, 'wb')\n try:\n self.download(remote, file)\n finally:\n file.close()", "response": "Downloads a file from remote to local."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nimplement the low level command call, this needs to build the command structure and push it on the correct queue. :param command: Command name to execute supported by the node (ex: core.system, info.cpu, etc...) check documentation for list of built in commands :param arguments: A dict of required command arguments depends on the command name. :param queue: command queue (commands on the same queue are executed sequentially) :param max_time: kill job server side if it exceeded this amount of seconds :param stream: If True, process stdout and stderr are pushed to a special queue (stream:) so client can stream output :param tags: job tags :param id: job id. 
Generated if not supplied :return: Response object", "response": "def raw(self, command, arguments, queue=None, max_time=None, stream=False, tags=None, id=None):\n \"\"\"\n Implements the low level command call, this needs to build the command structure\n and push it on the correct queue.\n\n :param command: Command name to execute supported by the node (ex: core.system, info.cpu, etc...)\n check documentation for list of built in commands\n :param arguments: A dict of required command arguments depends on the command name.\n :param queue: command queue (commands on the same queue are executed sequentially)\n :param max_time: kill job server side if it exceeded this amount of seconds\n :param stream: If True, process stdout and stderr are pushed to a special queue (stream:) so\n client can stream output\n :param tags: job tags\n :param id: job id. Generated if not supplied\n :return: Response object\n \"\"\"\n raise NotImplementedError()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sync(self, command, arguments, tags=None, id=None):\n response = self.raw(command, arguments, tags=tags, id=id)\n\n result = response.get()\n if result.state != 'SUCCESS':\n raise ResultError(msg='%s' % result.data, code=result.code)\n\n\n return result", "response": "Run a command synchronously and wait for its result."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nexecutes a command and return the output of the command.", "response": "def system(self, command, dir='', stdin='', env=None, queue=None, max_time=None, stream=False, tags=None, id=None):\n \"\"\"\n Execute a command\n\n :param command: command to execute (with its arguments) ex: `ls -l /root`\n :param dir: CWD of command\n :param stdin: Stdin data to feed to the command stdin\n :param env: dict with ENV variables that will be exported to the command\n :param id: job id. 
Auto generated if not defined.\n :return:\n \"\"\"\n parts = shlex.split(command)\n if len(parts) == 0:\n raise ValueError('invalid command')\n\n args = {\n 'name': parts[0],\n 'args': parts[1:],\n 'dir': dir,\n 'stdin': stdin,\n 'env': env,\n }\n\n self._system_chk.check(args)\n response = self.raw(command='core.system', arguments=args,\n queue=queue, max_time=max_time, stream=stream, tags=tags, id=id)\n\n return response"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef bash(self, script, stdin='', queue=None, max_time=None, stream=False, tags=None, id=None):\n args = {\n 'script': script,\n 'stdin': stdin,\n }\n self._bash_chk.check(args)\n response = self.raw(command='bash', arguments=args,\n queue=queue, max_time=max_time, stream=stream, tags=tags, id=id)\n\n return response", "response": "Execute a bash script or run a process inside a bash shell."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef subscribe(self, job, id=None):\n return self.raw('core.subscribe', {'id': job}, stream=True, id=id)", "response": "Subscribes to the job logs."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nimplement the low level command call, this needs to build the command structure and push it on the correct queue. :param command: Command name to execute supported by the node (ex: core.system, info.cpu, etc...) check documentation for list of built in commands :param arguments: A dict of required command arguments depends on the command name. :param queue: command queue (commands on the same queue are executed sequentially) :param max_time: kill job server side if it exceeded this amount of seconds :param stream: If True, process stdout and stderr are pushed to a special queue (stream:) so client can stream output :param tags: job tags :param id: job id. 
Generated if not supplied :return: Response object", "response": "def raw(self, command, arguments, queue=None, max_time=None, stream=False, tags=None, id=None):\n \"\"\"\n Implements the low level command call, this needs to build the command structure\n and push it on the correct queue.\n\n :param command: Command name to execute supported by the node (ex: core.system, info.cpu, etc...)\n check documentation for list of built in commands\n :param arguments: A dict of required command arguments depends on the command name.\n :param queue: command queue (commands on the same queue are executed sequentially)\n :param max_time: kill job server side if it exceeded this amount of seconds\n :param stream: If True, process stdout and stderr are pushed to a special queue (stream:) so\n client can stream output\n :param tags: job tags\n :param id: job id. Generated if not supplied\n :return: Response object\n \"\"\"\n args = {\n 'container': self._container,\n 'command': {\n 'command': command,\n 'arguments': arguments,\n 'queue': queue,\n 'max_time': max_time,\n 'stream': stream,\n 'tags': tags,\n 'id': id,\n },\n }\n\n # check input\n self._raw_chk.check(args)\n\n response = self._client.raw('corex.dispatch', args)\n\n result = response.get()\n if result.state != 'SUCCESS':\n raise RuntimeError('failed to dispatch command to container: %s' % result.data)\n\n cmd_id = json.loads(result.data)\n return self._client.response_for(cmd_id)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new container with the given root filesystem and mount points.", "response": "def create(self, root_url, mount=None, host_network=False, nics=DefaultNetworking, port=None,\n hostname=None, privileged=False, storage=None, name=None, tags=None, identity=None, env=None,\n cgroups=None):\n \"\"\"\n Create a new container with the given root flist, mount points and\n zerotier id, and connected to the given bridges\n :param root_url: The root filesystem flist\n 
:param mount: a dict with {host_source: container_target} mount points.\n where host_source directory must exist.\n host_source can be a url to a flist to mount.\n :param host_network: Specify if the container should share the same network stack as the host.\n if True, container creation ignores both zerotier, bridge and ports arguments below. No\n errors are raised if they are provided.\n :param nics: Configure the attached nics to the container\n each nic object is a dict of the format\n {\n 'type': nic_type # one of default, bridge, zerotier, macvlan, passthrough, vlan, or vxlan (note, vlan and vxlan only supported by ovs)\n 'id': id # depends on the type\n bridge: bridge name,\n zerotier: network id,\n macvlan: the parent link name,\n passthrough: the link name,\n vlan: the vlan tag,\n vxlan: the vxlan id\n 'name': name of the nic inside the container (ignored in zerotier type)\n 'hwaddr': Mac address of nic.\n 'config': { # config is only honored for bridge, vlan, and vxlan types\n 'dhcp': bool,\n 'cidr': static_ip # ip/mask\n 'gateway': gateway\n 'dns': [dns]\n }\n }\n :param port: A dict of host_port: container_port pairs (only if default networking is enabled)\n Example:\n `port={8080: 80, 7000:7000}`\n :param hostname: Specific hostname you want to give to the container.\n if None it will automatically be set to core-x,\n x being the ID of the container\n :param privileged: If true, container runs in privileged mode.\n :param storage: A Url to the ardb storage to use to mount the root flist (or any other mount that requires g8fs)\n if not provided, the default one from core0 configuration will be used.\n :param name: Optional name for the container\n :param identity: Container Zerotier identity, Only used if at least one of the nics is of type zerotier\n :param env: a dict with the environment variables needed to be set for the container\n :param cgroups: custom list of cgroups to apply to this container on creation. 
formatted as [(subsystem, name), ...]\n please refer to the cgroup api for more details.\n \"\"\"\n\n if nics == self.DefaultNetworking:\n nics = [{'type': 'default'}]\n elif nics is None:\n nics = []\n\n args = {\n 'root': root_url,\n 'mount': mount,\n 'host_network': host_network,\n 'nics': nics,\n 'port': port,\n 'hostname': hostname,\n 'privileged': privileged,\n 'storage': storage,\n 'name': name,\n 'identity': identity,\n 'env': env,\n 'cgroups': cgroups,\n }\n\n # validate input\n self._create_chk.check(args)\n\n response = self._client.raw('corex.create', args, tags=tags)\n\n return JSONResponse(response)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find(self, *tags):\n tags = list(map(str, tags))\n return self._client.json('corex.find', {'tags': tags})", "response": "Find containers that match set of tags."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nterminating a container given its id", "response": "def terminate(self, container):\n \"\"\"\n Terminate a container given its id\n\n :param container: container id\n :return:\n \"\"\"\n self._client_chk.check(container)\n args = {\n 'container': int(container),\n }\n response = self._client.raw('corex.terminate', args)\n\n result = response.get()\n if result.state != 'SUCCESS':\n raise RuntimeError('failed to terminate container: %s' % result.data)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds a nic into a container", "response": "def nic_add(self, container, nic):\n \"\"\"\n Hot plug a nic into a container\n\n :param container: container ID\n :param nic: {\n 'type': nic_type # one of default, bridge, zerotier, macvlan, passthrough, vlan, or vxlan (note, vlan and vxlan only supported by ovs)\n 'id': id # depends on the type\n bridge: bridge name,\n zerotier: network id,\n macvlan: the parent link name,\n passthrough: the link name,\n vlan: the vlan tag,\n 
vxlan: the vxlan id\n 'name': name of the nic inside the container (ignored in zerotier type)\n 'hwaddr': Mac address of nic.\n 'config': { # config is only honored for bridge, vlan, and vxlan types\n 'dhcp': bool,\n 'cidr': static_ip # ip/mask\n 'gateway': gateway\n 'dns': [dns]\n }\n }\n :return:\n \"\"\"\n args = {\n 'container': container,\n 'nic': nic\n }\n self._nic_add.check(args)\n\n return self._client.json('corex.nic-add', args)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef nic_remove(self, container, index):\n args = {\n 'container': container,\n 'index': index\n }\n self._nic_remove.check(args)\n\n return self._client.json('corex.nic-remove', args)", "response": "Remove a nic from a container"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a client instance bound to the specified container.", "response": "def client(self, container):\n \"\"\"\n Return a client instance that is bound to that container.\n\n :param container: container id\n :return: Client object bound to the specified container id\n Return a ContainerResponse from container.create\n \"\"\"\n\n self._client_chk.check(container)\n return ContainerClient(self._client, int(container))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef backup(self, container, url):\n\n args = {\n 'container': container,\n 'url': url,\n }\n\n return JSONResponse(self._client.raw('corex.backup', args))", "response": "Backup a container to the given restic url"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrestoring the current state of a backedup container.", "response": "def restore(self, url, tags=None):\n \"\"\"\n Full restore of a container backup. 
This restore method will recreate\n an exact copy of the backed-up container (including same network setup, and other\n configurations as defined by the `create` method).\n\n To just restore the container data, and use new configuration, use the create method instead\n with the `root_url` set to `restic:`\n\n :param url: Snapshot url, the snapshot ID is passed as a url fragment\n examples:\n `file:///path/to/restic/repo?password=#`\n :param tags: this will always override the original container tags (even if not set)\n :return:\n \"\"\"\n args = {\n 'url': url,\n }\n\n return JSONResponse(self._client.raw('corex.restore', args, tags=tags))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create(self, name, hwaddr=None, network=None, nat=False, settings={}):\n args = {\n 'name': name,\n 'hwaddr': hwaddr,\n 'network': {\n 'mode': network,\n 'nat': nat,\n 'settings': settings,\n }\n }\n\n self._bridge_create_chk.check(args)\n\n return self._client.json('bridge.create', args)", "response": "Create a new bridge with the given name hwaddr and network and settings."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndelete a specific bridge by name.", "response": "def delete(self, bridge):\n \"\"\"\n Delete a bridge by name\n\n :param bridge: bridge name\n :return:\n \"\"\"\n args = {\n 'name': bridge,\n }\n\n self._bridge_chk.check(args)\n\n return self._client.json('bridge.delete', args)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef nic_add(self, bridge, nic):\n\n args = {\n 'name': bridge,\n 'nic': nic,\n }\n\n self._nic_add_chk.check(args)\n\n return self._client.json('bridge.nic-add', args)", "response": "Attach a nic to a bridge"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndetach a nic from a bridge", "response": "def nic_remove(self, nic):\n \"\"\"\n Detach a nic from a 
bridge\n\n :param nic: nic name to detach\n \"\"\"\n\n args = {\n 'nic': nic,\n }\n\n self._nic_remove_chk.check(args)\n\n return self._client.json('bridge.nic-remove', args)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef nic_list(self, bridge):\n\n args = {\n 'name': bridge,\n }\n\n self._bridge_chk.check(args)\n\n return self._client.json('bridge.nic-list', args)", "response": "List nics attached to a bridge"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef list(self):\n response = self._client.raw('disk.list', {})\n\n result = response.get()\n\n if result.state != 'SUCCESS':\n raise RuntimeError('failed to list disks: %s' % result.stderr)\n\n if result.level != 20: # 20 is JSON output.\n raise RuntimeError('invalid response type from disk.list command')\n\n data = result.data.strip()\n if data:\n return json.loads(data)\n else:\n return {}", "response": "List available block devices."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef mktable(self, disk, table_type='gpt'):\n args = {\n 'disk': disk,\n 'table_type': table_type,\n }\n\n self._mktable_chk.check(args)\n\n response = self._client.raw('disk.mktable', args)\n\n result = response.get()\n\n if result.state != 'SUCCESS':\n raise RuntimeError('failed to create table: %s' % result.stderr)", "response": "Make partition table on block device."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef getinfo(self, disk, part=''):\n args = {\n \"disk\": disk,\n \"part\": part,\n }\n\n self._getpart_chk.check(args)\n\n response = self._client.raw('disk.getinfo', args)\n\n result = response.get()\n\n if result.state != 'SUCCESS':\n raise RuntimeError('failed to get info: %s' % result.data)\n\n if result.level != 20: # 20 is JSON output.\n raise RuntimeError('invalid response type 
from disk.getinfo command')\n\n data = result.data.strip()\n if data:\n return json.loads(data)\n else:\n return {}", "response": "Get more info about a disk or a partition."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef mkpart(self, disk, start, end, part_type='primary'):\n args = {\n 'disk': disk,\n 'start': start,\n 'end': end,\n 'part_type': part_type,\n }\n\n self._mkpart_chk.check(args)\n\n response = self._client.raw('disk.mkpart', args)\n\n result = response.get()\n\n if result.state != 'SUCCESS':\n raise RuntimeError('failed to create partition: %s' % result.stderr)", "response": "Make a partition on the specified disk."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nremoves a partition from disk", "response": "def rmpart(self, disk, number):\n \"\"\"\n Remove partition from disk\n :param disk: device path (/dev/sda, /dev/sdb, etc...)\n :param number: Partition number (starting from 1)\n \"\"\"\n args = {\n 'disk': disk,\n 'number': number,\n }\n\n self._rmpart_chk.check(args)\n\n response = self._client.raw('disk.rmpart', args)\n\n result = response.get()\n\n if result.state != 'SUCCESS':\n raise RuntimeError('failed to remove partition: %s' % result.stderr)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mount(self, source, target, options=[]):\n\n if len(options) == 0:\n options = ['']\n\n args = {\n 'options': ','.join(options),\n 'source': source,\n 'target': target,\n }\n\n self._mount_chk.check(args)\n response = self._client.raw('disk.mount', args)\n\n result = response.get()\n\n if result.state != 'SUCCESS':\n raise RuntimeError('failed to mount partition: %s' % result.stderr)", "response": "Mount partition on target\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef umount(self, source):\n\n args = {\n 'source': 
source,\n }\n self._umount_chk.check(args)\n\n response = self._client.raw('disk.umount', args)\n\n result = response.get()\n\n if result.state != 'SUCCESS':\n raise RuntimeError('failed to umount partition: %s' % result.stderr)", "response": "Unmount partition\n :param source: Full partition path like /dev/sda1"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef seektime(self, disk):\n args = {\n 'disk': disk,\n }\n\n self._seektime_chk.check(args)\n\n return self._client.json(\"disk.seektime\", args)", "response": "Returns seek latency on disk"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a btrfs filesystem with the given label devices and profiles.", "response": "def create(self, label, devices, metadata_profile=\"\", data_profile=\"\", overwrite=False):\n \"\"\"\n Create a btrfs filesystem with the given label, devices, and profiles\n :param label: name/label\n :param devices : array of devices (/dev/sda1, etc...)\n :metadata_profile: raid0, raid1, raid5, raid6, raid10, dup or single\n :data_profile: same as metadata profile\n :overwrite: force creation of the filesystem. 
Overwrite any existing filesystem\n \"\"\"\n args = {\n 'label': label,\n 'metadata': metadata_profile,\n 'data': data_profile,\n 'devices': devices,\n 'overwrite': overwrite\n }\n\n self._create_chk.check(args)\n\n self._client.sync('btrfs.create', args)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd one or more devices to btrfs filesystem mounted under mountpoint.", "response": "def device_add(self, mountpoint, *device):\n \"\"\"\n Add one or more devices to btrfs filesystem mounted under `mountpoint`\n\n :param mountpoint: mount point of the btrfs system\n :param devices: one or more devices to add\n :return:\n \"\"\"\n if len(device) == 0:\n return\n\n args = {\n 'mountpoint': mountpoint,\n 'devices': device,\n }\n\n self._device_chk.check(args)\n\n self._client.sync('btrfs.device_add', args)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a btrfs subvolume in the specified path", "response": "def subvol_create(self, path):\n \"\"\"\n Create a btrfs subvolume in the specified path\n :param path: path to create\n \"\"\"\n args = {\n 'path': path\n }\n self._subvol_chk.check(args)\n self._client.sync('btrfs.subvol_create', args)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndelete a btrfs subvolume in the specified path", "response": "def subvol_delete(self, path):\n \"\"\"\n Delete a btrfs subvolume in the specified path\n :param path: path to delete\n \"\"\"\n args = {\n 'path': path\n }\n\n self._subvol_chk.check(args)\n\n self._client.sync('btrfs.subvol_delete', args)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef subvol_quota(self, path, limit):\n args = {\n 'path': path,\n 'limit': limit,\n }\n\n self._subvol_quota_chk.check(args)\n\n self._client.sync('btrfs.subvol_quota', args)", "response": "Apply a quota to a btrfs subvolume in the specified path"} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef subvol_snapshot(self, source, destination, read_only=False):\n\n args = {\n \"source\": source,\n \"destination\": destination,\n \"read_only\": read_only,\n }\n\n self._subvol_snapshot_chk.check(args)\n self._client.sync('btrfs.subvol_snapshot', args)", "response": "Take a snapshot of a subvolume"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef join(self, network):\n args = {'network': network}\n self._network_chk.check(args)\n response = self._client.raw('zerotier.join', args)\n result = response.get()\n\n if result.state != 'SUCCESS':\n raise RuntimeError('failed to join zerotier network: %s', result.stderr)", "response": "Join a zerotier network"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new kvm domain.", "response": "def create(self, name, media=None, flist=None, cpu=2, memory=512,\n nics=None, port=None, mount=None, tags=None, config=None):\n \"\"\"\n :param name: Name of the kvm domain\n :param media: (optional) array of media objects to attach to the machine, where the first object is the boot device\n each media object is a dict of {url, type} where type can be one of 'disk', or 'cdrom', or empty (default to disk)\n example: [{'url': 'nbd+unix:///test?socket=/tmp/ndb.socket'}, {'type': 'cdrom', 'url': '/somefile.iso'}]\n zdb example:\n [{'url': 'zdb://host:port?size=10G&blocksize=4096'},\n {'url': 'zdb+unix:///path/to/unix.socket?size=5G'}]\n :param flist: (optional) VM flist. 
A special bootable flist which has a correct boot.yaml file\n example: http://hub.gig.tech/azmy/ubuntu-zesty.flist\n :param cpu: number of vcpu cores\n :param memory: memory in MiB\n :param port: A dict of host_port: container_port pairs\n Example:\n `port={8080: 80, 7000:7000}`\n Only supported if default network is used\n :param nics: Configure the attached nics to the container\n each nic object is a dict of the format\n {\n 'type': nic_type # default, bridge, vlan, or vxlan (note, vlan and vxlan only supported by ovs)\n 'id': id # depends on the type, bridge name (bridge type) zerotier network id (zerotier type), the vlan tag or the vxlan id\n }\n :param port: Configure port forwards to vm, this only works if default network nic is added. Is a dict of {host-port: guest-port}\n :param mount: A list of host shared folders in the format {'source': '/host/path', 'target': '/guest/path', 'readonly': True|False}\n :param tags: A list of user defined tags (strings)\n :param config: a map with the config file path as a key and content as a value. This only works when creating a VM from an flist. 
The\n config files are written to the machine before booting.\n Example:\n config = {'/root/.ssh/authorized_keys': ''}\n\n If the machine is not booted from an flist, the config are discarded\n\n :note: At least one media or an flist must be provided.\n :return: uuid of the virtual machine\n \"\"\"\n\n if nics is None:\n nics = []\n\n args = {\n 'name': name,\n 'media': media,\n 'cpu': cpu,\n 'flist': flist,\n 'memory': memory,\n 'nics': nics,\n 'port': port,\n 'mount': mount,\n 'tags': tags,\n 'config': config,\n }\n self._create_chk.check(args)\n\n if media is None and flist is None:\n raise ValueError('need at least one boot media via media or an flist')\n\n return self._client.json('kvm.create', args, tags=tags)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprepare the migration target for the current kvm domain.", "response": "def prepare_migration_target(self, uuid, nics=None, port=None, tags=None):\n \"\"\"\n :param name: Name of the kvm domain that will be migrated\n :param port: A dict of host_port: container_port pairs\n Example:\n `port={8080: 80, 7000:7000}`\n Only supported if default network is used\n :param nics: Configure the attached nics to the container\n each nic object is a dict of the format\n {\n 'type': nic_type # default, bridge, vlan, or vxlan (note, vlan and vxlan only supported by ovs)\n 'id': id # depends on the type, bridge name (bridge type) zerotier network id (zertier type), the vlan tag or the vxlan id\n }\n :param uuid: uuid of machine to be migrated on old node\n :return:\n \"\"\"\n\n if nics is None:\n nics = []\n\n args = {\n 'nics': nics,\n 'port': port,\n 'uuid': uuid\n }\n self._migrate_network_chk.check(args)\n\n self._client.sync('kvm.prepare_migration_target', args, tags=tags)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef destroy(self, uuid):\n args = {\n 'uuid': uuid,\n }\n self._domain_action_chk.check(args)\n\n 
self._client.sync('kvm.destroy', args)", "response": "Destroy a kvm domain by uuid"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef shutdown(self, uuid):\n args = {\n 'uuid': uuid,\n }\n self._domain_action_chk.check(args)\n\n self._client.sync('kvm.shutdown', args)", "response": "Shutdown a kvm domain by uuid"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nresets a kvm domain by uuid", "response": "def reset(self, uuid):\n \"\"\"\n Reset (Force reboot) a kvm domain by uuid\n :param uuid: uuid of the kvm container (same as the used in create)\n :return:\n \"\"\"\n args = {\n 'uuid': uuid,\n }\n self._domain_action_chk.check(args)\n\n self._client.sync('kvm.reset', args)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npausing a kvm domain by uuid", "response": "def pause(self, uuid):\n \"\"\"\n Pause a kvm domain by uuid\n :param uuid: uuid of the kvm container (same as the used in create)\n :return:\n \"\"\"\n args = {\n 'uuid': uuid,\n }\n self._domain_action_chk.check(args)\n\n self._client.sync('kvm.pause', args)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nresume a kvm domain by uuid", "response": "def resume(self, uuid):\n \"\"\"\n Resume a kvm domain by uuid\n :param uuid: uuid of the kvm container (same as the used in create)\n :return:\n \"\"\"\n args = {\n 'uuid': uuid,\n }\n self._domain_action_chk.check(args)\n\n self._client.sync('kvm.resume', args)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef info(self, uuid):\n args = {\n 'uuid': uuid,\n }\n self._domain_action_chk.check(args)\n\n return self._client.json('kvm.info', args)", "response": "Get info about a kvm domain by uuid"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets info per second about a kvm domain by uuid", 
"response": "def infops(self, uuid):\n \"\"\"\n Get info per second about a kvm domain by uuid\n :param uuid: uuid of the kvm container (same as the used in create)\n :return:\n \"\"\"\n args = {\n 'uuid': uuid,\n }\n self._domain_action_chk.check(args)\n\n return self._client.json('kvm.infops', args)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef attach_disk(self, uuid, media):\n args = {\n 'uuid': uuid,\n 'media': media,\n }\n self._man_disk_action_chk.check(args)\n\n self._client.sync('kvm.attach_disk', args)", "response": "Attach a disk to a kvm container"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_nic(self, uuid, type, id=None, hwaddr=None):\n args = {\n 'uuid': uuid,\n 'type': type,\n 'id': id,\n 'hwaddr': hwaddr,\n }\n self._man_nic_action_chk.check(args)\n\n return self._client.json('kvm.add_nic', args)", "response": "Add a nic to a kvm container"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef limit_disk_io(self, uuid, media, totalbytessecset=False, totalbytessec=0, readbytessecset=False, readbytessec=0, writebytessecset=False,\n writebytessec=0, totaliopssecset=False, totaliopssec=0, readiopssecset=False, readiopssec=0, writeiopssecset=False, writeiopssec=0,\n totalbytessecmaxset=False, totalbytessecmax=0, readbytessecmaxset=False, readbytessecmax=0, writebytessecmaxset=False, writebytessecmax=0,\n totaliopssecmaxset=False, totaliopssecmax=0, readiopssecmaxset=False, readiopssecmax=0, writeiopssecmaxset=False, writeiopssecmax=0,\n totalbytessecmaxlengthset=False, totalbytessecmaxlength=0, readbytessecmaxlengthset=False, readbytessecmaxlength=0,\n writebytessecmaxlengthset=False, writebytessecmaxlength=0, totaliopssecmaxlengthset=False, totaliopssecmaxlength=0,\n readiopssecmaxlengthset=False, readiopssecmaxlength=0, writeiopssecmaxlengthset=False, writeiopssecmaxlength=0, 
sizeiopssecset=False,\n                      sizeiopssec=0, groupnameset=False, groupname=''):\n        \"\"\"\n        Limit disk IO for a kvm domain\n        :param uuid: uuid of the kvm container (same as the used in create)\n        :param media: the media to limit the disk IO on\n        :return:\n        \"\"\"\n        args = {\n            'uuid': uuid,\n            'media': media,\n            'totalbytessecset': totalbytessecset,\n            'totalbytessec': totalbytessec,\n            'readbytessecset': readbytessecset,\n            'readbytessec': readbytessec,\n            'writebytessecset': writebytessecset,\n            'writebytessec': writebytessec,\n            'totaliopssecset': totaliopssecset,\n            'totaliopssec': totaliopssec,\n            'readiopssecset': readiopssecset,\n            'readiopssec': readiopssec,\n            'writeiopssecset': writeiopssecset,\n            'writeiopssec': writeiopssec,\n            'totalbytessecmaxset': totalbytessecmaxset,\n            'totalbytessecmax': totalbytessecmax,\n            'readbytessecmaxset': readbytessecmaxset,\n            'readbytessecmax': readbytessecmax,\n            'writebytessecmaxset': writebytessecmaxset,\n            'writebytessecmax': writebytessecmax,\n            'totaliopssecmaxset': totaliopssecmaxset,\n            'totaliopssecmax': totaliopssecmax,\n            'readiopssecmaxset': readiopssecmaxset,\n            'readiopssecmax': readiopssecmax,\n            'writeiopssecmaxset': writeiopssecmaxset,\n            'writeiopssecmax': writeiopssecmax,\n            'totalbytessecmaxlengthset': totalbytessecmaxlengthset,\n            'totalbytessecmaxlength': totalbytessecmaxlength,\n            'readbytessecmaxlengthset': readbytessecmaxlengthset,\n            'readbytessecmaxlength': readbytessecmaxlength,\n            'writebytessecmaxlengthset': writebytessecmaxlengthset,\n            'writebytessecmaxlength': writebytessecmaxlength,\n            'totaliopssecmaxlengthset': totaliopssecmaxlengthset,\n            'totaliopssecmaxlength': totaliopssecmaxlength,\n            'readiopssecmaxlengthset': readiopssecmaxlengthset,\n            'readiopssecmaxlength': readiopssecmaxlength,\n            'writeiopssecmaxlengthset': writeiopssecmaxlengthset,\n            'writeiopssecmaxlength': writeiopssecmaxlength,\n            'sizeiopssecset': sizeiopssecset,\n            'sizeiopssec': sizeiopssec,\n            'groupnameset': groupnameset,\n            'groupname': groupname,\n        }\n        
self._limit_disk_io_action_chk.check(args)\n\n self._client.sync('kvm.limit_disk_io', args)", "response": "Limit disk IO for a given kvm container."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmigrates a vm to another node", "response": "def migrate(self, uuid, desturi):\n \"\"\"\n Migrate a vm to another node\n :param uuid: uuid of the kvm container (same as the used in create)\n :param desturi: the uri of the destination node\n :return:\n \"\"\"\n args = {\n 'uuid': uuid,\n 'desturi': desturi,\n }\n self._migrate_action_chk.check(args)\n\n self._client.sync('kvm.migrate', args)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get(self, uuid):\n args = {'uuid':uuid}\n self._get_chk.check(args)\n return self._client.json('kvm.get', args)", "response": "Get a specific key - value pair for a given domain uuid"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef set_level(self, level):\n args = {\n 'level': level,\n }\n self._level_chk.check(args)\n\n return self._client.json('logger.set_level', args)", "response": "Set the log level of the g8oscale log file"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef subscribe(self, queue=None, *levels):\n args = {\n 'queue': queue,\n 'levels': list(levels),\n }\n\n self._subscribe_chk.check(args)\n\n return self._client.json('logger.subscribe', args)", "response": "Subscribe to the aggregated log stream."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef drop_port(self, port, interface=None, subnet=None):\n args = {\n 'port': port,\n 'interface': interface,\n 'subnet': subnet,\n }\n self._port_chk.check(args)\n\n return self._client.json('nft.drop_port', args)", "response": "Drop an opened port"} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate a brief explanation for the following Python 3 code\ndef query(self, key=None, **tags):\n        args = {\n            'key': key,\n            'tags': tags,\n        }\n        self._query_chk.check(args)\n\n        return self._client.json('aggregator.query', args)", "response": "Query zero-os aggregator for current state object of monitored metrics."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nstarts rtinfo-client", "response": "def start(self, host=\"localhost\", port=8999, disks=None):\n        \"\"\"\n        Start rtinfo-client\n        :param host: str rtinfod host address\n        :param port: int rtinfod client port\n        :param disks: list of prefixes of watchable disks (e.g. [\"sd\"])\n\n        \"\"\"\n        disks = [] if disks is None else disks\n\n        args = {\n            \"host\": host,\n            \"port\": port,\n            \"disks\": disks\n        }\n        self._rtinfo_start_params_chk.check(args)\n\n        return self._client.json(\"rtinfo.start\", args)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef stop(self, host, port):\n\n        args = {\n            \"host\": host,\n            \"port\": port,\n        }\n        self._rtinfo_stop_params_chk.check(args)\n\n        return self._client.json(\"rtinfo.stop\", args)", "response": "Stop rtinfo-client"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a cgroup if it doesn't exist under the specified subsystem and the given name.", "response": "def ensure(self, subsystem, name):\n        \"\"\"\n        Creates a cgroup if it doesn't exist under the specified subsystem\n        and the given name\n\n        :param subsystem: the cgroup subsystem (currently support 'memory', and 'cpuset')\n        :param name: name of the cgroup to create\n        \"\"\"\n        args = {\n            'subsystem': subsystem,\n            'name': name,\n        }\n\n        self._cgroup_chk.check(args)\n        return self._client.json('cgroup.ensure', args)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd a process to a cgroup", "response": "def task_add(self, 
subsystem, name, pid):\n \"\"\"\n Add process (with pid) to a cgroup\n\n :param subsystem: the cgroup subsystem (currently support 'memory', and 'cpuset')\n :param name: name of the cgroup\n :param pid: PID to add\n \"\"\"\n\n args = {\n 'subsystem': subsystem,\n 'name': name,\n 'pid': pid,\n }\n\n self._task_chk.check(args)\n return self._client.json('cgroup.task-add', args)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the memory cgroup specification and limits for both mem and swap.", "response": "def memory(self, name, mem=0, swap=0):\n \"\"\"\n Set/Get memory cgroup specification/limitation\n the call to this method will always GET the current set values for both mem and swap.\n If mem is not zero, the memory will set the memory limit to the given value, and swap to the given value (even 0)\n\n :param mem: Set memory limit to the given value (in bytes), ignore if 0\n :param swap: Set swap limit to the given value (in bytes) (only if mem is not zero)\n\n :return: current memory limitation\n \"\"\"\n\n args = {\n 'name': name,\n 'mem': mem,\n 'swap': swap,\n }\n\n self._memory_spec.check(args)\n return self._client.json('cgroup.memory.spec', args)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the cpuset cgroup specification and limitation", "response": "def cpuset(self, name, cpus=None, mems=None):\n \"\"\"\n Set/Get cpuset cgroup specification/limitation\n the call to this method will always GET the current set values for both cpus and mems\n If cpus, or mems is NOT NONE value it will be set as the spec for that attribute\n\n :param cpus: Set cpus affinity limit to the given value (0, 1, 0-10, etc...)\n :param mems: Set mems affinity limit to the given value (0, 1, 0-10, etc...)\n\n :return: current cpuset\n \"\"\"\n\n args = {\n 'name': name,\n 'cpus': cpus,\n 'mems': mems,\n }\n\n self._cpuset_spec.check(args)\n return self._client.json('cgroup.cpuset.spec', args)"} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nimplementing the low level command call, this needs to build the command structure and push it on the correct queue. :param command: Command name to execute supported by the node (ex: core.system, info.cpu, etc...) check documentation for list of built in commands :param arguments: A dict of required command arguments depends on the command name. :param queue: command queue (commands on the same queue are executed sequentially) :param max_time: kill job server side if it exceeded this amount of seconds :param stream: If True, process stdout and stderr are pushed to a special queue (stream:) so client can stream output :param tags: job tags :param id: job id. Generated if not supplied :return: Response object", "response": "def raw(self, command, arguments, queue=None, max_time=None, stream=False, tags=None, id=None):\n \"\"\"\n Implements the low level command call, this needs to build the command structure\n and push it on the correct queue.\n\n :param command: Command name to execute supported by the node (ex: core.system, info.cpu, etc...)\n check documentation for list of built in commands\n :param arguments: A dict of required command arguments depends on the command name.\n :param queue: command queue (commands on the same queue are executed sequentially)\n :param max_time: kill job server side if it exceeded this amount of seconds\n :param stream: If True, process stdout and stderr are pushed to a special queue (stream:) so\n client can stream output\n :param tags: job tags\n :param id: job id. 
Generated if not supplied\n        :return: Response object\n        \"\"\"\n        if not id:\n            id = str(uuid.uuid4())\n\n        payload = {\n            'id': id,\n            'command': command,\n            'arguments': arguments,\n            'queue': queue,\n            'max_time': max_time,\n            'stream': stream,\n            'tags': tags,\n        }\n\n        self._raw_chk.check(payload)\n        flag = 'result:{}:flag'.format(id)\n        self._redis.rpush('core:default', json.dumps(payload))\n        if self._redis.brpoplpush(flag, flag, DefaultTimeout) is None:\n            raise TimeoutError('failed to queue job {}'.format(id))\n        logger.debug('%s >> g8core.%s(%s)', id, command, ', '.join((\"%s=%s\" % (k, v) for k, v in arguments.items())))\n\n        return Response(self, id)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef calczeta(phi1, phi2, theta1, theta2):\n    \n    zeta = 0.0\n    \n    if phi1 == phi2 and theta1 == theta2:\n        zeta = 0.0\n    else:\n        argument = sin(theta1)*sin(theta2)*cos(phi1-phi2) + \\\n                   cos(theta1)*cos(theta2)\n        \n        if argument < -1:\n            zeta = np.pi\n        elif argument > 1:\n            zeta = 0.0\n        else:\n            zeta = acos(argument)\n        \n    return zeta", "response": "Calculates the angular separation between two positions."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn value of d^l_mk as defined in allen ottewill 97.", "response": "def dlmk(l,m,k,theta1):\n    \"\"\"\n    returns value of d^l_mk as defined in allen, ottewill 97.\n    Called by Dlmk\n    \n    \"\"\"\n    \n    if m >= k:\n        \n        factor = sqrt(factorial(l-k)*factorial(l+m)/factorial(l+k)/factorial(l-m))\n        part2 = (cos(theta1/2))**(2*l+k-m)*(-sin(theta1/2))**(m-k)/factorial(m-k)\n        part3 = sp.hyp2f1(m-l,-k-l,m-k+1,-(tan(theta1/2))**2)\n\n        return factor*part2*part3\n        \n    else:\n        \n        return (-1)**(m-k) * dlmk(l,k,m,theta1)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn D^l_mk for a given l m k and phi1 and phi2", "response": "def Dlmk(l,m,k,phi1,phi2,theta1,theta2):\n    \"\"\"\n    returns value of D^l_mk as defined in allen, ottewill 97.\n
\n \"\"\"\n \n return exp(complex(0.,-m*phi1)) * dlmk(l,m,k,theta1) * \\\n exp(complex(0.,-k*gamma(phi1,phi2,theta1,theta2)))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncalculate third rotation angle for a single object", "response": "def gamma(phi1,phi2,theta1,theta2):\n \"\"\"\n calculate third rotation angle\n inputs are angles from 2 pulsars\n returns the angle.\n \n \"\"\"\n \n if phi1 == phi2 and theta1 == theta2:\n gamma = 0\n else:\n gamma = atan( sin(theta2)*sin(phi2-phi1) / \\\n (cos(theta1)*sin(theta2)*cos(phi1-phi2) - \\\n sin(theta1)*cos(theta2)) )\n\n dummy_arg = (cos(gamma)*cos(theta1)*sin(theta2)*cos(phi1-phi2) + \\\n sin(gamma)*sin(theta2)*sin(phi2-phi1) - \\\n cos(gamma)*sin(theta1)*cos(theta2))\n \n if dummy_arg >= 0:\n return gamma\n else:\n return pi + gamma"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef rotated_Gamma_ml(m,l,phi1,phi2,theta1,theta2,gamma_ml):\n \n rotated_gamma = 0\n \n for ii in range(2*l+1):\n rotated_gamma += Dlmk(l,m,ii-l,phi1,phi2,theta1,theta2).conjugate()*gamma_ml[ii]\n\n return rotated_gamma", "response": "This function rotates any gamma in the computational frame to the cosmic frame."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the total chisq for the current timing solution.", "response": "def chisq(psr,formbats=False):\n \"\"\"Return the total chisq for the current timing solution,\n removing noise-averaged mean residual, and ignoring deleted points.\"\"\"\n \n if formbats:\n psr.formbats()\n\n res, err = psr.residuals(removemean=False)[psr.deleted == 0], psr.toaerrs[psr.deleted == 0]\n \n res -= numpy.sum(res/err**2) / numpy.sum(1/err**2)\n\n return numpy.sum(res * res / (1e-12 * err * err))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning gradient of total chisq for the current timing solution.", "response": "def 
dchisq(psr,formbats=False,renormalize=True):\n \"\"\"Return gradient of total chisq for the current timing solution,\n after removing noise-averaged mean residual, and ignoring deleted points.\"\"\"\n \n if formbats:\n psr.formbats()\n\n res, err = psr.residuals(removemean=False)[psr.deleted == 0], psr.toaerrs[psr.deleted == 0]\n\n res -= numpy.sum(res/err**2) / numpy.sum(1/err**2)\n \n # bats already updated by residuals(); skip constant-phase column\n M = psr.designmatrix(updatebats=False,fixunits=True,fixsigns=True)[psr.deleted==0,1:]\n\n # renormalize design-matrix columns\n if renormalize:\n norm = numpy.sqrt(numpy.sum(M**2,axis=0))\n M /= norm\n else:\n norm = 1.0\n\n # compute chisq derivative, de-renormalize\n dr = -2 * numpy.dot(M.T,res / (1e-12 * err**2)) * norm\n \n return dr"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nuses scipy. optimize. minimize to find minimum - chisq timing solution.", "response": "def findmin(psr,method='Nelder-Mead',history=False,formbats=False,renormalize=True,bounds={},**kwargs):\n \"\"\"Use scipy.optimize.minimize to find minimum-chisq timing solution,\n passing through all extra options. Resets psr[...].val to the final solution,\n and returns the final chisq. 
Will use chisq gradient if method requires it.\n Ignores deleted points.\"\"\"\n\n ctr, err = psr.vals(), psr.errs()\n\n # to avoid losing precision, we're searching in units of parameter errors\n \n if numpy.any(err == 0.0):\n print(\"Warning: one or more fit parameters have zero a priori error, and won't be searched.\")\n\n hloc, hval = [], []\n\n def func(xs):\n psr.vals([c + x*e for x,c,e in zip(xs,ctr,err)])\n\n ret = chisq(psr,formbats=formbats)\n\n if numpy.isnan(ret):\n print(\"Warning: chisq is nan at {0}.\".format(psr.vals()))\n\n if history:\n hloc.append(psr.vals())\n hval.append(ret)\n\n return ret\n\n def dfunc(xs):\n psr.vals([c + x*e for x,c,e in zip(xs,ctr,err)])\n\n dc = dchisq(psr,formbats=formbats,renormalize=renormalize)\n ret = numpy.array([d*e for d,e in zip(dc,err)],'d')\n\n return ret\n\n opts = kwargs.copy()\n\n if method not in ['Nelder-Mead','Powell']:\n opts['jac'] = dfunc\n\n if method in ['L-BFGS-B']:\n opts['bounds'] = [(float((bounds[par][0] - ctr[i])/err[i]),\n float((bounds[par][1] - ctr[i])/err[i])) if par in bounds else (None,None)\n for i,par in enumerate(psr.pars())]\n\n res = scipy.optimize.minimize(func,[0.0]*len(ctr),method=method,**opts)\n\n if hasattr(res,'message'):\n print(res.message)\n\n # this will also set parameters to the minloc\n minchisq = func(res.x)\n\n if history:\n return minchisq, numpy.array(hval), numpy.array(hloc)\n else: \n return minchisq"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsolve local GLS problem using scipy. linalg. cholesky. Update psr. val and psr. 
err from solution.", "response": "def glsfit(psr,renormalize=True):\n    \"\"\"Solve local GLS problem using scipy.linalg.cholesky.\n    Update psr[...].val and psr[...].err from solution.\n    If renormalize=True, normalize each design-matrix column by its norm.\"\"\"\n    \n    mask = psr.deleted == 0\n    res, err = psr.residuals(removemean=False)[mask], psr.toaerrs[mask]\n    M = psr.designmatrix(updatebats=False,fixunits=True,fixsigns=True)[mask,:]\n    \n    C = numpy.diag((err * 1e-6)**2)\n    \n    if renormalize:\n        norm = numpy.sqrt(numpy.sum(M**2,axis=0))\n        M /= norm\n    else:\n        norm = numpy.ones_like(M[0,:])\n    \n    mtcm = numpy.dot(M.T,numpy.dot(numpy.linalg.inv(C),M))\n    mtcy = numpy.dot(M.T,numpy.dot(numpy.linalg.inv(C),res))\n    \n    xvar = numpy.linalg.inv(mtcm)\n    \n    c = scipy.linalg.cho_factor(mtcm)\n    xhat = scipy.linalg.cho_solve(c,mtcy)\n\n    sol = psr.vals()\n    psr.vals(sol + xhat[1:] / norm[1:])\n    psr.errs(numpy.sqrt(numpy.diag(xvar)[1:]) / norm[1:])\n    \n    return chisq(psr)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_fourier_design_matrix(t, nmodes, freq=False, Tspan=None,\n                                 logf=False, fmin=None, fmax=None):\n    \"\"\"\n    Construct fourier design matrix from eq 11 of Lentati et al, 2013\n\n    :param t: vector of time series in seconds\n    :param nmodes: number of fourier coefficients to use\n    :param freq: option to output frequencies\n    :param Tspan: option to use some other Tspan\n    :param logf: use log frequency spacing\n    :param fmin: lower sampling frequency\n    :param fmax: upper sampling frequency\n\n    :return: F: fourier design matrix\n    :return: f: Sampling frequencies (if freq=True)\n    \"\"\"\n\n    N = len(t)\n    F = np.zeros((N, 2 * nmodes))\n\n    if Tspan is not None:\n        T = Tspan\n    else:\n        T = t.max() - t.min()\n\n    # define sampling frequencies\n    if fmin is not None and fmax is not None:\n        f = np.linspace(fmin, fmax, nmodes)\n    else:\n        f = np.linspace(1 / T, nmodes / T, nmodes)\n    if logf:\n        f = np.logspace(np.log10(1 / T), 
np.log10(nmodes / T), nmodes)\n\n    Ffreqs = np.zeros(2 * nmodes)\n    Ffreqs[0::2] = f\n    Ffreqs[1::2] = f\n\n    F[:,::2] = np.sin(2*np.pi*t[:,None]*f[None,:])\n    F[:,1::2] = np.cos(2*np.pi*t[:,None]*f[None,:])\n\n    if freq:\n        return F, Ffreqs\n    else:\n        return F", "response": "Create a fourier design matrix from eq 11 of Lentati et al 2013"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef powerlaw(f, log10_A=-16, gamma=5):\n\n    fyr = 1 / 3.16e7\n    return (10**log10_A)**2 / 12.0 / np.pi**2 * fyr**(gamma-3) * f**(-gamma)", "response": "Power-law PSD."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_gwb(psr, dist=1, ngw=1000, seed=None, flow=1e-8, fhigh=1e-5,\n            gwAmp=1e-20, alpha=-0.66, logspacing=True):\n    \"\"\"Add a stochastic background from inspiraling binaries, using the tempo2\n    code that underlies the GWbkgrd plugin.\n\n    Here 'dist' is the pulsar distance [in kpc]; 'ngw' is the number of binaries,\n    'seed' (a negative integer) reseeds the GWbkgrd pseudorandom-number-generator,\n    'flow' and 'fhigh' [Hz] determine the background band, 'gwAmp' and 'alpha'\n    determine its amplitude and exponent, and setting 'logspacing' to False\n    will use linear spacing for the individual sources.\n\n    It is also possible to create a background object with\n\n    gwb = GWB(ngw,seed,flow,fhigh,gwAmp,alpha,logspacing)\n\n    then call the method gwb.add_gwb(pulsar[i],dist) repeatedly to get a\n    consistent background for multiple pulsars.\n\n    Returns the GWB object\n    \"\"\"\n    \n    gwb = GWB(ngw,seed,flow,fhigh,gwAmp,alpha,logspacing)\n    gwb.add_gwb(psr,dist)\n\n    return gwb", "response": "Add a stochastic background from inspiraling binaries using the GWbkgrd pseudorandom-number-generator."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds a stochastic background from inspiraling binaries distributed according to a pure 
dipole distribution.", "response": "def add_dipole_gwb(psr, dist=1, ngw=1000, seed=None, flow=1e-8,\n fhigh=1e-5, gwAmp=1e-20, alpha=-0.66,\n logspacing=True, dipoleamps=None, \n dipoledir=None, dipolemag=None):\n \n \"\"\"Add a stochastic background from inspiraling binaries distributed\n according to a pure dipole distribution, using the tempo2\n code that underlies the GWdipolebkgrd plugin.\n\n The basic use is identical to that of 'add_gwb':\n Here 'dist' is the pulsar distance [in kpc]; 'ngw' is the number of binaries,\n 'seed' (a negative integer) reseeds the GWbkgrd pseudorandom-number-generator,\n 'flow' and 'fhigh' [Hz] determine the background band, 'gwAmp' and 'alpha'\n determine its amplitude and exponent, and setting 'logspacing' to False\n will use linear spacing for the individual sources.\n\n Additionally, the dipole component can be specified by using one of two\n methods:\n 1) Specify the dipole direction as three dipole amplitudes, in the vector\n dipoleamps\n 2) Specify the direction of the dipole as a magnitude dipolemag, and a vector\n dipoledir=[dipolephi, dipoletheta]\n\n It is also possible to create a background object with\n \n gwb = GWB(ngw,seed,flow,fhigh,gwAmp,alpha,logspacing)\n\n then call the method gwb.add_gwb(pulsar[i],dist) repeatedly to get a\n consistent background for multiple pulsars.\n \n Returns the GWB object\n \"\"\"\n\n gwb = GWB(ngw, seed, flow, fhigh, gwAmp, alpha, logspacing,\n dipoleamps, dipoledir, dipolemag)\n gwb.add_gwb(psr,dist)\n \n return gwb"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef fakepulsar(parfile, obstimes, toaerr, freq=1440.0, observatory='AXIS',\n flags='', iters=3):\n \"\"\"Returns a libstempo tempopulsar object corresponding to a noiseless set\n of observations for the pulsar specified in 'parfile', with observations\n happening at times (MJD) given in the array (or list) 'obstimes', with\n measurement errors given by toaerr 
(us).\n\n A new timfile can then be saved with pulsar.savetim(). Re the other parameters:\n - 'toaerr' needs to be either a common error, or a list of errors\n of the same length of 'obstimes';\n - 'freq' can be either a common observation frequency in MHz, or a list;\n it defaults to 1440;\n - 'observatory' can be either a common observatory name, or a list;\n it defaults to the IPTA MDC 'AXIS';\n - 'flags' can be a string (such as '-sys EFF.EBPP.1360') or a list of strings;\n it defaults to an empty string;\n - 'iters' is the number of iterative removals of computed residuals from TOAs\n (which is how the fake pulsar is made...)\"\"\"\n\n import tempfile\n outfile = tempfile.NamedTemporaryFile(delete=False)\n\n outfile.write(b'FORMAT 1\\n')\n outfile.write(b'MODE 1\\n')\n\n obsname = 'fake_' + os.path.basename(parfile)\n if obsname[-4:] == '.par':\n obsname = obsname[:-4]\n\n for i,t in enumerate(obstimes):\n outfile.write('{0} {1} {2} {3} {4} {5}\\n'.format(\n obsname,_geti(freq,i),t,_geti(toaerr,i),_geti(observatory,i),_geti(flags,i)\n ).encode('ascii'))\n\n timfile = outfile.name\n outfile.close()\n\n pulsar = libstempo.tempopulsar(parfile,timfile,dofit=False)\n\n for i in range(iters):\n pulsar.stoas[:] -= pulsar.residuals() / 86400.0\n pulsar.formbats()\n \n os.remove(timfile)\n\n return pulsar", "response": "Returns a libstempo tempopulsar object corresponding to a noiseless set of observations for the specified parfile."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_efac(psr, efac=1.0, flagid=None, flags=None, seed=None):\n \n if seed is not None:\n N.random.seed(seed)\n\n # default efacvec\n efacvec = N.ones(psr.nobs)\n\n # check that efac is scalar if flags is None\n if flags is None:\n if not N.isscalar(efac):\n raise ValueError('ERROR: If flags is None, efac must be a scalar')\n else:\n efacvec = N.ones(psr.nobs) * efac\n\n if flags is not None and flagid is not None and not 
N.isscalar(efac):\n        if len(efac) == len(flags):\n            for ct, flag in enumerate(flags):\n                ind = flag == N.array(psr.flagvals(flagid))\n                efacvec[ind] = efac[ct]\n\n    psr.stoas[:] += efacvec * psr.toaerrs * (1e-6 / day) * N.random.randn(psr.nobs)", "response": "Add nominal TOA errors multiplied by efac factor."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_equad(psr, equad, flagid=None, flags=None, seed=None):\n\n    if seed is not None:\n        N.random.seed(seed)\n    \n    # default equadvec\n    equadvec = N.zeros(psr.nobs)\n\n    # check that equad is scalar if flags is None\n    if flags is None:\n        if not N.isscalar(equad):\n            raise ValueError('ERROR: If flags is None, equad must be a scalar') \n    else:\n        equadvec = N.ones(psr.nobs) * equad\n\n    if flags is not None and flagid is not None and not N.isscalar(equad):\n        if len(equad) == len(flags):\n            for ct, flag in enumerate(flags):\n                ind = flag == N.array(psr.flagvals(flagid))\n                equadvec[ind] = equad[ct]\n    \n    psr.stoas[:] += (equadvec / day) * N.random.randn(psr.nobs)", "response": "Add quadrature noise of rms equad [s]."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds correlated quadrature noise of rms ecorr, optionally keyed by flagid and flags.", "response": "def add_jitter(psr, ecorr, flagid=None, flags=None, coarsegrain=0.1,\n               seed=None):\n    \"\"\"Add correlated quadrature noise of rms `ecorr` [s],\n    with coarse-graining time `coarsegrain` [days].\n    Optionally take a pseudorandom-number-generator seed.\"\"\"\n    \n    if seed is not None:\n        N.random.seed(seed)\n    \n    if flags is None:\n        t, U = quantize_fast(N.array(psr.toas(),'d'), dt=coarsegrain)\n    elif flags is not None and flagid is not None:\n        t, f, U = quantize_fast(N.array(psr.toas(),'d'),\n                                N.array(psr.flagvals(flagid)),\n                                dt=coarsegrain)\n\n    # default jitter value\n    ecorrvec = N.zeros(len(t))\n    \n    # check that jitter is scalar if flags is None\n    if flags is None:\n        if not 
N.isscalar(ecorr):\n            raise ValueError('ERROR: If flags is None, jitter must be a scalar')\n    else:\n        ecorrvec = N.ones(len(t)) * ecorr\n\n    if flags is not None and flagid is not None and not N.isscalar(ecorr):\n        if len(ecorr) == len(flags):\n            for ct, flag in enumerate(flags):\n                ind = flag == N.array(f)\n                ecorrvec[ind] = ecorr[ct]\n\n    psr.stoas[:] += (1 / day) * N.dot(U*ecorrvec, N.random.randn(U.shape[1]))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_rednoise(psr,A,gamma,components=10,seed=None):\n\n    if seed is not None:\n        N.random.seed(seed)\n    \n    t = psr.toas()\n    minx, maxx = N.min(t), N.max(t)\n    x = (t - minx) / (maxx - minx)\n    T = (day/year) * (maxx - minx)\n\n    size = 2*components\n    F = N.zeros((psr.nobs,size),'d')\n    f = N.zeros(size,'d')\n\n    for i in range(components):\n        F[:,2*i] = N.cos(2*math.pi*(i+1)*x)\n        F[:,2*i+1] = N.sin(2*math.pi*(i+1)*x)\n\n        f[2*i] = f[2*i+1] = (i+1) / T\n\n    norm = A**2 * year**2 / (12 * math.pi**2 * T)\n    prior = norm * f**(-gamma)\n    \n    y = N.sqrt(prior) * N.random.randn(size)\n    psr.stoas[:] += (1.0/day) * N.dot(F,y)", "response": "Add red noise with P(f) = A^2 / (12 pi^2) (f year)^(-gamma), using components=10 Fourier components."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a line of frequency f and amplitude A through the dataset.", "response": "def add_line(psr,f,A,offset=0.5):\n    \"\"\"\n    Add a line of frequency `f` [Hz] and amplitude `A` [s],\n    with origin at a fraction `offset` through the dataset.\n    \"\"\"\n    \n    t = psr.toas()\n    t0 = offset * (N.max(t) - N.min(t))\n    sine = A * N.cos(2 * math.pi * f * day * (t - t0))\n\n    psr.stoas[:] += sine / day"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_glitch(psr, epoch, amp):\n    \n    # Define the heaviside function\n    heaviside = lambda x: 0.5 * (N.sign(x) + 1)\n\n    # Glitches are spontaneous spin-up events.\n    # Thus TOAs will be advanced, and residuals will be negative.\n    \n    psr.stoas[:] -= amp 
* heaviside(psr.toas() - epoch) * \\\n                    (psr.toas() - epoch)*86400.0", "response": "Add a glitch (a spontaneous spin-up event) of amplitude amp at the given epoch, advancing subsequent TOAs."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfunctions to create GW-induced residuals from an SMBHB", "response": "def add_cgw(psr, gwtheta, gwphi, mc, dist, fgw, phase0, psi, inc, pdist=1.0, \\\n            pphase=None, psrTerm=True, evolve=True, \\\n            phase_approx=False, tref=0):\n    \"\"\"\n    Function to create GW-induced residuals from an SMBHB as \n    defined in Ellis et al. 2012, 2013. Tries to be smart about it...\n\n    :param psr: pulsar object\n    :param gwtheta: Polar angle of GW source in celestial coords [radians]\n    :param gwphi: Azimuthal angle of GW source in celestial coords [radians]\n    :param mc: Chirp mass of SMBHB [solar masses]\n    :param dist: Luminosity distance to SMBHB [Mpc]\n    :param fgw: Frequency of GW (twice the orbital frequency) [Hz]\n    :param phase0: Initial Phase of GW source [radians]\n    :param psi: Polarization of GW source [radians]\n    :param inc: Inclination of GW source [radians]\n    :param pdist: Pulsar distance to use other than those in psr [kpc]\n    :param pphase: Use pulsar phase to determine distance [radian]\n    :param psrTerm: Option to include pulsar term [boolean] \n    :param evolve: Option to exclude evolution [boolean]\n    :param tref: Fiducial time at which initial parameters are referenced\n\n    :returns: Vector of induced residuals\n    \"\"\"\n\n    # convert units\n    mc *= eu.SOLAR2S # convert from solar masses to seconds\n    dist *= eu.MPC2S # convert from Mpc to seconds\n\n    # define initial orbital frequency \n    w0 = N.pi * fgw\n    phase0 /= 2 # orbital phase\n    w053 = w0**(-5/3)\n\n    # define variable for later use\n    cosgwtheta, cosgwphi = N.cos(gwtheta), N.cos(gwphi)\n    singwtheta, singwphi = N.sin(gwtheta), N.sin(gwphi)\n    sin2psi, cos2psi = N.sin(2*psi), N.cos(2*psi)\n    incfac1, incfac2 = 0.5*(3+N.cos(2*inc)), 2*N.cos(inc)\n\n    # unit vectors to GW source\n    m 
= N.array([singwphi, -cosgwphi, 0.0])\n    n = N.array([-cosgwtheta*cosgwphi, -cosgwtheta*singwphi, singwtheta])\n    omhat = N.array([-singwtheta*cosgwphi, -singwtheta*singwphi, -cosgwtheta])\n\n    # various factors involving GW parameters\n    fac1 = 256/5 * mc**(5/3) * w0**(8/3) \n    fac2 = 1/32/mc**(5/3)\n    fac3 = mc**(5/3)/dist\n\n    # pulsar location\n    if 'RAJ' in psr.pars() and 'DECJ' in psr.pars():\n        ptheta = N.pi/2 - psr['DECJ'].val\n        pphi = psr['RAJ'].val\n    elif 'ELONG' in psr.pars() and 'ELAT' in psr.pars():\n        fac = 180./N.pi\n        coords = ephem.Equatorial(ephem.Ecliptic(str(psr['ELONG'].val*fac), \n                                                 str(psr['ELAT'].val*fac)))\n\n        ptheta = N.pi/2 - float(repr(coords.dec))\n        pphi = float(repr(coords.ra))\n\n    # use definition from Sesana et al 2010 and Ellis et al 2012\n    phat = N.array([N.sin(ptheta)*N.cos(pphi), N.sin(ptheta)*N.sin(pphi),\\\n            N.cos(ptheta)])\n\n    fplus = 0.5 * (N.dot(m, phat)**2 - N.dot(n, phat)**2) / (1+N.dot(omhat, phat))\n    fcross = (N.dot(m, phat)*N.dot(n, phat)) / (1 + N.dot(omhat, phat))\n    cosMu = -N.dot(omhat, phat)\n\n\n    # get values from pulsar object\n    toas = psr.toas()*86400 - tref\n    if pphase is not None:\n        pd = pphase/(2*N.pi*fgw*(1-cosMu)) / eu.KPC2S\n    else:\n        pd = pdist\n\n    # convert units\n    pd *= eu.KPC2S # convert from kpc to seconds\n    \n    # get pulsar time\n    tp = toas-pd*(1-cosMu)\n\n    # evolution\n    if evolve:\n\n        # calculate time dependent frequency at earth and pulsar\n        omega = w0 * (1 - fac1 * toas)**(-3/8)\n        omega_p = w0 * (1 - fac1 * tp)**(-3/8)\n\n        # calculate time dependent phase\n        phase = phase0 + fac2 * (w053 - omega**(-5/3))\n        phase_p = phase0 + fac2 * (w053 - omega_p**(-5/3))\n    \n    # use approximation that frequency does not evolve over observation time\n    elif phase_approx:\n        \n        # frequencies\n        omega = w0\n        omega_p = w0 * (1 + fac1 * pd*(1-cosMu))**(-3/8)\n        \n        # phases\n        phase = phase0 + omega * toas\n        phase_p = phase0 + fac2 * (w053 - omega_p**(-5/3)) + omega_p*toas\n    \n    # no evolution\n    else: \n        \n        # monochromatic\n        omega = w0\n        omega_p = omega\n        \n        # 
phases\n phase = phase0 + omega * toas\n phase_p = phase0 + omega * tp\n \n\n # define time dependent coefficients\n At = N.sin(2*phase) * incfac1\n Bt = N.cos(2*phase) * incfac2\n At_p = N.sin(2*phase_p) * incfac1\n Bt_p = N.cos(2*phase_p) * incfac2\n\n # now define time dependent amplitudes\n alpha = fac3 / omega**(1/3)\n alpha_p = fac3 / omega_p**(1/3)\n\n # define rplus and rcross\n rplus = alpha * (At*cos2psi + Bt*sin2psi)\n rcross = alpha * (-At*sin2psi + Bt*cos2psi)\n rplus_p = alpha_p * (At_p*cos2psi + Bt_p*sin2psi)\n rcross_p = alpha_p * (-At_p*sin2psi + Bt_p*cos2psi)\n\n # residuals\n if psrTerm:\n res = fplus*(rplus_p-rplus)+fcross*(rcross_p-rcross)\n else:\n res = -fplus*rplus - fcross*rcross\n\n psr.stoas[:] += res/86400"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding GW-induced residuals from an eccentric SMBHB to a pulsar's TOAs.", "response": "def add_ecc_cgw(psr, gwtheta, gwphi, mc, dist, F, inc, psi, gamma0,\n e0, l0, q, nmax=100, nset=None, pd=None, periEv=True,\n psrTerm=True, tref=0, check=True, useFile=True):\n \"\"\"\n Simulate GW from eccentric SMBHB. Waveform models from\n Taylor et al.
(2015) and Barack and Cutler (2004).\n\n WARNING: This residual waveform is only accurate if the\n GW frequency is not significantly evolving over the \n observation time of the pulsar.\n\n :param psr: pulsar object\n :param gwtheta: Polar angle of GW source in celestial coords [radians]\n :param gwphi: Azimuthal angle of GW source in celestial coords [radians]\n :param mc: Chirp mass of SMBMB [solar masses]\n :param dist: Luminosity distance to SMBMB [Mpc]\n :param F: Orbital frequency of SMBHB [Hz]\n :param inc: Inclination of GW source [radians]\n :param psi: Polarization of GW source [radians]\n :param gamma0: Initial angle of periastron [radians]\n :param e0: Initial eccentricity of SMBHB\n :param l0: Initial mean anomaly [radians]\n :param q: Mass ratio of SMBHB\n :param nmax: Number of harmonics to use in waveform decomposition\n :param nset: Fix the number of harmonics to be injected\n :param pd: Pulsar distance [kpc]\n :param periEv: Evolve the position of periapsis [boolean] \n :param psrTerm: Option to include pulsar term [boolean] \n :param tref: Fiducial time at which initial parameters are referenced [s]\n :param check: Check if frequency evolves significantly over obs. 
time\n :param useFile: Use pre-computed table of number of harmonics vs eccentricity\n\n :returns: Vector of induced residuals\n \"\"\"\n \n # define variable for later use\n cosgwtheta, cosgwphi = N.cos(gwtheta), N.cos(gwphi)\n singwtheta, singwphi = N.sin(gwtheta), N.sin(gwphi)\n sin2psi, cos2psi = N.sin(2*psi), N.cos(2*psi)\n\n # unit vectors to GW source\n m = N.array([singwphi, -cosgwphi, 0.0])\n n = N.array([-cosgwtheta*cosgwphi, -cosgwtheta*singwphi, singwtheta])\n omhat = N.array([-singwtheta*cosgwphi, -singwtheta*singwphi, -cosgwtheta])\n \n # pulsar location\n if 'RAJ' and 'DECJ' in psr.pars():\n ptheta = N.pi/2 - psr['DECJ'].val\n pphi = psr['RAJ'].val\n elif 'ELONG' and 'ELAT' in psr.pars():\n fac = 180./N.pi\n coords = ephem.Equatorial(ephem.Ecliptic(str(psr['ELONG'].val*fac), str(psr['ELAT'].val*fac)))\n\n ptheta = N.pi/2 - float(repr(coords.dec))\n pphi = float(repr(coords.ra))\n\n\n # use definition from Sesana et al 2010 and Ellis et al 2012\n phat = N.array([N.sin(ptheta)*N.cos(pphi), N.sin(ptheta)*N.sin(pphi),\\\n N.cos(ptheta)])\n\n fplus = 0.5 * (N.dot(m, phat)**2 - N.dot(n, phat)**2) / (1+N.dot(omhat, phat))\n fcross = (N.dot(m, phat)*N.dot(n, phat)) / (1 + N.dot(omhat, phat))\n cosMu = -N.dot(omhat, phat)\n\n # get values from pulsar object\n toas = N.double(psr.toas())*86400 - tref\n \n if check:\n # check that frequency is not evolving significantly over obs. 
time\n y = eu.solve_coupled_ecc_solution(F, e0, gamma0, l0, mc, q,\n N.array([0.0,toas.max()]))\n \n # initial and final values over observation time\n Fc0, ec0, gc0, phic0 = y[0,:]\n Fc1, ec1, gc1, phic1 = y[-1,:]\n\n # observation time\n Tobs = 1/(toas.max()-toas.min())\n\n if N.abs(Fc0-Fc1) > 1/Tobs:\n print('WARNING: Frequency is evolving over more than one frequency bin.')\n print('F0 = {0}, F1 = {1}, delta f = {2}'.format(Fc0, Fc1, 1/Tobs))\n \n # get gammadot for earth term\n if periEv==False:\n gammadot = 0.0\n else:\n gammadot = eu.get_gammadot(F, mc, q, e0)\n\n if nset is not None:\n nharm = nset\n elif useFile:\n if e0 > 0.001 and e0 < 0.999:\n nharm = min(int(ecc_interp(e0)), nmax) + 1\n elif e0 < 0.001:\n nharm = 3\n else:\n nharm = nmax\n else:\n nharm = nmax\n \n ##### earth term #####\n splus, scross = eu.calculate_splus_scross(nharm, mc, dist, F, e0, toas,\n l0, gamma0, gammadot, inc)\n \n ##### pulsar term #####\n if psrTerm:\n\n # convert units\n pd *= eu.KPC2S # convert from kpc to seconds\n \n # get pulsar time\n tp = toas - pd * (1-cosMu)\n\n # solve coupled system of equations to get pulsar term values\n y = eu.solve_coupled_ecc_solution(F, e0, gamma0, l0, mc, q,\n N.array([0.0, tp.min()]))\n \n # get pulsar term values\n if N.any(y):\n Fp, ep, gp, lp = y[-1,:]\n\n # get gammadot at pulsar term\n gammadotp = eu.get_gammadot(Fp, mc, q, ep)\n\n if useFile:\n if ep > 0.001 and ep < 0.999:\n nharm = min(int(ecc_interp(ep)), nmax)\n elif ep < 0.001:\n nharm = 3\n else:\n nharm = nmax\n else:\n nharm = nmax\n\n\n splusp, scrossp = eu.calculate_splus_scross(nharm, mc, dist, Fp,\n ep, toas, lp, gp,\n gammadotp, inc)\n\n rr = (fplus*cos2psi - fcross*sin2psi) * (splusp - splus) + \\\n (fplus*sin2psi + fcross*cos2psi) * (scrossp - scross)\n\n else:\n rr = N.zeros(len(p.toas))\n \n else:\n rr = - (fplus*cos2psi - fcross*sin2psi) * splus - \\\n (fplus*sin2psi + fcross*cos2psi) * scross\n \n psr.stoas[:] += rr/86400"} {"SOURCE": "codesearchnet", 
"instruction": "Can you implement a function in Python 3 that\nextends an interpolation function to an extrapolation function.", "response": "def extrap1d(interpolator):\n \"\"\"\n Function to extend an interpolation function to an\n extrapolation function.\n\n :param interpolator: scipy interp1d object\n\n :returns ufunclike: extension of function to extrapolation\n \"\"\"\n \n xs = interpolator.x\n ys = interpolator.y\n\n def pointwise(x):\n if x < xs[0]:\n return ys[0] # +(x-xs[0])*(ys[1]-ys[0])/(xs[1]-xs[0])\n elif x > xs[-1]:\n return ys[-1] # +(x-xs[-1])*(ys[-1]-ys[-2])/(xs[-1]-xs[-2])\n else:\n return interpolator(x)\n\n def ufunclike(xs):\n # list() is needed in Python 3, where map returns an iterator\n return N.array(list(map(pointwise, N.array(xs))))\n\n return ufunclike"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef createGWB(psr, Amp, gam, noCorr=False, seed=None, turnover=False,\n clm=[N.sqrt(4.0*N.pi)], lmax=0, f0=1e-9, beta=1,\n power=1, userSpec=None, npts=600, howml=10):\n \"\"\"\n Function to create GW-induced residuals from a stochastic GWB as defined\n in Chamberlin, Creighton, Demorest, et al.
(2014).\n \n :param psr: pulsar object for single pulsar\n :param Amp: Amplitude of red noise in GW units \n :param gam: Red noise power law spectral index\n :param noCorr: Add red noise with no spatial correlations\n :param seed: Random number seed\n :param turnover: Produce spectrum with turnover at frequency f0\n :param clm: coefficients of spherical harmonic decomposition of GW power\n :param lmax: maximum multipole of GW power decomposition\n :param f0: Frequency of spectrum turnover\n :param beta: Spectral index of power spectram for f << f0\n :param power: Fudge factor for flatness of spectrum turnover\n :param userSpec: User-supplied characteristic strain spectrum \n (first column is freqs, second is spectrum)\n :param npts: Number of points used in interpolation\n :param howml: Lowest frequency is 1/(howml * T) \n \n :returns: list of residuals for each pulsar \n \"\"\"\n\n if seed is not None:\n N.random.seed(seed)\n\n # number of pulsars\n Npulsars = len(psr)\n\n # gw start and end times for entire data set\n start = N.min([p.toas().min()*86400 for p in psr]) - 86400\n stop = N.max([p.toas().max()*86400 for p in psr]) + 86400\n \n # duration of the signal\n dur = stop - start\n \n # get maximum number of points\n if npts is None:\n # default to cadence of 2 weeks\n npts = dur/(86400*14)\n\n # make a vector of evenly sampled data points\n ut = N.linspace(start, stop, npts)\n\n # time resolution in days\n dt = dur/npts\n\n # compute the overlap reduction function\n if noCorr:\n ORF = N.diag(N.ones(Npulsars)*2)\n else:\n psrlocs = N.zeros((Npulsars,2))\n \n for ii in range(Npulsars):\n if 'RAJ' and 'DECJ' in psr[ii].pars():\n psrlocs[ii] = N.double(psr[ii]['RAJ'].val), N.double(psr[ii]['DECJ'].val)\n elif 'ELONG' and 'ELAT' in psr[ii].pars():\n fac = 180./N.pi\n # check for B name\n if 'B' in psr[ii].name:\n epoch = '1950'\n else:\n epoch = '2000'\n coords = ephem.Equatorial(ephem.Ecliptic(str(psr[ii]['ELONG'].val*fac),\n str(psr[ii]['ELAT'].val*fac)),\n 
epoch=epoch)\n psrlocs[ii] = float(repr(coords.ra)), float(repr(coords.dec))\n\n psrlocs[:,1] = N.pi/2. - psrlocs[:,1]\n anisbasis = N.array(anis.CorrBasis(psrlocs,lmax)) \n ORF = sum(clm[kk]*anisbasis[kk] for kk in range(len(anisbasis)))\n ORF *= 2.0\n\n # Define frequencies spanning from DC to Nyquist. \n # This is a vector spanning these frequencies in increments of 1/(dur*howml).\n f = N.arange(0, 1/(2*dt), 1/(dur*howml))\n f[0] = f[1] # avoid divide by 0 warning\n Nf = len(f)\n\n # Use Cholesky transform to take 'square root' of ORF\n M = N.linalg.cholesky(ORF)\n\n # Create random frequency series from zero mean, unit variance, Gaussian distributions\n w = N.zeros((Npulsars, Nf), complex)\n for ll in range(Npulsars):\n w[ll,:] = N.random.randn(Nf) + 1j*N.random.randn(Nf)\n\n # strain amplitude\n if userSpec is None:\n \n f1yr = 1/3.16e7\n alpha = -0.5 * (gam-3)\n hcf = Amp * (f/f1yr)**(alpha)\n if turnover:\n si = alpha - beta\n hcf /= (1+(f/f0)**(power*si))**(1/power)\n\n elif userSpec is not None:\n \n freqs = userSpec[:,0]\n if len(userSpec[:,0]) != len(freqs):\n raise ValueError(\"Number of supplied spectral points does not match number of frequencies!\")\n else:\n fspec_in = interp.interp1d(N.log10(freqs), N.log10(userSpec[:,1]), kind='linear')\n fspec_ex = extrap1d(fspec_in)\n hcf = 10.0**fspec_ex(N.log10(f))\n\n C = 1 / 96 / N.pi**2 * hcf**2 / f**3 * dur * howml\n\n ### injection residuals in the frequency domain\n Res_f = N.dot(M, w)\n for ll in range(Npulsars):\n Res_f[ll] = Res_f[ll] * C**(0.5) # rescale by frequency dependent factor\n Res_f[ll,0] = 0\t\t\t # set DC bin to zero to avoid infinities\n Res_f[ll,-1] = 0\t\t # set Nyquist bin to zero also\n\n # Now fill in bins after Nyquist (for fft data packing) and take inverse FT\n Res_f2 = N.zeros((Npulsars, 2*Nf-2), complex) \n Res_t = N.zeros((Npulsars, 2*Nf-2))\n Res_f2[:,0:Nf] = Res_f[:,0:Nf]\n Res_f2[:, Nf:(2*Nf-2)] = N.conj(Res_f[:,(Nf-2):0:-1])\n Res_t = N.real(N.fft.ifft(Res_f2)/dt)\n\n # 
shorten data and interpolate onto TOAs\n Res = N.zeros((Npulsars, npts))\n res_gw = []\n for ll in range(Npulsars):\n Res[ll,:] = Res_t[ll, 10:(npts+10)]\n f = interp.interp1d(ut, Res[ll,:], kind='linear')\n res_gw.append(f(psr[ll].toas()*86400))\n\n #return res_gw\n ct = 0\n for p in psr:\n p.stoas[:] += res_gw[ct]/86400.0\n ct += 1", "response": "Function to create GW-induced residuals from a stochastic GWB (optionally with a turnover or user-supplied spectrum) for a list of pulsar objects."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef computeORFMatrix(psr):\n\n # begin loop over all pulsar pairs and calculate ORF\n npsr = len(psr)\n ORF = N.zeros((npsr, npsr))\n phati = N.zeros(3)\n phatj = N.zeros(3)\n ptheta = [N.pi/2 - p['DECJ'].val for p in psr]\n pphi = [p['RAJ'].val for p in psr]\n for ll in range(0, npsr):\n phati[0] = N.cos(pphi[ll]) * N.sin(ptheta[ll])\n phati[1] = N.sin(pphi[ll]) * N.sin(ptheta[ll])\n phati[2] = N.cos(ptheta[ll])\n\n for kk in range(0, npsr):\n phatj[0] = N.cos(pphi[kk]) * N.sin(ptheta[kk])\n phatj[1] = N.sin(pphi[kk]) * N.sin(ptheta[kk])\n phatj[2] = N.cos(ptheta[kk])\n \n if ll != kk:\n xip = (1.-N.sum(phati*phatj)) / 2.\n ORF[ll, kk] = 3.*( 1./3. + xip * ( N.log(xip) -1./6.)
)\n else:\n ORF[ll, kk] = 2.0\n\n return ORF", "response": "Compute the ORF matrix for the given list of pulsar objects."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nplot residuals compute unweighted rms residual.", "response": "def plotres(psr,deleted=False,group=None,**kwargs):\n \"\"\"Plot residuals, compute unweighted rms residual.\"\"\"\n\n res, t, errs = psr.residuals(), psr.toas(), psr.toaerrs\n \n if (not deleted) and N.any(psr.deleted != 0):\n res, t, errs = res[psr.deleted == 0], t[psr.deleted == 0], errs[psr.deleted == 0]\n print(\"Plotting {0}/{1} nondeleted points.\".format(len(res),psr.nobs))\n\n meanres = math.sqrt(N.mean(res**2)) / 1e-6\n \n if group is None:\n i = N.argsort(t)\n P.errorbar(t[i],res[i]/1e-6,yerr=errs[i],fmt='x',**kwargs)\n else:\n if (not deleted) and N.any(psr.deleted):\n flagmask = psr.flagvals(group)[~psr.deleted]\n else:\n flagmask = psr.flagvals(group)\n\n unique = list(set(flagmask))\n \n for flagval in unique:\n f = (flagmask == flagval)\n flagres, flagt, flagerrs = res[f], t[f], errs[f]\n i = N.argsort(flagt)\n P.errorbar(flagt[i],flagres[i]/1e-6,yerr=flagerrs[i],fmt='x',**kwargs)\n \n P.legend(unique,numpoints=1,bbox_to_anchor=(1.1,1.1))\n\n P.xlabel('MJD'); P.ylabel('res [us]')\n P.title(\"{0} - rms res = {1:.2f} us\".format(psr.name,meanres))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef plotgwsrc(gwb):\n theta, phi, omega, polarization = gwb.gw_dist()\n\n rho = phi-N.pi\n eta = 0.5*N.pi - theta\n\n # I don't know how to get rid of the RuntimeWarning -- RvH, Oct 10, 2014:\n # /Users/vhaaster/env/dev/lib/python2.7/site-packages/matplotlib/projections/geo.py:485:\n # RuntimeWarning: invalid value encountered in arcsin theta = np.arcsin(y / np.sqrt(2))\n #old_settings = N.seterr(invalid='ignore')\n\n P.title(\"GWB source population\")\n ax = P.axes(projection='mollweide')\n\n foo = P.scatter(rho, eta, marker='.', s=1)\n 
#bar = N.seterr(**old_settings)\n\n return foo", "response": "Plot a GWB source population as a mollweide projection."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef loglike(pulsar,efac=1.0,equad=None,jitter=None,Ared=None,gammared=None,marginalize=True,normalize=True,redcomponents=10,usedeleted=True):\n\n mask = Mask(pulsar,usedeleted) \n\n err = 1.0e-6 * mask(pulsar.toaerrs)\n Cdiag = (efac*err)**2\n\n if equad:\n Cdiag = Cdiag + (1e-6*equad)**2 * N.ones(len(err))\n\n if Ared:\n redf, F = _setuprednoise(pulsar,redcomponents)\n F = mask(F)\n phi = Ared**2 * redf**(-gammared)\n\n if jitter:\n # quantize at 1 second; U plays the role of redF\n t, U = _quantize(86400.0 * mask(pulsar.toas()),1.0)\n phi_j = (1e-6*jitter)**2 * N.ones(U.shape[1])\n\n # stack the basis arrays if we're also doing red noise\n phi = N.hstack((phi,phi_j)) if Ared else phi_j\n F = N.hstack((F,U)) if Ared else U\n\n if Ared or jitter:\n # Lentati formulation for correlated noise\n invphi = N.diag(1/phi)\n Ninv = N.diag(1/Cdiag)\n NinvF = dot(Ninv,F) # could be accelerated\n X = invphi + dot(F.T,NinvF) # invphi + FTNinvF\n\n Cinv = Ninv - dot(NinvF,N.linalg.inv(X),NinvF.T)\n logCdet = N.sum(N.log(Cdiag)) + N.sum(N.log(phi)) + N.linalg.slogdet(X)[1] # check\n else:\n # noise is all diagonal\n Cinv = N.diag(1/Cdiag)\n logCdet = N.sum(N.log(Cdiag))\n\n if marginalize:\n M = mask(pulsar.designmatrix())\n res = mask(N.array(pulsar.residuals(updatebats=False),'d'))\n \n CinvM = N.dot(Cinv,M)\n A = dot(M.T,CinvM)\n\n invA = N.linalg.inv(A)\n CinvMres = dot(res,CinvM)\n\n ret = -0.5 * dot(res,Cinv,res) + 0.5 * dot(CinvMres,invA,CinvMres.T) \n\n if normalize:\n ret = ret - 0.5 * logCdet - 0.5 * N.linalg.slogdet(A)[1] - 0.5 * (M.shape[0] - M.shape[1]) * math.log(2.0*math.pi)\n else:\n res = mask(N.array(pulsar.residuals(),'d'))\n\n ret = -0.5 * dot(res,Cinv,res)\n\n if normalize:\n ret = ret - 0.5 * logCdet - 0.5 * len(res) * 
math.log(2.0*math.pi)\n\n return ret", "response": "Returns the log - likelihood for the given pulsar parameters."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef expandranges(parlist):\n\n ret = []\n\n for par in parlist:\n # match anything of the form XXX{number1-number2}\n m = re.match('(.*)\\{([0-9]+)\\-([0-9]+)\\}',par)\n\n if m is None:\n ret.append(par)\n else:\n # (these are strings)\n root, number1, number2 = m.group(1), m.group(2), m.group(3)\n\n # if number1 begins with 0s, number parameters as 00, 01, 02, ...,\n # otherwise go with 0, 1, 2, ...\n fmt = '{{0}}{{1:0{0}d}}'.format(len(number1)) if number1[0] == '0' else '{0}{1:d}'\n \n ret = ret + [fmt.format(root,i) for i in range(int(m.group(2)),int(m.group(3))+1)]\n\n return ret", "response": "Rewrite a list of parameters by expanding ranges ( e. g. log10_efac1 - 10 into\n individual parameters."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _findrange(parlist,roots=['JUMP','DMXR1_','DMXR2_','DMX_','efac','log10_efac']):\n\n rootdict = {root: [] for root in roots}\n\n res = []\n for par in parlist:\n found = False\n for root in roots:\n if len(par) > len(root) and par[:len(root)] == root:\n rootdict[root].append(int(par[len(root):]))\n found = True\n if not found:\n res.append(par)\n\n for root in roots:\n if rootdict[root]:\n if len(rootdict[root]) > 1:\n rmin, rmax = min(rootdict[root]), max(rootdict[root])\n res.append('{0}{{{1}-{2}}}{3}'.format(root,rmin,rmax,\n '(incomplete)' if rmax - rmin != len(rootdict[root]) - 1 else ''))\n else:\n res.append('{0}{1}'.format(root,rootdict[root][0]))\n return res", "response": "Rewrite a list of parameters name by detecting ranges ( e. g. JUMP1 JUMP2... 
and log10_efac."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmerges one every skip clouds into a single emcee population.", "response": "def merge(data,skip=50,fraction=1.0):\n \"\"\"Merge one every 'skip' clouds into a single emcee population,\n using the later 'fraction' of the run.\"\"\"\n\n w,s,d = data.chains.shape\n \n start = int((1.0 - fraction) * s)\n total = int((s - start) / skip)\n \n return data.chains[:,start::skip,:].reshape((w*total,d))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsieving an emcee clouds by excluding walkers with search variable index smaller than min or larger than max.", "response": "def cull(data,index,min=None,max=None): \n \"\"\"Sieve an emcee clouds by excluding walkers with search variable 'index'\n smaller than 'min' or larger than 'max'.\"\"\"\n\n ret = data\n\n if min is not None:\n ret = ret[ret[:,index] > min,:]\n\n if max is not None:\n ret = ret[ret[:,index] < max,:]\n \n return ret"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmakes interpolation function from eccentricity file to determine number of harmonics to use for a given eccentricity.", "response": "def make_ecc_interpolant():\n \"\"\"\n Make interpolation function from eccentricity file to\n determine number of harmonics to use for a given\n eccentricity.\n\n :returns: interpolant\n \"\"\"\n pth = resource_filename(Requirement.parse('libstempo'),\n 'libstempo/ecc_vs_nharm.txt')\n\n fil = np.loadtxt(pth)\n\n return interp1d(fil[:,0], fil[:,1])"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncomputes eccentricity derivative from Taylor et al. 2015", "response": "def get_edot(F, mc, e):\n \"\"\"\n Compute eccentricity derivative from Taylor et al. 
(2015)\n\n :param F: Orbital frequency [Hz]\n :param mc: Chirp mass of binary [Solar Mass]\n :param e: Eccentricity of binary\n\n :returns: de/dt\n\n \"\"\"\n\n # chirp mass\n mc *= SOLAR2S\n\n dedt = -304/(15*mc) * (2*np.pi*mc*F)**(8/3) * e * \\\n (1 + 121/304*e**2) / ((1-e**2)**(5/2))\n\n return dedt"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompute frequency derivative from Taylor et al. 2015", "response": "def get_Fdot(F, mc, e):\n \"\"\"\n Compute frequency derivative from Taylor et al. (2015)\n\n :param F: Orbital frequency [Hz]\n :param mc: Chirp mass of binary [Solar Mass]\n :param e: Eccentricity of binary\n\n :returns: dF/dt\n\n \"\"\"\n\n # chirp mass\n mc *= SOLAR2S\n\n dFdt = 48 / (5*np.pi*mc**2) * (2*np.pi*mc*F)**(11/3) * \\\n (1 + 73/24*e**2 + 37/96*e**4) / ((1-e**2)**(7/2))\n\n return dFdt"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_gammadot(F, mc, q, e):\n\n # chirp mass\n mc *= SOLAR2S\n\n #total mass\n m = (((1+q)**2)/q)**(3/5) * mc\n\n dgdt = 6*np.pi*F * (2*np.pi*F*m)**(2/3) / (1-e**2) * \\\n (1 + 0.25*(2*np.pi*F*m)**(2/3)/(1-e**2)*(26-15*e**2))\n\n return dgdt", "response": "Compute gamma dot from Barack and Cutler 2004"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_coupled_ecc_eqns(y, t, mc, q):\n \n F = y[0]\n e = y[1]\n gamma = y[2]\n phase = y[3]\n \n #total mass\n m = (((1+q)**2)/q)**(3/5) * mc \n \n dFdt = get_Fdot(F, mc, e)\n dedt = get_edot(F, mc, e)\n dgdt = get_gammadot(F, mc, q, e)\n dphasedt = 2*np.pi*F\n \n return np.array([dFdt, dedt, dgdt, dphasedt])", "response": "This function computes the coupled differential and non - differential equations from Peters and Barack & Cutler."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsolving the coupled ECC problem at a given time.", "response": "def solve_coupled_ecc_solution(F0, 
e0, gamma0, phase0, mc, q, t):\n \"\"\"\n Compute the solution to the coupled system of equations\n from from Peters (1964) and Barack & Cutler (2004) at \n a given time.\n \n :param F0: Initial orbital frequency [Hz]\n :param e0: Initial orbital eccentricity\n :param gamma0: Initial angle of precession of periastron [rad]\n :param mc: Chirp mass of binary [Solar Mass]\n :param q: Mass ratio of binary\n :param t: Time at which to evaluate solution [s]\n \n :returns: (F(t), e(t), gamma(t), phase(t))\n \n \"\"\"\n \n y0 = np.array([F0, e0, gamma0, phase0])\n\n y, infodict = odeint(get_coupled_ecc_eqns, y0, t, args=(mc,q), full_output=True)\n \n if infodict['message'] == 'Integration successful.':\n ret = y\n else:\n ret = 0\n \n return ret"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_an(n, mc, dl, F, e):\n \n # convert to seconds\n mc *= SOLAR2S\n dl *= MPC2S\n \n omega = 2 * np.pi * F\n \n amp = n * mc**(5/3) * omega**(2/3) / dl\n \n ret = -amp * (ss.jn(n-2,n*e) - 2*e*ss.jn(n-1,n*e) +\n (2/n)*ss.jn(n,n*e) + 2*e*ss.jn(n+1,n*e) -\n ss.jn(n+2,n*e))\n\n return ret", "response": "Compute a_n from Eq. 22 of Taylor et al. 2015."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncomputes b_n from Eq. 22 of Taylor et al. 2015.", "response": "def get_bn(n, mc, dl, F, e):\n \"\"\"\n Compute b_n from Eq. 22 of Taylor et al. 
(2015).\n \n :param n: Harmonic number\n :param mc: Chirp mass of binary [Solar Mass]\n :param dl: Luminosity distance [Mpc]\n :param F: Orbital frequency of binary [Hz]\n :param e: Orbital Eccentricity\n \n :returns: b_n\n \n \"\"\"\n \n # convert to seconds\n mc *= SOLAR2S\n dl *= MPC2S \n \n omega = 2 * np.pi * F\n \n amp = n * mc**(5/3) * omega**(2/3) / dl\n \n ret = -amp * np.sqrt(1-e**2) *(ss.jn(n-2,n*e) - 2*ss.jn(n,n*e) +\n ss.jn(n+2,n*e)) \n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_cn(n, mc, dl, F, e):\n \n # convert to seconds\n mc *= SOLAR2S\n dl *= MPC2S\n \n omega = 2 * np.pi * F\n \n amp = 2 * mc**(5/3) * omega**(2/3) / dl\n \n ret = amp * ss.jn(n,n*e) / (n * omega)\n\n return ret", "response": "Compute c_n from Eq. 22 of Taylor et al. 2015."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef calculate_splus_scross(nmax, mc, dl, F, e, t, l0, gamma, gammadot, inc):\n \n n = np.arange(1, nmax)\n\n # time dependent amplitudes\n an = get_an(n, mc, dl, F, e)\n bn = get_bn(n, mc, dl, F, e)\n cn = get_cn(n, mc, dl, F, e)\n\n # time dependent terms\n omega = 2*np.pi*F\n gt = gamma + gammadot * t\n lt = l0 + omega * t\n\n # tiled phase\n phase1 = n * np.tile(lt, (nmax-1,1)).T\n phase2 = np.tile(gt, (nmax-1,1)).T\n phasep = phase1 + 2*phase2\n phasem = phase1 - 2*phase2\n\n # intermediate terms\n sp = np.sin(phasem)/(n*omega-2*gammadot) + \\\n np.sin(phasep)/(n*omega+2*gammadot)\n sm = np.sin(phasem)/(n*omega-2*gammadot) - \\\n np.sin(phasep)/(n*omega+2*gammadot)\n cp = np.cos(phasem)/(n*omega-2*gammadot) + \\\n np.cos(phasep)/(n*omega+2*gammadot)\n cm = np.cos(phasem)/(n*omega-2*gammadot) - \\\n np.cos(phasep)/(n*omega+2*gammadot)\n \n\n splus_n = -0.5 * (1+np.cos(inc)**2) * (an*sp - bn*sm) + \\\n (1-np.cos(inc)**2)*cn * np.sin(phase1)\n scross_n = np.cos(inc) * (an*cm - bn*cp)\n \n\n return np.sum(splus_n, 
axis=1), np.sum(scross_n, axis=1)", "response": "Calculates the splus and scross summed over all harmonics."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run(LogLikelihood,\n Prior,\n n_dims,\n n_params = None,\n n_clustering_params = None, wrapped_params = None,\n importance_nested_sampling = True,\n multimodal = True, const_efficiency_mode = False, n_live_points = 400,\n evidence_tolerance = 0.5, sampling_efficiency = 0.8,\n n_iter_before_update = 100, null_log_evidence = -1e90,\n max_modes = 100, mode_tolerance = -1e90,\n outputfiles_basename = \"./multinest-\", seed = -1, verbose = False,\n resume = True, context = None, write_output = True, log_zero = -1e100,\n max_iter = 0, init_MPI = True, dump_callback = None):\n \"\"\"\n Runs MultiNest\n\n The most important parameters are the two log-probability functions Prior\n and LogLikelihood. They are called by MultiNest.\n\n Prior should transform the unit cube into the parameter cube. Here\n is an example for a uniform prior::\n\n def Prior(cube, ndim, nparams):\n for i in range(ndim):\n cube[i] = cube[i] * 10 * math.pi\n\n The LogLikelihood function gets this parameter cube and should\n return the logarithm of the likelihood.\n Here is the example for the eggbox problem::\n\n def Loglike(cube, ndim, nparams):\n chi = 1.\n\n for i in range(ndim):\n chi *= math.cos(cube[i] / 2.)\n return math.pow(2. + chi, 5)\n\n Some of the parameters are explained below. Otherwise consult the\n MultiNest documentation.\n\n @param importance_nested_sampling:\n If True, Multinest will use Importance Nested Sampling (INS). Read http://arxiv.org/abs/1306.2144\n for more details on INS. Please read the MultiNest README file before using the INS in MultiNest v3.0.\n\n @param n_params:\n Total no. 
of parameters, should be equal to ndims in most cases\n but if you need to store some additional\n parameters with the actual parameters then you need to pass\n them through the likelihood routine.\n\n @param sampling_efficiency:\n defines the sampling efficiency. 0.8 and 0.3 are recommended\n for parameter estimation & evidence evalutation\n respectively.\n use 'parameter' or 'model' to select the respective default\n values\n\n @param mode_tolerance:\n MultiNest can find multiple modes & also specify which samples belong to which mode. It might be\n desirable to have separate samples & mode statistics for modes with local log-evidence value greater than a\n particular value in which case Ztol should be set to that value. If there isn't any particularly interesting\n Ztol value, then Ztol should be set to a very large negative number (e.g. -1e90).\n\n @param evidence_tolerance:\n A value of 0.5 should give good enough accuracy.\n\n @param n_clustering_params:\n If mmodal is T, MultiNest will attempt to separate out the\n modes. Mode separation is done through a clustering\n algorithm. Mode separation can be done on all the parameters\n (in which case nCdims should be set to ndims) & it\n can also be done on a subset of parameters (in which case\n nCdims < ndims) which might be advantageous as\n clustering is less accurate as the dimensionality increases.\n If nCdims < ndims then mode separation is done on\n the first nCdims parameters.\n\n @param null_log_evidence:\n If mmodal is T, MultiNest can find multiple modes & also specify\n which samples belong to which mode. It might be\n desirable to have separate samples & mode statistics for modes\n with local log-evidence value greater than a\n particular value in which case nullZ should be set to that\n value. If there isn't any particulrly interesting\n nullZ value, then nullZ should be set to a very large negative\n number (e.g. 
-1.d90).\n\n @param init_MPI:\n initialize MPI routines?, relevant only if compiling with MPI\n\n @param log_zero:\n points with loglike < logZero will be ignored by MultiNest\n\n @param max_iter:\n maximum number of iterations. 0 is unlimited.\n\n @param write_output:\n write output files? This is required for analysis.\n\n @param dump_callback:\n a callback function for dumping the current status\n\n \"\"\"\n\n if n_params == None:\n n_params = n_dims\n if n_clustering_params == None:\n n_clustering_params = n_dims\n if wrapped_params == None:\n wrapped_params = [0] * n_dims\n\n WrappedType = c_int * len(wrapped_params)\n wraps = WrappedType(*wrapped_params)\n\n if sampling_efficiency == 'parameter':\n sampling_efficiency = 0.8\n if sampling_efficiency == 'model':\n sampling_efficiency = 0.3\n\n # MV 20130923\n\n loglike_type = CFUNCTYPE(c_double,\n POINTER(c_double),c_int,c_int,c_void_p)\n\n dumper_type = CFUNCTYPE(c_void_p,\n c_int,c_int,c_int,\n POINTER(c_double),POINTER(c_double),POINTER(c_double),\n c_double,c_double,c_double,c_void_p)\n\n if hasattr(LogLikelihood,'loglike') and hasattr(Prior,'remap') and hasattr(Prior,'prior'):\n def loglike(cube,ndim,nparams,nullcontext):\n # we're not using context with libstempo.like objects\n\n pprior = Prior.premap(cube)\n\n # mappers are supposed to throw a ValueError if they get out of range\n try:\n pars = Prior.remap(cube)\n except ValueError:\n return -N.inf\n\n prior = pprior * Prior.prior(pars)\n \n return -N.inf if not prior else math.log(prior) + LogLikelihood.loglike(pars)\n else:\n def loglike(cube,ndim,nparams,nullcontext):\n # it's actually easier to use the context, if any, at the Python level\n # and pass a null pointer to MultiNest...\n\n args = [cube,ndim,nparams] + ([] if context is None else context)\n\n if Prior:\n Prior(*args)\n\n return LogLikelihood(*args)\n\n def dumper(nSamples,nlive,nPar,\n physLive,posterior,paramConstr,\n maxLogLike,logZ,logZerr,nullcontext):\n\n if dump_callback:\n # It's 
not clear to me what the desired PyMultiNest dumper callback\n # syntax is... but this should pass back the right numpy arrays,\n # without copies. Untested!\n pc = as_array(paramConstr,shape=(nPar,4))\n\n dump_callback(nSamples,nlive,nPar,\n as_array(physLive,shape=(nPar+1,nlive)).T,\n as_array(posterior,shape=(nPar+2,nSamples)).T,\n (pc[0,:],pc[1,:],pc[2,:],pc[3,:]), # (mean,std,bestfit,map)\n maxLogLike,logZ,logZerr)\n\n # MV 20130923: currently we support only multinest 3.2 (24 parameters),\n # but it would not be a problem to build up the parameter list dynamically\n\n lib.run(c_bool(importance_nested_sampling),c_bool(multimodal),c_bool(const_efficiency_mode),\n c_int(n_live_points),c_double(evidence_tolerance),\n c_double(sampling_efficiency),c_int(n_dims),c_int(n_params),\n c_int(n_clustering_params),c_int(max_modes),\n c_int(n_iter_before_update),c_double(mode_tolerance),\n create_string_buffer(outputfiles_basename.encode()), # MV 20130923: need a regular C string\n c_int(seed),wraps,\n c_bool(verbose),c_bool(resume),\n c_bool(write_output),c_bool(init_MPI),\n c_double(log_zero),c_int(max_iter),\n loglike_type(loglike),dumper_type(dumper),\n c_void_p(0))", "response": "Runs the MultiNest algorithm."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef best_kmers(dt, response, sequence, k=6, consider_shift=True, n_cores=1,\n seq_align=\"start\", trim_seq_len=None):\n \"\"\"\n Find best k-mers for CONCISE initialization.\n\n Args:\n dt (pd.DataFrame): Table containing response variable and sequence.\n response (str): Name of the column used as the reponse variable.\n sequence (str): Name of the column storing the DNA/RNA sequences.\n k (int): Desired k-mer length.\n n_cores (int): Number of cores to use for computation. It can use up to 3 cores.\n consider_shift (boolean): When performing stepwise k-mer selection. Is TATTTA similar to ATTTAG?\n seq_align (str): one of ``{\"start\", \"end\"}``. 
To which end should we align sequences?\n trim_seq_len (int): Consider only first `trim_seq_len` bases of each sequence when generating the sequence design matrix. If :python:`None`, set :py:attr:`trim_seq_len` to the longest sequence length, hence whole sequences are considered.\n\n Returns:\n string list: Best set of motifs for this dataset sorted with respect to\n confidence (best candidate occuring first).\n\n\n Details:\n First a lasso model gets fitted to get a set of initial motifs. Next, the best\n subset of unrelated motifs is selected by stepwise selection.\n \"\"\"\n y = dt[response]\n seq = dt[sequence]\n\n if trim_seq_len is not None:\n seq = pad_sequences(seq, align=seq_align, maxlen=trim_seq_len)\n seq = [s.replace(\"N\", \"\") for s in seq]\n\n dt_kmer = kmer_count(seq, k)\n Xsp = csc_matrix(dt_kmer)\n en = ElasticNet(alpha=1, standardize=False, n_splits=3)\n en.fit(Xsp, y)\n # which coefficients are nonzero?=\n nonzero_kmers = dt_kmer.columns.values[en.coef_ != 0].tolist()\n\n # perform stepwise selection\n #\n # TODO - how do we deal with the intercept?\n\n # largest number of motifs where they don't differ by more than 1 k-mer\n def find_next_best(dt_kmer, y, selected_kmers, to_be_selected_kmers, consider_shift=True):\n \"\"\"\n perform stepwise model selection while preventing to add a motif similar to the\n already selected motifs.\n \"\"\"\n F, pval = f_regression(dt_kmer[to_be_selected_kmers], y)\n kmer = to_be_selected_kmers.pop(pval.argmin())\n selected_kmers.append(kmer)\n\n def select_criterion(s1, s2, consider_shift=True):\n if hamming_distance(s1, s2) <= 1:\n return False\n if consider_shift and hamming_distance(s1[1:], s2[:-1]) == 0:\n return False\n if consider_shift and hamming_distance(s1[:-1], s2[1:]) == 0:\n return False\n return True\n\n to_be_selected_kmers = [ckmer for ckmer in to_be_selected_kmers\n if select_criterion(ckmer, kmer, consider_shift)]\n if len(to_be_selected_kmers) == 0:\n return selected_kmers\n else:\n # 
regress out the new feature\n lm = LinearRegression()\n lm.fit(dt_kmer[selected_kmers], y)\n y_new = y - lm.predict(dt_kmer[selected_kmers])\n return find_next_best(dt_kmer, y_new, selected_kmers, to_be_selected_kmers, consider_shift)\n\n selected_kmers = find_next_best(dt_kmer, y, [], nonzero_kmers, consider_shift)\n\n return selected_kmers", "response": "This function finds the k - mers for CONCISE initialization."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ngenerate k - mer counts from a list of DNA sequences", "response": "def kmer_count(seq_list, k):\n \"\"\"\n Generate k-mer counts from a set of sequences\n\n Args:\n seq_list (iterable): List of DNA sequences (with letters from {A, C, G, T})\n k (int): K in k-mer.\n Returns:\n pandas.DataFrame: Count matrix for seach sequence in seq_list\n\n Example:\n >>> kmer_count([\"ACGTTAT\", \"GACGCGA\"], 2)\n AA AC AG AT CA CC CG CT GA GC GG GT TA TC TG TT\n 0 0 1 0 1 0 0 1 0 0 0 0 1 1 0 0 1\n 1 0 1 0 0 0 0 2 0 2 1 0 0 0 0 0 0\n \"\"\"\n # generate all k-mers\n all_kmers = generate_all_kmers(k)\n kmer_count_list = []\n for seq in seq_list:\n kmer_count_list.append([seq.count(kmer) for kmer in all_kmers])\n return pd.DataFrame(kmer_count_list, columns=all_kmers)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngenerates all possible k - mers in a sequence.", "response": "def generate_all_kmers(k):\n \"\"\"\n Generate all possible k-mers\n\n Example:\n >>> generate_all_kmers(2)\n ['AA', 'AC', 'AG', 'AT', 'CA', 'CC', 'CG', 'CT', 'GA', 'GC', 'GG', 'GT', 'TA', 'TC', 'TG', 'TT']\n \"\"\"\n bases = ['A', 'C', 'G', 'T']\n return [''.join(p) for p in itertools.product(bases, repeat=k)]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef dict_to_numpy_dict(obj_dict):\n return {key: np.asarray(value) if value is not None else None for key, value in obj_dict.items()}", "response": 
"Convert a dictionary of lists into a dictionary of numpy arrays\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef rec_dict_to_numpy_dict(obj_dict):\n if type(obj_dict) == dict:\n return {key: rec_dict_to_numpy_dict(value) if value is not None else None for key, value in obj_dict.items()}\n elif obj_dict is None:\n return None\n else:\n return np.asarray(obj_dict)", "response": "Recursively converts a dict to a numpy array."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef compare_numpy_dict(a, b, exact=True):\n if type(a) != type(b) and type(a) != np.ndarray and type(b) != np.ndarray:\n return False\n # go through a dictionary\n if type(a) == dict and type(b) == dict:\n if not a.keys() == b.keys():\n return False\n for key in a.keys():\n res = compare_numpy_dict(a[key], b[key], exact)\n if res == False:\n print(\"false for key = \", key)\n return False\n return True\n\n # if type(a) == np.ndarray and type(b) == np.ndarray:\n if type(a) == np.ndarray or type(b) == np.ndarray:\n if exact:\n return (a == b).all()\n else:\n return np.testing.assert_almost_equal(a, b)\n\n if a is None and b is None:\n return True\n\n raise NotImplementedError", "response": "Compare two recursive numpy dictionaries a and b."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_knots(start, end, n_bases=10, spline_order=3):\n x_range = end - start\n start = start - x_range * 0.001\n end = end + x_range * 0.001\n\n # mgcv annotation\n m = spline_order - 1\n nk = n_bases - m # number of interior knots\n\n dknots = (end - start) / (nk - 1)\n knots = np.linspace(start=start - dknots * (m + 1),\n stop=end + dknots * (m + 1),\n num=nk + 2 * m + 2)\n return knots.astype(np.float32)", "response": "Function to get the knots of a single knot in a single spline."} {"SOURCE": "codesearchnet", "instruction": "Given the 
following Python 3 function, write the documentation\ndef get_X_spline(x, knots, n_bases=10, spline_order=3, add_intercept=True):\n if len(x.shape) != 1:\n raise ValueError(\"x has to be 1 dimensional\")\n\n tck = [knots, np.zeros(n_bases), spline_order]\n\n X = np.zeros([len(x), n_bases])\n\n for i in range(n_bases):\n vec = np.zeros(n_bases)\n vec[i] = 1.0\n tck[1] = vec\n\n X[:, i] = si.splev(x, tck, der=0)\n\n if add_intercept is True:\n ones = np.ones_like(X[:, :1])\n X = np.hstack([ones, X])\n\n return X.astype(np.float32)", "response": "Return the B-spline design matrix of x evaluated at the given knots."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the penalty matrix S. Returns np.array of shape (n_bases + add_intercept, n_bases + add_intercept)", "response": "def getS(self, add_intercept=False):\n \"\"\"Get the penalty matrix S\n\n Returns\n np.array, of shape (n_bases + add_intercept, n_bases + add_intercept)\n \"\"\"\n S = self.S\n if add_intercept is True:\n # S <- cbind(0, rbind(0, S)) # in R\n zeros = np.zeros_like(S[:1, :])\n S = np.vstack([zeros, S])\n\n zeros = np.zeros_like(S[:, :1])\n S = np.hstack([zeros, S])\n return S"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_metadata():\n motifs = _load_motifs()\n\n motif_names = sorted(list(motifs.keys()))\n df = pd.Series(motif_names).str.split(expand=True)\n df.rename(columns={0: \"PWM_id\", 1: \"info1\", 2: \"info2\"}, inplace=True)\n\n # compute the consensus\n consensus = pd.Series([PWM(motifs[m]).get_consensus() for m in motif_names])\n df[\"consensus\"] = consensus\n return df", "response": "Get pandas. 
DataFrame with metadata about the PWMs."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_pwm_list(motif_name_list, pseudocountProb=0.0001):\n l = _load_motifs()\n l = {k.split()[0]: v for k, v in l.items()}\n pwm_list = [PWM(l[m] + pseudocountProb, name=m) for m in motif_name_list]\n return pwm_list", "response": "Get a list of ENCODE PWMs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef auc(y_true, y_pred, round=True):\n y_true, y_pred = _mask_value_nan(y_true, y_pred)\n\n if round:\n y_true = y_true.round()\n if len(y_true) == 0 or len(np.unique(y_true)) < 2:\n return np.nan\n return skm.roc_auc_score(y_true, y_pred)", "response": "Area under the ROC curve"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef recall_at_precision(y_true, y_pred, precision):\n y_true, y_pred = _mask_value_nan(y_true, y_pred)\n precision_curve, recall, _ = skm.precision_recall_curve(y_true, y_pred)\n return recall[np.searchsorted(precision_curve - precision, 0)]", "response": "Recall at a given precision threshold."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncompute the true positive rate tp / (tp + fn)", "response": "def tpr(y_true, y_pred, round=True):\n \"\"\"True positive rate `tp / (tp + fn)`\n \"\"\"\n y_true, y_pred = _mask_value_nan(y_true, y_pred)\n if round:\n y_true = np.round(y_true)\n y_pred = np.round(y_pred)\n return skm.recall_score(y_true, y_pred)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncomputes the true negative rate tn / (tn + fp)", "response": "def tnr(y_true, y_pred, round=True):\n \"\"\"True negative rate `tn / (tn + fp)`\n \"\"\"\n y_true, y_pred = _mask_value_nan(y_true, y_pred)\n if round:\n y_true = np.round(y_true)\n y_pred = np.round(y_pred)\n c = skm.confusion_matrix(y_true, y_pred)\n return c[0, 0] / c[0].sum()"} 
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef f1(y_true, y_pred, round=True):\n y_true, y_pred = _mask_value_nan(y_true, y_pred)\n if round:\n y_true = np.round(y_true)\n y_pred = np.round(y_pred)\n return skm.f1_score(y_true, y_pred)", "response": "F1 score."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cor(y_true, y_pred):\n y_true, y_pred = _mask_nan(y_true, y_pred)\n return np.corrcoef(y_true, y_pred)[0, 1]", "response": "Compute Pearson correlation coefficient."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef mad(y_true, y_pred):\n y_true, y_pred = _mask_nan(y_true, y_pred)\n return np.mean(np.abs(y_true - y_pred))", "response": "Mean absolute deviation of the data."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef mse(y_true, y_pred):\n y_true, y_pred = _mask_nan(y_true, y_pred)\n return ((y_true - y_pred) ** 2).mean(axis=None)", "response": "Mean squared error."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncomputing the fraction of variance explained.", "response": "def var_explained(y_true, y_pred):\n \"\"\"Fraction of variance explained.\n \"\"\"\n y_true, y_pred = _mask_nan(y_true, y_pred)\n var_resid = np.var(y_true - y_pred)\n var_y_true = np.var(y_true)\n return 1 - var_resid / var_y_true"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef contingency_table(y, z):\n y = K.cast(K.round(y), K.floatx())\n z = K.cast(K.round(z), K.floatx())\n\n def count_matches(y, z):\n return K.sum(K.cast(y, K.floatx()) * K.cast(z, K.floatx()))\n\n ones = K.ones_like(y)\n zeros = K.zeros_like(y)\n y_ones = K.equal(y, ones)\n y_zeros = K.equal(y, zeros)\n z_ones = K.equal(z, ones)\n z_zeros = K.equal(z, zeros)\n\n tp = 
count_matches(y_ones, z_ones)\n tn = count_matches(y_zeros, z_zeros)\n fp = count_matches(y_zeros, z_ones)\n fn = count_matches(y_ones, z_zeros)\n\n return (tp, tn, fp, fn)", "response": "Return the contingency table for a given set of y and z."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef tpr(y, z):\n tp, tn, fp, fn = contingency_table(y, z)\n return tp / (tp + fn)", "response": "True positive rate y z"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntruing negative rate y / z", "response": "def tnr(y, z):\n \"\"\"True negative rate `tn / (tn + fp)`\n \"\"\"\n tp, tn, fp, fn = contingency_table(y, z)\n return tn / (tn + fp)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef accuracy(y, z):\n tp, tn, fp, fn = contingency_table(y, z)\n return (tp + tn) / (tp + tn + fp + fn)", "response": "Classification accuracy `(tp + tn) / (tp + tn + fp + fn)`"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef f1(y, z):\n _recall = recall(y, z)\n _prec = precision(y, z)\n return 2 * (_prec * _recall) / (_prec + _recall)", "response": "F1 score of a single node in a tree."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef cat_acc(y, z):\n weights = _cat_sample_weights(y)\n _acc = K.cast(K.equal(K.argmax(y, axis=-1),\n K.argmax(z, axis=-1)),\n K.floatx())\n _acc = K.sum(_acc * weights) / K.sum(weights)\n return _acc", "response": "Classification accuracy for multi - categorical case"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef var_explained(y_true, y_pred):\n var_resid = K.var(y_true - y_pred)\n var_y_true = K.var(y_true)\n return 1 - var_resid / var_y_true", "response": "Fraction of variance explained."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 
3 script for\ngetting k - fold indices generator", "response": "def split_KFold_idx(train, cv_n_folds=5, stratified=False, random_state=None):\n \"\"\"Get k-fold indices generator\n \"\"\"\n test_len(train)\n y = train[1]\n n_rows = y.shape[0]\n if stratified:\n if len(y.shape) > 1:\n if y.shape[1] > 1:\n raise ValueError(\"Can't use stratified K-fold with multi-column response variable\")\n else:\n y = y[:, 0]\n # http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html#sklearn.model_selection.StratifiedKFold.split\n return model_selection.StratifiedKFold(n_splits=cv_n_folds, shuffle=True, random_state=random_state)\\\n .split(X=np.zeros((n_rows, 1)), y=y)\n else:\n return model_selection.KFold(n_splits=cv_n_folds, shuffle=True, random_state=random_state)\\\n .split(X=np.zeros((n_rows, 1)))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef prepare_data(dt, features, response, sequence, id_column=None, seq_align=\"end\", trim_seq_len=None):\n if type(response) is str:\n response = [response]\n\n X_feat = np.array(dt[features], dtype=\"float32\")\n y = np.array(dt[response], dtype=\"float32\")\n X_seq = encodeDNA(seq_vec=dt[sequence],\n maxlen=trim_seq_len,\n seq_align=seq_align)\n X_seq = np.array(X_seq, dtype=\"float32\")\n id_vec = np.array(dt[id_column])\n\n return X_feat, X_seq, y, id_vec", "response": "Prepare data for Concise. train or ConciseCV. 
train."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _trunc(x, minval=None, maxval=None):\n x = np.copy(x)\n if minval is not None:\n x[x < minval] = minval\n if maxval is not None:\n x[x > maxval] = maxval\n return x", "response": "Truncate vector values to have values on range [ minval maxval )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fit(self, x):\n assert x.ndim > 1\n self.data_min_ = np.min(x, axis=tuple(range(x.ndim - 1)))\n self.data_max_ = np.max(x, axis=tuple(range(x.ndim - 1)))\n\n if self.share_knots:\n self.data_min_[:] = np.min(self.data_min_)\n self.data_max_[:] = np.max(self.data_max_)", "response": "Fit the knot placement to the data."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef transform(self, x, warn=True):\n # 1. split across last dimension\n # 2. re-use ranges\n # 3. Merge\n array_list = [encodeSplines(x[..., i].reshape((-1, 1)),\n n_bases=self.n_bases,\n spline_order=self.degree,\n warn=warn,\n start=self.data_min_[i],\n end=self.data_max_[i]).reshape(x[..., i].shape + (self.n_bases,))\n for i in range(x.shape[-1])]\n return np.stack(array_list, axis=-2)", "response": "Transform a set of data into a list of values."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef InputCodon(seq_length, ignore_stop_codons=True, name=None, **kwargs):\n if ignore_stop_codons:\n vocab = CODONS\n else:\n vocab = CODONS + STOP_CODONS\n\n assert seq_length % 3 == 0\n return Input((seq_length / 3, len(vocab)), name=name, **kwargs)", "response": "Input placeholder for array returned by encodeCodon."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ninputting placeholder for array returned by encodeAA.", "response": "def InputAA(seq_length, name=None, **kwargs):\n \"\"\"Input placeholder for 
array returned by `encodeAA`\n\n Wrapper for: `keras.layers.Input((seq_length, 22), name=name, **kwargs)`\n \"\"\"\n return Input((seq_length, len(AMINO_ACIDS)), name=name, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ninputs wrapper for encodeRNAStructure Wrapper for keras. layers. Input", "response": "def InputRNAStructure(seq_length, name=None, **kwargs):\n \"\"\"Input placeholder for array returned by `encodeRNAStructure`\n\n Wrapper for: `keras.layers.Input((seq_length, 5), name=name, **kwargs)`\n \"\"\"\n return Input((seq_length, len(RNAplfold_PROFILES)), name=name, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef InputSplines(seq_length, n_bases=10, name=None, **kwargs):\n return Input((seq_length, n_bases), name=name, **kwargs)", "response": "Input placeholder for array returned by encodeSplines."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef InputSplines1D(seq_length, n_bases=10, name=None, **kwargs):\n return Input((seq_length, n_bases), name=name, **kwargs)", "response": "Input wrapper for encodeSplines."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ninput wrapper around keras. layers. Input class.", "response": "def InputDNAQuantity(seq_length, n_features=1, name=None, **kwargs):\n \"\"\"Convenience wrapper around `keras.layers.Input`:\n\n `Input((seq_length, n_features), name=name, **kwargs)`\n \"\"\"\n return Input((seq_length, n_features), name=name, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef InputDNAQuantitySplines(seq_length, n_bases=10, name=\"DNASmoothPosition\", **kwargs):\n return Input((seq_length, n_bases), name=name, **kwargs)", "response": "Convenience wrapper around keras. layers. 
Input"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nplot weights as a heatmap", "response": "def _plot_weights_heatmap(self, index=None, figsize=None, **kwargs):\n \"\"\"Plot weights as a heatmap\n\n index = can be a particular index or a list of indicies\n **kwargs - additional arguments to concise.utils.plot.heatmap\n \"\"\"\n W = self.get_weights()[0]\n if index is None:\n index = np.arange(W.shape[2])\n\n fig = heatmap(np.swapaxes(W[:, :, index], 0, 1), plot_name=\"filter: \",\n vocab=self.VOCAB, figsize=figsize, **kwargs)\n # plt.show()\n return fig"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _plot_weights_motif(self, index, plot_type=\"motif_raw\",\n background_probs=DEFAULT_BASE_BACKGROUND,\n ncol=1,\n figsize=None):\n \"\"\"Index can only be a single int\n \"\"\"\n\n w_all = self.get_weights()\n\n if len(w_all) == 0:\n raise Exception(\"Layer needs to be initialized first\")\n W = w_all[0]\n if index is None:\n index = np.arange(W.shape[2])\n\n if isinstance(index, int):\n index = [index]\n fig = plt.figure(figsize=figsize)\n\n if plot_type == \"motif_pwm\" and plot_type in self.AVAILABLE_PLOTS:\n arr = pssm_array2pwm_array(W, background_probs)\n elif plot_type == \"motif_raw\" and plot_type in self.AVAILABLE_PLOTS:\n arr = W\n elif plot_type == \"motif_pwm_info\" and plot_type in self.AVAILABLE_PLOTS:\n quasi_pwm = pssm_array2pwm_array(W, background_probs)\n arr = _pwm2pwm_info(quasi_pwm)\n else:\n raise ValueError(\"plot_type needs to be from {0}\".format(self.AVAILABLE_PLOTS))\n\n fig = seqlogo_fig(arr, vocab=self.VOCAB_name, figsize=figsize, ncol=ncol, plot_name=\"filter: \")\n # fig.show()\n return fig", "response": "Plot the motifs for a single entry in the sequence."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nplotting the weights of the current object.", "response": "def plot_weights(self, index=None, 
plot_type=\"motif_raw\", figsize=None, ncol=1, **kwargs):\n \"\"\"Plot filters as heatmap or motifs\n\n index = can be a particular index or a list of indicies\n **kwargs - additional arguments to concise.utils.plot.heatmap\n \"\"\"\n\n if \"heatmap\" in self.AVAILABLE_PLOTS and plot_type == \"heatmap\":\n return self._plot_weights_heatmap(index=index, figsize=figsize, ncol=ncol, **kwargs)\n elif plot_type[:5] == \"motif\":\n return self._plot_weights_motif(index=index, plot_type=plot_type, figsize=figsize, ncol=ncol, **kwargs)\n else:\n raise ValueError(\"plot_type needs to be from {0}\".format(self.AVAILABLE_PLOTS))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _check_pwm_list(pwm_list):\n for pwm in pwm_list:\n if not isinstance(pwm, PWM):\n raise TypeError(\"element {0} of pwm_list is not of type PWM\".format(pwm))\n return True", "response": "Check the input validity of the PWM list."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds noise with truncnorm from numpy. random. truncnorm from numpy. random. 
truncnorm", "response": "def _truncated_normal(mean,\n stddev,\n seed=None,\n normalize=True,\n alpha=0.01):\n ''' Add noise with truncnorm from numpy.\n Bounded (0.001,0.999)\n '''\n # within range ()\n # provide entry to chose which adding noise way to use\n if seed is not None:\n np.random.seed(seed)\n if stddev == 0:\n X = mean\n else:\n gen_X = truncnorm((alpha - mean) / stddev,\n ((1 - alpha) - mean) / stddev,\n loc=mean, scale=stddev)\n X = gen_X.rvs() + mean\n if normalize:\n # Normalize, column sum to 1\n col_sums = X.sum(1)\n X = X / col_sums[:, np.newaxis]\n return X"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nplotting a heatmap from weight matrix w vmin vmax diverge_color = True will plot a heatmap from the first axis.", "response": "def heatmap(w, vmin=None, vmax=None, diverge_color=False,\n ncol=1,\n plot_name=None, vocab=[\"A\", \"C\", \"G\", \"T\"], figsize=(6, 2)):\n \"\"\"Plot a heatmap from weight matrix w\n\n vmin, vmax = z axis range\n diverge_color = Should we use diverging colors?\n plot_name = plot_title\n vocab = vocabulary (corresponds to the first axis)\n \"\"\"\n # Generate y and x values from the dimension lengths\n assert len(vocab) == w.shape[0]\n plt_y = np.arange(w.shape[0] + 1) + 0.5\n plt_x = np.arange(w.shape[1] + 1) - 0.5\n z_min = w.min()\n z_max = w.max()\n\n if vmin is None:\n vmin = z_min\n if vmax is None:\n vmax = z_max\n\n if diverge_color:\n color_map = plt.cm.RdBu\n else:\n color_map = plt.cm.Blues\n\n fig = plt.figure(figsize=figsize)\n # multiple axis\n if len(w.shape) == 3:\n #\n n_plots = w.shape[2]\n nrow = math.ceil(n_plots / ncol)\n else:\n n_plots = 1\n nrow = 1\n ncol = 1\n\n for i in range(n_plots):\n if len(w.shape) == 3:\n w_cur = w[:, :, i]\n else:\n w_cur = w\n ax = plt.subplot(nrow, ncol, i + 1)\n plt.tight_layout()\n im = ax.pcolormesh(plt_x, plt_y, w_cur, cmap=color_map,\n vmin=vmin, vmax=vmax, edgecolors=\"white\")\n ax.grid(False)\n ax.set_yticklabels([\"\"] + vocab, 
minor=False)\n ax.yaxis.set_major_locator(MaxNLocator(integer=True))\n ax.set_xticks(np.arange(w_cur.shape[1] + 1))\n ax.set_xlim(plt_x.min(), plt_x.max())\n ax.set_ylim(plt_y.min(), plt_y.max())\n\n # nice scale location:\n # http://stackoverflow.com/questions/18195758/set-matplotlib-colorbar-size-to-match-graph\n divider = make_axes_locatable(ax)\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.05)\n fig.colorbar(im, cax=cax)\n\n if plot_name is not None:\n if n_plots > 0:\n pln = plot_name + \" {0}\".format(i)\n else:\n pln = plot_name\n ax.set_title(pln)\n ax.set_aspect('equal')\n return fig"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef standardize_polygons_str(data_str):\n # find all of the polygons in the letter (for instance an A\n # needs to be constructed from 2 polygons)\n path_strs = re.findall(\"\\(\\(([^\\)]+?)\\)\\)\", data_str.strip())\n\n # convert the data into a numpy array\n polygons_data = []\n for path_str in path_strs:\n data = np.array([\n tuple(map(float, x.split())) for x in path_str.strip().split(\",\")])\n polygons_data.append(data)\n\n # standardize the coordinates\n min_coords = np.vstack(data.min(0) for data in polygons_data).min(0)\n max_coords = np.vstack(data.max(0) for data in polygons_data).max(0)\n for data in polygons_data:\n data[:, ] -= min_coords\n data[:, ] /= (max_coords - min_coords)\n\n polygons = []\n for data in polygons_data:\n polygons.append(load_wkt(\n \"POLYGON((%s))\" % \",\".join(\" \".join(map(str, x)) for x in data)))\n\n return tuple(polygons)", "response": "Given a POLYGON string standardize the coordinates to a 1x1 grid."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a letter to matplotlib axis ax.", "response": "def add_letter_to_axis(ax, let, col, x, y, height):\n \"\"\"Add 'let' with position x,y and height height to matplotlib axis 'ax'.\n \"\"\"\n if len(let) == 2:\n colors = [col, 
\"white\"]\n elif len(let) == 1:\n colors = [col]\n else:\n raise ValueError(\"3 or more Polygons are not supported\")\n\n for polygon, color in zip(let, colors):\n new_polygon = affinity.scale(\n polygon, yfact=height, origin=(0, 0, 0))\n new_polygon = affinity.translate(\n new_polygon, xoff=x, yoff=y)\n patch = PolygonPatch(\n new_polygon, edgecolor=color, facecolor=color)\n ax.add_patch(patch)\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef seqlogo(letter_heights, vocab=\"DNA\", ax=None):\n ax = ax or plt.gca()\n\n assert letter_heights.shape[1] == len(VOCABS[vocab])\n x_range = [1, letter_heights.shape[0]]\n pos_heights = np.copy(letter_heights)\n pos_heights[letter_heights < 0] = 0\n neg_heights = np.copy(letter_heights)\n neg_heights[letter_heights > 0] = 0\n\n for x_pos, heights in enumerate(letter_heights):\n letters_and_heights = sorted(zip(heights, list(VOCABS[vocab].keys())))\n y_pos_pos = 0.0\n y_neg_pos = 0.0\n for height, letter in letters_and_heights:\n color = VOCABS[vocab][letter]\n polygons = letter_polygons[letter]\n if height > 0:\n add_letter_to_axis(ax, polygons, color, 0.5 + x_pos, y_pos_pos, height)\n y_pos_pos += height\n else:\n add_letter_to_axis(ax, polygons, color, 0.5 + x_pos, y_neg_pos, height)\n y_neg_pos += height\n\n # if add_hline:\n # ax.axhline(color=\"black\", linewidth=1)\n ax.set_xlim(x_range[0] - 1, x_range[1] + 1)\n ax.grid(False)\n ax.set_xticks(list(range(*x_range)) + [x_range[-1]])\n ax.set_aspect(aspect='auto', adjustable='box')\n ax.autoscale_view()", "response": "Make a sequence logo plot."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_cv_accuracy(res):\n ac_list = [(accuracy[\"train_acc_final\"],\n accuracy[\"test_acc_final\"]\n )\n for accuracy, weights in res]\n ac = np.array(ac_list)\n\n perf = {\n \"mean_train_acc\": np.mean(ac[:, 0]),\n \"std_train_acc\": 
np.std(ac[:, 0]),\n \"mean_test_acc\": np.mean(ac[:, 1]),\n \"std_test_acc\": np.std(ac[:, 1]),\n\n }\n return perf", "response": "Extract the cv accuracy from the model\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert a one - hot encoded array back to a string", "response": "def one_hot2string(arr, vocab):\n \"\"\"Convert a one-hot encoded array back to string\n \"\"\"\n tokens = one_hot2token(arr)\n indexToLetter = _get_index_dict(vocab)\n\n return [''.join([indexToLetter[x] for x in row]) for row in tokens]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef tokenize(seq, vocab, neutral_vocab=[]):\n # Req: all vocabs have the same length\n if isinstance(neutral_vocab, str):\n neutral_vocab = [neutral_vocab]\n\n nchar = len(vocab[0])\n for l in vocab + neutral_vocab:\n assert len(l) == nchar\n assert len(seq) % nchar == 0 # since we are using striding\n\n vocab_dict = _get_vocab_dict(vocab)\n for l in neutral_vocab:\n vocab_dict[l] = -1\n\n # current performance bottleneck\n return [vocab_dict[seq[(i * nchar):((i + 1) * nchar)]] for i in range(len(seq) // nchar)]", "response": "Convert sequence to integers\nsgi"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef token2one_hot(tvec, vocab_size):\n arr = np.zeros((len(tvec), vocab_size))\n\n tvec_range = np.arange(len(tvec))\n tvec = np.asarray(tvec)\n arr[tvec_range[tvec >= 0], tvec[tvec >= 0]] = 1\n return arr", "response": "Convert a token vector into one - hot array."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert a list of genetic sequences into one - hot - encoded array.", "response": "def encodeSequence(seq_vec, vocab, neutral_vocab, maxlen=None,\n seq_align=\"start\", pad_value=\"N\", encode_type=\"one_hot\"):\n \"\"\"Convert a list of genetic sequences into one-hot-encoded array.\n\n # 
Arguments\n seq_vec: list of strings (genetic sequences)\n vocab: list of chars: List of \"words\" to use as the vocabulary. Can be strings of length>0,\n but all need to have the same length. For DNA, this is: [\"A\", \"C\", \"G\", \"T\"].\n neutral_vocab: list of chars: Values used to pad the sequence or represent unknown-values. For DNA, this is: [\"N\"].\n maxlen: int or None,\n Should we trim (subset) the resulting sequence. If None don't trim.\n Note that trims wrt the align parameter.\n It should be smaller than the longest sequence.\n seq_align: character; 'end' or 'start'\n To which end should we align sequences?\n encode_type: \"one_hot\" or \"token\". \"token\" represents each vocab element as a positive integer from 1 to len(vocab) + 1.\n neutral_vocab is represented with 0.\n\n # Returns\n Array with shape for encode_type:\n\n - \"one_hot\": `(len(seq_vec), maxlen, len(vocab))`\n - \"token\": `(len(seq_vec), maxlen)`\n\n If `maxlen=None`, it gets the value of the longest sequence length from `seq_vec`.\n \"\"\"\n if isinstance(neutral_vocab, str):\n neutral_vocab = [neutral_vocab]\n if isinstance(seq_vec, str):\n raise ValueError(\"seq_vec should be an iterable returning \" +\n \"strings not a string itself\")\n assert len(vocab[0]) == len(pad_value)\n assert pad_value in neutral_vocab\n\n assert encode_type in [\"one_hot\", \"token\"]\n\n seq_vec = pad_sequences(seq_vec, maxlen=maxlen,\n align=seq_align, value=pad_value)\n\n if encode_type == \"one_hot\":\n arr_list = [token2one_hot(tokenize(seq, vocab, neutral_vocab), len(vocab))\n for i, seq in enumerate(seq_vec)]\n elif encode_type == \"token\":\n arr_list = [1 + np.array(tokenize(seq, vocab, neutral_vocab)) for seq in seq_vec]\n # we add 1 to be compatible with keras: https://keras.io/layers/embeddings/\n # indexes > 0, 0 = padding element\n\n return np.stack(arr_list)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef encodeDNA(seq_vec, 
maxlen=None, seq_align=\"start\"):\n return encodeSequence(seq_vec,\n vocab=DNA,\n neutral_vocab=\"N\",\n maxlen=maxlen,\n seq_align=seq_align,\n pad_value=\"N\",\n encode_type=\"one_hot\")", "response": "Convert the DNA sequence into 1 - hot - encoding numpy array"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts the RNA sequence into 1 - hot - encoding numpy array as for encodeDNA", "response": "def encodeRNA(seq_vec, maxlen=None, seq_align=\"start\"):\n \"\"\"Convert the RNA sequence into 1-hot-encoding numpy array as for encodeDNA\n \"\"\"\n return encodeSequence(seq_vec,\n vocab=RNA,\n neutral_vocab=\"N\",\n maxlen=maxlen,\n seq_align=seq_align,\n pad_value=\"N\",\n encode_type=\"one_hot\")"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconverts the Codon sequence into 1 - hot - encoding numpy array.", "response": "def encodeCodon(seq_vec, ignore_stop_codons=True, maxlen=None, seq_align=\"start\", encode_type=\"one_hot\"):\n \"\"\"Convert the Codon sequence into 1-hot-encoding numpy array\n\n # Arguments\n seq_vec: List of strings/DNA sequences\n ignore_stop_codons: boolean; if True, STOP_CODONS are omitted from one-hot encoding.\n maxlen: Maximum sequence length. See `pad_sequences` for more detail\n seq_align: How to align the sequences of variable lengths. 
See `pad_sequences` for more detail\n        encode_type: can be `\"one_hot\"` or `\"token\"` for token encoding of codons (incremental integers).\n\n    # Returns\n        numpy.ndarray of shape `(len(seq_vec), maxlen / 3, 61 if ignore_stop_codons else 64)`\n    \"\"\"\n    if ignore_stop_codons:\n        vocab = CODONS\n        neutral_vocab = STOP_CODONS + [\"NNN\"]\n    else:\n        vocab = CODONS + STOP_CODONS\n        neutral_vocab = [\"NNN\"]\n\n    # replace all U's with T's\n    seq_vec = [str(seq).replace(\"U\", \"T\") for seq in seq_vec]\n\n    return encodeSequence(seq_vec,\n                          vocab=vocab,\n                          neutral_vocab=neutral_vocab,\n                          maxlen=maxlen,\n                          seq_align=seq_align,\n                          pad_value=\"NNN\",\n                          encode_type=encode_type)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef encodeAA(seq_vec, maxlen=None, seq_align=\"start\", encode_type=\"one_hot\"):\n    return encodeSequence(seq_vec,\n                          vocab=AMINO_ACIDS,\n                          neutral_vocab=\"_\",\n                          maxlen=maxlen,\n                          seq_align=seq_align,\n                          pad_value=\"_\",\n                          encode_type=encode_type)", "response": "Convert an amino-acid sequence into a 1-hot-encoding numpy array."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a list of sequences padded/trimmed to a common length.", "response": "def pad_sequences(sequence_vec, maxlen=None, align=\"end\", value=\"N\"):\n    \"\"\"Pad and/or trim a list of sequences to have a common length. Procedure:\n\n        1. Pad the sequence with N's or any other string or list element (`value`)\n        2. Subset the sequence\n\n    # Note\n        See also: https://keras.io/preprocessing/sequence/\n        Applicable also for lists of characters\n\n    # Arguments\n        sequence_vec: list of chars or lists\n            List of sequences that can have various lengths\n        value: Neutral element to pad the sequence with.
Can be `str` or `list`.\n        maxlen: int or None; Final length of sequences.\n            If None, maxlen is set to the longest sequence length.\n        align: character; 'start', 'end' or 'center'\n            To which end to align the sequences when trimming/padding. See examples below.\n\n    # Returns\n        List of sequences of the same class as sequence_vec\n\n    # Example\n\n        ```python\n            >>> sequence_vec = ['CTTACTCAGA', 'TCTTTA']\n            >>> pad_sequences(sequence_vec, 10, align=\"start\", value=\"N\")\n            ['CTTACTCAGA', 'TCTTTANNNN']\n            >>> pad_sequences(sequence_vec, 10, align=\"end\", value=\"N\")\n            ['CTTACTCAGA', 'NNNNTCTTTA']\n            >>> pad_sequences(sequence_vec, 4, align=\"center\", value=\"N\")\n            ['ACTC', 'CTTT']\n        ```\n    \"\"\"\n\n    # neutral element type checking\n    assert isinstance(value, list) or isinstance(value, str)\n    assert isinstance(value, type(sequence_vec[0]))\n    assert not isinstance(sequence_vec, str)\n    assert isinstance(sequence_vec[0], list) or isinstance(sequence_vec[0], str)\n\n    max_seq_len = max([len(seq) for seq in sequence_vec])\n\n    if maxlen is None:\n        maxlen = max_seq_len\n    else:\n        maxlen = int(maxlen)\n\n    if max_seq_len < maxlen:\n        print(\"WARNING: Maximum sequence length (%s) is less than maxlen (%s)\" % (max_seq_len, maxlen))\n        max_seq_len = maxlen\n\n    # check the case when len > 1\n    for seq in sequence_vec:\n        if not len(seq) % len(value) == 0:\n            raise ValueError(\"All sequences need to be divisible by len(value)\")\n    if not maxlen % len(value) == 0:\n        raise ValueError(\"maxlen needs to be divisible by len(value)\")\n\n    # pad and subset\n    def pad(seq, max_seq_len, value=\"N\", align=\"end\"):\n        seq_len = len(seq)\n        assert max_seq_len >= seq_len\n        if align == \"end\":\n            n_left = max_seq_len - seq_len\n            n_right = 0\n        elif align == \"start\":\n            n_right = max_seq_len - seq_len\n            n_left = 0\n        elif align == \"center\":\n            n_left = (max_seq_len - seq_len) // 2 + (max_seq_len - seq_len) % 2\n            n_right = (max_seq_len - seq_len) // 2\n        else:\n            raise ValueError(\"align can be of: end, start or
center\")\n\n        # normalize for the length\n        n_left = n_left // len(value)\n        n_right = n_right // len(value)\n\n        return value * n_left + seq + value * n_right\n\n    def trim(seq, maxlen, align=\"end\"):\n        seq_len = len(seq)\n\n        assert maxlen <= seq_len\n        if align == \"end\":\n            return seq[-maxlen:]\n        elif align == \"start\":\n            return seq[0:maxlen]\n        elif align == \"center\":\n            dl = seq_len - maxlen\n            n_left = dl // 2 + dl % 2\n            n_right = seq_len - dl // 2\n            return seq[n_left:n_right]\n        else:\n            raise ValueError(\"align can be of: end, start or center\")\n\n    padded_sequence_vec = [pad(seq, max(max_seq_len, maxlen),\n                               value=value, align=align) for seq in sequence_vec]\n    padded_sequence_vec = [trim(seq, maxlen, align=align) for seq in padded_sequence_vec]\n\n    return padded_sequence_vec"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef extract_landmarks(gtf, landmarks=ALL_LANDMARKS):\n    if isinstance(gtf, str):\n        _logger.info(\"Reading gtf file..\")\n        gtf = read_gtf(gtf)\n        _logger.info(\"Done\")\n\n    _logger.info(\"Running landmark extractors..\")\n    # landmarks to a dictionary with a function\n    assert isinstance(landmarks, (list, tuple, set, dict))\n    if isinstance(landmarks, dict):\n        landmarks = {k: _get_fun(v) for k, v in landmarks.items()}\n    else:\n        landmarks = {_to_string(fn_str): _get_fun(fn_str)\n                     for fn_str in landmarks}\n\n    r = {k: _validate_pos(v(gtf)) for k, v in landmarks.items()}\n    _logger.info(\"Done!\")\n    return r", "response": "Given a GTF file and a list or dict of landmark extractors, extract the landmark positions from that file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _validate_pos(df):\n    assert isinstance(df, pd.DataFrame)\n    assert [\"seqname\", \"position\", \"strand\"] == df.columns.tolist()\n    assert df.position.dtype == np.dtype(\"int64\")\n    assert df.strand.dtype == np.dtype(\"O\")\n    assert df.seqname.dtype ==
np.dtype(\"O\")\n    return df", "response": "Validate the returned positional DataFrame."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndefines a Huber loss.", "response": "def huber_loss(tensor, k=1, scope=None):\n    \"\"\"Define a huber loss https://en.wikipedia.org/wiki/Huber_loss\n    tensor: tensor to regularize.\n    k: value of k in the huber loss\n    scope: Optional scope for op_scope.\n\n    Huber loss:\n    f(x) = if |x| <= k:\n              0.5 * x^2\n           else:\n              k * |x| - 0.5 * k^2\n\n    Returns:\n        the Huber loss op.\n    \"\"\"\n    # assert k >= 0\n    with tf.name_scope(scope, 'L1Loss', [tensor]):\n        loss = tf.reduce_mean(tf.select(tf.abs(tensor) < k,\n                                        0.5 * tf.square(tensor),\n                                        k * tf.abs(tensor) - 0.5 * k ** 2)\n                              )\n        return loss"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_metadata():\n    dt = pd.read_table(ATTRACT_METADTA)\n    dt.rename(columns={\"Matrix_id\": \"PWM_id\"}, inplace=True)\n    # put to first place\n    cols = ['PWM_id'] + [col for col in dt if col != 'PWM_id']\n\n    # rename Matrix_id to PWM_id\n    return dt[cols]", "response": "Get pandas.
DataFrame with metadata about the Attract PWMs."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a list of Attract PWMs.", "response": "def get_pwm_list(pwm_id_list, pseudocountProb=0.0001):\n    \"\"\"Get a list of Attract PWMs.\n\n    # Arguments\n        pwm_id_list: List of id's from the `PWM_id` column in `get_metadata()` table\n        pseudocountProb: Added pseudocount probabilities to the PWM\n\n    # Returns\n        List of `concise.utils.pwm.PWM` instances.\n    \"\"\"\n    l = load_motif_db(ATTRACT_PWM)\n    l = {k.split()[0]: v for k, v in l.items()}\n    pwm_list = [PWM(l[str(m)] + pseudocountProb, name=m) for m in pwm_id_list]\n    return pwm_list"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mask_loss(loss, mask_value=MASK_VALUE):\n    loss_fn = kloss.deserialize(loss)\n\n    def masked_loss_fn(y_true, y_pred):\n        # currently not supported with NA's:\n        # - there is no K.is_nan implementation in keras.backend\n        # - https://github.com/fchollet/keras/issues/1628\n        mask = K.cast(K.not_equal(y_true, mask_value), K.floatx())\n\n        # we divide by the mean to correct for the number of performed loss evaluations\n        return loss_fn(y_true * mask, y_pred * mask) / K.mean(mask)\n\n    masked_loss_fn.__name__ = loss + \"_masked\"\n    return masked_loss_fn", "response": "Generates a new keras loss function that ignores values where y_true == mask_value"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a list of HOCOMOCO PWMs.", "response": "def get_pwm_list(pwm_id_list, pseudocountProb=0.0001):\n    \"\"\"Get a list of HOCOMOCO PWMs.\n\n    # Arguments\n        pwm_id_list: List of id's from the `PWM_id` column in `get_metadata()` table\n        pseudocountProb: Added pseudocount probabilities to the PWM\n\n    # Returns\n        List of `concise.utils.pwm.PWM` instances.\n    \"\"\"\n    l = load_motif_db(HOCOMOCO_PWM)\n    l = {k.split()[0]: v for k, v in l.items()}\n    pwm_list = [PWM(_normalize_pwm(l[m]) + pseudocountProb, name=m)
for m in pwm_id_list]\n    return pwm_list"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a dictionary of trained weights.", "response": "def get_weights(self):\n    \"\"\"\n    Returns:\n        dict: Model's trained weights.\n    \"\"\"\n    if self.is_trained() is False:\n        # print(\"Model not fitted yet. Use object.fit() to fit the model.\")\n        return None\n\n    var_res = self._var_res\n    weights = self._var_res_to_weights(var_res)\n    # save to the side\n    weights[\"final_bias_fit\"] = weights[\"final_bias\"]\n    weights[\"feature_weights_fit\"] = weights[\"feature_weights\"]\n\n    return weights"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _var_res_to_weights(self, var_res):\n    # transform the weights into our form\n    motif_base_weights_raw = var_res[\"motif_base_weights\"][0]\n    motif_base_weights = np.swapaxes(motif_base_weights_raw, 0, 2)\n\n    # get weights\n    motif_weights = var_res[\"motif_weights\"]\n    motif_bias = var_res[\"motif_bias\"]\n    final_bias = var_res[\"final_bias\"]\n    feature_weights = var_res[\"feature_weights\"]\n\n    # get the GAM prediction:\n    spline_pred = None\n    spline_weights = None\n    if self._param[\"n_splines\"] is not None:\n        spline_pred = self._splines[\"X_spline\"].dot(var_res[\"spline_weights\"])\n\n        if self._param[\"spline_exp\"] is True:\n            spline_pred = np.exp(spline_pred)\n        else:\n            spline_pred = (spline_pred + 1)\n\n        # reshape returns a new array, so the result has to be re-assigned\n        spline_pred = spline_pred.reshape([-1])\n        spline_weights = var_res[\"spline_weights\"]\n\n    weights = {\"motif_base_weights\": motif_base_weights,\n               \"motif_weights\": motif_weights,\n               \"motif_bias\": motif_bias,\n               \"final_bias\": final_bias,\n               \"feature_weights\": feature_weights,\n               \"spline_pred\": spline_pred,\n               \"spline_weights\": spline_weights\n               }\n    return weights", "response": "Convert a var_res to a dict of weights."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef
_get_var_res(self, graph, var, other_var):\n    with tf.Session(graph=graph) as sess:\n        sess.run(other_var[\"init\"])\n\n        # all_vars = tf.all_variables()\n        # print(\"All variable names\")\n        # print([var.name for var in all_vars])\n        # print(\"All variable values\")\n        # print(sess.run(all_vars))\n        var_res = self._get_var_res_sess(sess, var)\n    return var_res", "response": "Get the weights from the graph given the variables."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting a dictionary of numpy arrays to tf.Variables", "response": "def _convert_to_var(self, graph, var_res):\n    \"\"\"\n    Create tf.Variables from a dictionary of numpy arrays\n\n    var_res: dictionary of numpy arrays with the key names corresponding to var\n    \"\"\"\n    with graph.as_default():\n        var = {}\n        for key, value in var_res.items():\n            if value is not None:\n                var[key] = tf.Variable(value, name=\"tf_%s\" % key)\n            else:\n                var[key] = None\n    return var"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntraining the CONCISE model on the feature matrix X_feat, the one-hot-encoded sequences X_seq and the response y.", "response": "def train(self, X_feat, X_seq, y,\n          X_feat_valid=None, X_seq_valid=None, y_valid=None,\n          n_cores=3):\n    \"\"\"Train the CONCISE model\n\n    :py:attr:`X_feat`, :py:attr:`X_seq`, :py:attr:`y` are preferably returned by the :py:func:`concise.prepare_data` function.\n\n    Args:\n        X_feat: Numpy (float) array of shape :code:`(N, D)`. Feature design matrix storing :code:`N` training samples and :code:`D` features\n        X_seq: Numpy (float) array of shape :code:`(N, 1, N_seq, 4)`. It represents the 1-hot encoding of the DNA/RNA sequence (:code:`N` sequences of length :code:`N_seq`)\n        y: Numpy (float) array of shape :code:`(N, 1)`.
Response variable.\n X_feat_valid: :py:attr:`X_feat` used for model validation.\n X_seq_valid: :py:attr:`X_seq` used for model validation.\n y: :py:attr:`y` used for model validation.\n n_cores (int): Number of CPU cores used for training. If available, GPU is used for training and this argument is ignored.\n \"\"\"\n\n if X_feat_valid is None and X_seq_valid is None and y_valid is None:\n X_feat_valid = X_feat\n X_seq_valid = X_seq\n y_valid = y\n print(\"Using training samples also for validation \")\n\n # insert one dimension - backcompatiblity\n X_seq = np.expand_dims(X_seq, axis=1)\n X_seq_valid = np.expand_dims(X_seq_valid, axis=1)\n\n # TODO: implement the re-training feature\n if self.is_trained() is True:\n print(\"Model already fitted. Re-training feature not implemented yet\")\n return\n\n # input check\n assert X_seq.shape[0] == X_feat.shape[0] == y.shape[0]\n assert y.shape == (X_feat.shape[0], self._num_tasks)\n\n # extract data specific parameters\n self._param[\"seq_length\"] = X_seq.shape[2]\n self._param[\"n_add_features\"] = X_feat.shape[1]\n\n # more input check\n if not self._param[\"seq_length\"] == X_seq_valid.shape[2]:\n raise Exception(\"sequence lengths don't match\")\n\n # setup splines\n if self._param[\"n_splines\"] is not None:\n padd_loss = self._param[\"motif_length\"] - 1 # how much shorter is our sequence, since we don't use padding\n X_spline, S, _ = splines.get_gam_splines(start=0,\n end=self._param[\"seq_length\"] - padd_loss - 1, # -1 due to zero-indexing\n n_bases=self._param[\"n_splines\"],\n spline_order=3,\n add_intercept=False)\n self._splines = {\"X_spline\": X_spline,\n \"S\": S\n }\n\n # setup graph and variables\n self._graph = tf.Graph()\n self._var = self._get_var_initialization(self._graph, X_feat_train=X_feat, y_train=y)\n self._other_var = self._build_graph(self._graph, self._var)\n # TODO: save the intialized parameters\n var_res_init = self._get_var_res(self._graph, self._var, self._other_var)\n 
self.init_weights = self._var_res_to_weights(var_res=var_res_init)\n\n # finally train the model\n # - it saves the accuracy\n\n if self._param[\"optimizer\"] == \"adam\":\n _train = self._train_adam\n elif self._param[\"optimizer\"] == \"lbfgs\":\n _train = self._train_lbfgs\n else:\n raise Exception(\"Optimizer {} not implemented\".format(self._param[\"optimizer\"]))\n\n self._var_res = _train(X_feat, X_seq, y,\n X_feat_valid, X_seq_valid, y_valid,\n graph=self._graph, var=self._var, other_var=self._other_var,\n early_stop_patience=self._param[\"early_stop_patience\"],\n n_cores=n_cores)\n\n self._model_fitted = True\n\n # TODO: maybe:\n # - add y_train_accuracy\n # - y_train\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\npredicts y or y_pred from inside the tf session.", "response": "def _predict_in_session(self, sess, other_var, X_feat, X_seq, variable=\"y_pred\"):\n \"\"\"\n Predict y (or any other variable) from inside the tf session. 
Variable has to be in other_var\n \"\"\"\n # other_var[\"tf_X_seq\"]: X_seq, tf_y: y,\n feed_dict = {other_var[\"tf_X_feat\"]: X_feat,\n other_var[\"tf_X_seq\"]: X_seq}\n\n y_pred = sess.run(other_var[variable], feed_dict=feed_dict)\n return y_pred"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompute the accuracy from inside the tf session", "response": "def _accuracy_in_session(self, sess, other_var, X_feat, X_seq, y):\n \"\"\"\n Compute the accuracy from inside the tf session\n \"\"\"\n y_pred = self._predict_in_session(sess, other_var, X_feat, X_seq)\n return ce.mse(y_pred, y)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef predict(self, X_feat, X_seq):\n\n # insert one dimension - backcompatiblity\n X_seq = np.expand_dims(X_seq, axis=1)\n\n return self._get_other_var(X_feat, X_seq, variable=\"y_pred\")", "response": "Predict the response variable y for new input data."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_other_var(self, X_feat, X_seq, variable=\"y_pred\"):\n if self.is_trained() is False:\n print(\"Model not fitted yet. 
Use object.fit() to fit the model.\")\n return\n\n # input check:\n assert X_seq.shape[0] == X_feat.shape[0]\n\n # TODO - check this\n # sequence can be wider or thinner?\n # assert self._param[\"seq_length\"] == X_seq.shape[2]\n assert self._param[\"n_add_features\"] == X_feat.shape[1]\n\n # setup graph and variables\n graph = tf.Graph()\n var = self._convert_to_var(graph, self._var_res)\n other_var = self._build_graph(graph, var)\n with tf.Session(graph=graph) as sess:\n sess.run(other_var[\"init\"])\n # predict\n y = self._predict_in_session(sess, other_var, X_feat, X_seq, variable)\n\n return y", "response": "Get the value of a variable from the other_vars"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the current object as a dictionary.", "response": "def to_dict(self):\n \"\"\"\n Returns:\n dict: Concise represented as a dictionary.\n \"\"\"\n final_res = {\n \"param\": self._param,\n \"unused_param\": self.unused_param,\n \"execution_time\": self._exec_time,\n \"output\": {\"accuracy\": self.get_accuracy(),\n \"weights\": self.get_weights(),\n \"splines\": self._splines\n }\n }\n return final_res"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _set_var_res(self, weights):\n if weights is None:\n return\n\n # layer 1\n motif_base_weights_raw = np.swapaxes(weights[\"motif_base_weights\"], 2, 0)\n motif_base_weights = motif_base_weights_raw[np.newaxis]\n motif_bias = weights[\"motif_bias\"]\n\n feature_weights = weights[\"feature_weights\"]\n spline_weights = weights[\"spline_weights\"]\n\n # filter\n motif_weights = weights[\"motif_weights\"]\n final_bias = weights[\"final_bias\"]\n\n var_res = {\n \"motif_base_weights\": motif_base_weights,\n \"motif_bias\": motif_bias,\n \"spline_weights\": spline_weights,\n \"feature_weights\": feature_weights,\n \"motif_weights\": motif_weights,\n \"final_bias\": final_bias\n }\n\n # cast everything 
to float32\n var_res = {key: value.astype(np.float32) if value is not None else None for key, value in var_res.items()}\n\n self._var_res = var_res", "response": "Transform the weights to var_res"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef from_dict(cls, obj_dict):\n\n # convert the output into a proper form\n obj_dict['output'] = helper.rec_dict_to_numpy_dict(obj_dict[\"output\"])\n\n helper.dict_to_numpy_dict(obj_dict['output'])\n if \"trained_global_model\" in obj_dict.keys():\n raise Exception(\"Found trained_global_model feature in dictionary. Use ConciseCV.load to load this file.\")\n dc = Concise(**obj_dict[\"param\"])\n\n # touch the hidden arguments\n dc._param = obj_dict[\"param\"]\n if obj_dict[\"output\"][\"weights\"] is None:\n dc._model_fitted = False\n else:\n dc._model_fitted = True\n dc._exec_time = obj_dict[\"execution_time\"]\n\n dc.unused_param = obj_dict[\"unused_param\"]\n dc._accuracy = obj_dict[\"output\"][\"accuracy\"]\n dc._splines = obj_dict[\"output\"][\"splines\"]\n\n weights = obj_dict[\"output\"][\"weights\"]\n\n if weights is not None:\n # fix the dimensionality of X_feat in case it was 0 dimensional\n if weights[\"feature_weights\"].shape == (0,):\n weights[\"feature_weights\"].shape = (0, obj_dict[\"param\"][\"num_tasks\"])\n dc._set_var_res(weights)\n\n return dc", "response": "Load the object from a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load(cls, file_path):\n\n # convert back to numpy\n data = helper.read_json(file_path)\n return Concise.from_dict(data)", "response": "Load the object from a JSON file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the used CV folds", "response": "def _get_folds(n_rows, n_folds, use_stored):\n \"\"\"\n Get the used CV folds\n \"\"\"\n # n_folds = self._n_folds\n # use_stored = 
self._use_stored_folds\n # n_rows = self._n_rows\n\n if use_stored is not None:\n # path = '~/concise/data-offline/lw-pombe/cv_folds_5.json'\n with open(os.path.expanduser(use_stored)) as json_file:\n json_data = json.load(json_file)\n\n # check if we have the same number of rows and folds:\n if json_data['N_rows'] != n_rows:\n raise Exception('N_rows from folds doesnt match the number of rows of X_seq, X_feat, y')\n\n if json_data['N_folds'] != n_folds:\n raise Exception('n_folds dont match', json_data['N_folds'], n_folds)\n\n kf = [(np.array(train), np.array(test)) for (train, test) in json_data['folds']]\n else:\n kf = KFold(n_splits=n_folds).split(np.zeros((n_rows, 1)))\n\n # store in a list\n i = 1\n folds = []\n for train, test in kf:\n fold = \"fold_\" + str(i)\n folds.append((fold, train, test))\n i = i + 1\n return folds"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntraining the Concise model in cross - validation.", "response": "def train(self, X_feat, X_seq, y, id_vec=None, n_folds=10, use_stored_folds=None, n_cores=1,\n train_global_model=False):\n \"\"\"Train the Concise model in cross-validation.\n\n Args:\n X_feat: See :py:func:`concise.Concise.train`\n X_seq: See :py:func:`concise.Concise.train`\n y: See :py:func:`concise.Concise.train`\n id_vec: List of character id's used to differentiate the trainig samples. Returned by :py:func:`concise.prepare_data`.\n n_folds (int): Number of CV-folds to use.\n use_stored_folds (chr or None): File path to a .json file containing the fold information (as returned by :py:func:`concise.ConciseCV.get_folds`). If None, the folds are generated.\n n_cores (int): Number of CPU cores used for training. If available, GPU is used for training and this argument is ignored.\n train_global_model (bool): In addition to training the model in cross-validation, should the global model be fitted (using all the samples from :code:`(X_feat, X_seq, y)`). 
\n \"\"\"\n # TODO: input check - dimensions\n self._use_stored_folds = use_stored_folds\n self._n_folds = n_folds\n self._n_rows = X_feat.shape[0]\n\n # TODO: - fix the get_cv_accuracy\n # save:\n # - each model\n # - each model's performance\n # - each model's predictions\n # - globally:\n # - mean perfomance\n # - sd performance\n # - predictions\n\n self._kf = self._get_folds(self._n_rows, self._n_folds, self._use_stored_folds)\n\n cv_obj = {}\n\n if id_vec is None:\n id_vec = np.arange(1, self._n_rows + 1)\n\n best_val_acc_epoch_l = []\n for fold, train, test in self._kf:\n X_feat_train = X_feat[train]\n X_seq_train = X_seq[train]\n y_train = y[train]\n\n X_feat_test = X_feat[test]\n X_seq_test = X_seq[test]\n y_test = y[test]\n id_vec_test = id_vec[test]\n print(fold, \"/\", n_folds)\n\n # copy the object\n dc = copy.deepcopy(self._concise_model)\n dc.train(X_feat_train, X_seq_train, y_train,\n X_feat_test, X_seq_test, y_test,\n n_cores=n_cores\n )\n\n dc._test(X_feat_test, X_seq_test, y_test, id_vec_test)\n cv_obj[fold] = dc\n best_val_acc_epoch_l.append(dc.get_accuracy()[\"best_val_acc_epoch\"])\n self._cv_model = cv_obj\n\n # additionaly train the global model\n if train_global_model:\n dc = copy.deepcopy(self._concise_model)\n\n # overwrite n_epochs with the best average number of best epochs\n dc._param[\"n_epochs\"] = int(np.array(best_val_acc_epoch_l).mean())\n print(\"tranining global model with n_epochs = \" + str(dc._param[\"n_epochs\"]))\n\n dc.train(X_feat, X_seq, y,\n n_cores=n_cores\n )\n dc._test(X_feat, X_seq, y, id_vec)\n self._concise_global_model = dc"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_CV_prediction(self):\n # TODO: get it from the test_prediction ...\n # test_id, prediction\n # sort by test_id\n predict_vec = np.zeros((self._n_rows, self._concise_model._num_tasks))\n for fold, train, test in self._kf:\n acc = self._cv_model[fold].get_accuracy()\n 
predict_vec[test, :] = acc[\"y_test_prediction\"]\n return predict_vec", "response": "Returns the predicted test_prediction for all hold - out folds."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the prediction accuracy in CV.", "response": "def get_CV_accuracy(self):\n \"\"\"\n Returns:\n float: Prediction accuracy in CV.\n \"\"\"\n accuracy = {}\n for fold, train, test in self._kf:\n acc = self._cv_model[fold].get_accuracy()\n accuracy[fold] = acc[\"test_acc_final\"]\n return accuracy"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef to_dict(self):\n param = {\n \"n_folds\": self._n_folds,\n \"n_rows\": self._n_rows,\n \"use_stored_folds\": self._use_stored_folds\n }\n\n if self._concise_global_model is None:\n trained_global_model = None\n else:\n trained_global_model = self._concise_global_model.to_dict()\n\n obj_dict = {\"param\": param,\n \"folds\": self._kf,\n \"init_model\": self._concise_model.to_dict(),\n \"trained_global_model\": trained_global_model,\n \"output\": {fold: model.to_dict() for fold, model in self.get_CV_models().items()}\n }\n return obj_dict", "response": "Returns a dictionary representation of the ConciseCV."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_dict(cls, obj_dict):\n default_model = Concise()\n cvdc = ConciseCV(default_model)\n cvdc._from_dict(obj_dict)\n return cvdc", "response": "Load the object from a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _from_dict(self, obj_dict):\n self._n_folds = obj_dict[\"param\"][\"n_folds\"]\n self._n_rows = obj_dict[\"param\"][\"n_rows\"]\n self._use_stored_folds = obj_dict[\"param\"][\"use_stored_folds\"]\n\n self._concise_model = Concise.from_dict(obj_dict[\"init_model\"])\n\n if obj_dict[\"trained_global_model\"] is None:\n 
self._concise_global_model = None\n    else:\n        self._concise_global_model = Concise.from_dict(obj_dict[\"trained_global_model\"])\n\n    self._kf = [(fold, np.asarray(train), np.asarray(test)) for fold, train, test in obj_dict[\"folds\"]]\n    self._cv_model = {fold: Concise.from_dict(model_dict) for fold, model_dict in obj_dict[\"output\"].items()}", "response": "Initialize a model from a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nloading the object from a JSON file.", "response": "def load(cls, file_path):\n    \"\"\"\n    Load the object from a JSON file (saved with :py:func:`ConciseCV.save`)\n\n    Returns:\n        ConciseCV: Loaded ConciseCV object.\n    \"\"\"\n    data = helper.read_json(file_path)\n    return ConciseCV.from_dict(data)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts a pwm array to a pssm array", "response": "def pwm_array2pssm_array(arr, background_probs=DEFAULT_BASE_BACKGROUND):\n    \"\"\"Convert pwm array to pssm array\n    \"\"\"\n    b = background_probs2array(background_probs)\n    b = b.reshape([1, 4, 1])\n    return np.log(arr / b).astype(arr.dtype)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a pssm array to a pwm array", "response": "def pssm_array2pwm_array(arr, background_probs=DEFAULT_BASE_BACKGROUND):\n    \"\"\"Convert pssm array to pwm array\n    \"\"\"\n    b = background_probs2array(background_probs)\n    b = b.reshape([1, 4, 1])\n    return (np.exp(arr) * b).astype(arr.dtype)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreading the motif file in the following format and returning a dictionary of numpy arrays.", "response": "def load_motif_db(filename, skipn_matrix=0):\n    \"\"\"Read the motif file in the following format\n\n    ```\n    >motif_name\n    0.1 0.2 0.5 0.6\n    ...\n    >motif_name2\n    ....\n    ```\n\n    Delim can be anything supported by np.loadtxt\n\n    # Arguments\n        filename: str, file path\n        skipn_matrix: integer, number of characters to skip
when reading\n            the numeric matrix (for Encode = 2)\n\n    # Returns\n        Dictionary of numpy arrays\n\n    \"\"\"\n\n    # read-lines\n    if filename.endswith(\".gz\"):\n        f = gzip.open(filename, 'rt', encoding='utf-8')\n    else:\n        f = open(filename, 'r')\n    lines = f.readlines()\n    f.close()\n\n    motifs_dict = {}\n    motif_lines = \"\"\n    motif_name = None\n\n    def lines2matrix(lines):\n        return np.loadtxt(StringIO(lines))\n\n    for line in lines:\n        if line.startswith(\">\"):\n            if motif_lines:\n                # lines -> matrix\n                motifs_dict[motif_name] = lines2matrix(motif_lines)\n            motif_name = line[1:].strip()\n            motif_lines = \"\"\n        else:\n            motif_lines += line[skipn_matrix:]\n\n    if motif_lines and motif_name is not None:\n        motifs_dict[motif_name] = lines2matrix(motif_lines)\n\n    return motifs_dict"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns an iterator over a fasta file, yielding (header, sequence) tuples.", "response": "def iter_fasta(file_path):\n    \"\"\"Returns an iterator over the fasta file\n\n    Given a fasta file,
yield tuples of header, sequence\n\n    Code modified from Brent Pedersen's:\n    \"Correct Way To Parse A Fasta File In Python\"\n\n\n    # Example\n\n    ```python\n        fasta = iter_fasta(\"hg19.fa\")\n        for header, seq in fasta:\n            print(header)\n    ```\n    \"\"\"\n    fh = open(file_path)\n\n    # ditch the boolean (x[0]) and just keep the header or sequence since\n    # we know they alternate.\n    faiter = (x[1] for x in groupby(fh, lambda line: line[0] == \">\"))\n\n    for header in faiter:\n        # drop the \">\"\n        headerStr = header.__next__()[1:].strip()\n        # join all sequence lines to one.\n        seq = \"\".join(s.strip() for s in faiter.__next__())\n        yield (headerStr, seq)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef write_fasta(file_path, seq_list, name_list=None):\n    if name_list is None:\n        name_list = [str(i) for i in range(len(seq_list))]\n\n    # needs to be dict or seq\n    with open(file_path, \"w\") as f:\n        for i in range(len(seq_list)):\n            f.write(\">\" + name_list[i] + \"\\n\" + seq_list[i] + \"\\n\")", "response": "Writes a fasta file containing the sequence list."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nruns RNAplfold on the input FASTA file.", "response": "def run_RNAplfold(input_fasta, tmpdir, W=240, L=160, U=1):\n    \"\"\"\n    Arguments:\n        W, Int: span - window length\n        L, Int: maximum span\n        U, Int: size of unpaired region\n    \"\"\"\n\n    profiles = RNAplfold_PROFILES_EXECUTE\n    for i, P in enumerate(profiles):\n        print(\"running {P}_RNAplfold...
({i}/{N})\".format(P=P, i=i + 1, N=len(profiles)))\n\n command = \"{bin}/{P}_RNAplfold\".format(bin=RNAplfold_BIN_DIR, P=P)\n file_out = \"{tmp}/{P}_profile.fa\".format(tmp=tmpdir, P=P)\n args = \" -W {W} -L {L} -u {U} < {fa} > {file_out}\".format(W=W, L=L, U=U, fa=input_fasta, file_out=file_out)\n\n os.system(command + args)\n\n # check if the file is empty\n if os.path.getsize(file_out) == 0:\n raise Exception(\"command wrote an empty file: {0}\".format(file_out))\n print(\"done!\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreading RNAplfold files and return a numpy array of the 2ndary structure.", "response": "def read_RNAplfold(tmpdir, maxlen=None, seq_align=\"start\", pad_with=\"E\"):\n \"\"\"\n pad_with = with which 2ndary structure should we pad the sequence?\n \"\"\"\n assert pad_with in {\"P\", \"H\", \"I\", \"M\", \"E\"}\n\n def read_profile(tmpdir, P):\n return [values.strip().split(\"\\t\")\n for seq_name, values in iter_fasta(\"{tmp}/{P}_profile.fa\".format(tmp=tmpdir, P=P))]\n\n def nelem(P, pad_width):\n \"\"\"get the right neutral element\n \"\"\"\n return 1 if P is pad_with else 0\n\n arr_hime = np.array([pad_sequences(read_profile(tmpdir, P),\n value=[nelem(P, pad_with)],\n align=seq_align,\n maxlen=maxlen)\n for P in RNAplfold_PROFILES_EXECUTE], dtype=\"float32\")\n\n # add the pairness column\n arr_p = 1 - arr_hime.sum(axis=0)[np.newaxis]\n arr = np.concatenate((arr_p, arr_hime))\n\n # reshape to: seq, seq_length, num_channels\n arr = np.moveaxis(arr, 0, 2)\n return arr"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncomputing the RNA secondary structure of a sequence vector.", "response": "def encodeRNAStructure(seq_vec, maxlen=None, seq_align=\"start\",\n W=240, L=160, U=1,\n tmpdir=\"/tmp/RNAplfold/\"):\n \"\"\"Compute RNA secondary structure with RNAplfold implemented in\n Kazan et al 2010, [doi](https://doi.org/10.1371/journal.pcbi.1000832).\n\n # Note\n Secondary structure is 
represented as the probability\n to be in the following states:\n - `[\"Pairedness\", \"Hairpin loop\", \"Internal loop\", \"Multi loop\", \"External region\"]`\n See Kazan et al 2010, [doi](https://doi.org/10.1371/journal.pcbi.1000832)\n for more information.\n\n\n # Arguments\n seq_vec: list of DNA/RNA sequences\n maxlen: Maximum sequence length. See `concise.preprocessing.pad_sequences` for more detail\n seq_align: How to align the sequences of variable lengths. See `concise.preprocessing.pad_sequences` for more detail\n W: Int; span - window length\n L: Int; maximum span\n U: Int; size of unpaired region\n tmpdir: Where to store the intermediary files of RNAplfold.\n\n # Note\n Recommended parameters:\n\n - for human, mouse use W, L, u : 240, 160, 1\n - for fly, yeast use W, L, u : 80, 40, 1\n\n # Returns\n np.ndarray of shape `(len(seq_vec), maxlen, 5)`\n \"\"\"\n # extend the tmpdir with uuid string to allow for parallel execution\n tmpdir = tmpdir + \"/\" + str(uuid4()) + \"/\"\n\n if not isinstance(seq_vec, list):\n seq_vec = seq_vec.tolist()\n if not os.path.exists(tmpdir):\n os.makedirs(tmpdir)\n\n fasta_path = tmpdir + \"/input.fasta\"\n write_fasta(fasta_path, seq_vec)\n run_RNAplfold(fasta_path, tmpdir, W=W, L=L, U=U)\n # 1. split the fasta into pieces\n # 2. run_RNAplfold for each of them\n # 3. Read the results\n return read_RNAplfold(tmpdir, maxlen, seq_align=seq_align, pad_with=\"E\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ism(model, ref, ref_rc, alt, alt_rc, mutation_positions, out_annotation_all_outputs,\n output_filter_mask=None, out_annotation=None, diff_type=\"log_odds\", rc_handling=\"maximum\"):\n \"\"\"In-silico mutagenesis\n\n Using ISM with diff_type 'log_odds' and rc_handling 'maximum' will produce predictions as used\n in [DeepSEA](http://www.nature.com/nmeth/journal/v12/n10/full/nmeth.3547.html). 
ISM offers two ways to\n calculate the difference between the outputs created by reference and alternative sequence and two\n different methods to select whether to use the output generated from the forward or from the\n reverse-complement sequences. To calculate \"e-values\" as mentioned in DeepSEA the same ISM prediction\n has to be performed on a randomised set of 1 million 1000genomes, MAF-matched variants to get a\n background of predicted effects of random SNPs.\n\n # Arguments\n model: Keras model\n ref: Input sequence with the reference genotype in the mutation position\n ref_rc: Reverse complement of the 'ref' argument\n alt: Input sequence with the alternative genotype in the mutation position\n alt_rc: Reverse complement of the 'alt' argument\n mutation_positions: Position on which the mutation was placed in the forward sequences\n out_annotation_all_outputs: Output labels of the model.\n output_filter_mask: Mask of boolean values indicating which model outputs should be used.\n Use this or 'out_annotation'\n out_annotation: List of output labels for which of the outputs (in case of a multi-task model) the\n predictions should be calculated.\n diff_type: \"log_odds\" or \"diff\". When set to 'log_odds' calculate scores based on log_odds, which assumes\n the model output is a probability. When set to 'diff' the model output for 'ref' is subtracted\n from 'alt'. Using 'log_odds' with outputs that are not in the range [0,1] will return nan.\n rc_handling: \"average\" or \"maximum\". 
Either average over the predictions derived from forward and\n reverse-complement predictions ('average') or pick the prediction with the bigger absolute\n value ('maximum').\n\n # Returns\n Dictionary with the key `ism` which contains a pandas DataFrame containing the calculated values\n for each (selected) model output and input sequence\n \"\"\"\n\n seqs = {\"ref\": ref, \"ref_rc\": ref_rc, \"alt\": alt, \"alt_rc\": alt_rc}\n assert diff_type in [\"log_odds\", \"diff\"]\n assert rc_handling in [\"average\", \"maximum\"]\n assert np.all([np.array(get_seq_len(ref)) == np.array(get_seq_len(seqs[k])) for k in seqs.keys() if k != \"ref\"])\n assert get_seq_len(ref)[0] == mutation_positions.shape[0]\n assert len(mutation_positions.shape) == 1\n\n # determine which outputs should be selected\n if output_filter_mask is None:\n if out_annotation is None:\n output_filter_mask = np.arange(out_annotation_all_outputs.shape[0])\n else:\n output_filter_mask = np.where(np.in1d(out_annotation_all_outputs, out_annotation))[0]\n\n # make sure the labels are assigned correctly\n out_annotation = out_annotation_all_outputs[output_filter_mask]\n\n preds = {}\n for k in seqs:\n # preds[k] = model.predict(seqs[k])\n preds[k] = np.array(model.predict(seqs[k])[..., output_filter_mask])\n\n if diff_type == \"log_odds\":\n if np.any([(preds[k].min() < 0 or preds[k].max() > 1) for k in preds]):\n warnings.warn(\"Using log_odds on model outputs that are not bound [0,1]\")\n diffs = np.log(preds[\"alt\"] / (1 - preds[\"alt\"])) - np.log(preds[\"ref\"] / (1 - preds[\"ref\"]))\n diffs_rc = np.log(preds[\"alt_rc\"] / (1 - preds[\"alt_rc\"])) - np.log(preds[\"ref_rc\"] / (1 - preds[\"ref_rc\"]))\n elif diff_type == \"diff\":\n diffs = preds[\"alt\"] - preds[\"ref\"]\n diffs_rc = preds[\"alt_rc\"] - preds[\"ref_rc\"]\n\n if rc_handling == \"average\":\n diffs = np.mean([diffs, diffs_rc], axis=0)\n elif rc_handling == \"maximum\":\n replace_filt = np.abs(diffs) < np.abs(diffs_rc)\n 
diffs[replace_filt] = diffs_rc[replace_filt]\n\n diffs = pd.DataFrame(diffs, columns=out_annotation)\n\n return {\"ism\": diffs}", "response": "Perform in-silico mutagenesis: compute the difference between the model outputs for the reference and alternative sequences, either as log-odds or as a raw difference, and combine the forward and reverse-complement predictions by averaging or by keeping the value with the larger absolute effect."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfit and evaluate a keras model", "response": "def _train_and_eval_single(train, valid, model,\n batch_size=32, epochs=300, use_weight=False,\n callbacks=[], eval_best=False, add_eval_metrics={}):\n \"\"\"Fit and evaluate a keras model\n\n eval_best: if True, load the checkpointed model for evaluation\n \"\"\"\n def _format_keras_history(history):\n \"\"\"nicely format keras history\n \"\"\"\n return {\"params\": history.params,\n \"loss\": merge_dicts({\"epoch\": history.epoch}, history.history),\n }\n if use_weight:\n sample_weight = train[2]\n else:\n sample_weight = None\n # train the model\n logger.info(\"Fit...\")\n history = History()\n model.fit(train[0], train[1],\n batch_size=batch_size,\n validation_data=valid[:2],\n epochs=epochs,\n sample_weight=sample_weight,\n verbose=2,\n callbacks=[history] + callbacks)\n\n # get history\n hist = _format_keras_history(history)\n # load and eval the best model\n if eval_best:\n mcp = [x for x in callbacks if isinstance(x, ModelCheckpoint)]\n assert len(mcp) == 1\n model = load_model(mcp[0].filepath)\n\n return eval_model(model, valid, add_eval_metrics), hist"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef eval_model(model, test, add_eval_metrics={}):\n # evaluate the model\n logger.info(\"Evaluate...\")\n # - model_metrics\n model_metrics_values = model.evaluate(test[0], test[1], verbose=0,\n batch_size=test[1].shape[0])\n # 
evaluation is done in a single pass to have more precise metrics\n model_metrics = dict(zip(_listify(model.metrics_names),\n _listify(model_metrics_values)))\n # - eval_metrics\n y_true = test[1]\n y_pred = model.predict(test[0], verbose=0)\n eval_metrics = {k: v(y_true, y_pred) for k, v in add_eval_metrics.items()}\n\n # handle the case where the two metrics names intersect\n # - omit duplicates from eval_metrics\n intersected_keys = set(model_metrics).intersection(set(eval_metrics))\n if len(intersected_keys) > 0:\n logger.warning(\"Some metric names intersect: {0}. Ignoring the add_eval_metrics ones\".\n format(intersected_keys))\n eval_metrics = _delete_keys(eval_metrics, intersected_keys)\n\n return merge_dicts(model_metrics, eval_metrics)", "response": "Evaluate the Keras model on the test set."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfeeds model_fn with train_data and param", "response": "def get_model(model_fn, train_data, param):\n \"\"\"Feed model_fn with train_data and param\n \"\"\"\n model_param = merge_dicts({\"train_data\": train_data}, param[\"model\"], param.get(\"shared\", {}))\n return model_fn(**model_param)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _delete_keys(dct, keys):\n c = deepcopy(dct)\n assert isinstance(keys, list)\n for k in keys:\n c.pop(k)\n return c", "response": "Returns a copy of dct with keys removed"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _mean_dict(dict_list):\n return {k: np.array([d[k] for d in dict_list]).mean()\n for k in dict_list[0].keys()}", "response": "Compute the mean value across a list of dictionaries\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nretrieve trial by tid", "response": "def get_trial(self, tid):\n \"\"\"Retrieve trial by tid\n \"\"\"\n lid = np.where(np.array(self.tids) == 
tid)[0][0]\n return self.trials[lid]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nextends the original object to inject checking for stalled jobs and killing them if they are still running for too long", "response": "def count_by_state_unsynced(self, arg):\n \"\"\"Extends the original object in order to inject checking\n for stalled jobs and killing them if they are running for too long\n \"\"\"\n if self.kill_timeout is not None:\n self.delete_running(self.kill_timeout)\n return super(CMongoTrials, self).count_by_state_unsynced(arg)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndelete jobs stalled in the running state for too long", "response": "def delete_running(self, timeout_last_refresh=0, dry_run=False):\n \"\"\"Delete jobs stalled in the running state for too long\n\n timeout_last_refresh, int: number of seconds\n \"\"\"\n running_all = self.handle.jobs_running()\n running_timeout = [job for job in running_all\n if coarse_utcnow() > job[\"refresh_time\"] +\n timedelta(seconds=timeout_last_refresh)]\n if len(running_timeout) == 0:\n # Nothing to stop\n self.refresh_tids(None)\n return None\n\n if dry_run:\n logger.warning(\"Dry run. Not removing anything.\")\n\n logger.info(\"Removing {0}/{1} running jobs. # all jobs: {2} \".\n format(len(running_timeout), len(running_all), len(self)))\n\n now = coarse_utcnow()\n logger.info(\"Current utc time: {0}\".format(now))\n logger.info(\"Time horizont: {0}\".format(now - timedelta(seconds=timeout_last_refresh)))\n for job in running_timeout:\n logger.info(\"Removing job: \")\n pjob = job.to_dict()\n del pjob[\"misc\"] # ignore misc when printing\n logger.info(pprint.pformat(pjob))\n if not dry_run:\n self.handle.delete(job)\n logger.info(\"Job deleted\")\n self.refresh_tids(None)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget train history as pd. 
DataFrame", "response": "def train_history(self, tid=None):\n \"\"\"Get train history as pd.DataFrame\n \"\"\"\n\n def result2history(result):\n if isinstance(result[\"history\"], list):\n return pd.concat([pd.DataFrame(hist[\"loss\"]).assign(fold=i)\n for i, hist in enumerate(result[\"history\"])])\n else:\n return pd.DataFrame(result[\"history\"][\"loss\"])\n\n # use all\n if tid is None:\n tid = self.valid_tid()\n\n res = [result2history(t[\"result\"]).assign(tid=t[\"tid\"]) for t in self.trials\n if t[\"tid\"] in _listify(tid)]\n df = pd.concat(res)\n\n # reorder columns\n fold_name = [\"fold\"] if \"fold\" in df else []\n df = _put_first(df, [\"tid\"] + fold_name + [\"epoch\"])\n return df"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a pd. DataFrame view of the whole experiment", "response": "def as_df(self, ignore_vals=[\"history\"], separator=\".\", verbose=True):\n \"\"\"Return a pd.DataFrame view of the whole experiment\n \"\"\"\n\n def add_eval(res):\n if \"eval\" not in res:\n if isinstance(res[\"history\"], list):\n # take the average across all folds\n eval_names = list(res[\"history\"][0][\"loss\"].keys())\n eval_metrics = np.array([[v[-1] for k, v in hist[\"loss\"].items()]\n for hist in res[\"history\"]]).mean(axis=0).tolist()\n res[\"eval\"] = {eval_names[i]: eval_metrics[i] for i in range(len(eval_metrics))}\n else:\n res[\"eval\"] = {k: v[-1] for k, v in res[\"history\"][\"loss\"].items()}\n return res\n\n def add_n_epoch(df):\n df_epoch = self.train_history().groupby(\"tid\")[\"epoch\"].max().reset_index()\n df_epoch.rename(columns={\"epoch\": \"n_epoch\"}, inplace=True)\n return pd.merge(df, df_epoch, on=\"tid\", how=\"left\")\n\n results = self.get_ok_results(verbose=verbose)\n rp = [_flatten_dict(_delete_keys(add_eval(x), ignore_vals), separator) for x in results]\n df = pd.DataFrame.from_records(rp)\n\n df = add_n_epoch(df)\n\n first = [\"tid\", \"loss\", \"status\"]\n return _put_first(df, first)"} 
{"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nlists all trades in a market", "response": "def trades(ctx, market, limit, start, stop):\n \"\"\" List trades in a market\n \"\"\"\n market = Market(market, bitshares_instance=ctx.bitshares)\n t = [[\"time\", \"quote\", \"base\", \"price\"]]\n for trade in market.trades(limit, start=start, stop=stop):\n t.append(\n [\n str(trade[\"time\"]),\n str(trade[\"quote\"]),\n str(trade[\"base\"]),\n \"{:f} {}/{}\".format(\n trade[\"price\"],\n trade[\"base\"][\"asset\"][\"symbol\"],\n trade[\"quote\"][\"asset\"][\"symbol\"],\n ),\n ]\n )\n print_table(t)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef ticker(ctx, market):\n market = Market(market, bitshares_instance=ctx.bitshares)\n ticker = market.ticker()\n t = [[\"key\", \"value\"]]\n for key in ticker:\n t.append([key, str(ticker[key])])\n print_table(t)", "response": "Show ticker of a market"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef cancel(ctx, orders, account):\n print_tx(ctx.bitshares.cancel(orders, account=account))", "response": "Cancel one or multiple orders"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef orderbook(ctx, market):\n market = Market(market, bitshares_instance=ctx.bitshares)\n orderbook = market.orderbook()\n ta = {}\n ta[\"bids\"] = [[\"quote\", \"sum quote\", \"base\", \"sum base\", \"price\"]]\n cumsumquote = Amount(0, market[\"quote\"])\n cumsumbase = Amount(0, market[\"base\"])\n for order in orderbook[\"bids\"]:\n cumsumbase += order[\"base\"]\n cumsumquote += order[\"quote\"]\n ta[\"bids\"].append(\n [\n str(order[\"quote\"]),\n str(cumsumquote),\n str(order[\"base\"]),\n str(cumsumbase),\n \"{:f} {}/{}\".format(\n order[\"price\"],\n order[\"base\"][\"asset\"][\"symbol\"],\n 
order[\"quote\"][\"asset\"][\"symbol\"],\n ),\n ]\n )\n\n ta[\"asks\"] = [[\"price\", \"base\", \"sum base\", \"quote\", \"sum quote\"]]\n cumsumquote = Amount(0, market[\"quote\"])\n cumsumbase = Amount(0, market[\"base\"])\n for order in orderbook[\"asks\"]:\n cumsumbase += order[\"base\"]\n cumsumquote += order[\"quote\"]\n ta[\"asks\"].append(\n [\n \"{:f} {}/{}\".format(\n order[\"price\"],\n order[\"base\"][\"asset\"][\"symbol\"],\n order[\"quote\"][\"asset\"][\"symbol\"],\n ),\n str(order[\"base\"]),\n str(cumsumbase),\n str(order[\"quote\"]),\n str(cumsumquote),\n ]\n )\n t = [[\"bids\", \"asks\"]]\n t.append([format_table(ta[\"bids\"]), format_table(ta[\"asks\"])])\n print_table(t)", "response": "Show the orderbook of a particular market"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef buy(ctx, buy_amount, buy_asset, price, sell_asset, order_expiration, account):\n amount = Amount(buy_amount, buy_asset)\n price = Price(\n price, base=sell_asset, quote=buy_asset, bitshares_instance=ctx.bitshares\n )\n print_tx(\n price.market.buy(price, amount, account=account, expiration=order_expiration)\n )", "response": "Buy a specific asset at a certain rate against a base asset"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlist open orders of an account", "response": "def openorders(ctx, account):\n \"\"\" List open orders of an account\n \"\"\"\n account = Account(\n account or config[\"default_account\"], bitshares_instance=ctx.bitshares\n )\n t = [[\"Price\", \"Quote\", \"Base\", \"ID\"]]\n for o in account.openorders:\n t.append(\n [\n \"{:f} {}/{}\".format(\n o[\"price\"],\n o[\"base\"][\"asset\"][\"symbol\"],\n o[\"quote\"][\"asset\"][\"symbol\"],\n ),\n str(o[\"quote\"]),\n str(o[\"base\"]),\n o[\"id\"],\n ]\n )\n print_table(t)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef cancelall(ctx, 
market, account):\n market = Market(market)\n ctx.bitshares.bundle = True\n market.cancel([x[\"id\"] for x in market.accountopenorders(account)], account=account)\n print_tx(ctx.bitshares.txbuffer.broadcast())", "response": "Cancel all open orders of an account in a market"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef spread(ctx, market, side, min, max, num, total, order_expiration, account):\n from tqdm import tqdm\n from numpy import linspace\n\n market = Market(market)\n ctx.bitshares.bundle = True\n\n if min < max:\n space = linspace(min, max, num)\n else:\n space = linspace(max, min, num)\n\n func = getattr(market, side)\n for p in tqdm(space):\n func(p, total / float(num), account=account, expiration=order_expiration)\n print_tx(ctx.bitshares.txbuffer.broadcast())", "response": "Place multiple orders spread over a price range in a market"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nborrow a bitasset/market-pegged asset", "response": "def borrow(ctx, amount, symbol, ratio, account):\n \"\"\" Borrow a bitasset/market-pegged asset\n \"\"\"\n from bitshares.dex import Dex\n\n dex = Dex(bitshares_instance=ctx.bitshares)\n print_tx(\n dex.borrow(Amount(amount, symbol), collateral_ratio=ratio, account=account)\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate the collateral ratio of a call position", "response": "def updateratio(ctx, symbol, ratio, account):\n \"\"\" Update the collateral ratio of a call position\n \"\"\"\n from bitshares.dex import Dex\n\n dex = Dex(bitshares_instance=ctx.bitshares)\n print_tx(dex.adjust_collateral_ratio(symbol, ratio, account=account))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfunds the fee pool of an asset", "response": "def fundfeepool(ctx, symbol, amount, account):\n \"\"\" Fund the fee pool of an asset\n \"\"\"\n 
print_tx(ctx.bitshares.fund_fee_pool(symbol, amount, account=account))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nbids a collateral in the settlement fund", "response": "def bidcollateral(\n ctx, collateral_symbol, collateral_amount, debt_symbol, debt_amount, account\n):\n \"\"\" Bid for collateral in the settlement fund\n \"\"\"\n print_tx(\n ctx.bitshares.bid_collateral(\n Amount(collateral_amount, collateral_symbol),\n Amount(debt_amount, debt_symbol),\n account=account,\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef settle(ctx, symbol, amount, account):\n print_tx(ctx.bitshares.asset_settle(Amount(amount, symbol), account=account))", "response": "Settle a market-pegged asset"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nlist the votes of an account", "response": "def votes(ctx, account, type):\n \"\"\" List the votes of an account\n \"\"\"\n if not isinstance(type, (list, tuple)):\n type = [type]\n account = Account(account, full=True)\n ret = {key: list() for key in Vote.types()}\n for vote in account[\"votes\"]:\n t = Vote.vote_type_from_id(vote[\"id\"])\n ret[t].append(vote)\n\n t = [[\"id\", \"url\", \"account\"]]\n for vote in ret[\"committee\"]:\n t.append(\n [vote[\"id\"], vote[\"url\"], Account(vote[\"committee_member_account\"])[\"name\"]]\n )\n\n if \"committee\" in type:\n t = [[\"id\", \"url\", \"account\", \"votes\"]]\n for vote in ret[\"committee\"]:\n t.append(\n [\n vote[\"id\"],\n vote[\"url\"],\n Account(vote[\"committee_member_account\"])[\"name\"],\n str(Amount({\"amount\": vote[\"total_votes\"], \"asset_id\": \"1.3.0\"})),\n ]\n )\n print_table(t)\n\n if \"witness\" in type:\n t = [\n [\n \"id\",\n \"account\",\n \"url\",\n \"votes\",\n \"last_confirmed_block_num\",\n \"total_missed\",\n \"vesting\",\n ]\n ]\n for vote in ret[\"witness\"]:\n t.append(\n [\n vote[\"id\"],\n 
Account(vote[\"witness_account\"])[\"name\"],\n vote[\"url\"],\n str(Amount({\"amount\": vote[\"total_votes\"], \"asset_id\": \"1.3.0\"})),\n vote[\"last_confirmed_block_num\"],\n vote[\"total_missed\"],\n str(Vesting(vote.get(\"pay_vb\")).claimable)\n if vote.get(\"pay_vb\")\n else \"\",\n ]\n )\n print_table(t)\n\n if \"worker\" in type:\n t = [[\"id\", \"name/url\", \"daily_pay\", \"votes\", \"time\", \"account\"]]\n for vote in ret[\"worker\"]:\n votes = Amount({\"amount\": vote[\"total_votes_for\"], \"asset_id\": \"1.3.0\"})\n amount = Amount({\"amount\": vote[\"daily_pay\"], \"asset_id\": \"1.3.0\"})\n t.append(\n [\n vote[\"id\"],\n \"{name}\\n{url}\".format(**vote),\n str(amount),\n str(votes),\n \"{work_begin_date}\\n-\\n{work_end_date}\".format(**vote),\n str(Account(vote[\"worker_account\"])[\"name\"]),\n ]\n )\n print_table(t)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef info(ctx, objects):\n if not objects:\n t = [[\"Key\", \"Value\"]]\n info = ctx.bitshares.rpc.get_dynamic_global_properties()\n for key in info:\n t.append([key, info[key]])\n print_table(t)\n\n for obj in objects:\n # Block\n if re.match(\"^[0-9]*$\", obj):\n block = Block(obj, lazy=False, bitshares_instance=ctx.bitshares)\n t = [[\"Key\", \"Value\"]]\n for key in sorted(block):\n value = block[key]\n if key == \"transactions\":\n value = format_tx(value)\n t.append([key, value])\n print_table(t)\n # Object Id\n elif re.match(\"^\\d*\\.\\d*\\.\\d*$\", obj):\n data = ctx.bitshares.rpc.get_object(obj)\n if data:\n t = [[\"Key\", \"Value\"]]\n for key in sorted(data):\n value = data[key]\n if isinstance(value, dict) or isinstance(value, list):\n value = format_tx(value)\n t.append([key, value])\n print_table(t)\n else:\n print_message(\"Object %s unknown\" % obj, \"warning\")\n\n # Asset\n elif obj.upper() == obj and re.match(\"^[A-Z\\.]*$\", obj):\n data = Asset(obj)\n t = [[\"Key\", \"Value\"]]\n for key in 
sorted(data):\n value = data[key]\n if isinstance(value, dict):\n value = format_tx(value)\n t.append([key, value])\n print_table(t)\n\n # Public Key\n elif re.match(\"^BTS.{48,55}$\", obj):\n account = ctx.bitshares.wallet.getAccountFromPublicKey(obj)\n if account:\n t = [[\"Account\"]]\n t.append([account])\n print_table(t)\n else:\n print_message(\"Public Key not known: %s\" % obj, \"warning\")\n\n # Account name\n elif re.match(\"^[a-zA-Z0-9\\-\\._]{2,64}$\", obj):\n account = Account(obj, full=True)\n if account:\n t = [[\"Key\", \"Value\"]]\n for key in sorted(account):\n value = account[key]\n if isinstance(value, dict) or isinstance(value, list):\n value = format_tx(value)\n t.append([key, value])\n print_table(t)\n else:\n print_message(\"Account %s unknown\" % obj, \"warning\")\n\n elif \":\" in obj:\n vote = ctx.bitshares.rpc.lookup_vote_ids([obj])[0]\n if vote:\n t = [[\"Key\", \"Value\"]]\n for key in sorted(vote):\n value = vote[key]\n if isinstance(value, dict) or isinstance(value, list):\n value = format_tx(value)\n t.append([key, value])\n print_table(t)\n else:\n print_message(\"voteid %s unknown\" % obj, \"warning\")\n\n else:\n print_message(\"Couldn't identify object to read\", \"warning\")", "response": "Obtain all kinds of information for a list of objects."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef fees(ctx, currency):\n from bitsharesbase.operationids import getOperationNameForId\n from bitshares.market import Market\n\n market = Market(\"%s:%s\" % (currency, \"BTS\"))\n ticker = market.ticker()\n if \"quoteSettlement_price\" in ticker:\n price = ticker.get(\"quoteSettlement_price\")\n else:\n price = ticker.get(\"latest\", 0)\n price.invert()\n\n chain = Blockchain(bitshares_instance=ctx.bitshares)\n feesObj = chain.chainParameters().get(\"current_fees\")\n fees = feesObj[\"parameters\"]\n\n t = [[\"Operation\", \"Type\", \"Fee\", currency]]\n\n for fee in 
fees:\n for f in fee[1]:\n t.append(\n [\n highlight(getOperationNameForId(fee[0])),\n detail(f),\n detail(\n str(Amount({\"amount\": fee[1].get(f, 0), \"asset_id\": \"1.3.0\"}))\n ),\n detail(\n str(\n price\n * Amount({\"amount\": fee[1].get(f, 0), \"asset_id\": \"1.3.0\"})\n )\n ),\n ]\n )\n print_table(t)", "response": "List fees in a single market"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create(ctx, to, amount, symbol, secret, hash, account, expiration):\n ctx.blockchain.blocking = True\n tx = ctx.blockchain.htlc_create(\n Amount(amount, symbol),\n to,\n secret,\n hash_type=hash,\n expiration=expiration,\n account=account,\n )\n tx.pop(\"trx\", None)\n print_tx(tx)\n results = tx.get(\"operation_results\", {})\n if results:\n htlc_id = results[0][1]\n print(\"Your htlc_id is: {}\".format(htlc_id))", "response": "Create an HTLC contract"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef redeem(ctx, htlc_id, secret, account):\n print_tx(ctx.blockchain.htlc_redeem(htlc_id, secret, account=account))", "response": "Redeem an HTLC contract"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef unique_bits(flags_class):\n flags_class = unique(flags_class)\n other_bits = 0\n for name, member in flags_class.__members_without_aliases__.items():\n bits = int(member)\n if other_bits & bits:\n for other_name, other_member in flags_class.__members_without_aliases__.items():\n if int(other_member) & bits:\n raise ValueError(\"%r: '%s' and '%s' have overlapping bits\" % (flags_class, other_name, name))\n else:\n other_bits |= bits", "response": "A decorator for flags classes to forbid declaring flags with overlapping bits."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef process_inline_members_definition(members):\n if 
isinstance(members, str):\n members = ((name, UNDEFINED) for name in members.replace(',', ' ').split())\n elif isinstance(members, (tuple, list, collections.Set)):\n if members and isinstance(next(iter(members)), str):\n members = ((name, UNDEFINED) for name in members)\n elif isinstance(members, collections.Mapping):\n members = members.items()\n return members", "response": "Process inline members definition."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntaking only the output of to_simple_str () and returns a new object.", "response": "def from_simple_str(cls, s):\n \"\"\" Accepts only the output of to_simple_str(). The output of __str__() is invalid as input. \"\"\"\n if not isinstance(s, str):\n raise TypeError(\"Expected an str instance, received %r\" % (s,))\n return cls(cls.bits_from_simple_str(s))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef from_str(cls, s):\n if not isinstance(s, str):\n raise TypeError(\"Expected an str instance, received %r\" % (s,))\n return cls(cls.bits_from_str(s))", "response": "Returns a new instance of the class from a string."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef bits_from_str(cls, s):\n try:\n if len(s) <= len(cls.__name__) or not s.startswith(cls.__name__):\n return cls.bits_from_simple_str(s)\n c = s[len(cls.__name__)]\n if c == '(':\n if not s.endswith(')'):\n raise ValueError\n return cls.bits_from_simple_str(s[len(cls.__name__)+1:-1])\n elif c == '.':\n member_name = s[len(cls.__name__)+1:]\n return int(cls.__all_members__[member_name])\n else:\n raise ValueError\n except ValueError as ex:\n if ex.args:\n raise\n raise ValueError(\"%s.%s: invalid input: %r\" % (cls.__name__, cls.bits_from_str.__name__, s))\n except KeyError as ex:\n raise ValueError(\"%s.%s: Invalid flag name '%s' in input: %r\" % (cls.__name__, cls.bits_from_str.__name__,\n ex.args[0], 
s))", "response": "Converts the output of __str__ into an integer."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\npublishes a new feed for a given market and price.", "response": "def newfeed(ctx, symbol, price, market, cer, mssr, mcr, account):\n \"\"\" Publish a price feed!\n\n Examples:\n\n \\b\n uptick newfeed USD 0.01 USD/BTS\n uptick newfeed USD 100 BTS/USD\n\n Core Exchange Rate (CER)\n \\b\n If no CER is provided, the cer will be the same as the settlement price\n with a 5% premium (Only if the 'market' is against the core asset (e.g.\n BTS)). The CER is always defined against the core asset (BTS). This\n means that if the backing asset is not the core asset (BTS), then you must\n specify your own cer as a float. The float `x` will be interpreted as\n `x BTS/SYMBOL`.\n \"\"\"\n if cer:\n cer = Price(cer, quote=symbol, base=\"1.3.0\", bitshares_instance=ctx.bitshares)\n\n print_tx(\n ctx.bitshares.publish_price_feed(\n symbol, Price(price, market), cer=cer, mssr=mssr, mcr=mcr, account=account\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngenerating a list of prices for each asset in assets.", "response": "def feeds(ctx, assets, pricethreshold, maxage):\n \"\"\" Price Feed Overview\n \"\"\"\n import builtins\n\n witnesses = Witnesses(bitshares_instance=ctx.bitshares)\n\n def test_price(p, ref):\n if math.fabs(float(p / ref) - 1.0) > pricethreshold / 100.0:\n return click.style(str(p), fg=\"red\")\n elif math.fabs(float(p / ref) - 1.0) > pricethreshold / 2.0 / 100.0:\n return click.style(str(p), fg=\"yellow\")\n else:\n return click.style(str(p), fg=\"green\")\n\n def price_diff(p, ref):\n d = (float(p) - float(ref)) / float(ref) * 100\n if math.fabs(d) >= 5:\n color = \"red\"\n elif math.fabs(d) >= 2.5:\n color = \"yellow\"\n else:\n color = \"green\"\n return click.style(\"{:8.2f}%\".format(d), fg=color)\n\n def test_date(d):\n t = d.replace(tzinfo=None)\n now = 
datetime.utcnow()\n if now < t + timedelta(minutes=maxage):\n return click.style(str(t), fg=\"green\")\n if now < t + timedelta(minutes=maxage / 2.0):\n return click.style(str(t), fg=\"yellow\")\n else:\n return click.style(str(t), fg=\"red\")\n\n output = \"\"\n for asset in tqdm(assets):\n t = PrettyTable(\n [\n \"Asset\",\n \"Producer\",\n \"Active Witness\",\n \"Date\",\n \"Settlement Price\",\n \"Core Exchange Price\",\n \"MCR\",\n \"SSPR\",\n \"delta\",\n ]\n )\n t.align = \"c\"\n t.align[\"Producer\"] = \"l\"\n asset = Asset(asset, full=True, bitshares_instance=ctx.bitshares)\n current_feed = asset.feed\n feeds = asset.feeds\n producingwitnesses = builtins.set()\n witness_accounts = [x[\"witness_account\"] for x in witnesses]\n for feed in tqdm(feeds):\n producingwitnesses.add(feed[\"producer\"][\"id\"])\n t.add_row(\n [\n asset[\"symbol\"],\n feed[\"producer\"][\"name\"],\n click.style(\n \"X\" if feed[\"producer\"][\"id\"] in witness_accounts else \"\",\n bold=True,\n ),\n test_date(feed[\"date\"]),\n test_price(\n feed[\"settlement_price\"], current_feed[\"settlement_price\"]\n ),\n test_price(\n feed[\"core_exchange_rate\"], current_feed[\"core_exchange_rate\"]\n ),\n feed[\"maintenance_collateral_ratio\"] / 10,\n feed[\"maximum_short_squeeze_ratio\"] / 10,\n price_diff(\n feed[\"core_exchange_rate\"], current_feed[\"core_exchange_rate\"]\n ),\n ]\n )\n for missing in builtins.set(witness_accounts).difference(producingwitnesses):\n witness = Witness(missing)\n t.add_row(\n [\n click.style(asset[\"symbol\"], bg=\"red\"),\n click.style(witness.account[\"name\"], bg=\"red\"),\n click.style(\n \"X\" if feed[\"producer\"][\"id\"] in witness_accounts else \"\",\n bold=True,\n ),\n click.style(str(datetime(1970, 1, 1))),\n click.style(\"missing\", bg=\"red\"),\n click.style(\"missing\", bg=\"red\"),\n click.style(\"missing\", bg=\"red\"),\n click.style(\"missing\", bg=\"red\"),\n click.style(\"missing\", bg=\"red\"),\n ]\n )\n output += 
t.get_string(sortby=\"Date\", reversesort=True)\n output += \"\\n\"\n click.echo(output)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef print_table(*args, **kwargs):\n t = format_table(*args, **kwargs)\n click.echo(t)", "response": "Print a table of available attributes."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconstructs a RPC call directly with the given arguments", "response": "def rpc(ctx, call, arguments, api):\n \"\"\" Construct RPC call directly\n \\b\n You can specify which API to send the call to:\n\n uptick rpc --api assets\n\n You can also specify lists using\n\n uptick rpc get_objects \"['2.0.0', '2.1.0']\"\n\n \"\"\"\n try:\n data = list(eval(d) for d in arguments)\n except:\n data = arguments\n ret = getattr(ctx.bitshares.rpc, call)(*data, api=api)\n print_dict(ret)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef createcommittee(ctx, url, account):\n print_tx(ctx.bitshares.create_committee_member(url, account=account))", "response": "Create a committee account for your account"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsigning a json - formatted transaction", "response": "def sign(ctx, filename):\n \"\"\" Sign a json-formatted transaction\n \"\"\"\n if filename:\n tx = filename.read()\n else:\n tx = sys.stdin.read()\n tx = TransactionBuilder(eval(tx), bitshares_instance=ctx.bitshares)\n tx.appendMissingSignatures()\n tx.sign()\n print_tx(tx.json())"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nobtain a random private key pair", "response": "def randomwif(prefix, num):\n \"\"\" Obtain a random private/public key pair\n \"\"\"\n from bitsharesbase.account import PrivateKey\n\n t = [[\"wif\", \"pubkey\"]]\n for n in range(0, num):\n wif = PrivateKey()\n t.append([str(wif), format(wif.pubkey, prefix)])\n 
print_table(t)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef approvewitness(ctx, witnesses, account):\n print_tx(ctx.bitshares.approvewitness(witnesses, account=account))", "response": "Approve witnesses for account"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nlist witnesses and relevant information", "response": "def witnesses(ctx):\n \"\"\" List witnesses and relevant information\n \"\"\"\n t = [\n [\n \"weight\",\n \"account\",\n \"signing_key\",\n \"vote_id\",\n \"url\",\n \"total_missed\",\n \"last_confirmed_block_num\",\n ]\n ]\n for witness in sorted(Witnesses(), key=lambda x: x.weight, reverse=True):\n witness.refresh()\n t.append(\n [\n \"{:.2f}%\".format(witness.weight * 100),\n witness.account[\"name\"],\n witness[\"signing_key\"],\n witness[\"vote_id\"],\n witness[\"url\"],\n witness[\"total_missed\"],\n witness[\"last_confirmed_block_num\"],\n ]\n )\n print_table(t)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlists accounts vesting balances", "response": "def vesting(ctx, account):\n \"\"\" List accounts vesting balances\n \"\"\"\n account = Account(account, full=True)\n t = [[\"vesting_id\", \"claimable\"]]\n for vest in account[\"vesting_balances\"]:\n vesting = Vesting(vest)\n t.append([vesting[\"id\"], str(vesting.claimable)])\n print_table(t)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef claim(ctx, vestingid, account, amount):\n vesting = Vesting(vestingid)\n if amount:\n amount = Amount(float(amount), \"BTS\")\n else:\n amount = vesting.claimable\n print_tx(\n ctx.bitshares.vesting_balance_withdraw(\n vesting[\"id\"], amount=amount, account=vesting[\"owner\"]\n )\n )", "response": "Claim funds from the vesting balance\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreserving a amount of a 
symbol in a bitshares account", "response": "def reserve(ctx, amount, symbol, account):\n \"\"\" Reserve/Burn tokens\n \"\"\"\n print_tx(\n ctx.bitshares.reserve(\n Amount(amount, symbol, bitshares_instance=ctx.bitshares), account=account\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrunning the Webserver and SocketIO and app", "response": "def run(context, port):\n \"\"\" Run the Webserver/SocketIO and app\n \"\"\"\n global ctx\n ctx = context\n app.run(port=port)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef offline(f):\n\n @click.pass_context\n @verbose\n def new_func(ctx, *args, **kwargs):\n ctx.obj[\"offline\"] = True\n ctx.bitshares = BitShares(**ctx.obj)\n ctx.blockchain = ctx.bitshares\n ctx.bitshares.set_shared_instance()\n return ctx.invoke(f, *args, **kwargs)\n\n return update_wrapper(new_func, f)", "response": "This decorator allows you to access the current object s offline bitshares."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef proposals(ctx, account):\n proposals = Proposals(account)\n t = [\n [\n \"id\",\n \"expiration\",\n \"proposer\",\n \"required approvals\",\n \"available approvals\",\n \"review period time\",\n \"proposal\",\n ]\n ]\n for proposal in proposals:\n t.append(\n [\n proposal[\"id\"],\n proposal[\"expiration_time\"],\n Account(proposal.proposer)[\"name\"],\n [\n Account(x)[\"name\"]\n for x in (\n proposal[\"required_active_approvals\"]\n + proposal[\"required_owner_approvals\"]\n )\n ],\n json.dumps(\n [Account(x)[\"name\"] for x in proposal[\"available_active_approvals\"]]\n + proposal[\"available_key_approvals\"]\n + proposal[\"available_owner_approvals\"],\n indent=1,\n ),\n proposal.get(\"review_period_time\", None),\n format_dict(proposal[\"proposed_transaction\"]),\n ]\n )\n\n print_table(t)", "response": "List proposals for a single account"} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sign(ctx, file, account):\n if not file:\n print_message(\"Prompting for message. Terminate with CTRL-D\", \"info\")\n file = click.get_text_stream(\"stdin\")\n m = Message(file.read(), bitshares_instance=ctx.bitshares)\n print_message(m.sign(account), \"info\")", "response": "Sign a message with an account"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nverifies a signed message", "response": "def verify(ctx, file, account):\n \"\"\" Verify a signed message\n \"\"\"\n if not file:\n print_message(\"Prompting for message. Terminate with CTRL-D\", \"info\")\n file = click.get_text_stream(\"stdin\")\n m = Message(file.read(), bitshares_instance=ctx.bitshares)\n try:\n if m.verify():\n print_message(\"Verified\", \"success\")\n else:\n print_message(\"not verified\", \"error\")\n except InvalidMessageSignature:\n print_message(\"Signature INVALID!\", \"error\")"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nallows a key or account to a bitshares base account", "response": "def allow(ctx, foreign_account, permission, weight, threshold, account):\n \"\"\" Add a key/account to an account's permission\n \"\"\"\n if not foreign_account:\n from bitsharesbase.account import PasswordKey\n\n pwd = click.prompt(\n \"Password for Key Derivation\", hide_input=True, confirmation_prompt=True\n )\n foreign_account = format(\n PasswordKey(account, pwd, permission).get_public(), \"BTS\"\n )\n print_tx(\n ctx.bitshares.allow(\n foreign_account,\n weight=weight,\n account=account,\n permission=permission,\n threshold=threshold,\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef disallow(ctx, foreign_account, permission, threshold, account):\n print_tx(\n ctx.bitshares.disallow(\n foreign_account, account=account, 
permission=permission, threshold=threshold\n )\n )", "response": "Disallow a key or account from a given permission."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef history(ctx, account, limit, type, csv, exclude, raw):\n from bitsharesbase.operations import getOperationNameForId\n\n t = [[\"#\", \"time (block)\", \"operation\", \"details\"]]\n\n for a in account:\n account = Account(a, bitshares_instance=ctx.bitshares)\n for b in account.history(limit=limit, only_ops=type, exclude_ops=exclude):\n block = BlockHeader(b[\"block_num\"])\n row = [\n b[\"id\"],\n \"%s (%s)\" % (block.time(), b[\"block_num\"]),\n \"{} ({})\".format(getOperationNameForId(b[\"op\"][0]), b[\"op\"][0]),\n pprintOperation(b) if not raw else json.dumps(b, indent=4),\n ]\n t.append(row)\n print_table(t)", "response": "Show the history of an account"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef transfer(ctx, to, amount, asset, memo, account):\n print_tx(ctx.bitshares.transfer(to, amount, asset, memo=memo, account=account))", "response": "Transfer a amount of asset to another asset"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef newaccount(ctx, accountname, account, password):\n print_tx(\n ctx.bitshares.create_account(accountname, registrar=account, password=password)\n )", "response": "Create a new account"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef cloneaccount(ctx, account_name, account):\n from bitsharesbase import transactions, operations\n\n account = Account(account)\n op = {\n \"fee\": {\"amount\": 0, \"asset_id\": \"1.3.0\"},\n \"registrar\": account[\"id\"],\n \"referrer\": account[\"id\"],\n \"referrer_percent\": 100,\n \"name\": account_name,\n \"owner\": account[\"owner\"],\n \"active\": account[\"active\"],\n \"options\": 
account[\"options\"],\n \"extensions\": {},\n \"prefix\": ctx.bitshares.rpc.chain_params[\"prefix\"],\n }\n op = operations.Account_create(**op)\n print_tx(ctx.bitshares.finalizeOp(op, account, \"active\"))", "response": "Clone an account\n\n This copies the owner and active permissions as well as the\n options (e.g. votes, memo key)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef changememokey(ctx, key, account):\n print_tx(ctx.bitshares.update_memo_key(key, account=account))", "response": "Change the memo key of an account"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef whitelist(ctx, whitelist_account, account):\n account = Account(account, blockchain_instance=ctx.blockchain)\n print_tx(account.whitelist(whitelist_account))", "response": "Add an account to a whitelist"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef blacklist(ctx, blacklist_account, account):\n account = Account(account, blockchain_instance=ctx.blockchain)\n print_tx(account.blacklist(blacklist_account))", "response": "Add an account to a blacklist"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nremoving an account from any list", "response": "def unlist(ctx, unlist_account, account):\n \"\"\" Remove an account from any list\n \"\"\"\n account = Account(account, blockchain_instance=ctx.blockchain)\n print_tx(account.nolist(unlist_account))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the proxy account for an account", "response": "def setproxy(ctx, proxy_account, account):\n \"\"\" Set the proxy account for an account\n \"\"\"\n print_tx(ctx.bitshares.set_proxy(proxy_account, account=account))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlists call positions of an account or an asset", "response": "def calls(ctx, obj, limit):\n \"\"\" List 
call/short positions of an account or an asset\n \"\"\"\n if obj.upper() == obj:\n # Asset\n from bitshares.asset import Asset\n\n asset = Asset(obj, full=True)\n calls = asset.get_call_orders(limit)\n t = [[\"account\", \"debt\", \"collateral\", \"call price\", \"ratio\"]]\n for call in calls:\n t.append(\n [\n str(call[\"account\"][\"name\"]),\n str(call[\"debt\"]),\n str(call[\"collateral\"]),\n str(call[\"call_price\"]),\n \"%.2f\" % (call[\"ratio\"]),\n ]\n )\n print_table(t)\n else:\n # Account\n from bitshares.dex import Dex\n\n dex = Dex(bitshares_instance=ctx.bitshares)\n calls = dex.list_debt_positions(account=obj)\n t = [[\"debt\", \"collateral\", \"call price\", \"ratio\"]]\n for symbol in calls:\n t.append(\n [\n str(calls[symbol][\"debt\"]),\n str(calls[symbol][\"collateral\"]),\n str(calls[symbol][\"call_price\"]),\n \"%.2f\" % (calls[symbol][\"ratio\"]),\n ]\n )\n print_table(t)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nshow pending settlement orders of a bitasset", "response": "def settlements(ctx, asset, limit):\n \"\"\" Show pending settlement orders of a bitasset\n \"\"\"\n from bitshares.asset import Asset\n\n asset = Asset(asset, full=True)\n if not asset.is_bitasset:\n print_message(\"{} is not a bitasset.\".format(asset[\"symbol\"]), \"warning\")\n sys.exit(1)\n calls = asset.get_settle_orders(limit)\n t = [[\"account\", \"amount\", \"date\"]]\n for call in calls:\n t.append([str(call[\"account\"][\"name\"]), str(call[\"amount\"]), str(call[\"date\"])])\n print_table(t)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef approveworker(ctx, workers, account):\n print_tx(ctx.bitshares.approveworker(workers, account=account))", "response": "Approve workers and print out the approveworker status."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nlists all workers of an account", "response": "def 
workers(ctx, account, top):\n \"\"\" List all workers (of an account)\n \"\"\"\n workers = Workers(account)\n t = [[\"id\", \"name/url\", \"daily_pay\", \"votes\", \"time\", \"account\"]]\n workers_sorted = sorted(\n workers, key=lambda x: int(x[\"total_votes_for\"]), reverse=True\n )\n if top:\n workers_sorted = workers_sorted[: top + 1]\n for worker in workers_sorted:\n if worker[\"work_end_date\"] < datetime.datetime.utcnow():\n continue\n votes = Amount({\"amount\": worker[\"total_votes_for\"], \"asset_id\": \"1.3.0\"})\n amount = Amount({\"amount\": worker[\"daily_pay\"], \"asset_id\": \"1.3.0\"})\n t.append(\n [\n worker[\"id\"],\n \"{name}\\n{url}\".format(**worker),\n str(amount),\n str(votes),\n \"{work_begin_date:%Y-%m-%d}\\n-\\n{work_end_date:%Y-%m-%d}\".format(\n **worker\n ),\n str(Account(worker[\"worker_account\"])[\"name\"]),\n ]\n )\n print_table(t)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef addkey(ctx, key):\n if not key:\n while True:\n key = click.prompt(\n \"Private Key (wif) [Enter to quit]\",\n hide_input=True,\n show_default=False,\n default=\"exit\",\n )\n if not key or key == \"exit\":\n break\n try:\n ctx.bitshares.wallet.addPrivateKey(key)\n except Exception as e:\n click.echo(str(e))\n continue\n else:\n for k in key:\n try:\n ctx.bitshares.wallet.addPrivateKey(k)\n except Exception as e:\n click.echo(str(e))\n\n installedKeys = ctx.bitshares.wallet.getPublicKeys()\n if len(installedKeys) == 1:\n name = ctx.bitshares.wallet.getAccountFromPublicKey(installedKeys[0])\n if name: # only if a name to the key was found\n account = Account(name, bitshares_instance=ctx.bitshares)\n click.echo(\"=\" * 30)\n click.echo(\"Setting new default user: %s\" % account[\"name\"])\n click.echo()\n click.echo(\"You can change these settings with:\")\n click.echo(\" uptick set default_account \")\n click.echo(\"=\" * 30)\n config[\"default_account\"] = account[\"name\"]", "response": "Add a 
private key to the wallet"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeleting a private key from the wallet", "response": "def delkey(ctx, pubkeys):\n \"\"\" Delete a private key from the wallet\n \"\"\"\n if not pubkeys:\n pubkeys = click.prompt(\"Public Keys\").split(\" \")\n if click.confirm(\n \"Are you sure you want to delete keys from your wallet?\\n\"\n \"This step is IRREVERSIBLE! If you don't have a backup, \"\n \"You may lose access to your account!\"\n ):\n for pub in pubkeys:\n ctx.bitshares.wallet.removePrivateKeyFromPublicKey(pub)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nobtains private key for given public key", "response": "def getkey(ctx, pubkey):\n \"\"\" Obtain private key in WIF format\n \"\"\"\n click.echo(ctx.bitshares.wallet.getPrivateKeyForPublicKey(pubkey))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef listkeys(ctx):\n t = [[\"Available Key\"]]\n for key in ctx.bitshares.wallet.getPublicKeys():\n t.append([key])\n print_table(t)", "response": "List all keys for all networks"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlist accounts for the connected network", "response": "def listaccounts(ctx):\n \"\"\" List accounts (for the connected network)\n \"\"\"\n t = [[\"Name\", \"Key\", \"Owner\", \"Active\", \"Memo\"]]\n for key in tqdm(ctx.bitshares.wallet.getPublicKeys(True)):\n for account in ctx.bitshares.wallet.getAccountsFromPublicKey(key):\n account = Account(account)\n is_owner = key in [x[0] for x in account[\"owner\"][\"key_auths\"]]\n is_active = key in [x[0] for x in account[\"active\"][\"key_auths\"]]\n is_memo = key == account[\"options\"][\"memo_key\"]\n t.append([\n account[\"name\"],\n key,\n \"x\" if is_owner else \"\",\n \"x\" if is_active else \"\",\n \"x\" if is_memo else \"\",\n ])\n print_table(t)"} {"SOURCE": "codesearchnet", 
"instruction": "Create a Python 3 function to\nimport an account using an account password", "response": "def importaccount(ctx, account, role):\n \"\"\" Import an account using an account password\n \"\"\"\n from bitsharesbase.account import PasswordKey\n\n password = click.prompt(\"Account Passphrase\", hide_input=True)\n account = Account(account, bitshares_instance=ctx.bitshares)\n imported = False\n\n if role == \"owner\":\n owner_key = PasswordKey(account[\"name\"], password, role=\"owner\")\n owner_pubkey = format(\n owner_key.get_public_key(), ctx.bitshares.rpc.chain_params[\"prefix\"]\n )\n if owner_pubkey in [x[0] for x in account[\"owner\"][\"key_auths\"]]:\n print_message(\"Importing owner key!\")\n owner_privkey = owner_key.get_private_key()\n ctx.bitshares.wallet.addPrivateKey(owner_privkey)\n imported = True\n\n if role == \"active\":\n active_key = PasswordKey(account[\"name\"], password, role=\"active\")\n active_pubkey = format(\n active_key.get_public_key(), ctx.bitshares.rpc.chain_params[\"prefix\"]\n )\n if active_pubkey in [x[0] for x in account[\"active\"][\"key_auths\"]]:\n print_message(\"Importing active key!\")\n active_privkey = active_key.get_private_key()\n ctx.bitshares.wallet.addPrivateKey(active_privkey)\n imported = True\n\n if role == \"memo\":\n memo_key = PasswordKey(account[\"name\"], password, role=role)\n memo_pubkey = format(\n memo_key.get_public_key(), ctx.bitshares.rpc.chain_params[\"prefix\"]\n )\n if memo_pubkey == account[\"memo_key\"]:\n print_message(\"Importing memo key!\")\n memo_privkey = memo_key.get_private_key()\n ctx.bitshares.wallet.addPrivateKey(memo_privkey)\n imported = True\n\n if not imported:\n print_message(\"No matching key(s) found. 
Password correct?\", \"error\")"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate default config file", "response": "def create(ctx):\n \"\"\" Create default config file\n \"\"\"\n import shutil\n\n this_dir, this_filename = os.path.split(__file__)\n default_config_file = os.path.join(this_dir, \"apis/example-config.yaml\")\n config_file = ctx.obj[\"configfile\"]\n shutil.copyfile(default_config_file, config_file)\n print_message(\"Config file created: {}\".format(config_file))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nstarting the API according to the config file", "response": "def start(ctx):\n \"\"\" Start the API according to the config file\n \"\"\"\n module = ctx.config.get(\"api\", \"poloniex\")\n # unlockWallet\n if module == \"poloniex\":\n from .apis import poloniex\n\n poloniex.run(ctx, port=5000)\n else:\n print_message(\"Unknown 'api'!\", \"error\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef fetch(url, **kwargs):\n timeout = kwargs.pop('timeout', DEFAULT_TIMEOUT)\n kwargs['return_crawler'] = True\n crawler = wait_for(timeout, _fetch_in_reactor, url, **kwargs)\n if hasattr(crawler.spider, 'response'):\n return crawler.spider.response", "response": "Fetches an URL and returns the response."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncrawl an URL with given callback.", "response": "def crawl(url, callback, **kwargs):\n \"\"\"Crawls an URL with given callback.\n\n Parameters\n ----------\n url : str\n An URL to crawl.\n callback : callable\n A function to be used as spider callback for the given URL.\n spider_cls : scrapy.Spider (default: DefaultSpider)\n A spider class to be used in the crawler instance.\n capture_items : bool (default: True)\n If enabled, the scraped items are captured and returned.\n return_crawler : bool (default: False)\n If enabled, the crawler instance is 
returned. If ``capture_items`` is\n enabled, the scraped items is collected in ``crawler.items``.\n settings : dict, optional\n Custom crawler settings.\n timeout : int, (default: DEFAULT_TIMEOUT)\n Result wait timeout.\n\n Returns\n -------\n out\n By default, the scraped items. If ``return_crawler`` is ``True``,\n returns the crawler instance.\n\n Raises\n ------\n crochet.TimeoutError\n\n \"\"\"\n timeout = kwargs.pop('timeout', DEFAULT_TIMEOUT)\n return wait_for(timeout, _crawl_in_reactor, url, callback, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrunning a spider and returns the scraped items by default.", "response": "def run_spider(spider_cls, **kwargs):\n \"\"\"Runs a spider and returns the scraped items (by default).\n\n Parameters\n ----------\n spider_cls : scrapy.Spider\n A spider class to run.\n capture_items : bool (default: True)\n If enabled, the scraped items are captured and returned.\n return_crawler : bool (default: False)\n If enabled, the crawler instance is returned. 
If ``capture_items`` is\n enabled, the scraped items is collected in ``crawler.items``.\n settings : dict, optional\n Custom crawler settings.\n timeout : int, (default: DEFAULT_TIMEOUT)\n Result wait timeout.\n\n Returns\n -------\n out : list or scrapy.crawler.Crawler instance\n The scraped items by default or the crawler instance if\n ``return_crawler`` is ``True``.\n\n Raises\n ------\n crochet.TimeoutError\n\n \"\"\"\n timeout = kwargs.pop('timeout', DEFAULT_TIMEOUT)\n return wait_for(timeout, _run_spider_in_reactor, spider_cls, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfetching an URL and returns the response.", "response": "def _fetch_in_reactor(url, spider_cls=DefaultSpider, **kwargs):\n \"\"\"Fetches an URL and returns the response.\n\n Parameters\n ----------\n url : str\n An URL to fetch.\n spider_cls : scrapy.Spider (default: DefaultSpider)\n A spider class to be used in the crawler.\n kwargs : dict, optional\n Additional arguments to be passed to ``_run_spider_in_reactor``.\n\n Returns\n -------\n crochet.EventualResult\n\n \"\"\"\n def parse(self, response):\n self.response = response\n req = Request(url) if isinstance(url, six.string_types) else url\n req.dont_filter = True\n req.meta['handle_httpstatus_all'] = True\n spider_cls = override_start_requests(spider_cls, [req], parse=parse)\n return _run_spider_in_reactor(spider_cls, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncrawl given URL with given callback.", "response": "def _crawl_in_reactor(url, callback, spider_cls=DefaultSpider, **kwargs):\n \"\"\"Crawls given URL with given callback.\n\n Parameters\n ----------\n url : str\n The URL to crawl.\n callback : callable\n Function to be used as callback for the request.\n spider_cls : scrapy.Spider (default: DefaultSpider)\n A spider class to be used in the crawler instance.\n kwargs : dict, optional\n Extra arguments to be passed to 
``_run_spider_in_reactor``.\n\n Returns\n -------\n crochet.EventualResult\n\n \"\"\"\n spider_cls = override_start_requests(spider_cls, [url], callback)\n return _run_spider_in_reactor(spider_cls, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrunning given spider inside the twisted reactor.", "response": "def _run_spider_in_reactor(spider_cls, capture_items=True, return_crawler=False,\n settings=None, **kwargs):\n \"\"\"Runs given spider inside the twisted reactor.\n\n Parameters\n ----------\n spider_cls : scrapy.Spider\n Spider to run.\n capture_items : bool (default: True)\n If enabled, the scraped items are captured and returned.\n return_crawler : bool (default: False)\n If enabled, the crawler instance is returned. If ``capture_items`` is\n enabled, the scraped items are collected in ``crawler.items``.\n settings : dict, optional\n Custom crawler settings.\n\n Returns\n -------\n out : crochet.EventualResult\n If ``capture_items`` is ``True``, returns scraped items. 
If\n ``return_crawler`` is ``True``, returns the crawler instance.\n\n \"\"\"\n settings = settings or {}\n crawler_settings = get_project_settings().copy()\n crawler_settings.setdict(default_settings)\n crawler_settings.setdict(settings)\n log_scrapy_info(crawler_settings)\n crawler = Crawler(spider_cls, crawler_settings)\n d = crawler.crawl(**kwargs)\n if capture_items:\n crawler.items = _OutputItems()\n crawler.signals.connect(crawler.items.append, signal=signals.item_scraped)\n d.addCallback(lambda _: crawler.items)\n if return_crawler:\n d.addCallback(lambda _: crawler)\n return d"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a new spider class overriding the start requests method.", "response": "def override_start_requests(spider_cls, start_urls, callback=None, **attrs):\n \"\"\"Returns a new spider class overriding the ``start_requests``.\n\n This function is useful to replace the start requests of an existing spider\n class on runtime.\n\n Parameters\n ----------\n spider_cls : scrapy.Spider\n Spider class to be used as base class.\n start_urls : iterable\n Iterable of URLs or ``Request`` objects.\n callback : callable, optional\n Callback for the start URLs.\n attrs : dict, optional\n Additional class attributes.\n\n Returns\n -------\n out : class\n A subclass of ``spider_cls`` with overrided ``start_requests`` method.\n\n \"\"\"\n def start_requests():\n for url in start_urls:\n req = Request(url, dont_filter=True) if isinstance(url, six.string_types) else url\n if callback is not None:\n req.callback = callback\n yield req\n attrs['start_requests'] = staticmethod(start_requests)\n return type(spider_cls.__name__, (spider_cls, ), attrs)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwaits for a result.", "response": "def wait_for(timeout, func, *args, **kwargs):\n \"\"\"Waits for a eventual result.\n\n Parameters\n ----------\n timeout : int\n How much time 
to wait, in seconds.\n func : callable\n A function that returns ``crochet.EventualResult``.\n args : tuple, optional\n Arguments for ``func``.\n kwargs : dict, optional\n Keyword arguments for ``func``.\n\n Returns\n -------\n out\n Given ``func`` result.\n\n Raises\n ------\n crochet.TimeoutError\n\n \"\"\"\n result = func(*args, **kwargs)\n try:\n return result.wait(timeout)\n except crochet.TimeoutError:\n result.cancel()\n raise"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef highlight(code, lexer='html', formatter='html', output_wrapper=None):\n if not pygments:\n raise TypeError(\"pygments module required\")\n if not isinstance(code, six.string_types):\n code = pprint.pformat(code)\n if isinstance(lexer, six.string_types):\n lexer = pygments.lexers.get_lexer_by_name(lexer)\n if isinstance(formatter, six.string_types):\n formatter = pygments.formatters.get_formatter_by_name(formatter)\n if formatter.name.lower() == 'html':\n formatter.full = True\n formatter.cssclass = \"pygments-%s\" % uuid.uuid4()\n if output_wrapper is None:\n output_wrapper = HTML\n return output_wrapper(pygments.highlight(code, lexer, formatter))", "response": "Highlights given code using pygments."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_next_slot(self, tag):\n '''\n get the first unused supval table key, or 0 if the\n table is empty.\n useful for filling the supval table sequentially.\n '''\n slot = self._n.suplast(tag)\n if slot is None or slot == idaapi.BADNODE:\n return 0\n else:\n return slot + 1", "response": "get the next unused supval table key"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nrunning the git executable with the arguments given and returns a list of lines produced on its standard output.", "response": "def run_git(self, args, git_env=None):\n '''\n Runs the git executable with the arguments given and returns a list of\n 
lines produced on its standard output.\n '''\n\n popen_kwargs = {\n 'stdout': subprocess.PIPE,\n 'stderr': subprocess.PIPE,\n }\n\n if git_env:\n popen_kwargs['env'] = git_env\n\n if self._git_toplevel:\n popen_kwargs['cwd'] = self._git_toplevel\n\n git_process = subprocess.Popen(\n [GitRunner._git_executable] + args,\n **popen_kwargs\n )\n\n try:\n out, err = git_process.communicate()\n git_process.wait()\n except Exception as e:\n raise GitError(\"Couldn't run 'git {args}':{newline}{ex}\".format(\n args=' '.join(args),\n newline=os.linesep,\n ex=str(e)\n ))\n\n if (0 != git_process.returncode) or err:\n if err:\n err = err.decode('utf_8')\n raise GitError(\"'git {args}' failed with:{newline}{err}\".format(\n args=' '.join(args),\n newline=os.linesep,\n err=err\n ))\n\n if not out:\n raise ValueError(\"No output\")\n\n return out.decode('utf_8').splitlines()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd a signature block to a MAR file.", "response": "def add_signature_block(src_fileobj, dest_fileobj, signing_algorithm, signature=None):\n \"\"\"Add a signature block to marfile, a MarReader object.\n\n Productversion and channel are preserved, but any existing signatures are overwritten.\n\n Args:\n src_fileobj (file object): The input MAR file to add a signature to\n dest_fileobj (file object): File object to write new MAR file to. 
Must be open in w+b mode.\n signing_algorithm (str): One of 'sha1', or 'sha384'\n signature (bytes): Signature to write, or None to use a dummy signature\n \"\"\"\n algo_id = {'sha1': 1, 'sha384': 2}[signing_algorithm]\n if not signature:\n signature = make_dummy_signature(algo_id)\n\n src_fileobj.seek(0)\n mardata = mar.parse_stream(src_fileobj)\n\n # Header\n header = mardata.header\n dest_fileobj.write(mar_header.build(header))\n\n # Signature block\n sig = dict(algorithm_id=algo_id,\n size=len(signature),\n signature=signature,\n )\n\n # This will be fixed up later\n filesize = 0\n sigs_offset = dest_fileobj.tell()\n sigs = sigs_header.build(dict(\n filesize=filesize,\n count=1,\n sigs=[sig],\n ))\n dest_fileobj.write(sigs)\n\n # Write the additional section\n dest_fileobj.write(extras_header.build(mardata.additional))\n\n # Write the data\n data_offset = dest_fileobj.tell()\n src_fileobj.seek(mardata.data_offset)\n write_to_file(takeexactly(src_fileobj, mardata.data_length), dest_fileobj)\n\n # Write the index\n index_offset = dest_fileobj.tell()\n\n index = mardata.index\n\n # Adjust the offsets\n data_offset_delta = data_offset - mardata.data_offset\n\n for e in index.entries:\n e.offset += data_offset_delta\n\n dest_fileobj.write(index_header.build(index))\n filesize = dest_fileobj.tell()\n\n # Go back and update the index offset and filesize\n dest_fileobj.seek(0)\n header.index_offset = index_offset\n dest_fileobj.write(mar_header.build(header))\n\n dest_fileobj.seek(sigs_offset)\n sigs = sigs_header.build(dict(\n filesize=filesize,\n count=1,\n sigs=[sig],\n ))\n dest_fileobj.write(sigs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding path to the MAR file.", "response": "def add(self, path, compress=None):\n \"\"\"Add `path` to the MAR file.\n\n If `path` is a file, it will be added directly.\n If `path` is a directory, it will be traversed recursively and all\n files inside will be added.\n\n Args:\n path (str): 
path to file or directory on disk to add to this MAR\n file\n compress (str): One of 'xz', 'bz2', or None. Defaults to None.\n \"\"\"\n if os.path.isdir(path):\n self.add_dir(path, compress)\n else:\n self.add_file(path, compress)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_dir(self, path, compress):\n if not os.path.isdir(path):\n raise ValueError('{} is not a directory'.format(path))\n for root, dirs, files in os.walk(path):\n for f in files:\n self.add_file(os.path.join(root, f), compress)", "response": "Adds all files under a directory path to the MAR file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_fileobj(self, fileobj, path, compress, flags=None):\n f = file_iter(fileobj)\n flags = flags or os.stat(path).st_mode & 0o777\n return self.add_stream(f, path, compress, flags)", "response": "Add the contents of a file object to the MAR file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding the contents of an iterable to the MAR file.", "response": "def add_stream(self, stream, path, compress, flags):\n \"\"\"Add the contents of an iterable to the MAR file.\n\n Args:\n stream (iterable): yields blocks of data\n path (str): name of this file in the MAR file\n compress (str): One of 'xz', 'bz2', or None. 
Defaults to None.\n flags (int): permission of this file in the MAR file\n \"\"\"\n self.data_fileobj.seek(self.last_offset)\n\n if compress == 'bz2':\n stream = bz2_compress_stream(stream)\n elif compress == 'xz':\n stream = xz_compress_stream(stream)\n elif compress is None:\n pass\n else:\n raise ValueError('Unsupported compression type: {}'.format(compress))\n\n size = write_to_file(stream, self.data_fileobj)\n\n # On Windows, convert \\ to /\n # very difficult to mock this out for coverage on linux\n if os.sep == '\\\\': # pragma: no cover\n path = path.replace('\\\\', '/')\n\n e = dict(\n name=six.u(path),\n offset=self.last_offset,\n size=size,\n flags=flags,\n )\n self.entries.append(e)\n self.last_offset += e['size']"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_file(self, path, compress):\n if not os.path.isfile(path):\n raise ValueError('{} is not a file'.format(path))\n self.fileobj.seek(self.last_offset)\n\n with open(path, 'rb') as f:\n flags = os.stat(path).st_mode & 0o777\n self.add_fileobj(f, path, compress, flags)", "response": "Add a single file to the MAR file."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwrites the MAR header to the file.", "response": "def write_header(self):\n \"\"\"Write the MAR header to the file.\n\n The MAR header includes the MAR magic bytes as well as the offset to\n where the index data can be found.\n \"\"\"\n self.fileobj.seek(0)\n header = mar_header.build(dict(index_offset=self.last_offset))\n self.fileobj.write(header)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef dummy_signatures(self):\n if not self.signing_algorithm:\n return []\n algo_id = {'sha1': 1, 'sha384': 2}[self.signing_algorithm]\n signature = make_dummy_signature(algo_id)\n return [(algo_id, signature)]", "response": "Create a dummy signature."} {"SOURCE": 
"codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef calculate_signatures(self):\n if not self.signing_algorithm:\n return []\n\n algo_id = {'sha1': 1, 'sha384': 2}[self.signing_algorithm]\n hashers = [(algo_id, make_hasher(algo_id))]\n for block in get_signature_data(self.fileobj, self.filesize):\n [h.update(block) for (_, h) in hashers]\n\n signatures = [(algo_id, sign_hash(self.signing_key, h.finalize(), h.algorithm.name)) for (algo_id, h) in hashers]\n return signatures", "response": "Calculate the signatures for this MAR file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwriting the signature data to the MAR file.", "response": "def write_signatures(self, signatures):\n \"\"\"Write signature data to the MAR file.\n\n Args:\n signatures (list): list of signature tuples of the form\n (algorithm_id, signature_data)\n\n \"\"\"\n self.fileobj.seek(self.signature_offset)\n sig_entries = [dict(algorithm_id=id_,\n size=len(sig),\n signature=sig)\n for (id_, sig) in signatures]\n\n sigs = sigs_header.build(dict(\n filesize=self.filesize,\n count=len(signatures),\n sigs=sig_entries,\n ))\n self.fileobj.write(sigs)\n signatures_len = len(sigs)\n self.additional_offset = self.signature_offset + signatures_len\n # sanity check; this should never happen\n if not self.additional_offset == self.fileobj.tell(): # pragma: no cover\n raise IOError('ended up at unexpected offset')"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nwriting the additional information to the MAR header.", "response": "def write_additional(self, productversion, channel):\n \"\"\"Write the additional information to the MAR header.\n\n Args:\n productversion (str): product and version string\n channel (str): channel string\n\n \"\"\"\n self.fileobj.seek(self.additional_offset)\n extras = extras_header.build(dict(\n count=1,\n sections=[dict(\n channel=six.u(channel),\n productversion=six.u(productversion),\n 
size=len(channel) + len(productversion) + 2 + 8,\n padding=b'',\n )],\n ))\n\n self.fileobj.write(extras)\n self.last_offset = self.fileobj.tell()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite the index of all our files to the MAR file.", "response": "def write_index(self):\n \"\"\"Write the index of all our files to the MAR file.\"\"\"\n self.fileobj.seek(self.last_offset)\n index = index_header.build(dict(entries=self.entries))\n self.fileobj.write(index)\n self.filesize = self.fileobj.tell()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef finish(self):\n # Update the last_offset in the mar header\n self.write_header()\n # Write out the index of contents\n self.write_index()\n\n if not self.use_old_format:\n # Refresh the signature\n sigs = self.calculate_signatures()\n self.write_signatures(sigs)", "response": "Finalize the MAR file."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nbuilds argument parser for the CLI.", "response": "def build_argparser():\n \"\"\"Build argument parser for the CLI.\"\"\"\n parser = ArgumentParser('Utility for managing MAR files')\n create_group = parser.add_argument_group(\"Create a MAR file\")\n create_group.add_argument(\"-c\", \"--create\", metavar=\"MARFILE\", help=\"create MAR\")\n create_group.add_argument(\"-V\", \"--productversion\", dest=\"productversion\",\n help=\"product/version string\")\n create_group.add_argument(\"-H\", \"--channel\", dest=\"channel\",\n help=\"channel this MAR file is applicable to\")\n create_group.add_argument(\"files\", nargs=REMAINDER,\n help=\"files to add to the MAR file\")\n\n extract_group = parser.add_argument_group(\"Extract a MAR file\")\n extract_group.add_argument(\"-x\", \"--extract\", help=\"extract MAR\", metavar=\"MARFILE\")\n\n list_group = parser.add_argument_group(\"Print information on a MAR file\")\n 
list_group.add_argument(\"-t\", \"--list\", help=\"print out MAR contents\",\n metavar=\"MARFILE\")\n list_group.add_argument(\"-T\", \"--list-detailed\", metavar=\"MARFILE\",\n help=\"print out MAR contents including signatures\")\n\n verify_group = parser.add_argument_group(\"Verify a MAR file\")\n verify_group.add_argument(\"-v\", \"--verify\", metavar=\"MARFILE\",\n help=\"verify the marfile\")\n\n parser.add_argument(\"-j\", \"--bzip2\", action=\"store_const\", dest=\"compression\",\n const=\"bz2\", help=\"compress/decompress members with BZ2\")\n parser.add_argument(\"-J\", \"--xz\", action=\"store_const\", dest=\"compression\",\n const=\"xz\", help=\"compress/decompress archive with XZ\")\n parser.add_argument(\"--auto\", action=\"store_const\", dest=\"compression\",\n const=\"auto\", help=\"automatically decompress contents\")\n\n parser.add_argument(\"-k\", \"--keyfiles\", dest=\"keyfiles\", action='append',\n help=\"sign/verify with given key(s)\")\n parser.add_argument(\"-C\", \"--chdir\", dest=\"chdir\",\n help=\"chdir to this directory before creating or \"\n \"extracting; location of marfile isn't affected by \"\n \"this option.\")\n parser.add_argument(\"--verbose\", dest=\"loglevel\", action=\"store_const\",\n const=logging.DEBUG, default=logging.WARN,\n help=\"increase logging verbosity\")\n parser.add_argument('--version', action='version', version='mar version {}'.format(mardor.version_str))\n\n signing_group = parser.add_argument_group('Sign a MAR file')\n signing_group.add_argument('--hash', help='output hash for signing', choices=('sha1', 'sha384'))\n signing_group.add_argument('--asn1', help='format hash as an ASN1 DigestInfo block',\n default=False, action='store_true')\n signing_group.add_argument('--add-signature', help='inject signature', nargs=3,\n metavar=('input', 'output', 'signature'))\n\n return parser"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef do_extract(marfile, 
destdir, decompress):\n with open(marfile, 'rb') as f:\n with MarReader(f) as m:\n m.extract(str(destdir), decompress=decompress)", "response": "Extract the MAR file to the destdir."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets public keys for the given list of files.", "response": "def get_keys(keyfiles, signature_type):\n \"\"\"Get public keys for the given keyfiles.\n\n Args:\n keyfiles: List of filenames with public keys, or :mozilla- prefixed key\n names\n signature_type: one of 'sha1' or 'sha384'\n\n Returns:\n List of public keys as strings\n\n \"\"\"\n builtin_keys = {\n ('release', 'sha1'): [mardor.mozilla.release1_sha1, mardor.mozilla.release2_sha1],\n ('release', 'sha384'): [mardor.mozilla.release1_sha384, mardor.mozilla.release2_sha384],\n ('nightly', 'sha1'): [mardor.mozilla.nightly1_sha1, mardor.mozilla.nightly2_sha1],\n ('nightly', 'sha384'): [mardor.mozilla.nightly1_sha384, mardor.mozilla.nightly2_sha384],\n ('dep', 'sha1'): [mardor.mozilla.dep1_sha1, mardor.mozilla.dep2_sha1],\n ('dep', 'sha384'): [mardor.mozilla.dep1_sha384, mardor.mozilla.dep2_sha384],\n ('autograph-stage', 'sha384'): [mardor.mozilla.autograph_stage_sha384],\n }\n keys = []\n for keyfile in keyfiles:\n if keyfile.startswith(':mozilla-'):\n name = keyfile.split(':mozilla-')[1]\n try:\n keys.extend(builtin_keys[name, signature_type])\n except KeyError:\n raise ValueError('Invalid internal key name: {}'\n .format(keyfile))\n else:\n key = open(keyfile, 'rb').read()\n keys.append(key)\n return keys"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef do_verify(marfile, keyfiles=None):\n try:\n with open(marfile, 'rb') as f:\n with MarReader(f) as m:\n # Check various parts of the mar file\n # e.g. 
signature algorithms and additional block sections\n errors = m.get_errors()\n if errors:\n print(\"File is not well formed: {}\".format(errors))\n sys.exit(1)\n\n if keyfiles:\n try:\n keys = get_keys(keyfiles, m.signature_type)\n except ValueError as e:\n print(e)\n sys.exit(1)\n\n if any(m.verify(key) for key in keys):\n print(\"Verification OK\")\n return True\n else:\n print(\"Verification failed\")\n sys.exit(1)\n else:\n print(\"Verification OK\")\n return True\n\n except Exception as e:\n print(\"Error opening or parsing file: {}\".format(e))\n sys.exit(1)", "response": "Verify the MAR file."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nlist the MAR file.", "response": "def do_list(marfile, detailed=False):\n \"\"\"\n List the MAR file.\n\n Yields lines of text to output\n \"\"\"\n with open(marfile, 'rb') as f:\n with MarReader(f) as m:\n if detailed:\n if m.compression_type:\n yield \"Compression type: {}\".format(m.compression_type)\n if m.signature_type:\n yield \"Signature type: {}\".format(m.signature_type)\n if m.mardata.signatures:\n plural = \"s\" if (m.mardata.signatures.count == 0 or m.mardata.signatures.count > 1) else \"\"\n yield \"Signature block found with {} signature{}\".format(m.mardata.signatures.count, plural)\n for s in m.mardata.signatures.sigs:\n yield \"- Signature {} size {}\".format(s.algorithm_id, s.size)\n yield \"\"\n if m.mardata.additional:\n yield \"{} additional block found:\".format(len(m.mardata.additional.sections))\n for s in m.mardata.additional.sections:\n if s.id == 1:\n yield (\" - Product Information Block:\")\n yield (\" - MAR channel name: {}\".format(s.channel))\n yield (\" - Product version: {}\".format(s.productversion))\n yield \"\"\n else:\n yield (\"Unknown additional data\")\n yield (\"{:7s} {:7s} {:7s}\".format(\"SIZE\", \"MODE\", \"NAME\"))\n for e in m.mardata.index.entries:\n yield (\"{:<7d} {:04o} {}\".format(e.size, e.flags, e.name))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new MAR file.", "response": "def do_create(marfile, files, compress, productversion=None, channel=None,\n signing_key=None, signing_algorithm=None):\n \"\"\"Create a new MAR file.\"\"\"\n with open(marfile, 'w+b') as f:\n with MarWriter(f, productversion=productversion, channel=channel,\n signing_key=signing_key,\n signing_algorithm=signing_algorithm,\n ) as m:\n for f in files:\n m.add(f, compress=compress)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef do_hash(hash_algo, marfile, asn1=False):\n # Add a dummy signature into a temporary file\n dst = tempfile.TemporaryFile()\n with open(marfile, 'rb') as f:\n add_signature_block(f, dst, hash_algo)\n\n dst.seek(0)\n\n with MarReader(dst) as m:\n hashes = m.calculate_hashes()\n h = hashes[0][1]\n if asn1:\n h = format_hash(h, hash_algo)\n\n print(base64.b64encode(h).decode('ascii'))", "response": "Output the hash for this MAR file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a signature to the MAR file.", "response": "def do_add_signature(input_file, output_file, signature_file):\n \"\"\"Add a signature to the MAR file.\"\"\"\n signature = open(signature_file, 'rb').read()\n if len(signature) == 256:\n hash_algo = 'sha1'\n elif len(signature) == 512:\n hash_algo = 'sha384'\n else:\n raise ValueError()\n\n with open(output_file, 'w+b') as dst:\n with open(input_file, 'rb') as src:\n add_signature_block(src, dst, hash_algo, signature)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the signing key and signing algorithm from the commandline.", "response": "def get_key_from_cmdline(parser, args):\n \"\"\"Return the signing key and signing algorithm from the commandline.\"\"\"\n if args.keyfiles:\n signing_key = open(args.keyfiles[0], 'rb').read()\n bits = get_keysize(signing_key)\n if bits == 
2048:\n signing_algorithm = 'sha1'\n elif bits == 4096:\n signing_algorithm = 'sha384'\n else:\n parser.error(\"Unsupported key size {} from key {}\".format(bits, args.keyfiles[0]))\n\n print(\"Using {} to sign using algorithm {!s}\".format(args.keyfiles[0], signing_algorithm))\n else:\n signing_key = None\n signing_algorithm = None\n\n return signing_key, signing_algorithm"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main(argv=None):\n parser = build_argparser()\n\n args = parser.parse_args(argv)\n\n logging.basicConfig(level=args.loglevel, format=\"%(message)s\")\n\n check_args(parser, args)\n\n if args.extract:\n marfile = os.path.abspath(args.extract)\n if args.chdir:\n os.chdir(args.chdir)\n do_extract(marfile, os.getcwd(), args.compression)\n\n elif args.verify:\n do_verify(args.verify, args.keyfiles)\n\n elif args.list:\n print(\"\\n\".join(do_list(args.list)))\n\n elif args.list_detailed:\n print(\"\\n\".join(do_list(args.list_detailed, detailed=True)))\n\n elif args.create:\n marfile = os.path.abspath(args.create)\n signing_key, signing_algorithm = get_key_from_cmdline(parser, args)\n\n if args.chdir:\n os.chdir(args.chdir)\n do_create(marfile, args.files, args.compression,\n productversion=args.productversion, channel=args.channel,\n signing_key=signing_key, signing_algorithm=signing_algorithm)\n\n elif args.hash:\n do_hash(args.hash, args.files[0], args.asn1)\n\n elif args.add_signature:\n do_add_signature(args.add_signature[0], args.add_signature[1], args.add_signature[2])\n\n # sanity check; should never happen\n else: # pragma: no cover\n parser.error(\"Unsupported action\")", "response": "Main entry point for the mar command-line tool."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the latest compression type used in this MAR.", "response": "def compression_type(self):\n \"\"\"Return the latest compression type used in this MAR.\n\n Returns:\n One of None, 'bz2', or 
'xz'\n\n \"\"\"\n best_compression = None\n for e in self.mardata.index.entries:\n self.fileobj.seek(e.offset)\n magic = self.fileobj.read(10)\n compression = guess_compression(magic)\n if compression == 'xz':\n best_compression = 'xz'\n break\n elif compression == 'bz2' and best_compression is None:\n best_compression = 'bz2'\n return best_compression"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef signature_type(self):\n if not self.mardata.signatures:\n return None\n\n for sig in self.mardata.signatures.sigs:\n if sig.algorithm_id == 1:\n return 'sha1'\n elif sig.algorithm_id == 2:\n return 'sha384'\n else:\n return 'unknown'", "response": "Return the signature type used in this MAR."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef extract_entry(self, e, decompress='auto'):\n self.fileobj.seek(e.offset)\n stream = file_iter(self.fileobj)\n stream = takeexactly(stream, e.size)\n if decompress == 'auto':\n stream = auto_decompress_stream(stream)\n elif decompress == 'bz2':\n stream = bz2_decompress_stream(stream)\n elif decompress == 'xz':\n stream = xz_decompress_stream(stream)\n elif decompress is None:\n pass\n else:\n raise ValueError(\"Unsupported decompression type: {}\".format(decompress))\n\n for block in stream:\n yield block", "response": "Yields the blocks of data for this entry from this MAR file."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nextracts the entire MAR file into a directory.", "response": "def extract(self, destdir, decompress='auto'):\n \"\"\"Extract the entire MAR file into a directory.\n\n Args:\n destdir (str): A local directory on disk into which the contents of\n this MAR file will be extracted. Required parent directories\n will be created as necessary.\n decompress (obj, optional): Controls whether files are decompressed\n when extracted. 
Must be one of 'auto' or None. Defaults to\n 'auto'.\n \"\"\"\n for e in self.mardata.index.entries:\n name = e.name\n entry_path = safejoin(destdir, name)\n entry_dir = os.path.dirname(entry_path)\n mkdir(entry_dir)\n with open(entry_path, 'wb') as f:\n write_to_file(self.extract_entry(e, decompress), f)\n os.chmod(entry_path, e.flags)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nverifying that this MAR file is well formed. Returns a list of strings describing errors in the MAR file.", "response": "def get_errors(self):\n \"\"\"Verify that this MAR file is well formed.\n\n Returns:\n A list of strings describing errors in the MAR file\n None if this MAR file appears well formed.\n\n \"\"\"\n errors = []\n errors.extend(self._get_signature_errors())\n errors.extend(self._get_additional_errors())\n errors.extend(self._get_entry_errors())\n\n return errors if errors else None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef verify(self, verify_key):\n if not self.mardata.signatures or not self.mardata.signatures.sigs:\n # This MAR file can't be verified since it has no signatures\n return False\n\n hashers = []\n for sig in self.mardata.signatures.sigs:\n hashers.append((sig.algorithm_id, sig.signature, make_hasher(sig.algorithm_id)))\n\n assert len(hashers) == len(self.mardata.signatures.sigs)\n\n for block in get_signature_data(self.fileobj,\n self.mardata.signatures.filesize):\n [h.update(block) for (_, _, h) in hashers]\n\n for algo_id, sig, h in hashers:\n if not verify_signature(verify_key, sig, h.finalize(), h.algorithm.name):\n return False\n else:\n return True", "response": "Verify that this MAR file has a valid signature."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef productinfo(self):\n if not self.mardata.additional:\n return None\n\n for s in self.mardata.additional.sections:\n if 
s.id == 1:\n return str(s.productversion), str(s.channel)\n\n return None", "response": "Return the productversion and channel of this MAR if present."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncalculate hashes of the contents of this MAR file.", "response": "def calculate_hashes(self):\n \"\"\"Return hashes of the contents of this MAR file.\n\n The hashes depend on the algorithms defined in the MAR file's signature block.\n\n Returns:\n A list of (algorithm_id, hash) tuples\n\n \"\"\"\n hashers = []\n if not self.mardata.signatures:\n return []\n\n for s in self.mardata.signatures.sigs:\n h = make_hasher(s.algorithm_id)\n hashers.append((s.algorithm_id, h))\n\n for block in get_signature_data(self.fileobj, self.mardata.signatures.filesize):\n [h.update(block) for (_, h) in hashers]\n\n return [(algo_id, h.finalize()) for (algo_id, h) in hashers]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _has_extras(ctx):\n if not ctx.index.entries:\n return False\n\n return ctx.data_offset > 8 and ctx.data_offset > (ctx.signatures.offset_end + 8)", "response": "Determine if a MAR file has additional section blocks or not."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading the public key from a PEM encoded string.", "response": "def get_publickey(keydata):\n \"\"\"Load the public key from a PEM encoded string.\"\"\"\n try:\n key = serialization.load_pem_public_key(\n keydata,\n backend=default_backend(),\n )\n return key\n except ValueError:\n key = serialization.load_pem_private_key(\n keydata,\n password=None,\n backend=default_backend(),\n )\n key = key.public_key()\n return key"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_privatekey(keydata):\n key = serialization.load_pem_private_key(\n keydata,\n password=None,\n backend=default_backend(),\n )\n return key", "response": "Load the private 
key from a PEM encoded string."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreads data from MAR file that is required for MAR signatures.", "response": "def get_signature_data(fileobj, filesize):\n \"\"\"Read data from MAR file that is required for MAR signatures.\n\n Args:\n fileobj (file-like object): file-like object to read the MAR data from\n filesize (int): the total size of the file\n\n Yields:\n blocks of bytes representing the data required to generate or validate\n signatures.\n\n \"\"\"\n # Read everything except the signature entries\n # The first 8 bytes are covered, as is everything from the beginning\n # of the additional section to the end of the file. The signature\n # algorithm id and size fields are also covered.\n\n fileobj.seek(0)\n marfile = mar.parse_stream(fileobj)\n if not marfile.signatures:\n raise IOError(\"Can't generate signature data for file without signature blocks\")\n\n # MAR header\n fileobj.seek(0)\n block = fileobj.read(8)\n yield block\n\n # Signatures header\n sigs = sigs_header.parse_stream(fileobj)\n\n sig_types = [(sig.algorithm_id, sig.size) for sig in sigs.sigs]\n\n block = Int64ub.build(filesize) + Int32ub.build(sigs.count)\n yield block\n\n # Signature algorithm id and size per entry\n for algorithm_id, size in sig_types:\n block = Int32ub.build(algorithm_id) + Int32ub.build(size)\n yield block\n\n # Everything else in the file is covered\n for block in file_iter(fileobj):\n yield block"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_hasher(algorithm_id):\n if algorithm_id == 1:\n return hashes.Hash(hashes.SHA1(), default_backend())\n elif algorithm_id == 2:\n return hashes.Hash(hashes.SHA384(), default_backend())\n else:\n raise ValueError(\"Unsupported signing algorithm: %s\" % algorithm_id)", "response": "Create a hashing object for the given signing algorithm."} {"SOURCE": "codesearchnet", "instruction": "Given the 
following Python 3 function, write the documentation\ndef sign_hash(private_key, hash, hash_algo):\n hash_algo = _hash_algorithms[hash_algo]\n return get_privatekey(private_key).sign(\n hash,\n padding.PKCS1v15(),\n utils.Prehashed(hash_algo),\n )", "response": "Sign the given hash with the given private key."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef verify_signature(public_key, signature, hash, hash_algo):\n hash_algo = _hash_algorithms[hash_algo]\n try:\n return get_publickey(public_key).verify(\n signature,\n hash,\n padding.PKCS1v15(),\n utils.Prehashed(hash_algo),\n ) is None\n except InvalidSignature:\n return False", "response": "Verify the given signature is correct for the given public key."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates an RSA keypair.", "response": "def make_rsa_keypair(bits):\n \"\"\"Generate an RSA keypair.\n\n Args:\n bits (int): number of bits to use for the key.\n\n Returns:\n (private_key, public_key) - both as PEM encoded strings\n\n \"\"\"\n private_key = rsa.generate_private_key(\n public_exponent=65537,\n key_size=bits,\n backend=default_backend(),\n )\n private_pem = private_key.private_bytes(\n encoding=serialization.Encoding.PEM,\n format=serialization.PrivateFormat.TraditionalOpenSSL,\n encryption_algorithm=serialization.NoEncryption(),\n )\n public_pem = private_key.public_key().public_bytes(\n encoding=serialization.Encoding.PEM,\n format=serialization.PublicFormat.SubjectPublicKeyInfo,\n )\n return private_pem, public_pem"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmaking a directory and its parents.", "response": "def mkdir(path):\n \"\"\"Make a directory and its parents.\n\n Args:\n path (str): path to create\n\n Returns:\n None\n\n Raises:\n OSError if the directory cannot be created.\n\n \"\"\"\n try:\n os.makedirs(path)\n # sanity check\n if not 
os.path.isdir(path): # pragma: no cover\n raise IOError('path is not a directory')\n except OSError as e:\n # EEXIST\n if e.errno == 17 and os.path.isdir(path):\n return\n raise"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef takeexactly(iterable, size):\n total = 0\n for block in iterable:\n n = min(len(block), size - total)\n block = block[:n]\n if block:\n yield block\n total += len(block)\n if total >= size:\n break\n if total < size:\n raise ValueError('not enough data (yielded {} of {})'.format(total, size))\n\n # sanity check; this should never happen\n if total != size: # pragma: no cover\n raise ValueError('yielded too much data')", "response": "Yields blocks from iterable until exactly size has been returned."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write_to_file(src, dst):\n n = 0\n for block in src:\n dst.write(block)\n n += len(block)\n return n", "response": "Writes data from src into dst."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncompress data from src into a series of blocks of compressed data.", "response": "def bz2_compress_stream(src, level=9):\n \"\"\"Compress data from `src`.\n\n Args:\n src (iterable): iterable that yields blocks of data to compress\n level (int): compression level (1-9) default is 9\n\n Yields:\n blocks of compressed data\n\n \"\"\"\n compressor = bz2.BZ2Compressor(level)\n for block in src:\n encoded = compressor.compress(block)\n if encoded:\n yield encoded\n yield compressor.flush()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndecompresses data from src.", "response": "def bz2_decompress_stream(src):\n \"\"\"Decompress data from `src`.\n\n Args:\n src (iterable): iterable that yields blocks of compressed data\n\n Yields:\n blocks of uncompressed data\n\n \"\"\"\n dec = bz2.BZ2Decompressor()\n for block in 
src:\n decoded = dec.decompress(block)\n if decoded:\n yield decoded"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncompressing data from src.", "response": "def xz_compress_stream(src):\n \"\"\"Compress data from `src`.\n\n Args:\n src (iterable): iterable that yields blocks of data to compress\n\n Yields:\n blocks of compressed data\n\n \"\"\"\n compressor = lzma.LZMACompressor(\n check=lzma.CHECK_CRC64,\n filters=[\n {\"id\": lzma.FILTER_X86},\n {\"id\": lzma.FILTER_LZMA2,\n \"preset\": lzma.PRESET_DEFAULT},\n ])\n for block in src:\n encoded = compressor.compress(block)\n if encoded:\n yield encoded\n yield compressor.flush()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndecompresses data from src.", "response": "def xz_decompress_stream(src):\n \"\"\"Decompress data from `src`.\n\n Args:\n src (iterable): iterable that yields blocks of compressed data\n\n Yields:\n blocks of uncompressed data\n\n \"\"\"\n dec = lzma.LZMADecompressor()\n for block in src:\n decoded = dec.decompress(block)\n if decoded:\n yield decoded\n\n if dec.unused_data: # pragma: nocover; can't figure out how to test this\n raise IOError('Read unused data at end of compressed stream')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef auto_decompress_stream(src):\n block = next(src)\n compression = guess_compression(block)\n if compression == 'bz2':\n src = bz2_decompress_stream(chain([block], src))\n elif compression == 'xz':\n src = xz_decompress_stream(chain([block], src))\n else:\n src = chain([block], src)\n\n for block in src:\n yield block", "response": "Decompress data from src if required."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef path_is_inside(path, dirname):\n path = os.path.abspath(path)\n dirname = os.path.abspath(dirname)\n while len(path) >= len(dirname):\n if 
path == dirname:\n return True\n newpath = os.path.dirname(path)\n if newpath == path:\n return False\n path = newpath\n return False", "response": "Return True if path is under dirname."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef safejoin(base, *elements):\n # TODO: do we really want to be absolute here?\n base = os.path.abspath(base)\n path = os.path.join(base, *elements)\n path = os.path.normpath(path)\n if not path_is_inside(path, base):\n raise ValueError('target path is outside of the base path')\n return path", "response": "Safely joins paths together."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef filesize(fileobj):\n current = fileobj.tell()\n fileobj.seek(0, 2)\n end = fileobj.tell()\n fileobj.seek(current)\n return end", "response": "Return the number of bytes in the fileobj."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef close(self, destroy=False, sync=False, timeout=None):\n _SharedPV.close(self, destroy)\n if sync:\n # TODO: still not syncing PVA workers...\n _sync()\n self._disconnected.Wait(timeout=timeout)", "response": "Close the PV and disconnect any pending workers."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nprocessing queued work until interrupt is called", "response": "def handle(self):\n \"\"\"Process queued work until interrupt() is called\n \"\"\"\n while True:\n # TODO: Queue.get() (and anything using thread.allocate_lock\n # ignores signals :( so timeout periodically to allow delivery\n try:\n callable = None # ensure no lingering references to past work while blocking\n callable = self._Q.get(True, 1.0)\n except Empty:\n continue # retry on timeout\n try:\n if callable is self._stopit:\n break\n callable()\n except:\n _log.exception(\"Error from WorkQueue\")\n finally:\n self._Q.task_done()"} 
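The `xz_compress_stream` and `xz_decompress_stream` generators above are exact inverses, so piping one into the other should round-trip any stream of blocks. A minimal self-contained sketch (standard-library `lzma` only; the filter chain mirrors the one shown above):

```python
import lzma

def xz_compress_stream(src):
    # Streaming XZ compressor: CRC64 integrity check, x86 BCJ + LZMA2 filters,
    # matching the filter chain used above.
    compressor = lzma.LZMACompressor(
        check=lzma.CHECK_CRC64,
        filters=[
            {"id": lzma.FILTER_X86},
            {"id": lzma.FILTER_LZMA2, "preset": lzma.PRESET_DEFAULT},
        ])
    for block in src:
        encoded = compressor.compress(block)
        if encoded:
            yield encoded
    yield compressor.flush()  # emit buffered data plus the XZ stream footer

def xz_decompress_stream(src):
    # Streaming XZ decompressor; yields plaintext blocks as they become available.
    dec = lzma.LZMADecompressor()
    for block in src:
        decoded = dec.decompress(block)
        if decoded:
            yield decoded

# Round-trip: feed compressed blocks straight into the decompressor.
blocks = [b"hello ", b"world"] * 100
restored = b"".join(xz_decompress_stream(xz_compress_stream(iter(blocks))))
assert restored == b"".join(blocks)
```

Note that the final `compressor.flush()` is essential: without it the XZ stream footer is never emitted and the decompressor would see a truncated stream.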
{"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_debug(lvl):\n lvl = _lvlmap.get(lvl, lvl)\n assert lvl in _lvls, lvl\n _ClientProvider.set_debug(lvl)", "response": "Set PVA global debug print level."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncleans up all resources held by P4P at shutdown.", "response": "def cleanup():\n \"\"\"P4P sequenced shutdown. Intended to be atexit. Idempotent.\n \"\"\"\n _log.debug(\"P4P atexit begins\")\n # clean provider registry\n from .server import clearProviders, _cleanup_servers\n clearProviders()\n\n # close client contexts\n from .client.raw import _cleanup_contexts\n _cleanup_contexts()\n\n # stop servers\n _cleanup_servers()\n\n # shutdown default work queue\n from .util import _defaultWorkQueue\n _defaultWorkQueue.stop()\n _log.debug(\"P4P atexit completes\")"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef forever(klass, *args, **kws):\n with klass(*args, **kws):\n _log.info(\"Running server\")\n try:\n while True:\n time.sleep(100)\n except KeyboardInterrupt:\n pass\n finally:\n _log.info(\"Stopping server\")", "response": "Create a server and block the calling thread until KeyboardInterrupt."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef buildType(valtype, extra=[], display=False, control=False, valueAlarm=False):\n isarray = valtype[:1] == 'a'\n F = [\n ('value', valtype),\n ('alarm', alarm),\n ('timeStamp', timeStamp),\n ]\n _metaHelper(F, valtype, display=display, control=control, valueAlarm=valueAlarm)\n F.extend(extra)\n return Type(id=\"epics:nt/NTScalarArray:1.0\" if isarray else \"epics:nt/NTScalar:1.0\",\n spec=F)", "response": "Build an NTScalar or NTScalarArray Type for the given value type code."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
wrap(self, value, timestamp=None):\n if isinstance(value, Value):\n return value\n elif isinstance(value, ntwrappercommon):\n return value.raw\n elif isinstance(value, dict):\n return self.Value(self.type, value)\n else:\n S, NS = divmod(float(timestamp or time.time()), 1.0)\n return self.Value(self.type, {\n 'value': value,\n 'timeStamp': {\n 'secondsPastEpoch': S,\n 'nanoseconds': NS * 1e9,\n },\n })", "response": "Pack python value into Value object"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nunpacking a Value into an augmented python type", "response": "def unwrap(klass, value):\n \"\"\"Unpack a Value into an augmented python type (selected from the 'value' field)\n \"\"\"\n assert isinstance(value, Value), value\n V = value.value\n try:\n T = klass.typeMap[type(V)]\n except KeyError:\n raise ValueError(\"Can't unwrap value of type %s\" % type(V))\n try:\n return T(value.value)._store(value)\n except Exception as e:\n raise ValueError(\"Can't construct %s around %s (%s): %s\" % (T, value, type(value), e))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef changed(self, *fields):\n S = super(Value, self).changed\n for fld in fields or (None,): # no args tests for any change\n if S(fld):\n return True\n return False", "response": "Test if one or more fields have changed."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a set containing the names of all changed fields in the current node.", "response": "def changedSet(self, expand=False, parents=False):\n \"\"\"\n :param bool expand: Whether to expand when entire sub-structures are marked as changed.\n If True, then sub-structures are expanded and only leaf fields will be included.\n If False, then a direct translation is made, which may include both leaf and sub-structure fields.\n :param bool parents: If True, include fake entries for parent sub-structures with leaf 
fields marked as changed.\n :returns: A :py:class:`set` of names of those fields marked as changed.\n\n Return a :py:class:`set` containing the names of all changed fields. ::\n\n A = Value(Type([\n ('x', 'i'),\n ('z', ('S', None, [\n ('a', 'i'),\n ('b', 'i'),\n ])),\n ]), {\n })\n\n A.mark('z')\n assert A.changedSet(expand=False) == {'z'} # only shows fields explicitly marked\n assert A.changedSet(expand=True) == {'z.a', 'z.b'} # actually used during network transmission\n A.mark('z.a') # redundant\n assert A.changedSet(expand=False) == {'z', 'z.a'}\n assert A.changedSet(expand=True) == {'z.a', 'z.b'}\n A.unmark('z')\n assert A.changedSet(expand=False) == {'z.a'}\n assert A.changedSet(expand=True) == {'z.a'}\n assert A.changedSet(expand=False, parents=True) == {'z', 'z.a'}\n assert A.changedSet(expand=True, parents=True) == {'z', 'z.a'}\n\n\n * expand=False, parents=False gives a direct mapping of the underlying BitSet as it would (get/monitor),\n or have been (put/rpc), moved over the network.\n * expand=True, parents=False gives the effective set of leaf fields which will be moved over the network.\n taking into account the use of whole sub-structure compress/shorthand bits.\n * expand=False, parents=True gives a way of testing if anything changed within a set of interesting fields\n (cf. 
set.intersect).\n \"\"\"\n return ValueBase.changedSet(self, expand, parents)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef wrap(self, value):\n attrib = getattr(value, 'attrib', {})\n\n S, NS = divmod(time.time(), 1.0)\n value = numpy.asarray(value) # loses any special/augmented attributes\n dims = list(value.shape)\n dims.reverse() # inner-most sent as left\n\n if 'ColorMode' not in attrib:\n # attempt to infer color mode from shape\n if value.ndim==2:\n attrib['ColorMode'] = 0 # gray\n\n elif value.ndim==3:\n for idx,dim in enumerate(dims):\n if dim==3: # assume it's a color\n attrib['ColorMode'] = 2 + idx # 2 - RGB1, 3 - RGB2, 4 - RGB3\n break # assume that the first is color, and any subsequent dim=3 is a thin ROI\n\n dataSize = value.nbytes\n\n return Value(self.type, {\n 'value': (self._code2u[value.dtype.char], value.flatten()),\n 'compressedSize': dataSize,\n 'uncompressedSize': dataSize,\n 'uniqueId': 0,\n 'timeStamp': {\n 'secondsPastEpoch': S,\n 'nanoseconds': NS * 1e9,\n },\n 'attribute': [{'name': K, 'value': V} for K, V in attrib.items()],\n 'dimension': [{'size': N,\n 'offset': 0,\n 'fullSize': N,\n 'binning': 1,\n 'reverse': False} for N in dims],\n })", "response": "Wrap numpy. ndarray as Value\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef unwrap(klass, value):\n V = value.value\n if V is None:\n # Union empty. 
treat as zero-length char array\n V = numpy.zeros((0,), dtype=numpy.uint8)\n return V.view(klass.ntndarray)._store(value)", "response": "Unwrap Value as NTNDArray\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef unwrapHandler(handler, nt):\n def dounwrap(code, msg, val):\n _log.debug(\"Handler (%s, %s, %s) -> %s\", code, msg, LazyRepr(val), handler)\n try:\n if code == 0:\n handler(RemoteError(msg))\n elif code == 1:\n handler(Cancelled())\n else:\n if val is not None:\n val = nt.unwrap(val)\n handler(val)\n except:\n _log.exception(\"Exception in Operation handler\")\n return dounwrap", "response": "Wrap a get/rpc completion handler so that received Values are unwrapped."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nclears internal Channel cache allowing currently unused channels to be implicitly closed.", "response": "def disconnect(self, name=None):\n \"\"\"Clear internal Channel cache, allowing currently unused channels to be implicitly closed.\n\n :param str name: None, to clear the entire cache, or a name string to clear only a certain entry.\n \"\"\"\n if name is None:\n self._channels = {}\n else:\n self._channels.pop(name)\n if self._ctxt is not None:\n self._ctxt.disconnect(name)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _request(self, process=None, wait=None):\n opts = []\n if process is not None:\n opts.append('process=%s' % process)\n if wait is not None:\n if wait:\n opts.append('wait=true')\n else:\n opts.append('wait=false')\n return 'field()record[%s]' % (','.join(opts))", "response": "helper for building pvRequests\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get(self, name, handler, request=None):\n chan = self._channel(name)\n return _p4p.ClientOperation(chan, handler=unwrapHandler(handler, self._nt),\n 
pvRequest=wrapRequest(request), get=True, put=False)", "response": "Begin fetch of the current value of a PV."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwriting a new value to a PV.", "response": "def put(self, name, handler, builder=None, request=None, get=True):\n \"\"\"Write a new value to a PV.\n\n :param name: A single name string or list of name strings\n :param callable handler: Completion notification. Called with None (success), RemoteError, or Cancelled\n :param callable builder: Called when the PV Put type is known. A builder is responsible\n for filling in the Value to be sent. builder(value)\n :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default.\n :param bool get: Whether to do a Get before the Put. If True then the value passed to the builder callable\n will be initialized with recent PV values. eg. use this with NTEnum to find the enumeration list.\n\n :returns: An object with a method cancel() which may be used to abort the operation.\n \"\"\"\n chan = self._channel(name)\n return _p4p.ClientOperation(chan, handler=unwrapHandler(handler, self._nt),\n builder=defaultBuilder(builder, self._nt),\n pvRequest=wrapRequest(request), get=get, put=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nperforms an RPC operation on a PV.", "response": "def rpc(self, name, handler, value, request=None):\n \"\"\"Perform RPC operation on PV\n\n :param name: A single name string or list of name strings\n :param callable handler: Completion notification. 
Called with a Value, RemoteError, or Cancelled\n :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default.\n\n :returns: An object with a method cancel() which may be used to abort the operation.\n \"\"\"\n chan = self._channel(name)\n if value is None:\n value = Value(Type([]))\n return _p4p.ClientOperation(chan, handler=unwrapHandler(handler, self._nt),\n value=value, pvRequest=wrapRequest(request), rpc=True)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsubscribing to a named PV.", "response": "def monitor(self, name, handler, request=None, **kws):\n \"\"\"Begin subscription to named PV\n\n :param str name: PV name string\n :param callable handler: Completion notification. Called with None (FIFO not empty), RemoteError, Cancelled, or Disconnected\n :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default.\n :param bool notify_disconnect: Whether disconnect (and done) notifications are delivered to the callback (as None).\n\n :returns: A Subscription\n \"\"\"\n chan = self._channel(name)\n return Subscription(context=self,\n nt=self._nt,\n channel=chan, handler=monHandler(handler), pvRequest=wrapRequest(request),\n **kws)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef close(self, destroy=False, sync=False):\n _SharedPV.close(self, destroy)\n if sync:\n return self._wait_closed()", "response": "Close PV, disconnecting any clients.\n\n :param bool destroy: Indicate \"permanent\" closure. Current clients will not see subsequent open().\n :param bool sync: When True, block until any pending onLastDisconnect() is delivered (timeout applies).\n :param float timeout: Applies only when sync=True. 
None for no timeout, otherwise a non-negative floating point value.\n\n close() with destroy=True or sync=True will not prevent clients from re-connecting.\n New clients may prevent sync=True from succeeding.\n Prevent reconnection by __first__ stopping the Server, removing with :py:meth:`StaticProvider.remove()`,\n or preventing a :py:class:`DynamicProvider` from making new channels to this SharedPV."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef timesout(deftimeout=5.0):\n def decorate(fn):\n assert asyncio.iscoroutinefunction(fn), \"Place @timesout before @coroutine\"\n\n @wraps(fn)\n @asyncio.coroutine\n def wrapper(*args, timeout=deftimeout, **kws):\n loop = kws.get('loop')\n fut = fn(*args, **kws)\n if timeout is None:\n yield from fut\n else:\n yield from asyncio.wait_for(fut, timeout=timeout, loop=loop)\n return wrapper\n return decorate", "response": "Decorator to implement an overall timeout on a high-level sequence."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfetch current value of some number of PVs.", "response": "def get(self, name, request=None):\n \"\"\"Fetch current value of some number of PVs.\n\n :param name: A single name string or list of name strings\n :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default.\n\n :returns: A p4p.Value, or list of same. Subject to :py:ref:`unwrap`.\n\n When invoked with a single name, a single value is returned.\n When invoked with a list of names, a list of values is returned. 
::\n\n with Context('pva') as ctxt:\n V = yield from ctxt.get('pv:name')\n A, B = yield from ctxt.get(['pv:1', 'pv:2'])\n \"\"\"\n singlepv = isinstance(name, (bytes, str))\n if singlepv:\n return (yield from self._get_one(name, request=request))\n\n elif request is None:\n request = [None] * len(name)\n\n assert len(name) == len(request), (name, request)\n\n futs = [self._get_one(N, request=R) for N, R in zip(name, request)]\n\n ret = yield from asyncio.gather(*futs, loop=self.loop)\n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef put(self, name, values, request=None, process=None, wait=None, get=True):\n if request and (process or wait is not None):\n raise ValueError(\"request= is mutually exclusive to process= or wait=\")\n elif process or wait is not None:\n request = 'field()record[block=%s,process=%s]' % ('true' if wait else 'false', process or 'passive')\n\n singlepv = isinstance(name, (bytes, str))\n if singlepv:\n return (yield from self._put_one(name, values, request=request, get=get))\n\n elif request is None:\n request = [None] * len(name)\n\n assert len(name) == len(request), (name, request)\n assert len(name) == len(values), (name, values)\n\n futs = [self._put_one(N, V, request=R, get=get) for N, V, R in zip(name, values, request)]\n\n yield from asyncio.gather(*futs, loop=self.loop)", "response": "Write a new value of some number of PVs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef monitor(self, name, cb, request=None, notify_disconnect=False):\n assert asyncio.iscoroutinefunction(cb), \"monitor callback must be coroutine\"\n R = Subscription(name, cb, notify_disconnect=notify_disconnect, loop=self.loop)\n cb = partial(self.loop.call_soon_threadsafe, R._event)\n\n R._S = super(Context, self).monitor(name, cb, request)\n return R", "response": "Create a subscription.\n\n :param str name: PV 
name string\n :param callable cb: Processing callback\n :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default.\n :param bool notify_disconnect: In addition to Values, the callback may also be called with instances of Exception.\n Specifically: Disconnected, RemoteError, or Cancelled\n :returns: a :py:class:`Subscription` instance\n\n The callable will be invoked with one argument which is either:\n\n * A p4p.Value (Subject to :py:ref:`unwrap`)\n * A sub-class of Exception (Disconnected, RemoteError, or Cancelled)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef close(self):\n if self._S is not None:\n # after .close() self._event should never be called\n self._S.close()\n # wait for Cancelled to be delivered\n self._evt.wait()\n self._S = None", "response": "Close the current subscription."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nforce-closes all Channels and cancels all Operations", "response": "def close(self):\n \"\"\"Force close all Channels and cancel all Operations\n \"\"\"\n if self._Q is not None:\n for T in self._T:\n self._Q.interrupt()\n for n, T in enumerate(self._T):\n _log.debug('Join Context worker %d', n)\n T.join()\n _log.debug('Joined Context workers')\n self._Q, self._T = None, None\n super(Context, self).close()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfetches the current value of some number of PVs.", "response": "def get(self, name, request=None, timeout=5.0, throw=True):\n \"\"\"Fetch current value of some number of PVs.\n\n :param name: A single name string or list of name strings\n :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default.\n :param float timeout: Operation timeout in seconds\n :param bool throw: When true, operation error throws an exception. 
If False then the Exception is returned instead of the Value\n\n :returns: A p4p.Value or Exception, or list of same. Subject to :py:ref:`unwrap`.\n\n When invoked with a single name, a single value is returned.\n When invoked with a list of names, a list of values is returned.\n\n >>> ctxt = Context('pva')\n >>> V = ctxt.get('pv:name')\n >>> A, B = ctxt.get(['pv:1', 'pv:2'])\n >>>\n \"\"\"\n singlepv = isinstance(name, (bytes, unicode))\n if singlepv:\n name = [name]\n request = [request]\n\n elif request is None:\n request = [None] * len(name)\n\n assert len(name) == len(request), (name, request)\n\n # use Queue instead of Event to allow KeyboardInterrupt\n done = Queue()\n result = [TimeoutError()] * len(name)\n ops = [None] * len(name)\n\n raw_get = super(Context, self).get\n\n try:\n for i, (N, req) in enumerate(izip(name, request)):\n def cb(value, i=i):\n try:\n if not isinstance(value, Cancelled):\n done.put_nowait((value, i))\n _log.debug('get %s Q %s', N, LazyRepr(value))\n except:\n _log.exception(\"Error queuing get result %s\", value)\n\n _log.debug('get %s w/ %s', N, req)\n ops[i] = raw_get(N, cb, request=req)\n\n for _n in range(len(name)):\n try:\n value, i = done.get(timeout=timeout)\n except Empty:\n if throw:\n _log.debug('timeout %s after %s', name[i], timeout)\n raise TimeoutError()\n break\n _log.debug('got %s %s', name[i], LazyRepr(value))\n if throw and isinstance(value, Exception):\n raise value\n result[i] = value\n\n finally:\n [op and op.close() for op in ops]\n\n if singlepv:\n return result[0]\n else:\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwrites a new value of some number of PVs.", "response": "def put(self, name, values, request=None, timeout=5.0, throw=True,\n process=None, wait=None, get=True):\n \"\"\"Write a new value of some number of PVs.\n\n :param name: A single name string or list of name strings\n :param values: A single value, a list of values, a 
dict, a `Value`. May be modified by the constructor nt= argument.\n :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default.\n :param float timeout: Operation timeout in seconds\n :param bool throw: When true, operation error throws an exception.\n If False then the Exception is returned instead of the Value\n :param str process: Control remote processing. May be 'true', 'false', 'passive', or None.\n :param bool wait: Wait for all server processing to complete.\n :param bool get: Whether to do a Get before the Put. If True then the value passed to the builder callable\n will be initialized with recent PV values. eg. use this with NTEnum to find the enumeration list.\n\n :returns: None or an Exception, or a list of same\n\n When invoked with a single name, a single value is returned.\n When invoked with a list of names, a list of values is returned.\n\n If 'wait' or 'process' is specified, then 'request' must be omitted or None.\n\n >>> ctxt = Context('pva')\n >>> ctxt.put('pv:name', 5.0)\n >>> ctxt.put(['pv:1', 'pv:2'], [1.0, 2.0])\n >>> ctxt.put('pv:name', {'value':5})\n >>>\n\n The provided value(s) will be automatically coerced to the target type.\n If this is not possible then an Exception is raised/returned.\n\n Unless the provided value is a dict, it is assumed to be a plain value\n and an attempt is made to store it in the '.value' field.\n \"\"\"\n if request and (process or wait is not None):\n raise ValueError(\"request= is mutually exclusive to process= or wait=\")\n elif process or wait is not None:\n request = 'field()record[block=%s,process=%s]' % ('true' if wait else 'false', process or 'passive')\n\n singlepv = isinstance(name, (bytes, unicode))\n if singlepv:\n name = [name]\n values = [values]\n request = [request]\n\n elif request is None:\n request = [None] * len(name)\n\n assert len(name) == len(request), (name, request)\n assert len(name) == len(values), (name, values)\n\n # use Queue instead of Event to 
allow KeyboardInterrupt\n done = Queue()\n result = [TimeoutError()] * len(name)\n ops = [None] * len(name)\n\n raw_put = super(Context, self).put\n\n try:\n for i, (n, value, req) in enumerate(izip(name, values, request)):\n if isinstance(value, (bytes, unicode)) and value[:1] == '{':\n try:\n value = json.loads(value)\n except ValueError:\n raise ValueError(\"Unable to interpret '%s' as json\" % value)\n\n # completion callback\n def cb(value, i=i):\n try:\n done.put_nowait((value, i))\n except:\n _log.exception(\"Error queuing put result %s\", LazyRepr(value))\n\n ops[i] = raw_put(n, cb, builder=value, request=req, get=get)\n\n for _n in range(len(name)):\n try:\n value, i = done.get(timeout=timeout)\n except Empty:\n if throw:\n raise TimeoutError()\n break\n if throw and isinstance(value, Exception):\n raise value\n result[i] = value\n\n if singlepv:\n return result[0]\n else:\n return result\n finally:\n [op and op.close() for op in ops]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef rpc(self, name, value, request=None, timeout=5.0, throw=True):\n done = Queue()\n\n op = super(Context, self).rpc(name, done.put_nowait, value, request=request)\n\n try:\n try:\n result = done.get(timeout=timeout)\n except Empty:\n result = TimeoutError()\n if throw and isinstance(result, Exception):\n raise result\n\n return result\n except:\n op.close()\n raise", "response": "Perform a Remote Procedure Call on a PV."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a subscription. :param str name: PV name string :param callable cb: Processing callback :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default. :param bool notify_disconnect: In addition to Values, the callback may also be called with instances of Exception. 
Specifically: Disconnected, RemoteError, or Cancelled :param WorkQueue queue: A work queue through which monitor callbacks are dispatched. :returns: a :py:class:`Subscription` instance The callable will be invoked with one argument which is either: * A p4p.Value (Subject to :py:ref:`unwrap`) * A sub-class of Exception (Disconnected, RemoteError, or Cancelled)", "response": "def monitor(self, name, cb, request=None, notify_disconnect=False, queue=None):\n \"\"\"Create a subscription.\n\n :param str name: PV name string\n :param callable cb: Processing callback\n :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default.\n :param bool notify_disconnect: In addition to Values, the callback may also be called with instances of Exception.\n Specifically: Disconnected, RemoteError, or Cancelled\n :param WorkQueue queue: A work queue through which monitor callbacks are dispatched.\n :returns: a :py:class:`Subscription` instance\n\n The callable will be invoked with one argument which is either:\n\n * A p4p.Value (Subject to :py:ref:`unwrap`)\n * A sub-class of Exception (Disconnected, RemoteError, or Cancelled)\n \"\"\"\n R = Subscription(self, name, cb, notify_disconnect=notify_disconnect, queue=queue)\n\n R._S = super(Context, self).monitor(name, R._event, request)\n return R"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmarking the PV as opened and providing its initial value.", "response": "def open(self, value, nt=None, wrap=None, unwrap=None):\n \"\"\"Mark the PV as opened and provide its initial value.\n This initial value is later updated with post().\n\n :param value: A Value, or appropriate object (see nt= and wrap= of the constructor).\n\n Any clients which began connecting while\n this PV was in the close'd state will complete connecting.\n\n Only those fields of the value which are marked as changed will be stored.\n \"\"\"\n\n self._wrap = wrap or (nt and 
nt.wrap) or self._wrap\n self._unwrap = unwrap or (nt and nt.unwrap) or self._unwrap\n\n _SharedPV.open(self, self._wrap(value))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef rpc(rtype=None):\n wrap = None\n if rtype is None or isinstance(rtype, Type):\n pass\n elif isinstance(rtype, (list, tuple)):\n rtype = Type(rtype)\n elif hasattr(rtype, 'type'): # eg. one of the NT* helper classes\n wrap = rtype.wrap\n rtype = rtype.type\n else:\n raise TypeError(\"Not supported\")\n\n def wrapper(fn):\n if wrap is not None:\n orig = fn\n\n @wraps(orig)\n def wrapper2(*args, **kws):\n return wrap(orig(*args, **kws))\n fn = wrapper2\n\n fn._reply_Type = rtype\n return fn\n return wrapper", "response": "Decorator which marks a method as an RPC handler and records its reply Type."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef quickRPCServer(provider, prefix, target,\n maxsize=20,\n workers=1,\n useenv=True, conf=None, isolate=False):\n \"\"\"Run an RPC server in the current thread\n\n Calls are handled sequentially, and always in the current thread, if workers=1 (the default).\n If workers>1 then calls are handled concurrently by a pool of worker threads.\n Requires NTURI style argument encoding.\n\n :param str provider: A provider name. Must be unique in this process.\n :param str prefix: PV name prefix. Along with method names, must be globally unique.\n :param target: The object which is exporting methods. 
(use the :func:`rpc` decorator)\n :param int maxsize: Number of pending RPC calls to be queued.\n :param int workers: Number of worker threads (default 1)\n :param useenv: Passed to :class:`~p4p.server.Server`\n :param conf: Passed to :class:`~p4p.server.Server`\n :param isolate: Passed to :class:`~p4p.server.Server`\n \"\"\"\n from p4p.server import Server\n import time\n queue = ThreadedWorkQueue(maxsize=maxsize, workers=workers)\n provider = NTURIDispatcher(queue, target=target, prefix=prefix, name=provider)\n threads = []\n server = Server(providers=[provider], useenv=useenv, conf=conf, isolate=isolate)\n with server, queue:\n while True:\n time.sleep(10.0)", "response": "Run an RPC server in the current thread."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef rpcproxy(spec):\n # inject our ctor first so we don't have to worry about super() non-sense.\n\n def _proxyinit(self, context=None, format={}, **kws):\n assert context is not None, context\n self.context = context\n self.format = format\n spec.__init__(self, **kws)\n obj = {'__init__': _proxyinit}\n\n for K, V in inspect.getmembers(spec, lambda M: hasattr(M, '_call_PV')):\n obj[K] = _wrapMethod(K, V)\n\n return type(spec.__name__, (RPCProxyBase, spec), obj)", "response": "Decorator to enable this class to proxy RPC client calls to the given PV names."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef unwrap(self, val):\n if val.getID()!=self.id:\n self._update(val)\n return self._unwrap(val)", "response": "Unpack a Value as some other python type returning a new object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef buildType(valtype, extra=[]):\n assert valtype[:1] == 'a', 'valtype must be an array'\n return Type(id=\"epics:nt/NTMultiChannel:1.0\",\n spec=[\n ('value', valtype),\n ('channelName', 'as'),\n ('descriptor', 's'),\n ('alarm', alarm),\n 
('timeStamp', timeStamp),\n ('severity', 'ai'),\n ('status', 'ai'),\n ('message', 'as'),\n ('secondsPastEpoch', 'al'),\n ('nanoseconds', 'ai'),\n ('userTag', 'ai'),\n ('isConnected', 'a?'),\n ] + extra)", "response": "Build a Type object from a type code."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nbuilds a table object.", "response": "def buildType(columns=[], extra=[]):\n \"\"\"Build a table\n\n :param list columns: List of column names and types. eg [('colA', 'd')]\n :param list extra: A list of tuples describing additional non-standard fields\n :returns: A :py:class:`Type`\n \"\"\"\n return Type(id=\"epics:nt/NTTable:1.0\",\n spec=[\n ('labels', 'as'),\n ('value', ('S', None, columns)),\n ('descriptor', 's'),\n ('alarm', alarm),\n ('timeStamp', timeStamp),\n ] + extra)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\npacks an iterable of dict into a Value object", "response": "def wrap(self, values):\n \"\"\"Pack an iterable of dict into a Value\n\n >>> T=NTTable([('A', 'ai'), ('B', 'as')])\n >>> V = T.wrap([\n {'A':42, 'B':'one'},\n {'A':43, 'B':'two'},\n ])\n \"\"\"\n if isinstance(values, Value):\n return values\n cols = dict([(L, []) for L in self.labels])\n try:\n # unzip list of dict\n for V in values:\n for L in self.labels:\n try:\n cols[L].append(V[L])\n except (IndexError, KeyError):\n pass\n # allow omit empty columns\n for L in self.labels:\n V = cols[L]\n if len(V) == 0:\n del cols[L]\n\n try:\n return self.Value(self.type, {\n 'labels': self.labels,\n 'value': cols,\n })\n except:\n _log.error(\"Failed to encode '%s' with %s\", cols, self.labels)\n raise\n except:\n _log.exception(\"Failed to wrap: %s\", values)\n raise"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef unwrap(value):\n ret = []\n\n # build lists of column names, and value\n lbl, cols = [], []\n for cname, cval in value.value.items():\n 
lbl.append(cname)\n cols.append(cval)\n\n # zip together column arrays to iterate over rows\n for rval in izip(*cols):\n # zip together column names and row values\n ret.append(OrderedDict(zip(lbl, rval)))\n\n return ret", "response": "Iterate an NTTable\n\n :returns: An iterator yielding an OrderedDict for each column"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef buildType(args):\n try:\n return Type(id=\"epics:nt/NTURI:1.0\", spec=[\n ('scheme', 's'),\n ('authority', 's'),\n ('path', 's'),\n ('query', ('S', None, args)),\n ])\n except Exception as e:\n raise ValueError('Unable to build NTURI compatible type from %s' % args)", "response": "Builds a NTURI compatible type from a list of query argument name and PVD type code."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrap argument values ( tuple or list with optional dict", "response": "def wrap(self, path, args=(), kws={}, scheme='', authority=''):\n \"\"\"Wrap argument values (tuple/list with optional dict) into Value\n\n :param str path: The PV name to which this call is made\n :param tuple args: Ordered arguments\n :param dict kws: Keyword arguments\n :rtype: Value\n \"\"\"\n # build dict of argument name+value\n AV = {}\n AV.update([A for A in kws.items() if A[1] is not None])\n AV.update([(N, V) for (N, _T), V in zip(self._args, args)])\n\n # list of argument name+type tuples for which a value was provided\n AT = [A for A in self._args if A[0] in AV]\n\n T = self.buildType(AT)\n try:\n return Value(T, {\n 'scheme': scheme,\n 'authority': authority,\n 'path': path,\n 'query': AV,\n })\n except Exception as e:\n raise ValueError('Unable to initialize NTURI %s from %s using %s' % (AT, AV, self._args))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\npacking python value into Value", "response": "def wrap(self, value, timestamp=None):\n \"\"\"Pack python value into Value\n \"\"\"\n V = 
self.type()\n S, NS = divmod(float(timestamp or time.time()), 1.0)\n V.timeStamp = {\n 'secondsPastEpoch': S,\n 'nanoseconds': NS * 1e9,\n }\n if isinstance(value, dict):\n # assume dict of index and choices list\n V.value = value\n self._choices = V['value.choices']\n else:\n # index or string\n self.assign(V, value)\n return V"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef unwrap(self, value):\n if value.changed('value.choices'):\n self._choices = value['value.choices']\n\n idx = value['value.index']\n ret = ntenum(idx)._store(value)\n try:\n ret.choice = self._choices[idx]\n except IndexError:\n pass # leave it as None\n return ret", "response": "Unpack a Value into an augmented python type"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nstores python value in Value", "response": "def assign(self, V, py):\n \"\"\"Store python value in Value\n \"\"\"\n if isinstance(py, (bytes, unicode)):\n for i,C in enumerate(V['value.choices'] or self._choices):\n if py==C:\n V['value.index'] = i\n return\n\n # attempt to parse as integer\n V['value.index'] = py"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncloses the PV and disconnect any workers.", "response": "def close(self, destroy=False, sync=False, timeout=None):\n \"\"\"Close PV, disconnecting any clients.\n\n :param bool destroy: Indicate \"permanent\" closure. Current clients will not see subsequent open().\n :param bool sync: When block until any pending onLastDisconnect() is delivered (timeout applies).\n :param float timeout: Applies only when sync=True. 
None for no timeout, otherwise a non-negative floating point value.\n\n close() with destory=True or sync=True will not prevent clients from re-connecting.\n New clients may prevent sync=True from succeeding.\n Prevent reconnection by __first__ stopping the Server, removing with :py:meth:`StaticProvider.remove()`,\n or preventing a :py:class:`DynamicProvider` from making new channels to this SharedPV.\n \"\"\"\n _SharedPV.close(self, destroy)\n if sync:\n # TODO: still not syncing PVA workers...\n self._queue.sync()\n self._disconnected.wait()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncounts the number of instances of each type and class of each object in the list", "response": "def gcstats():\n \"\"\"Count the number of instances of each type/class\n\n :returns: A dict() mapping type (as a string) to an integer number of references\n \"\"\"\n all = gc.get_objects()\n _stats = {}\n\n for obj in all:\n K = type(obj)\n if K is StatsDelta:\n continue # avoid counting ourselves\n\n elif K is InstanceType: # instance of an old-style class\n K = getattr(obj, '__class__', K)\n\n # Track types as strings to avoid holding references\n K = str(K)\n\n try:\n _stats[K] += 1\n except KeyError:\n _stats[K] = 1\n\n # explicitly break the reference loop between the list and this frame,\n # which is contained in the list\n # This would otherwise prevent the list from being free'd\n del all\n\n return _stats"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef periodic(period=60.0, file=sys.stderr):\n import threading\n import time\n S = _StatsThread(period=period, file=file)\n T = threading.Thread(target=S)\n T.daemon = True\n T.start()", "response": "Start a daemon thread which will periodically print GC stats for the\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncollecting stats and print results to file", "response": "def collect(self, file=sys.stderr):\n 
\"\"\"Collect stats and print results to file\n\n :param file: A writable file-like object\n \"\"\"\n cur = gcstats()\n Ncur = len(cur)\n if self.stats is not None and file is not None:\n prev = self.stats\n Nprev = self.ntypes # may be less than len(prev)\n\n if Ncur != Nprev:\n print(\"# Types %d -> %d\" % (Nprev, Ncur), file=file)\n\n Scur, Sprev, first = set(cur), set(prev), True\n for T in Scur - Sprev: # new types\n if first:\n print('New Types', file=file)\n first = False\n print(' ', T, cur[T], file=file)\n\n first = True\n for T in Sprev - Scur: # collected types\n if first:\n print('Cleaned Types', file=file)\n first = False\n print(' ', T, -prev[T], file=file)\n\n first = True\n for T in Scur & Sprev:\n if cur[T] == prev[T]:\n continue\n if first:\n print('Known Types', file=file)\n first = False\n print(' ', T, cur[T], 'delta', cur[T] - prev[T], file=file)\n\n else: # first call\n print(\"All Types\", file=file)\n for T, C in cur.items():\n print(' ', T, C, file=file)\n\n self.stats, self.ntypes = cur, len(cur)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfetches the current value of some number of PVs.", "response": "def get(self, name, request=None, timeout=5.0, throw=True):\n \"\"\"Fetch current value of some number of PVs.\n\n :param name: A single name string or list of name strings\n :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default.\n :param float timeout: Operation timeout in seconds\n :param bool throw: When true, operation error throws an exception. If False then the Exception is returned instead of the Value\n\n :returns: A p4p.Value or Exception, or list of same. 
Subject to :py:ref:`unwrap`.\n\n When invoked with a single name then returns is a single value.\n When invoked with a list of name, then returns a list of values\n\n >>> ctxt = Context('pva')\n >>> V = ctxt.get('pv:name')\n >>> A, B = ctxt.get(['pv:1', 'pv:2'])\n >>>\n \"\"\"\n singlepv = isinstance(name, (bytes, unicode))\n if singlepv:\n return self._get_one(name, request=request, timeout=timeout, throw=throw)\n\n elif request is None:\n request = [None] * len(name)\n\n assert len(name) == len(request), (name, request)\n\n return cothread.WaitForAll(\n [cothread.Spawn(self._get_one, N, request=R, timeout=timeout, throw=throw,\n raise_on_wait=True)\n for N, R in zip(name, request)]\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nwriting a new value of some number of PVs.", "response": "def put(self, name, values, request=None, process=None, wait=None, timeout=5.0, get=True, throw=True):\n \"\"\"Write a new value of some number of PVs.\n\n :param name: A single name string or list of name strings\n :param values: A single value, a list of values, a dict, a `Value`. May be modified by the constructor nt= argument.\n :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default.\n :param float timeout: Operation timeout in seconds\n :param bool throw: When true, operation error throws an exception.\n If False then the Exception is returned instead of the Value\n :param str process: Control remote processing. May be 'true', 'false', 'passive', or None.\n :param bool wait: Wait for all server processing to complete.\n :param bool get: Whether to do a Get before the Put. If True then the value passed to the builder callable\n will be initialized with recent PV values. eg. 
use this with NTEnum to find the enumeration list.\n\n :returns: A None or Exception, or list of same\n\n When invoked with a single name then returns is a single value.\n When invoked with a list of name, then returns a list of values\n\n If 'wait' or 'process' is specified, then 'request' must be omitted or None.\n\n >>> ctxt = Context('pva')\n >>> ctxt.put('pv:name', 5.0)\n >>> ctxt.put(['pv:1', 'pv:2'], [1.0, 2.0])\n >>> ctxt.put('pv:name', {'value':5})\n >>>\n\n The provided value(s) will be automatically coerced to the target type.\n If this is not possible then an Exception is raised/returned.\n\n Unless the provided value is a dict, it is assumed to be a plain value\n and an attempt is made to store it in '.value' field.\n \"\"\"\n if request and (process or wait is not None):\n raise ValueError(\"request= is mutually exclusive to process= or wait=\")\n elif process or wait is not None:\n request = 'field()record[block=%s,process=%s]' % ('true' if wait else 'false', process or 'passive')\n\n singlepv = isinstance(name, (bytes, unicode))\n if singlepv:\n return self._put_one(name, values, request=request, timeout=timeout, throw=throw, get=get)\n\n elif request is None:\n request = [None] * len(name)\n\n assert len(name) == len(request), (name, request)\n assert len(name) == len(values), (name, values)\n\n return cothread.WaitForAll(\n [cothread.Spawn(self._put_one, N, V, request=R, timeout=timeout, throw=throw, get=get,\n raise_on_wait=True)\n for N, V, R in zip(name, values, request)]\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a subscription. :param str name: PV name string :param callable cb: Processing callback :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default. :param bool notify_disconnect: In additional to Values, the callback may also be call with instances of Exception. 
Specifically: Disconnected , RemoteError, or Cancelled :returns: a :py:class:`Subscription` instance The callable will be invoked with one argument which is either. * A p4p.Value (Subject to :py:ref:`unwrap`) * A sub-class of Exception (Disconnected , RemoteError, or Cancelled)", "response": "def monitor(self, name, cb, request=None, notify_disconnect=False):\n \"\"\"Create a subscription.\n\n :param str name: PV name string\n :param callable cb: Processing callback\n :param request: A :py:class:`p4p.Value` or string to qualify this request, or None to use a default.\n :param bool notify_disconnect: In additional to Values, the callback may also be call with instances of Exception.\n Specifically: Disconnected , RemoteError, or Cancelled\n :returns: a :py:class:`Subscription` instance\n\n The callable will be invoked with one argument which is either.\n\n * A p4p.Value (Subject to :py:ref:`unwrap`)\n * A sub-class of Exception (Disconnected , RemoteError, or Cancelled)\n \"\"\"\n R = Subscription(name, cb, notify_disconnect=notify_disconnect)\n cb = partial(cothread.Callback, R._event)\n\n R._S = super(Context, self).monitor(name, cb, request)\n return R"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nclosing the current subscription.", "response": "def close(self):\n \"\"\"Close subscription.\n \"\"\"\n if self._S is not None:\n # after .close() self._event should never be called\n self._S.close()\n self._S = None\n self._Q.Signal(None)\n self._T.Wait()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a list of members of a specific chamber and congress.", "response": "def filter(self, chamber, congress=CURRENT_CONGRESS, **kwargs):\n \"\"\"\n Takes a chamber and Congress,\n OR state and district, returning a list of members\n \"\"\"\n check_chamber(chamber)\n\n kwargs.update(chamber=chamber, congress=congress)\n\n if 'state' in kwargs and 'district' in kwargs:\n path = 
(\"members/{chamber}/{state}/{district}/\"\n \"current.json\").format(**kwargs)\n\n elif 'state' in kwargs:\n path = (\"members/{chamber}/{state}/\"\n \"current.json\").format(**kwargs)\n\n else:\n path = (\"{congress}/{chamber}/\"\n \"members.json\").format(**kwargs)\n\n return self.fetch(path, parse=lambda r: r['results'])"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef bills(self, member_id, type='introduced'):\n \"Same as BillsClient.by_member\"\n path = \"members/{0}/bills/{1}.json\".format(member_id, type)\n return self.fetch(path)", "response": "Same as BillsClient. by_member"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef compare(self, first, second, chamber, type='votes', congress=CURRENT_CONGRESS):\n check_chamber(chamber)\n path = \"members/{first}/{type}/{second}/{congress}/{chamber}.json\"\n path = path.format(first=first, second=second, type=type,\n congress=congress, chamber=chamber)\n return self.fetch(path)", "response": "See how often two members voted together in a given Congress.\n Takes two member IDs, a chamber and a Congress number."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef by_member(self, member_id, type='introduced'):\n path = \"members/{member_id}/bills/{type}.json\".format(\n member_id=member_id, type=type)\n return self.fetch(path)", "response": "Returns a list of recent bills by a member ID and a type"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef upcoming(self, chamber, congress=CURRENT_CONGRESS):\n \"Shortcut for upcoming bills\"\n path = \"bills/upcoming/{chamber}.json\".format(chamber=chamber)\n return self.fetch(path)", "response": "Shortcut for upcoming bills"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef by_month(self, 
chamber, year=None, month=None):\n check_chamber(chamber)\n\n now = datetime.datetime.now()\n year = year or now.year\n month = month or now.month\n\n path = \"{chamber}/votes/{year}/{month}.json\".format(\n chamber=chamber, year=year, month=month)\n return self.fetch(path, parse=lambda r: r['results'])", "response": "Return a list of all the votes for a single month."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef by_range(self, chamber, start, end):\n check_chamber(chamber)\n\n start, end = parse_date(start), parse_date(end)\n if start > end:\n start, end = end, start\n\n path = \"{chamber}/votes/{start:%Y-%m-%d}/{end:%Y-%m-%d}.json\".format(\n chamber=chamber, start=start, end=end)\n return self.fetch(path, parse=lambda r: r['results'])", "response": "Return a list of all the votes cast in a chamber between two dates."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef by_date(self, chamber, date):\n \"Return votes cast in a chamber on a single day\"\n date = parse_date(date)\n return self.by_range(chamber, date, date)", "response": "Return votes cast in a chamber on a single day"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef today(self, chamber):\n \"Return today's votes in a given chamber\"\n now = datetime.date.today()\n return self.by_range(chamber, now, now)", "response": "Return today s votes in a given chamber"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef by_type(self, chamber, type, congress=CURRENT_CONGRESS):\n \"Return votes by type: missed, party, lone no, perfect\"\n check_chamber(chamber)\n\n path = \"{congress}/{chamber}/votes/{type}.json\".format(\n congress=congress, chamber=chamber, type=type)\n return self.fetch(path)", "response": "Return votes by type : missed party lone no perfect"} 
{"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns votes on nominations from a given Congress", "response": "def nominations(self, congress=CURRENT_CONGRESS):\n \"Return votes on nominations from a given Congress\"\n path = \"{congress}/nominations.json\".format(congress=congress)\n return self.fetch(path)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfetching a specific resource from the Congress API.", "response": "def fetch(self, path, parse=lambda r: r['results'][0]):\n \"\"\"\n Make an API request, with authentication.\n\n This method can be used directly to fetch new endpoints\n or customize parsing.\n\n ::\n\n >>> from congress import Congress\n >>> client = Congress()\n >>> senate = client.fetch('115/senate/members.json')\n >>> print(senate['num_results'])\n 101\n\n \"\"\"\n url = self.BASE_URI + path\n headers = {'X-API-Key': self.apikey}\n\n log.debug(url)\n\n resp, content = self.http.request(url, headers=headers)\n content = u(content)\n content = json.loads(content)\n\n # handle errors\n if not content.get('status') == 'OK':\n\n if \"errors\" in content and content['errors'][0]['error'] == \"Record not found\":\n raise NotFound(path)\n\n if content.get('status') == '404':\n raise NotFound(path)\n\n raise CongressError(content, resp, url)\n\n if callable(parse):\n content = parse(content)\n\n return content"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_date(s):\n if isinstance(s, (datetime.datetime, datetime.date)):\n return s\n try:\n from dateutil.parser import parse\n except ImportError:\n parse = lambda d: datetime.datetime.strptime(d, \"%Y-%m-%d\")\n return parse(s)", "response": "Parse a date using dateutil. parser. parse falling back to datetime. 
strptime"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef append(self, v):\n '''\n >>> d = DiskVarArray('/tmp/test3', dtype='uint32')\n >>> d.append([1, 2, 3, 4])\n >>> d.__getitem__(0)\n memmap([1, 2, 3, 4], dtype=uint32)\n >>> d.append([5, 6, 7, 8])\n >>> d.__getitem__(1)\n memmap([5, 6, 7, 8], dtype=uint32)\n >>> shutil.rmtree('/tmp/test3', ignore_errors=True)\n '''\n self.index.append(len(self.data))\n self.data.extend(v)", "response": "Append a new entry to the array."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef destroy(self):\n '''\n >>> import numpy as np\n >>> d = DiskVarArray('/tmp/test4', dtype='uint32')\n >>> d.append([1, 2, 3, 4])\n >>> d.destroy # doctest:+ELLIPSIS\n >\n >>> shutil.rmtree('/tmp/test4', ignore_errors=True)\n '''\n\n self.data.destroy()\n self.data = None\n\n self.index.destroy()\n self.index = None", "response": ">>> import numpy as np\n >>> d = DiskVarArray('/tmp/test4', dtype='uint32')\n >>> d.append([1, 2, 3, 4])\n >>> d.destroy # doctest:+ELLIPSIS\n >\n >>> shutil.rmtree('/tmp/test4', ignore_errors=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef append(self, v):\n '''\n >>> import numpy as np\n >>> da = DiskArray('/tmp/test.array', shape=(0, 3), growby=3, dtype=np.float32)\n >>> print(da[:])\n []\n >>> data = np.array([[2,3,4], [1, 2, 3]])\n >>> da.append(data[0])\n >>> print(da[:])\n\t[[2. 3. 4.]\n\t [0. 0. 0.]\n\t [0. 0. 
0.]]\n '''\n\n # FIXME: for now we only support\n # append along axis 0 and only\n # for 1d and 2d arrays\n\n # FIXME: for now we only support\n # appending one item at a time\n\n nrows = self._shape[0]\n nrows_capacity = self._capacity_shape[0]\n\n if nrows == nrows_capacity:\n self._capacity_shape = self._incr_shape(self._capacity_shape, self._growby)\n self._update_ndarray()\n\n shapelen = len(self._shape)\n\n if shapelen not in (1, 2):\n raise AppendNotSupported(shapelen)\n\n self.data[nrows] = v\n self._shape = self._incr_shape(self._shape, 1)", "response": "Append a new entry to the array."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef extend(self, v):\n '''\n >>> import numpy as np\n >>> da = DiskArray('/tmp/test.array', shape=(0, 3), capacity=(10, 3), dtype=np.float32)\n >>> print(da[:])\n\t[[2. 3. 4.]\n\t [0. 0. 0.]\n\t [0. 0. 0.]\n\t [0. 0. 0.]\n\t [0. 0. 0.]\n\t [0. 0. 0.]\n\t [0. 0. 0.]\n\t [0. 0. 0.]\n\t [0. 0. 0.]\n\t [0. 0. 0.]]\n >>> data = np.array([[2,3,4], [1, 2, 3]])\n >>> da.extend(data)\n >>> print(da[:])\n [[2. 3. 4.]\n [1. 2. 3.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 0.]\n [0. 0. 
0.]]\n >>> os.remove('/tmp/test.array')\n '''\n\n nrows = self._shape[0]\n nrows_capacity = self._capacity_shape[0]\n remaining_capacity = nrows_capacity - nrows\n\n if remaining_capacity < len(v):\n diff = len(v) - remaining_capacity\n self._capacity_shape = self._incr_shape(self._capacity_shape, diff)\n self._update_ndarray()\n\n self.data[nrows:nrows+len(v)] = v\n self._shape = self._incr_shape(self._shape, len(v))", "response": "Extend the array with the elements in v."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _prep_spark_sql_groupby(self):\n # Strip the index info\n non_index_columns = filter(lambda x: x not in self._prdd._index_names,\n self._prdd._column_names())\n self._grouped_spark_sql = (self._prdd.to_spark_sql()\n .select(non_index_columns)\n .groupBy(self._by))\n self._columns = filter(lambda x: x != self._by,\n non_index_columns)", "response": "Used Spark SQL group approach"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _prep_pandas_groupby(self):\n myargs = self._myargs\n mykwargs = self._mykwargs\n\n def extract_keys(groupedFrame):\n for key, group in groupedFrame:\n yield (key, group)\n\n def group_and_extract(frame):\n return extract_keys(frame.groupby(*myargs, **mykwargs))\n\n self._baseRDD = self._prdd._rdd()\n self._distributedRDD = self._baseRDD.flatMap(group_and_extract)\n self._mergedRDD = self._sortIfNeeded(\n self._group(self._distributedRDD))", "response": "Prepare the old school pandas group by based approach."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ngroup together the values with the same key.", "response": "def _group(self, rdd):\n \"\"\"Group together the values with the same key.\"\"\"\n return rdd.reduceByKey(lambda x, y: x.append(y))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef groups(self):\n 
self._prep_pandas_groupby()\n\n def extract_group_labels(frame):\n return (frame[0], frame[1].index.values)\n\n return self._mergedRDD.map(extract_group_labels).collectAsMap()", "response": "Returns a dict mapping group name to group labels."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a dict mapping group name to group indices.", "response": "def indices(self):\n \"\"\"Returns dict {group name -> group indices}.\"\"\"\n self._prep_pandas_groupby()\n\n def extract_group_indices(frame):\n return (frame[0], frame[1].index)\n\n return self._mergedRDD.map(extract_group_indices).collectAsMap()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef median(self):\n self._prep_pandas_groupby()\n return DataFrame.fromDataFrameRDD(\n self._regroup_mergedRDD().values().map(\n lambda x: x.median()), self.sql_ctx)", "response": "Compute the median of the items in the group by the entry s keys."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef mean(self):\n if self._can_use_new_school():\n self._prep_spark_sql_groupby()\n import pyspark.sql.functions as func\n return self._use_aggregation(func.mean)\n self._prep_pandas_groupby()\n return DataFrame.fromDataFrameRDD(\n self._regroup_mergedRDD().values().map(\n lambda x: x.mean()), self.sql_ctx)", "response": "Compute the mean of the items in the current DataFrame."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef var(self, ddof=1):\n self._prep_pandas_groupby()\n return DataFrame.fromDataFrameRDD(\n self._regroup_mergedRDD().values().map(\n lambda x: x.var(ddof=ddof)), self.sql_ctx)", "response": "Compute the standard deviation of the items in the log."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef sum(self):\n if 
self._can_use_new_school():\n self._prep_spark_sql_groupby()\n import pyspark.sql.functions as func\n return self._use_aggregation(func.sum)\n self._prep_pandas_groupby()\n myargs = self._myargs\n mykwargs = self._mykwargs\n\n def create_combiner(x):\n return x.groupby(*myargs, **mykwargs).sum()\n\n def merge_value(x, y):\n return pd.concat([x, create_combiner(y)])\n\n def merge_combiner(x, y):\n return x + y\n\n rddOfSum = self._sortIfNeeded(self._distributedRDD.combineByKey(\n create_combiner,\n merge_value,\n merge_combiner)).values()\n return DataFrame.fromDataFrameRDD(rddOfSum, self.sql_ctx)", "response": "Compute the sum for each group."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating aggregate expressions using the provided function f with the result coming back as the original column name.", "response": "def _create_exprs_using_func(self, f, columns):\n \"\"\"Create aggregate expressions using the provided function\n with the result coming back as the original column name.\"\"\"\n expressions = map(lambda c: f(c).alias(c),\n self._columns)\n return expressions"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _use_aggregation(self, agg, columns=None):\n if not columns:\n columns = self._columns\n from pyspark.sql import functions as F\n aggs = map(lambda column: agg(column).alias(column), self._columns)\n aggRdd = self._grouped_spark_sql.agg(*aggs)\n df = DataFrame.from_schema_rdd(aggRdd, self._by)\n return df", "response": "Compute the result using the aggregation function provided."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _regroup_mergedRDD(self):\n myargs = self._myargs\n mykwargs = self._mykwargs\n self._prep_pandas_groupby()\n\n def regroup(df):\n return df.groupby(*myargs, **mykwargs)\n\n return self._mergedRDD.mapValues(regroup)", "response": "Return a new RDD with the grouped 
elements."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntaking the nth element of each grouby.", "response": "def nth(self, n, *args, **kwargs):\n \"\"\"Take the nth element of each grouby.\"\"\"\n # TODO: Stop collecting the entire frame for each key.\n self._prep_pandas_groupby()\n myargs = self._myargs\n mykwargs = self._mykwargs\n nthRDD = self._regroup_mergedRDD().mapValues(\n lambda r: r.nth(\n n, *args, **kwargs)).values()\n return DataFrame.fromDataFrameRDD(nthRDD, self.sql_ctx)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef aggregate(self, f):\n if self._can_use_new_school() and f == pd.Series.kurtosis:\n self._prep_spark_sql_groupby()\n import custom_functions as CF\n return self._use_aggregation(CF.kurtosis)\n else:\n self._prep_pandas_groupby()\n return DataFrame.fromDataFrameRDD(\n self._regroup_mergedRDD().values().map(\n lambda g: g.aggregate(f)), self.sql_ctx)", "response": "Apply the aggregation function f to the items in the current DataFrame."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\napply the provided function and combine the results together in the same way as apply from groupby in pandas.", "response": "def apply(self, func, *args, **kwargs):\n \"\"\"Apply the provided function and combine the results together in the\n same way as apply from groupby in pandas.\n\n This returns a DataFrame.\n \"\"\"\n self._prep_pandas_groupby()\n\n def key_by_index(data):\n \"\"\"Key each row by its index.\n \"\"\"\n # TODO: Is there a better way to do this?\n for key, row in data.iterrows():\n yield (key, pd.DataFrame.from_dict(\n dict([(key, row)]), orient='index'))\n\n myargs = self._myargs\n mykwargs = self._mykwargs\n regroupedRDD = self._distributedRDD.mapValues(\n lambda data: data.groupby(*myargs, **mykwargs))\n appliedRDD = regroupedRDD.map(\n lambda key_data: key_data[1].apply(func, *args, **kwargs))\n reKeyedRDD = 
appliedRDD.flatMap(key_by_index)\n dataframe = self._sortIfNeeded(reKeyedRDD).values()\n return DataFrame.fromDataFrameRDD(dataframe, self.sql_ctx)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _create_function(name, doc=\"\"):\n def _(col):\n spark_ctx = SparkContext._active_spark_context\n java_ctx = (getattr(spark_ctx._jvm.com.sparklingpandas.functions,\n name)\n (col._java_ctx if isinstance(col, Column) else col))\n return Column(java_ctx)\n _.__name__ = name\n _.__doc__ = doc\n return _", "response": "Create a function for aggregator by name"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds another DataFrame to the PStatCounter.", "response": "def merge(self, frame):\n \"\"\"\n Add another DataFrame to the PStatCounter.\n \"\"\"\n for column, values in frame.iteritems():\n # Temporary hack, fix later\n counter = self._counters.get(column)\n for value in values:\n if counter is not None:\n counter.merge(value)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef merge_pstats(self, other):\n if not isinstance(other, PStatCounter):\n raise Exception(\"Can only merge PStatcounters!\")\n\n for column, counter in self._counters.items():\n other_counter = other._counters.get(column)\n self._counters[column] = counter.mergeStats(other_counter)\n\n return self", "response": "Merge all of the stats counters of this PStatCounter with the other PStatCounter."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding another DataFrame to our accumulated stats for each column.", "response": "def merge(self, frame):\n \"\"\"\n Add another DataFrame to the accumulated stats for each column.\n Parameters\n ----------\n frame: pandas DataFrame we will update our stats counter with.\n \"\"\"\n for column_name, _ in self._column_stats.items():\n data_arr = frame[[column_name]].values\n count, 
min_max_tup, mean, _, _, _ = \\\n scistats.describe(data_arr)\n stats_counter = StatCounter()\n stats_counter.n = count\n stats_counter.mu = mean\n stats_counter.m2 = np.sum((data_arr - mean) ** 2)\n stats_counter.minValue, stats_counter.maxValue = min_max_tup\n self._column_stats[column_name] = self._column_stats[\n column_name].mergeStats(stats_counter)\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef merge_stats(self, other_col_counters):\n for column_name, _ in self._column_stats.items():\n self._column_stats[column_name] = self._column_stats[column_name] \\\n .mergeStats(other_col_counters._column_stats[column_name])\n return self", "response": "Merge statistics from a different column stats counter in to this one."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn an RDD of Panda DataFrame objects. This can be expensive especially if we don t do narrow transformation after and get it back to Spark SQL land quickly.", "response": "def _rdd(self):\n \"\"\"Return an RDD of Panda DataFrame objects. 
This can be expensive\n especially if we don't do a narrow transformation after and get it back\n to Spark SQL land quickly.\"\"\"\n columns = self._schema_rdd.columns\n index_names = self._index_names\n\n def fromRecords(records):\n if not records:\n return []\n else:\n loaded_df = pd.DataFrame.from_records([records],\n columns=columns)\n indexed_df = _update_index_on_df(loaded_df, index_names)\n return [indexed_df]\n\n return self._schema_rdd.rdd.flatMap(fromRecords)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the column names of the current resource table.", "response": "def _column_names(self):\n \"\"\"Return the column names\"\"\"\n index_names = set(_normalize_index_names(self._index_names))\n column_names = [col_name for col_name in self._schema_rdd.columns if\n col_name not in index_names]\n return column_names"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert the underlying SchemaRDD to an RDD of DataFrames. 
This is hella slow.", "response": "def _evil_apply_with_dataframes(self, func, preserves_cols=False):\n \"\"\"Convert the underlying SchemaRDD to an RDD of DataFrames.\n apply the provided function and convert the result back.\n This is hella slow.\"\"\"\n source_rdd = self._rdd()\n result_rdd = func(source_rdd)\n # By default we don't know what the columns & indexes are so we let\n # from_rdd_of_dataframes look at the first partition to determine them.\n column_idxs = None\n if preserves_cols:\n index_names = self._index_names\n # Remove indexes from the columns\n columns = self._schema_rdd.columns[len(self._index_names):]\n column_idxs = (columns, index_names)\n return self.from_rdd_of_dataframes(\n result_rdd, column_idxs=column_idxs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _first_as_df(self):\n columns = self._schema_rdd.columns\n df = pd.DataFrame.from_records(\n [self._schema_rdd.first()],\n columns=self._schema_rdd.columns)\n df = _update_index_on_df(df, self._index_names)\n return df", "response": "Gets the first row as a Pandas DataFrame. Useful for functions like\n dtypes & ftypes"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntakes an RDD of Pandas DataFrames and returns a DataFrame.", "response": "def from_rdd_of_dataframes(self, rdd, column_idxs=None):\n \"\"\"Take an RDD of Pandas DataFrames and return a DataFrame.\n If the columns and indexes are already known (e.g. applyMap)\n then supplying them with columnsIndexes will skip evaluating\n the first partition to determine index info.\"\"\"\n def frame_to_spark_sql(frame):\n \"\"\"Convert a Pandas DataFrame into Spark SQL Rows\"\"\"\n return [r.tolist() for r in frame.to_records()]\n\n def frame_to_schema_and_idx_names(frames):\n \"\"\"Returns the schema and index names of the frames. Useful\n if the frame is large and we wish to avoid transferring\n the entire frame. 
Only bothers to apply once per partition\"\"\"\n try:\n frame = frames.next()\n return [(list(frame.columns), list(frame.index.names))]\n except StopIteration:\n return []\n\n # Store if the RDD was persisted so we don't uncache an\n # explicitly cached input.\n was_persisted = rdd.is_cached\n # If we haven't been supplied with the schema info cache the RDD\n # since we are going to evaluate the first partition and then evaluate\n # the entire RDD as part of creating a Spark DataFrame.\n (schema, index_names) = ([], [])\n if not column_idxs:\n rdd.cache()\n (schema, index_names) = rdd.mapPartitions(\n frame_to_schema_and_idx_names).first()\n else:\n (schema, index_names) = column_idxs\n # Add the index_names to the schema.\n index_names = _normalize_index_names(index_names)\n schema = index_names + schema\n ddf = DataFrame.from_schema_rdd(\n self.sql_ctx.createDataFrame(rdd.flatMap(frame_to_spark_sql),\n schema=schema))\n ddf._index_names = index_names\n if not was_persisted:\n rdd.unpersist()\n return ddf"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef fromDataFrameRDD(cls, rdd, sql_ctx):\n result = DataFrame(None, sql_ctx)\n return result.from_rdd_of_dataframes(rdd)", "response": "Construct a DataFrame from an RDD of DataFrames."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a new DataFrame by applying a function to each element of each Pandas DataFrame.", "response": "def applymap(self, f, **kwargs):\n \"\"\"Return a new DataFrame by applying a function to each element of each\n Pandas DataFrame.\"\"\"\n def transform_rdd(rdd):\n return rdd.map(lambda data: data.applymap(f), **kwargs)\n return self._evil_apply_with_dataframes(transform_rdd,\n preserves_cols=True)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a groupby on the schema rdd. 
This returns a GroupBy object.", "response": "def groupby(self, by=None, axis=0, level=None, as_index=True, sort=True,\n group_keys=True, squeeze=False):\n \"\"\"Returns a groupby on the schema rdd. This returns a GroupBy object.\n Note that grouping by a column name will be faster than most other\n options due to implementation.\"\"\"\n from sparklingpandas.groupby import GroupBy\n return GroupBy(self, by=by, axis=axis, level=level, as_index=as_index,\n sort=sort, group_keys=group_keys, squeeze=squeeze)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncollecting the elements in a DataFrame and concatenating the partitions.", "response": "def collect(self):\n \"\"\"Collect the elements in a DataFrame\n and concatenate the partitions.\"\"\"\n local_df = self._schema_rdd.toPandas()\n correct_idx_df = _update_index_on_df(local_df, self._index_names)\n return correct_idx_df"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef stats(self, columns):\n assert (not isinstance(columns, basestring)), \"columns should be a \" \\\n \"list of strs, \" \\\n \"not a str!\"\n assert isinstance(columns, list), \"columns should be a list!\"\n\n from pyspark.sql import functions as F\n functions = [F.min, F.max, F.avg, F.count]\n aggs = list(\n self._flatmap(lambda column: map(lambda f: f(column), functions),\n columns))\n return PStats(self.from_schema_rdd(self._schema_rdd.agg(*aggs)))", "response": "Compute the stats for each column provided in columns."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a new PRDD by applying a function to each element of each pandas DataFrame.", "response": "def applymap(self, func, **kwargs):\n \"\"\"Return a new PRDD by applying a function to each element of each\n pandas DataFrame.\"\"\"\n return self.from_rdd(\n self._rdd.map(lambda data: data.applymap(func), **kwargs))"} {"SOURCE": 
"codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef groupby(self, *args, **kwargs):\n from sparklingpandas.groupby import GroupBy\n return GroupBy(self._rdd, *args, **kwargs)", "response": "Takes the same parameters as groupby on DataFrame.\n This returns a Sparkling Pandas\n object which supports many of the same operations as regular\n GroupBy but not all."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncollect the elements in a PRDD and concatenate the partitions.", "response": "def collect(self):\n \"\"\"Collect the elements in a PRDD and concatenate the partitions.\"\"\"\n # The order of the frame appends is based on the implementation\n # of reduce which calls our function with\n # f(valueToBeAdded, accumulator) so we do our reduce implementation.\n def append_frames(frame_a, frame_b):\n return frame_a.append(frame_b)\n return self._custom_rdd_reduce(append_frames)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nproviding a custom RDD reduce which preserves ordering if the RDD has been sorted.", "response": "def _custom_rdd_reduce(self, reduce_func):\n \"\"\"Provides a custom RDD reduce which preserves ordering if the RDD has\n been sorted. This is useful for us because we need this functionality\n as many pandas operations support sorting the results. The standard\n reduce in PySpark does not have this property. 
Note that when PySpark\n no longer does partition reduces locally this code will also need to\n be updated.\"\"\"\n def accumulating_iter(iterator):\n acc = None\n for obj in iterator:\n if acc is None:\n acc = obj\n else:\n acc = reduce_func(acc, obj)\n if acc is not None:\n yield acc\n vals = self._rdd.mapPartitions(accumulating_iter).collect()\n return reduce(reduce_func, vals)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef stats(self, columns):\n def reduce_func(sc1, sc2):\n return sc1.merge_pstats(sc2)\n\n return self._rdd.mapPartitions(lambda partition: [\n PStatCounter(dataframes=partition, columns=columns)])\\\n .reduce(reduce_func)", "response": "Compute the stats for each column provided in columns."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read_csv(self, file_path, use_whole_file=False, names=None, skiprows=0,\n *args, **kwargs):\n \"\"\"Read a CSV file in and parse it into Pandas DataFrames. By default,\n the first row from the first partition of that data is parsed and used\n as the column names for the data frame. If no 'names' param is\n provided we parse the first row of the first partition of data and\n use it for column names.\n\n Parameters\n ----------\n file_path: string\n Path to input. Any valid file path in Spark works here, eg:\n 'file:///my/path/in/local/file/system' or 'hdfs:/user/juliet/'\n use_whole_file: boolean\n Whether or not to use the whole file.\n names: list of strings, optional\n skiprows: integer, optional\n indicates how many rows of input to skip. This will\n only be applied to the first partition of the data (so if\n #skiprows > #row in first partition this will not work). 
Generally\n this shouldn't be an issue for small values of skiprows.\n No other value of header is supported.\n All additional parameters available in pandas.read_csv() are usable\n here.\n\n Returns\n -------\n A SparklingPandas DataFrame that contains the data from the\n specified file.\n \"\"\"\n def csv_file(partition_number, files):\n # pylint: disable=unexpected-keyword-arg\n file_count = 0\n for _, contents in files:\n # Only skip lines on the first file\n if partition_number == 0 and file_count == 0 and _skiprows > 0:\n file_count += 1\n yield pandas.read_csv(\n sio(contents), *args,\n header=None,\n names=mynames,\n skiprows=_skiprows,\n **kwargs)\n else:\n file_count += 1\n yield pandas.read_csv(\n sio(contents), *args,\n header=None,\n names=mynames,\n **kwargs)\n\n def csv_rows(partition_number, rows):\n # pylint: disable=unexpected-keyword-arg\n in_str = \"\\n\".join(rows)\n if partition_number == 0:\n return iter([\n pandas.read_csv(\n sio(in_str), *args, header=None,\n names=mynames,\n skiprows=_skiprows,\n **kwargs)])\n else:\n # could use .iterrows instead?\n return iter([pandas.read_csv(sio(in_str), *args, header=None,\n names=mynames, **kwargs)])\n\n # If we need to peek at the first partition and determine the column\n # names\n mynames = None\n _skiprows = skiprows\n if names:\n mynames = names\n else:\n # In the future we could avoid this expensive call.\n first_line = self.spark_ctx.textFile(file_path).first()\n frame = pandas.read_csv(sio(first_line), **kwargs)\n # pylint sees frame as a tuple despite it being a DataFrame\n mynames = list(frame.columns)\n _skiprows += 1\n\n # Do the actual load\n if use_whole_file:\n return self.from_pandas_rdd(\n self.spark_ctx.wholeTextFiles(file_path)\n .mapPartitionsWithIndex(csv_file))\n else:\n return self.from_pandas_rdd(\n self.spark_ctx.textFile(file_path)\n .mapPartitionsWithIndex(csv_rows))", "response": "Read a CSV file in and parse it into Pandas DataFrames."} {"SOURCE": "codesearchnet", "instruction": "Create a 
Python 3 function to\nload a text file storing one JSON object per line as a DataFrame.", "response": "def jsonFile(self, path, schema=None, sampling_ratio=1.0):\n \"\"\"Loads a text file storing one JSON object per line as a\n L{DataFrame}.\n Parameters\n ----------\n path: string\n The path of the json files to load. Should be Hadoop style\n paths (e.g. hdfs://..., file://... etc.).\n schema: StructType, optional\n If you know the schema of your input data you can specify it. The\n schema is specified using Spark SQL's schema format. If not\n specified will sample the json records to determine the schema.\n Spark SQL's schema format is documented (somewhat) in the\n \"Programmatically Specifying the Schema\" of the Spark SQL\n programming guide at: http://bit.ly/sparkSQLprogrammingGuide\n sampling_ratio: float, default=1.0\n Percentage of the records to sample when inferring schema.\n Defaults to all records for safety, but you may be able to set to\n a lower ratio if the same fields are present across records or\n your input is of sufficient size.\n Returns\n -------\n A L{DataFrame} of the contents of the json files.\n \"\"\"\n schema_rdd = self.sql_ctx.jsonFile(path, schema, sampling_ratio)\n return self.from_spark_rdd(schema_rdd)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmake a Sparkling Pandas dataframe from a local Pandas DataFrame.", "response": "def from_pd_data_frame(self, local_df):\n \"\"\"Make a Sparkling Pandas dataframe from a local Pandas DataFrame.\n The intended use is for testing or joining distributed data with local\n data.\n The types are re-inferred, so they may not match.\n Parameters\n ----------\n local_df: Pandas DataFrame\n The data to turn into a distributed Sparkling Pandas DataFrame.\n See http://bit.ly/pandasDataFrame for docs.\n Returns\n -------\n A Sparkling Pandas DataFrame.\n \"\"\"\n def frame_to_rows(frame):\n \"\"\"Convert a Pandas DataFrame into a list of Spark SQL Rows\"\"\"\n # TODO: Convert 
to row objects directly?\n return [r.tolist() for r in frame.to_records()]\n schema = list(local_df.columns)\n index_names = list(local_df.index.names)\n index_names = _normalize_index_names(index_names)\n schema = index_names + schema\n rows = self.spark_ctx.parallelize(frame_to_rows(local_df))\n sp_df = DataFrame.from_schema_rdd(\n self.sql_ctx.createDataFrame(\n rows,\n schema=schema,\n # Look at all the rows, should be ok since coming from\n # a local dataset\n samplingRatio=1))\n sp_df._index_names = index_names\n return sp_df"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sql(self, query):\n return DataFrame.from_spark_rdd(self.sql_ctx.sql(query), self.sql_ctx)", "response": "Perform a SQL query and create a Pandas DataFrame of the result."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef table(self, table):\n return DataFrame.from_spark_rdd(self.sql_ctx.table(table),\n self.sql_ctx)", "response": "Returns the provided Spark SQL table as a Pandas DataFrame."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef DataFrame(self, elements, *args, **kwargs):\n return self.from_pd_data_frame(pandas.DataFrame(\n elements,\n *args,\n **kwargs))", "response": "Create a Sparkling Pandas DataFrame for the provided elements."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef read_json(self, file_path,\n *args, **kwargs):\n \"\"\"Read a json file in and parse it into Pandas DataFrames.\n If no names are provided we use the first row for the names.\n Currently, it is not possible to skip the first n rows of a file.\n Headers are provided in the json file and not specified separately.\n\n Parameters\n ----------\n file_path: string\n Path to input. 
Any valid file path in Spark works here, eg:\n 'my/path/in/local/file/system' or 'hdfs:/user/juliet/'\n All additional parameters available in\n pandas.read_json() are usable here.\n\n Returns\n -------\n A SparklingPandas DataFrame that contains the data from the\n specified file.\n \"\"\"\n def json_file_to_df(files):\n \"\"\" Transforms a JSON file into a list of data\"\"\"\n for _, contents in files:\n yield pandas.read_json(sio(contents), *args, **kwargs)\n\n return self.from_pandas_rdd(self.spark_ctx.wholeTextFiles(file_path)\n .mapPartitions(json_file_to_df))", "response": "Read a json file in and parse it into Pandas DataFrames."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef do_first(self):\n # Load generator module and create instance if we haven't already\n if (not self.srcfile):\n raise IIIFError(text=(\"No generator specified\"))\n if (not self.gen):\n try:\n (name, ext) = os.path.splitext(self.srcfile)\n (pack, mod) = os.path.split(name)\n module_name = 'iiif.generators.' 
+ mod\n try:\n module = sys.modules[module_name]\n except KeyError:\n self.logger.debug(\n \"Loading generator module %s\" % (module_name))\n # Would be nice to use importlib but this is available only\n # in python 2.7 and higher\n pack = __import__(module_name) # returns iiif package\n module = getattr(pack.generators, mod)\n self.gen = module.PixelGen()\n except ImportError:\n raise IIIFError(\n text=(\"Failed to load generator %s\" % (str(self.srcfile))))\n (self.width, self.height) = self.gen.size", "response": "Load the generator module and set the size."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing arguments, instantiating IIIFStatic, and running it.", "response": "def main():\n \"\"\"Parse arguments, instantiate IIIFStatic, run.\"\"\"\n if (sys.version_info < (2, 7)):\n sys.exit(\"This program requires python version 2.7 or later\")\n\n # Options and arguments\n p = optparse.OptionParser(description='IIIF Image API static file generator',\n usage='usage: %prog [options] file [[file2..]] (-h for help)',\n version='%prog ' + __version__)\n\n p.add_option('--dst', '-d', action='store', default='/tmp',\n help=\"Destination directory for images [default '%default']\")\n p.add_option('--tilesize', '-t', action='store', type='int', default=512,\n help=\"Tilesize in pixels [default %default]\")\n p.add_option('--api-version', '--api', '-a', action='store', default='2.1',\n help=\"API version, may be 1.1, 2.0 or 2.1 [default %default]\")\n p.add_option('--prefix', '-p', action='store', default=None,\n help=\"URI prefix for where the images will be served from (default '%default'). \"\n \"An empty prefix may be OK if the HTML page including the image shares \"\n \"the same root on the same server as the images, otherwise a full URL should \"\n \"be specified. 
This is used to construct the @id in the info.json\")\n p.add_option('--identifier', '-i', action='store', default=None,\n help=\"Identifier for the image that will be used in place of the filename \"\n \"(minus extension). Note that this option cannot be used if more than \"\n \"one image file is to be processed\")\n p.add_option('--extra', '-e', action='append', default=[],\n help=\"Extra request parameters to be used to generate static files, may be \"\n \"repeated (e.g. '/full/90,/0/default.jpg' for a 90 wide thumbnail)\")\n p.add_option('--write-html', action='store', default=None,\n help=\"Write HTML page to the specified directory using the 'identifier.html' \"\n \"as the filename. HTML will launch OpenSeadragon for this image and \"\n \"display some information about info.json and tile locations. HTML will \"\n \"assume OpenSeadragon at relative path openseadragonVVV/openseadragon.min.js \"\n \"and user-interface icons in openseadragonVVV/images, where VVV are the \"\n \"three parts of the version number. If the --include-osd flag is also specified \"\n \"then OpenSeadragon will be copied to these locations\")\n p.add_option('--include-osd', action='store_true',\n help=\"Include OpenSeadragon files with --write-html flag\")\n p.add_option('--osd-version', action='store', default='2.0.0',\n help=\"Generate static images for older versions of OpenSeadragon. Use of versions \"\n \"prior to 1.2.1 will force use of /w,h/ for size parameter instead of /w,/. \"\n \"Likely useful only in combination with --api-version=1.1 \"\n \"[default %default]\")\n p.add_option('--osd-width', action='store', type='int', default='500',\n help=\"Width of OpenSeadragon pane in pixels. Applies only with \"\n \"--write-html [default %default]\")\n p.add_option('--osd-height', action='store', type='int', default='500',\n help=\"Height of OpenSeadragon pane in pixels. 
Applies only with \"\n \"--write-html [default %default]\")\n p.add_option('--generator', action='store_true', default=False,\n help=\"Use named generator modules in iiif.generators package instead \"\n \"of a starting image [default %default]\")\n p.add_option('--max-image-pixels', action='store', type='int', default=0,\n help=\"Set the maximum number of pixels in an image. A non-zero value \"\n \"will set a hard limit on the image size. If left unset then the \"\n \"default configuration of the Python Imaging Library (PIL) will give \"\n \"a DecompressionBombWarning if the image size exceeds a default \"\n \"maximum, but otherwise continue as normal\")\n p.add_option('--dryrun', '-n', action='store_true',\n help=\"Do not write anything, say what would be done\")\n p.add_option('--quiet', '-q', action='store_true',\n help=\"Quiet (no output unless there is a warning/error)\")\n p.add_option('--verbose', '-v', action='store_true',\n help=\"Verbose\")\n\n (opt, sources) = p.parse_args()\n\n level = logging.DEBUG if (opt.verbose) else \\\n logging.WARNING if (opt.quiet) else logging.INFO\n logging.basicConfig(format='%(name)s: %(message)s',\n level=level)\n logger = logging.getLogger(os.path.basename(__file__))\n\n if (not opt.write_html and opt.include_osd):\n logger.warn(\n \"--include-osd has no effect without --write-html, ignoring\")\n if (len(sources) == 0):\n logger.warn(\"No sources specified, nothing to do, bye! 
(-h for help)\")\n elif (len(sources) > 1 and opt.identifier):\n logger.error(\n \"Cannot use --identifier/-i option with multiple sources, aborting.\")\n else:\n try:\n sg = IIIFStatic(dst=opt.dst, tilesize=opt.tilesize,\n api_version=opt.api_version, dryrun=opt.dryrun,\n prefix=opt.prefix, osd_version=opt.osd_version,\n generator=opt.generator,\n max_image_pixels=opt.max_image_pixels,\n extras=opt.extra)\n for source in sources:\n # File or directory (or neither)?\n if (os.path.isfile(source) or opt.generator):\n logger.info(\"source file: %s\" % (source))\n sg.generate(source, identifier=opt.identifier)\n if (opt.write_html):\n sg.write_html(html_dir=opt.write_html, include_osd=opt.include_osd,\n osd_width=opt.osd_width, osd_height=opt.osd_height)\n elif (os.path.isdir(source)):\n logger.warn(\n \"Ignoring source '%s': directory conversion not supported\" % (source))\n else:\n logger.warn(\n \"Ignoring source '%s': neither file nor path\" % (source))\n except (IIIFStaticError, IIIFError) as e:\n # catch known errors and report nicely...\n logger.error(\"Error: \" + str(e))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef clear(self):\n # API parameters\n self.identifier = None\n self.region = None\n self.size = None\n self.rotation = None\n self.quality = None\n self.format = None\n self.info = None\n # Derived data and flags\n self.region_full = False\n self.region_square = False\n self.region_pct = False\n self.region_xywh = None # (x,y,w,h)\n self.size_full = False\n self.size_max = False # new in 2.1\n self.size_pct = None\n self.size_bang = None\n self.size_wh = None # (w,h)\n self.rotation_mirror = False\n self.rotation_deg = 0.0", "response": "Clear all data that might pertain to an IIIF URL."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the api_version and associated configurations.", "response": "def api_version(self, v):\n \"\"\"Set the api_version and associated 
configurations.\"\"\"\n self._api_version = v\n if (self._api_version >= '2.0'):\n self.default_quality = 'default'\n self.allowed_qualities = ['default', 'color', 'bitonal', 'gray']\n else: # versions 1.0 and 1.1\n self.default_quality = 'native'\n self.allowed_qualities = ['native', 'color', 'bitonal', 'grey']"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nbuilds a URL path for an IIIF Image or Information request.", "response": "def url(self, **params):\n \"\"\"Build a URL path for image or info request.\n\n An IIIF Image request with parameterized form is assumed unless\n the info parameter is specified, in which case an Image Information\n request URI is constructed.\n \"\"\"\n self._setattrs(**params)\n path = self.baseurl + self.quote(self.identifier) + \"/\"\n if (self.info):\n # info request\n path += \"info\"\n format = self.format if self.format else \"json\"\n else:\n # region\n if self.region:\n region = self.region\n elif self.region_xywh:\n region = \"%d,%d,%d,%d\" % tuple(self.region_xywh)\n else:\n region = \"full\"\n # size\n if self.size:\n size = self.size\n elif self.size_wh:\n if (self.size_wh[0] is None):\n size = \",%d\" % (self.size_wh[1])\n elif (self.size_wh[1] is None):\n size = \"%d,\" % (self.size_wh[0])\n else:\n size = \"%d,%d\" % (self.size_wh[0], self.size_wh[1])\n elif (self.size_max and self.api_version >= '2.1'):\n size = 'max'\n else:\n size = \"full\"\n # rotation and quality\n rotation = self.rotation if self.rotation else \"0\"\n quality = self.quality if self.quality else self.default_quality\n # parameterized form\n path += self.quote(region) + \"/\" +\\\n self.quote(size) + \"/\" +\\\n self.quote(rotation) + \"/\" +\\\n self.quote(quality)\n format = self.format\n if (format):\n path += \".\" + format\n return(path)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse an IIIF API URL path and each component.", "response": "def parse_url(self, 
url):\n \"\"\"Parse an IIIF API URL path and each component.\n\n Will parse a URL or URL path that accords with either the\n parametrized or info request forms. Will raise an\n IIIFRequestError on failure. A wrapper for the split_url()\n and parse_parameters() methods.\n\n Note that behavior of split_url() depends on whether\n self.identifier is set.\n \"\"\"\n self.split_url(url)\n if (not self.info):\n self.parse_parameters()\n return(self)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef split_url(self, url):\n # clear data first\n identifier = self.identifier\n self.clear()\n # url must start with baseurl if set (including slash)\n if (self.baseurl is not None):\n (path, num) = re.subn('^' + self.baseurl, '', url, 1)\n if (num != 1):\n raise IIIFRequestError(\n text=\"Request URL does not start with base URL\")\n url = path\n # Break up by path segments, count to decide format\n segs = url.split('/')\n if (identifier is not None):\n segs.insert(0, identifier)\n elif (self.allow_slashes_in_identifier):\n segs = self._allow_slashes_in_identifier_munger(segs)\n # Now have segments with identifier as first\n if (len(segs) > 5):\n raise IIIFRequestPathError(\n text=\"Request URL (%s) has too many path segments\" % url)\n elif (len(segs) == 5):\n self.identifier = urlunquote(segs[0])\n self.region = urlunquote(segs[1])\n self.size = urlunquote(segs[2])\n self.rotation = urlunquote(segs[3])\n self.quality = self.strip_format(urlunquote(segs[4]))\n self.info = False\n elif (len(segs) == 2):\n self.identifier = urlunquote(segs[0])\n info_name = self.strip_format(urlunquote(segs[1]))\n if (info_name != \"info\"):\n raise IIIFRequestError(\n text=\"Bad name for Image Information\")\n if (self.api_version == '1.0'):\n if (self.format not in ['json', 'xml']):\n raise IIIFRequestError(\n text=\"Invalid format for Image Information (json and xml allowed)\")\n elif (self.format != 'json'):\n raise IIIFRequestError(\n 
text=\"Invalid format for Image Information (only json allowed)\")\n self.info = True\n elif (len(segs) == 1):\n self.identifier = urlunquote(segs[0])\n raise IIIFRequestBaseURI()\n else:\n raise IIIFRequestPathError(\n text=\"Bad number of path segments in request\")\n return(self)", "response": "Parse an IIIF API URL path into components."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef strip_format(self, str_and_format):\n m = re.match(\"(.+)\\.([a-zA-Z]\\w*)$\", str_and_format)\n if (m):\n # There is a format string at end, chop off and store\n str_and_format = m.group(1)\n self.format = (m.group(2) if (m.group(2) is not None) else '')\n return(str_and_format)", "response": "Look for an optional .fmt at the end of the URI\n and store it in self.format."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses the parameters of an Image Information request.", "response": "def parse_parameters(self):\n \"\"\"Parse the parameters of an Image Information request.\n\n Will throw an IIIFRequestError on failure, set attributes on\n success. Care is taken not to change any of the attributes\n which store path components. All parsed values are stored\n in new attributes.\n \"\"\"\n self.parse_region()\n self.parse_size()\n self.parse_rotation()\n self.parse_quality()\n self.parse_format()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing the region component of the path, setting self.region_full, self.region_square, and self. 
region_pct to True.", "response": "def parse_region(self):\n \"\"\"Parse the region component of the path.\n\n /full/ -> self.region_full = True (test this first)\n /square/ -> self.region_square = True (test this second)\n /x,y,w,h/ -> self.region_xywh = (x,y,w,h)\n /pct:x,y,w,h/ -> self.region_xywh and self.region_pct = True\n\n Will throw errors if the parameters are illegal according to the\n specification but does not know about and thus cannot do any tests\n against any image being manipulated.\n \"\"\"\n self.region_full = False\n self.region_square = False\n self.region_pct = False\n if (self.region is None or self.region == 'full'):\n self.region_full = True\n return\n if (self.api_version >= '2.1' and self.region == 'square'):\n self.region_square = True\n return\n xywh = self.region\n pct_match = re.match('pct:(.*)$', self.region)\n if (pct_match):\n xywh = pct_match.group(1)\n self.region_pct = True\n # Now whether this was pct: or not, we expect 4 values...\n str_values = xywh.split(',', 5)\n if (len(str_values) != 4):\n raise IIIFRequestError(\n code=400, parameter=\"region\",\n text=\"Bad number of values in region specification, \"\n \"must be x,y,w,h but got %d value(s) from '%s'\" %\n (len(str_values), xywh))\n values = []\n for str_value in str_values:\n # Must be either integer (not pct) or integer/float (pct)\n if (pct_match):\n try:\n # This is rather more permissive than the iiif spec\n value = float(str_value)\n except ValueError:\n raise IIIFRequestError(\n parameter=\"region\",\n text=\"Bad floating point value for percentage in \"\n \"region (%s).\" % str_value)\n if (value > 100.0):\n raise IIIFRequestError(\n parameter=\"region\",\n text=\"Percentage value over 100.0 in region \"\n \"(%s).\" % str_value)\n else:\n try:\n value = int(str_value)\n except ValueError:\n raise IIIFRequestError(\n parameter=\"region\",\n text=\"Bad integer value in region (%s).\" % str_value)\n if (value < 0):\n raise IIIFRequestError(\n 
parameter=\"region\",\n text=\"Negative values not allowed in region (%s).\" %\n str_value)\n values.append(value)\n # Zero size region if w or h is zero (careful that they may be float)\n if (values[2] == 0.0 or values[3] == 0.0):\n raise IIIFZeroSizeError(\n code=400, parameter=\"region\",\n text=\"Zero size region specified (%s).\" % xywh)\n self.region_xywh = values"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse_size(self, size=None):\n if (size is not None):\n self.size = size\n self.size_pct = None\n self.size_bang = False\n self.size_full = False\n self.size_wh = (None, None)\n if (self.size is None or self.size == 'full'):\n self.size_full = True\n return\n elif (self.size == 'max' and self.api_version >= '2.1'):\n self.size_max = True\n return\n pct_match = re.match('pct:(.*)$', self.size)\n if (pct_match is not None):\n pct_str = pct_match.group(1)\n try:\n self.size_pct = float(pct_str)\n except ValueError:\n raise IIIFRequestError(\n code=400, parameter=\"size\",\n text=\"Percentage size value must be a number, got \"\n \"'%s'.\" % (pct_str))\n # Note that Image API specification places no upper limit on\n # size so none is implemented here.\n if (self.size_pct < 0.0):\n raise IIIFRequestError(\n code=400, parameter=\"size\",\n text=\"Bad size percentage, must be > 0.0, got %f.\" %\n (self.size_pct))\n else:\n if (self.size[0] == '!'):\n # Have \"!w,h\" form\n size_no_bang = self.size[1:]\n (mw, mh) = self._parse_w_comma_h(size_no_bang, 'size')\n if (mw is None or mh is None):\n raise IIIFRequestError(\n code=400, parameter=\"size\",\n text=\"Illegal size requested: both w,h must be \"\n \"specified in !w,h requests.\")\n self.size_wh = (mw, mh)\n self.size_bang = True\n else:\n # Must now be \"w,h\", \"w,\" or \",h\"\n self.size_wh = self._parse_w_comma_h(self.size, 'size')\n # Sanity check w,h\n (w, h) = self.size_wh\n if ((w is not None and w <= 0) or\n (h is not None and h <= 0)):\n raise 
IIIFZeroSizeError(\n code=400, parameter='size',\n text=\"Size parameters request zero size result image.\")", "response": "Parse the size component of the path."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nparsing non negative integer from string istr.", "response": "def _parse_non_negative_int(self, istr, name):\n \"\"\"Parse integer from string (istr).\n\n The (name) parameter is used just for IIIFRequestError message\n generation to indicate what the error is in.\n \"\"\"\n if (istr == ''):\n return(None)\n try:\n i = int(istr)\n except ValueError:\n raise ValueError(\"Failed to extract integer value for %s\" % (name))\n if (i < 0):\n raise ValueError(\"Illegal negative value for %s\" % (name))\n return(i)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking and interprets rotation value and sets self. rotation_deg and self. rotation_mirror to True.", "response": "def parse_rotation(self, rotation=None):\n \"\"\"Check and interpret rotation.\n\n Uses value of self.rotation as starting point unless rotation\n parameter is specified in the call. Sets self.rotation_deg to a\n floating point number 0 <= angle < 360. Includes translation of\n 360 to 0. If there is a prefix bang (!) then self.rotation_mirror\n will be set True, otherwise it will be False.\n \"\"\"\n if (rotation is not None):\n self.rotation = rotation\n self.rotation_deg = 0.0\n self.rotation_mirror = False\n if (self.rotation is None):\n return\n # Look for ! 
prefix first\n if (self.rotation[0] == '!'):\n self.rotation_mirror = True\n self.rotation = self.rotation[1:]\n # Interpret value now\n try:\n self.rotation_deg = float(self.rotation)\n except ValueError:\n raise IIIFRequestError(\n code=400, parameter=\"rotation\",\n text=\"Bad rotation value, must be a number, got '%s'.\" %\n (self.rotation))\n if (self.rotation_deg < 0.0 or self.rotation_deg > 360.0):\n raise IIIFRequestError(\n code=400, parameter=\"rotation\",\n text=\"Illegal rotation value, must be 0 <= rotation \"\n \"<= 360, got %f.\" % (self.rotation_deg))\n elif (self.rotation_deg == 360.0):\n # The spec admits 360 as valid, but change to 0\n self.rotation_deg = 0.0"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_quality(self):\n if (self.quality is None):\n self.quality_val = self.default_quality\n elif (self.quality not in self.allowed_qualities):\n raise IIIFRequestError(\n code=400, parameter=\"quality\",\n text=\"The quality parameter must be '%s', got '%s'.\" %\n (\"', '\".join(self.allowed_qualities), self.quality))\n else:\n self.quality_val = self.quality", "response": "Check the quality parameter. Sets self.quality_val based on simple substitution of\n native for default, and throws an IIIFRequestError if the quality\n parameter is not valid."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck format parameter. All format values listed in the specification are lowercase alphanumeric values commonly used as file extensions. To leave opportunity for extension here just do a limited sanity check on characters and length.", "response": "def parse_format(self):\n \"\"\"Check format parameter.\n\n All format values listed in the specification are lowercase\n alphanumeric values commonly used as file extensions.
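The rotation rules implemented above (optional `!` prefix for mirroring, a numeric value in [0, 360], with 360 normalized to 0) can be sketched as a standalone function; the name and return shape here are illustrative, not the library's API.

```python
# Standalone sketch of the rotation parsing rules: '!' prefix means mirror,
# the remainder must parse as a float in [0, 360], and 360 normalizes to 0.
def parse_rotation(rotation):
    mirror = rotation.startswith('!')
    if mirror:
        rotation = rotation[1:]
    deg = float(rotation)  # raises ValueError for non-numeric input
    if deg < 0.0 or deg > 360.0:
        raise ValueError("rotation must satisfy 0 <= rotation <= 360")
    if deg == 360.0:
        deg = 0.0  # the spec admits 360 as valid, but normalize to 0
    return (mirror, deg)

print(parse_rotation('!90'))  # (True, 90.0)
print(parse_rotation('360'))  # (False, 0.0)
```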
To leave\n opportunity for extension here just do a limited sanity check\n on characters and length.\n \"\"\"\n if (self.format is not None and\n not re.match(r'''\\w{1,20}$''', self.format)):\n raise IIIFRequestError(\n parameter='format',\n text='Bad format parameter')"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef is_scaled_full_image(self):\n return(self.region_full and\n self.size_wh[0] is not None and\n self.size_wh[1] is not None and\n not self.size_bang and\n self.rotation_deg == 0.0 and\n self.quality == self.default_quality and\n self.format == 'jpg')", "response": "True if this request is for a scaled full image."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting config from defaults config file and parse arguments.", "response": "def get_config(base_dir=''):\n \"\"\"Get config from defaults, config file and/or parse arguments.\n\n Uses configargparse to allow arguments to be set from a config file\n or via command line arguments.\n\n base_dir - set a specific base directory for file/path defaults.\n \"\"\"\n p = configargparse.ArgParser(description='IIIF Image API Reference Service',\n default_config_files=[os.path.join(base_dir, 'iiif_reference_server.cfg')],\n formatter_class=configargparse.ArgumentDefaultsHelpFormatter)\n add_shared_configs(p, base_dir)\n p.add('--scale-factors', default='auto',\n help=\"Set of tile scale factors or 'auto' to calculate for each image \"\n \"such that there are tiles up to the full image\")\n p.add('--api-versions', default='1.0,1.1,2.0,2.1',\n help=\"Set of API versions to support\")\n args = p.parse_args()\n\n if (args.debug):\n args.verbose = True\n elif (args.verbose):\n args.quiet = False\n\n # Split list arguments\n args.scale_factors = split_comma_argument(args.scale_factors)\n args.api_versions = split_comma_argument(args.api_versions)\n\n logging_level = logging.WARNING\n if args.verbose:\n logging_level = logging.INFO\n elif
args.quiet:\n logging_level = logging.ERROR\n logging.basicConfig(format='%(name)s: %(message)s', level=logging_level)\n\n return(args)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating reference server Flask application with one or more IIIF handlers.", "response": "def create_reference_server_flask_app(cfg):\n \"\"\"Create reference server Flask application with one or more IIIF handlers.\"\"\"\n # Create Flask app\n app = Flask(__name__)\n Flask.secret_key = \"SECRET_HERE\"\n app.debug = cfg.debug\n # Install request handlers\n client_prefixes = dict()\n for api_version in cfg.api_versions:\n handler_config = Config(cfg)\n handler_config.api_version = api_version\n handler_config.klass_name = 'pil'\n handler_config.auth_type = 'none'\n # Set same prefix on local server as expected on iiif.io\n handler_config.prefix = \"api/image/%s/example/reference\" % (api_version)\n handler_config.client_prefix = handler_config.prefix\n add_handler(app, handler_config)\n return app"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef pixel(self, x, y, size=None):\n if (size is None):\n size = self.sz\n # Have we got to the smallest element?\n if (size <= 3):\n if (_middle(x, y)):\n return (None)\n else:\n return (0, 0, 0)\n divisor = size // 3\n if (_middle(x // divisor, y // divisor)):\n return None\n return self.pixel(x % divisor, y % divisor, divisor)", "response": "Return color for a pixel."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef id(self, value):\n i = value.rfind('/')\n if (i > 0):\n self.server_and_prefix = value[:i]\n self.identifier = value[(i + 1):]\n elif (i == 0):\n self.server_and_prefix = ''\n self.identifier = value[(i + 1):]\n else:\n self.server_and_prefix = ''\n self.identifier = value", "response": "Split into server_and_prefix and identifier."} {"SOURCE": "codesearchnet", "instruction": "Write a
Python 3 script to\nset up normal values for given api_version.", "response": "def set_version_info(self, api_version=None):\n \"\"\"Set up normal values for given api_version.\n\n Will use current value of self.api_version if a version number\n is not specified in the call. Will raise an IIIFInfoError\n if the version is not supported.\n \"\"\"\n if (api_version is None):\n api_version = self.api_version\n if (api_version not in CONF):\n raise IIIFInfoError(\"Unknown API version %s\" % (api_version))\n self.params = CONF[api_version]['params']\n self.array_params = CONF[api_version]['array_params']\n self.complex_params = CONF[api_version]['complex_params']\n for a in ('context', 'compliance_prefix', 'compliance_suffix',\n 'protocol', 'required_params'):\n if (a in CONF[api_version]):\n self._setattr(a, CONF[api_version][a])"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef compliance(self, value):\n if (self.api_version < '2.0'):\n self.profile = value\n else:\n try:\n self.profile[0] = value\n except AttributeError:\n # handle case where profile not initialized as array\n self.profile = [value]", "response": "Set the compliance profile URI."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef level(self):\n m = re.match(\n self.compliance_prefix +\n r'(\\d)' +\n self.compliance_suffix +\n r'$',\n self.compliance)\n if (m):\n return int(m.group(1))\n raise IIIFInfoError(\n \"Bad compliance profile URI, failed to extract level number\")", "response": "Extract level number from compliance profile URI."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef level(self, value):\n self.compliance = self.compliance_prefix + \\\n (\"%d\" % value) + self.compliance_suffix", "response": "Build profile URI from level."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef
add_service(self, service):\n if (self.service is None):\n self.service = service\n elif (isinstance(self.service, dict)):\n self.service = [self.service, service]\n else:\n self.service.append(service)", "response": "Add a service description. Handles transition from self. service = None and self. service = dict for single service and then self. service = list for multiple services."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate(self):\n errors = []\n for param in self.required_params:\n if (not hasattr(self, param) or getattr(self, param) is None):\n errors.append(\"missing %s parameter\" % (param))\n if (len(errors) > 0):\n raise IIIFInfoError(\"Bad data for info.json: \" + \", \".join(errors))\n return True", "response": "Validate this object as Image API data. Raise IIIFInfoError with helpful message if not valid."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef as_json(self, validate=True):\n if (validate):\n self.validate()\n json_dict = {}\n if (self.api_version > '1.0'):\n json_dict['@context'] = self.context\n params_to_write = set(self.params)\n params_to_write.discard('identifier')\n if (self.identifier):\n if (self.api_version == '1.0'):\n json_dict['identifier'] = self.identifier # local id\n else:\n json_dict['@id'] = self.id # URI\n params_to_write.discard('profile')\n if (self.compliance):\n if (self.api_version < '2.0'):\n json_dict['profile'] = self.compliance\n else:\n # FIXME - need to support extra profile features\n json_dict['profile'] = [self.compliance]\n d = {}\n if (self.formats is not None):\n d['formats'] = self.formats\n if (self.qualities is not None):\n d['qualities'] = self.qualities\n if (self.supports is not None):\n d['supports'] = self.supports\n if (len(d) > 0):\n json_dict['profile'].append(d)\n params_to_write.discard('formats')\n params_to_write.discard('qualities')\n 
params_to_write.discard('supports')\n for param in params_to_write:\n if (hasattr(self, param) and\n getattr(self, param) is not None):\n json_dict[param] = getattr(self, param)\n return(json.dumps(json_dict, sort_keys=True, indent=2))", "response": "Return a JSON representation of the info."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nread info. json from file like object fh.", "response": "def read(self, fh, api_version=None):\n \"\"\"Read info.json from file like object.\n\n Parameters:\n fh -- file like object supporting fh.read()\n api_version -- IIIF Image API version expected\n\n If api_version is set then the parsing will assume this API version,\n else the version will be determined from the incoming data. NOTE that\n the value of self.api_version is NOT used in this routine.\n If an api_version is specified and there is a @context specified then\n an IIIFInfoError will be raised unless these match. If no known\n @context is present and no api_version set then an IIIFInfoError\n will be raised.\n \"\"\"\n j = json.load(fh)\n #\n # @context and API version\n self.context = None\n if (api_version == '1.0'):\n # v1.0 did not have a @context so we simply take the version\n # passed in\n self.api_version = api_version\n elif ('@context' in j):\n # determine API version from context\n self.context = j['@context']\n api_version_read = None\n for v in CONF:\n if (v > '1.0' and self.context == CONF[v]['context']):\n api_version_read = v\n break\n if (api_version_read is None):\n raise IIIFInfoError(\n \"Unknown @context, cannot determine API version (%s)\" %\n (self.context))\n else:\n if (api_version is not None and\n api_version != api_version_read):\n raise IIIFInfoError(\n \"Expected API version '%s' but got @context for API version '%s'\" %\n (api_version, api_version_read))\n else:\n self.api_version = api_version_read\n else: # no @context and not 1.0\n if (api_version is None):\n raise IIIFInfoError(\"No @context (and no default 
given)\")\n self.api_version = api_version\n self.set_version_info()\n #\n # @id or identifier\n if (self.api_version == '1.0'):\n if ('identifier' in j):\n self.id = j['identifier']\n else:\n raise IIIFInfoError(\"Missing identifier in info.json\")\n else:\n if ('@id' in j):\n self.id = j['@id']\n else:\n raise IIIFInfoError(\"Missing @id in info.json\")\n #\n # other params\n for param in self.params:\n if (param == 'identifier'):\n continue # dealt with above\n if (param in j):\n if (param in self.complex_params):\n # use function ref in complex_params to parse, optional\n # dst to map to a different property name\n self._setattr(param, self.complex_params[\n param](self, j[param]))\n else:\n self._setattr(param, j[param])\n return True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting PIL limit on image size to load if non - zero.", "response": "def set_max_image_pixels(self, pixels):\n \"\"\"Set PIL limit on pixel size of images to load if non-zero.\n\n WARNING: This is a global setting in PIL, it is\n not local to this manipulator instance!\n\n Setting a value here will not only set the given limit but\n also convert the PIL \"DecompressionBombWarning\" into an\n error. 
Thus setting a moderate limit sets a hard limit on\n image size loaded; setting a very large limit will have the\n effect of disabling the warning.\n \"\"\"\n if (pixels):\n Image.MAX_IMAGE_PIXELS = pixels\n Image.warnings.simplefilter(\n 'error', Image.DecompressionBombWarning)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef do_first(self):\n self.logger.debug(\"do_first: src=%s\" % (self.srcfile))\n try:\n self.image = Image.open(self.srcfile)\n except Image.DecompressionBombWarning as e:\n # This exception will be raised only if PIL has been\n # configured to raise an error in the case of images\n # that exceed Image.MAX_IMAGE_PIXELS, with\n # Image.warnings.simplefilter('error', Image.DecompressionBombWarning)\n raise IIIFError(text=(\"Image size limit exceeded (PIL: %s)\" % (str(e))))\n except Exception as e:\n raise IIIFError(text=(\"Failed to read image (PIL: %s)\" % (str(e))))\n (self.width, self.height) = self.image.size", "response": "Create PIL object from input image file. Will result in self. image and self. width and self. height being set to the image dimensions."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\napplying rotation and mirroring.", "response": "def do_rotation(self, mirror, rot):\n \"\"\"Apply rotation and/or mirroring.\"\"\"\n if (not mirror and rot == 0.0):\n self.logger.debug(\"rotation: no rotation (nop)\")\n else:\n # FIXME - with PIL one can use the transpose() method to do 90deg\n # FIXME - rotations as well as mirroring.
This would be more efficient\n # FIXME - for these cases than mirror _then_ rotate.\n if (mirror):\n self.logger.debug(\"rotation: mirror (about vertical axis)\")\n self.image = self.image.transpose(Image.FLIP_LEFT_RIGHT)\n if (rot != 0.0):\n self.logger.debug(\"rotation: by %f degrees clockwise\" % (rot))\n self.image = self.image.rotate(-rot, expand=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef do_quality(self, quality):\n if (quality == 'grey' or quality == 'gray'):\n # Checking for 1.1 gray or 2.0 grey elsewhere\n self.logger.debug(\"quality: converting to gray\")\n self.image = self.image.convert('L')\n elif (quality == 'bitonal'):\n self.logger.debug(\"quality: converting to bitonal\")\n self.image = self.image.convert('1')\n else: # color or default/native (which we take as color)\n # Deal first with conversions from I;16* formats which Pillow\n # appears not to handle properly, resulting in mostly white images\n # if we convert directly. See:\n # \n if (self.image.mode.startswith('I;16')):\n self.logger.debug(\"quality: fudged conversion from mode %s to I\"\n % (self.image.mode))\n self.image = self.image.convert('I')\n self.image = self.image.point(lambda i: i * (1.0 / 256.0))\n if (self.image.mode not in ('1', 'L', 'RGB', 'RGBA')):\n # Need to convert from palette etc.
in order to write out\n self.logger.debug(\"quality: converting from mode %s to RGB\"\n % (self.image.mode))\n self.image = self.image.convert('RGB')\n else:\n self.logger.debug(\"quality: quality (nop)\")", "response": "Apply value of quality parameter."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef do_format(self, format):\n fmt = ('jpg' if (format is None) else format)\n if (fmt == 'png'):\n self.logger.debug(\"format: png\")\n self.mime_type = \"image/png\"\n self.output_format = fmt\n format = 'png'\n elif (fmt == 'jpg'):\n self.logger.debug(\"format: jpg\")\n self.mime_type = \"image/jpeg\"\n self.output_format = fmt\n format = 'jpeg'\n elif (fmt == 'webp'):\n self.logger.debug(\"format: webp\")\n self.mime_type = \"image/webp\"\n self.output_format = fmt\n format = 'webp'\n else:\n raise IIIFError(code=415, parameter='format',\n text=\"Unsupported output file format (%s), only png,jpg,webp are supported.\" % (fmt))\n if (self.outfile is None):\n # Create temp\n f = tempfile.NamedTemporaryFile(delete=False)\n self.outfile = f.name\n self.outtmp = f.name\n self.image.save(f, format=format)\n else:\n # Save to specified location\n self.image.save(self.outfile, format=format)", "response": "Apply format selection.\n\n Assume that for tiling applications we want jpg so return\n that unless an explicit format is requested."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef cleanup(self):\n if self.image:\n try:\n self.image.close()\n except Exception:\n pass\n if (self.outtmp is not None):\n try:\n os.unlink(self.outtmp)\n except OSError as e:\n self.logger.warning(\"Failed to cleanup tmp output file %s\"\n % (self.outtmp))", "response": "Cleanup: ensure image closed and remove temporary output file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning color for a pixel.", "response": "def 
pixel(self, x, y, size=None, red=0):\n \"\"\"Return color for a pixel.\n\n The parameter size is used to recursively follow down to smaller\n and smaller middle squares (number 5). The parameter red is used\n to shade from black to red going toward the middle of the figure.\n \"\"\"\n if (size is None):\n size = self.sz\n # Have we got to the smallest element?\n if (size <= 3):\n if (_num(x, y) % 2):\n return (red, 0, 0)\n else:\n return None\n divisor = size // 3\n n = _num(x // divisor, y // divisor)\n if (n == 5):\n # Middle square further divided\n return self.pixel(x % divisor, y % divisor,\n divisor, min(red + 25, 255))\n elif (n % 2):\n return (red, 0, 0)\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates HTML page as string.", "response": "def html_page(title=\"Page Title\", body=\"\"):\n \"\"\"Create HTML page as string.\"\"\"\n html = \"<html>\\n<head><title>%s</title></head>\\n<body>\\n\" % (title)\n html += \"<h1>%s</h1>\\n\" % (title)\n html += body\n html += \"</body>\\n</html>\\n\"\n return html"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nshowing list of identifiers for this prefix.", "response": "def identifiers(config):\n \"\"\"Show list of identifiers for this prefix.\n\n Handles both the case of local file based identifiers and\n also image generators.\n\n Arguments:\n config - configuration object in which:\n config.klass_name - 'gen' if a generator function\n config.generator_dir - directory for generator code\n config.image_dir - directory for images\n\n Returns:\n ids - a list of ids\n \"\"\"\n ids = []\n if (config.klass_name == 'gen'):\n for generator in os.listdir(config.generator_dir):\n if (generator == '__init__.py'):\n continue\n (gid, ext) = os.path.splitext(generator)\n if (ext == '.py' and\n os.path.isfile(os.path.join(config.generator_dir, generator))):\n ids.append(gid)\n else:\n for image_file in os.listdir(config.image_dir):\n (iid, ext) = os.path.splitext(image_file)\n if (ext in ['.jpg', '.png', '.tif'] and\n os.path.isfile(os.path.join(config.image_dir, image_file))):\n ids.append(iid)\n return ids"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns URI composed of scheme server port and prefix.", "response": "def host_port_prefix(host, port, prefix):\n \"\"\"Return URI composed of scheme, server, port, and prefix.\"\"\"\n uri = \"http://\" + host\n if (port != 80):\n uri += ':' + str(port)\n if (prefix):\n uri += '/' + prefix\n return uri"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\na Flask handler to produce HTML response for OpenSeadragon view of identifier.
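The OpenSeadragon page handler that follows fills an HTML template with `string.Template.safe_substitute`; here is a tiny self-contained sketch of that substitution step, where the template text is made up for illustration rather than the real `testserver_osd.html`.

```python
from string import Template

# Hypothetical miniature of the OSD template; placeholder names mimic the
# keys the handler passes ($osd_uri, $osd_width, $osd_height).
template = Template(
    '<script src="$osd_uri"></script>\n'
    '<div id="osd" style="width:${osd_width}px;height:${osd_height}px"></div>')
html = template.safe_substitute(
    osd_uri='/openseadragon200/openseadragon.min.js',
    osd_width=500, osd_height=500)
print(html)

# safe_substitute leaves unknown placeholders untouched instead of raising,
# which is why it is a forgiving choice for templates read from disk.
print(Template('$known $unknown').safe_substitute(known='ok'))  # ok $unknown
```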
Arguments: config - Config object for this IIIF handler identifier - identifier of image/generator prefix - path prefix **args - other arguments ignored", "response": "def osd_page_handler(config=None, identifier=None, prefix=None, **args):\n \"\"\"Flask handler to produce HTML response for OpenSeadragon view of identifier.\n\n Arguments:\n config - Config object for this IIIF handler\n identifier - identifier of image/generator\n prefix - path prefix\n **args - other arguments ignored\n \"\"\"\n template_dir = os.path.join(os.path.dirname(__file__), 'templates')\n with open(os.path.join(template_dir, 'testserver_osd.html'), 'r') as f:\n template = f.read()\n d = dict(prefix=prefix,\n identifier=identifier,\n api_version=config.api_version,\n osd_version='2.0.0',\n osd_uri='/openseadragon200/openseadragon.min.js',\n osd_images_prefix='/openseadragon200/images',\n osd_height=500,\n osd_width=500,\n info_json_uri='info.json')\n return make_response(Template(template).safe_substitute(d))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses an HTTP Accept header and returns a list with pairs of media_type and q_value.", "response": "def parse_accept_header(accept):\n \"\"\"Parse an HTTP Accept header.\n\n Parses *accept*, returning a list of\n (media_type, media_params, q_value) tuples, ordered by q values.\n\n Adapted from \n \"\"\"\n result = []\n for media_range in accept.split(\",\"):\n parts = media_range.split(\";\")\n media_type = parts.pop(0).strip()\n media_params = []\n q = 1.0\n for part in parts:\n (key, value) = part.lstrip().split(\"=\", 1)\n if key == \"q\":\n q = float(value)\n else:\n media_params.append((key, value))\n result.append((media_type, tuple(media_params), q))\n result.sort(key=lambda x: -x[2])\n return result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nparse the Authorization header.", "response": "def parse_authorization_header(value):\n \"\"\"Parse the Authorization header.\n\n
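To see the q-value ordering of the Accept-header parser in action, the same logic is repeated here so the snippet runs standalone; the sample header value is illustrative.

```python
def parse_accept_header(accept):
    # Same logic as the parser above: split media ranges, pull out q
    # (default 1.0), collect other parameters, sort by descending q.
    result = []
    for media_range in accept.split(","):
        parts = media_range.split(";")
        media_type = parts.pop(0).strip()
        media_params = []
        q = 1.0
        for part in parts:
            (key, value) = part.lstrip().split("=", 1)
            if key == "q":
                q = float(value)
            else:
                media_params.append((key, value))
        result.append((media_type, tuple(media_params), q))
    result.sort(key=lambda x: -x[2])
    return result

parsed = parse_accept_header("text/html;q=0.5, image/jpeg, image/png;q=0.8")
print([(t, q) for (t, _params, q) in parsed])
# [('image/jpeg', 1.0), ('image/png', 0.8), ('text/html', 0.5)]
```

The unqualified `image/jpeg` gets the default q of 1.0 and sorts first, which is what `do_conneg` later relies on when picking the preferred supported type.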
Returns nothing on failure, opts hash on success with type='basic' or 'digest'\n and other params.\n\n \n \n \n \"\"\"\n try:\n (auth_type, auth_info) = value.split(' ', 1)\n auth_type = auth_type.lower()\n except ValueError:\n return\n if (auth_type == 'basic'):\n try:\n decoded = base64.b64decode(auth_info).decode(\n 'utf-8') # b64decode gives bytes in python3\n (username, password) = decoded.split(':', 1)\n except (ValueError, TypeError): # py3, py2\n return\n return {'type': 'basic', 'username': username, 'password': password}\n elif (auth_type == 'digest'):\n try:\n auth_map = parse_keqv_list(parse_http_list(auth_info))\n except ValueError:\n return\n logging.debug(auth_map)\n for key in 'username', 'realm', 'nonce', 'uri', 'response':\n if key not in auth_map:\n return\n if 'qop' in auth_map and ('nc' not in auth_map or 'cnonce' not in auth_map):\n return\n auth_map['type'] = 'digest'\n return auth_map\n else:\n # unknown auth type\n return"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing the Accept header and look for preferred type in supported list.", "response": "def do_conneg(accept, supported):\n \"\"\"Parse accept header and look for preferred type in supported list.\n\n Arguments:\n accept - HTTP Accept header\n supported - list of MIME types supported by the server\n\n Returns:\n supported MIME type with highest q value in request, else None.\n\n FIXME - Should replace this with negotiator2\n \"\"\"\n for result in parse_accept_header(accept):\n mime_type = result[0]\n if (mime_type in supported):\n return mime_type\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef setup_auth_paths(app, auth, prefix, params):\n base = urljoin('/', prefix + '/') # Must end in slash\n app.add_url_rule(base + 'login', prefix + 'login_handler',\n auth.login_handler, defaults=params)\n app.add_url_rule(base + 'logout', prefix + 'logout_handler',\n
auth.logout_handler, defaults=params)\n if (auth.client_id_handler):\n app.add_url_rule(base + 'client', prefix + 'client_id_handler',\n auth.client_id_handler, defaults=params)\n app.add_url_rule(base + 'token', prefix + 'access_token_handler',\n auth.access_token_handler, defaults=params)\n if (auth.home_handler):\n app.add_url_rule(base + 'home', prefix + 'home_handler',\n auth.home_handler, defaults=params)", "response": "Add URL rules for auth paths."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmake a prefix string based on configuration parameters.", "response": "def make_prefix(api_version, manipulator, auth_type):\n \"\"\"Make prefix string based on configuration parameters.\"\"\"\n prefix = \"%s_%s\" % (api_version, manipulator)\n if (auth_type and auth_type != 'none'):\n prefix += '_' + auth_type\n return prefix"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsplitting a comma separated option into a list.", "response": "def split_comma_argument(comma_sep_str):\n \"\"\"Split a comma separated option into a list.\"\"\"\n terms = []\n for term in comma_sep_str.split(','):\n if term:\n terms.append(term)\n return terms"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding shared configuration for the given object.", "response": "def add_shared_configs(p, base_dir=''):\n \"\"\"Add configargparse/argparse configs for shared arguments.\n\n Arguments:\n p - configargparse.ArgParser object\n base_dir - base directory for file/path defaults.\n \"\"\"\n p.add('--host', default='localhost',\n help=\"Service host\")\n p.add('--port', '-p', type=int, default=8000,\n help=\"Service port\")\n p.add('--app-host', default=None,\n help=\"Local application host for reverse proxy deployment, \"\n \"as opposed to service --host (must also specify --app-port)\")\n p.add('--app-port', type=int, default=None,\n help=\"Local application port for reverse proxy deployment, \"\n \"as opposed to
service --port (must also specify --app-host)\")\n p.add('--image-dir', '-d', default=os.path.join(base_dir, 'testimages'),\n help=\"Image file directory\")\n p.add('--generator-dir', default=os.path.join(base_dir, 'iiif/generators'),\n help=\"Generator directory for manipulator='gen'\")\n p.add('--tile-height', type=int, default=512,\n help=\"Tile height\")\n p.add('--tile-width', type=int, default=512,\n help=\"Tile width\")\n p.add('--gauth-client-secret', default=os.path.join(base_dir, 'client_secret.json'),\n help=\"Name of file with Google auth client secret\")\n p.add('--include-osd', action='store_true',\n help=\"Include a page with OpenSeadragon for each source\")\n p.add('--access-cookie-lifetime', type=int, default=3600,\n help=\"Set access cookie lifetime for authenticated access in seconds\")\n p.add('--access-token-lifetime', type=int, default=10,\n help=\"Set access token lifetime for authenticated access in seconds\")\n p.add('--config', is_config_file=True, default=None,\n help='Read config from given file path')\n p.add('--debug', action='store_true',\n help=\"Set debug mode for web application. 
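For reference, the two small helpers defined earlier (`make_prefix` and `split_comma_argument`) are easy to exercise standalone; their logic is restated here with illustrative values.

```python
def make_prefix(api_version, manipulator, auth_type):
    # "version_manipulator", plus "_authtype" unless auth is empty or 'none'.
    prefix = "%s_%s" % (api_version, manipulator)
    if auth_type and auth_type != 'none':
        prefix += '_' + auth_type
    return prefix

def split_comma_argument(comma_sep_str):
    # Empty terms (e.g. from a trailing comma) are dropped.
    return [term for term in comma_sep_str.split(',') if term]

print(make_prefix('2.1', 'pil', 'none'))         # 2.1_pil
print(make_prefix('2.1', 'pil', 'gauth'))        # 2.1_pil_gauth
print(split_comma_argument('1.0,1.1,2.0,2.1,'))  # ['1.0', '1.1', '2.0', '2.1']
```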
INSECURE!\")\n p.add('--verbose', '-v', action='store_true',\n help=\"Be verbose\")\n p.add('--quiet', '-q', action='store_true',\n help=\"Minimal output only\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_handler(app, config):\n auth = None\n if (config.auth_type is None or config.auth_type == 'none'):\n pass\n elif (config.auth_type == 'gauth'):\n from iiif.auth_google import IIIFAuthGoogle\n auth = IIIFAuthGoogle(client_secret_file=config.gauth_client_secret_file)\n elif (config.auth_type == 'basic'):\n from iiif.auth_basic import IIIFAuthBasic\n auth = IIIFAuthBasic()\n elif (config.auth_type == 'clickthrough'):\n from iiif.auth_clickthrough import IIIFAuthClickthrough\n auth = IIIFAuthClickthrough()\n elif (config.auth_type == 'kiosk'):\n from iiif.auth_kiosk import IIIFAuthKiosk\n auth = IIIFAuthKiosk()\n elif (config.auth_type == 'external'):\n from iiif.auth_external import IIIFAuthExternal\n auth = IIIFAuthExternal()\n else:\n logging.error(\"Unknown auth type %s, ignoring\" % (config.auth_type))\n return\n if (auth is not None):\n auth.access_cookie_lifetime = config.access_cookie_lifetime\n auth.access_token_lifetime = config.access_token_lifetime\n klass = None\n if (config.klass_name == 'pil'):\n from iiif.manipulator_pil import IIIFManipulatorPIL\n klass = IIIFManipulatorPIL\n elif (config.klass_name == 'netpbm'):\n from iiif.manipulator_netpbm import IIIFManipulatorNetpbm\n klass = IIIFManipulatorNetpbm\n elif (config.klass_name == 'dummy'):\n from iiif.manipulator import IIIFManipulator\n klass = IIIFManipulator\n elif (config.klass_name == 'gen'):\n from iiif.manipulator_gen import IIIFManipulatorGen\n klass = IIIFManipulatorGen\n else:\n logging.error(\"Unknown manipulator type %s, ignoring\" % (config.klass_name))\n return\n base = urljoin('/', config.prefix + '/') # ensure has trailing slash\n client_base = urljoin('/', config.client_prefix + '/') # ensure has trailing slash\n 
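The `urljoin('/', prefix + '/')` idiom that `add_handler` uses to root a prefix and guarantee a trailing slash can be checked in isolation with only the standard library; the sample prefixes are illustrative.

```python
from urllib.parse import urljoin

def rooted_with_slash(prefix):
    # Root the prefix at '/' and ensure a trailing slash, as add_handler does.
    return urljoin('/', prefix + '/')

print(rooted_with_slash('2.1_pil'))  # /2.1_pil/
print(rooted_with_slash('a/b'))      # /a/b/
```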
logging.warning(\"Installing %s IIIFManipulator at %s v%s %s\" %\n (config.klass_name, base, config.api_version, config.auth_type))\n params = dict(config=config, klass=klass, auth=auth, prefix=config.client_prefix)\n app.add_url_rule(base.rstrip('/'), 'prefix_index_page',\n prefix_index_page, defaults={'config': config})\n app.add_url_rule(base, 'prefix_index_page',\n prefix_index_page, defaults={'config': config})\n app.add_url_rule(base + '<string:identifier>/info.json',\n 'options_handler', options_handler, methods=['OPTIONS'])\n app.add_url_rule(base + '<string:identifier>/info.json',\n 'iiif_info_handler', iiif_info_handler, methods=['GET'], defaults=params)\n if (config.include_osd):\n app.add_url_rule(base + '<string:identifier>/osd.html',\n 'osd_page_handler', osd_page_handler, methods=['GET'], defaults=params)\n app.add_url_rule(base + '<string:identifier>/<path:path>',\n 'iiif_image_handler', iiif_image_handler, methods=['GET'], defaults=params)\n if (auth):\n setup_auth_paths(app, auth, config.client_prefix, params)\n # redirects to info.json must come after auth\n app.add_url_rule(base + '<string:identifier>',\n 'iiif_info_handler',\n redirect_to=client_base + '<string:identifier>/info.json')\n app.add_url_rule(base + '<string:identifier>/',\n 'iiif_info_handler',\n redirect_to=client_base + '<string:identifier>/info.json')\n return True", "response": "Adds a single IIIF Image API handler to the app."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef serve_static(filename=None, prefix='', basedir=None):\n if not basedir:\n basedir = os.path.join(os.path.dirname(__file__), 'third_party')\n return send_from_directory(os.path.join(basedir, prefix), filename)", "response": "Handler for static files: serve filename in basedir/prefix.\n\n If not specified then basedir defaults to the local third_party\n directory."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef write_pid_file():\n pidfile = os.path.basename(sys.argv[0])[:-3] + '.pid' # strip .py, add .pid\n with open(pidfile, 'w') as fh:\n
fh.write(\"%d\\n\" % os.getpid())\n fh.close()", "response": "Write a file with the PID of this server instance."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef setup_app(app, cfg):\n # Set up app_host and app_port in case that we are running\n # under reverse proxy setup, otherwise they default to\n # config.host and config.port.\n if (cfg.app_host and cfg.app_port):\n logging.warning(\"Reverse proxy for service at http://%s:%d/ ...\" % (cfg.host, cfg.port))\n app.wsgi_app = ReverseProxied(app.wsgi_app, cfg.host)\n elif (cfg.app_host or cfg.app_port):\n logging.critical(\"Must specify both app-host and app-port for reverse proxy configuration, aborting\")\n sys.exit(1)\n else:\n cfg.app_host = cfg.host\n cfg.app_port = cfg.port\n logging.warning(\"Setup server on http://%s:%d/ ...\" % (cfg.app_host, cfg.app_port))\n return(app)", "response": "Setup Flask app and handle reverse proxy setup if configured."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef server_and_prefix(self):\n return(host_port_prefix(self.config.host, self.config.port, self.prefix))", "response": "Server and prefix from config."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef file(self):\n file = None\n if (self.config.klass_name == 'gen'):\n for ext in ['.py']:\n file = os.path.join(\n self.config.generator_dir, self.identifier + ext)\n if (os.path.isfile(file)):\n return file\n else:\n for ext in ['.jpg', '.png', '.tif']:\n file = os.path.join(self.config.image_dir,\n self.identifier + ext)\n if (os.path.isfile(file)):\n return file\n # failed, show list of available identifiers as error\n available = \"\\n \".join(identifiers(self.config))\n raise IIIFError(code=404, parameter=\"identifier\",\n text=\"Image resource '\" + self.identifier + \"' not found. 
Local resources available:\" + available + \"\\n\")", "response": "Filename property for the source image for the current identifier."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds IIIF Compliance level header to response.", "response": "def add_compliance_header(self):\n \"\"\"Add IIIF Compliance level header to response.\"\"\"\n if (self.manipulator.compliance_uri is not None):\n self.headers['Link'] = '<' + \\\n self.manipulator.compliance_uri + '>;rel=\"profile\"'"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef make_response(self, content, code=200, headers=None):\n if headers:\n for header in headers:\n self.headers[header] = headers[header]\n return make_response(content, code, self.headers)", "response": "Wrapper around Flask. make_response which adds any local headers."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef image_information_response(self):\n dr = degraded_request(self.identifier)\n if (dr):\n self.logger.info(\"image_information: degraded %s -> %s\" %\n (self.identifier, dr))\n self.degraded = self.identifier\n self.identifier = dr\n else:\n self.logger.info(\"image_information: %s\" % (self.identifier))\n # get size\n self.manipulator.srcfile = self.file\n self.manipulator.do_first()\n # most of info.json comes from config, a few things specific to image\n info = {'tile_height': self.config.tile_height,\n 'tile_width': self.config.tile_width,\n 'scale_factors': self.config.scale_factors\n }\n # calculate scale factors if not hard-coded\n if ('auto' in self.config.scale_factors):\n info['scale_factors'] = self.manipulator.scale_factors(\n self.config.tile_width, self.config.tile_height)\n i = IIIFInfo(conf=info, api_version=self.api_version)\n i.server_and_prefix = self.server_and_prefix\n i.identifier = self.iiif.identifier\n i.width = self.manipulator.width\n i.height = 
self.manipulator.height\n if (self.api_version >= '2.0'):\n # FIXME - should come from manipulator\n i.qualities = [\"default\", \"color\", \"gray\"]\n else:\n # FIXME - should come from manipulator\n i.qualities = [\"native\", \"color\", \"gray\"]\n i.formats = [\"jpg\", \"png\"] # FIXME - should come from manipulator\n if (self.auth):\n self.auth.add_services(i)\n return self.make_response(i.as_json(),\n headers={\"Content-Type\": self.json_mime_type})", "response": "Parse image information request and create response."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef image_request_response(self, path):\n # Parse the request in path\n if (len(path) > 1024):\n raise IIIFError(code=414,\n text=\"URI Too Long: Max 1024 chars, got %d\\n\" % len(path))\n try:\n self.iiif.identifier = self.identifier\n self.iiif.parse_url(path)\n except IIIFRequestPathError as e:\n # Reraise as IIIFError with code=404 because we can't tell\n # whether there was an encoded slash in the identifier or\n # whether there was a bad number of path segments.\n raise IIIFError(code=404, text=e.text)\n except IIIFError as e:\n # Pass through\n raise e\n except Exception as e:\n # Something completely unexpected => 500\n raise IIIFError(code=500,\n text=\"Internal Server Error: unexpected exception parsing request (\" + str(e) + \")\")\n dr = degraded_request(self.identifier)\n if (dr):\n self.logger.info(\"image_request: degraded %s -> %s\" %\n (self.identifier, dr))\n self.degraded = self.identifier\n self.identifier = dr\n self.iiif.quality = 'gray'\n else:\n # Parsed request OK, attempt to fulfill\n self.logger.info(\"image_request: %s\" % (self.identifier))\n file = self.file\n self.manipulator.srcfile = file\n self.manipulator.do_first()\n if (self.api_version < '2.0' and\n self.iiif.format is None and\n 'Accept' in request.headers):\n # In 1.0 and 1.1 conneg was specified as an alternative to format, see:\n # 
http://iiif.io/api/image/1.0/#format\n # http://iiif.io/api/image/1.1/#parameters-format\n formats = {'image/jpeg': 'jpg', 'image/tiff': 'tif',\n 'image/png': 'png', 'image/gif': 'gif',\n 'image/jp2': 'jps', 'application/pdf': 'pdf'}\n accept = do_conneg(request.headers['Accept'], list(formats.keys()))\n # Ignore Accept header if not recognized, should this be an error\n # instead?\n if (accept in formats):\n self.iiif.format = formats[accept]\n (outfile, mime_type) = self.manipulator.derive(file, self.iiif)\n # FIXME - find efficient way to serve file with headers\n self.add_compliance_header()\n return send_file(outfile, mimetype=mime_type)", "response": "Parse the request and create response."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmaking response for an IIIFError e", "response": "def error_response(self, e):\n \"\"\"Make response for an IIIFError e.\n\n Also add compliance header.\n \"\"\"\n self.add_compliance_header()\n return self.make_response(*e.image_server_response(self.api_version))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef error_response(self, code, content=''):\n self.send_response(code)\n self.send_header('Content-Type', 'text/xml')\n self.add_compliance_header()\n self.end_headers()\n self.wfile.write(content)", "response": "Construct and send an error response."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef do_GET(self):\n self.compliance_uri = None\n self.iiif = IIIFRequest(baseurl='/')\n try:\n (of, mime_type) = self.do_GET_body()\n if (not of):\n raise IIIFError(\"Unexpected failure to open result image\")\n self.send_response(200, 'OK')\n if (mime_type is not None):\n self.send_header('Content-Type', mime_type)\n self.add_compliance_header()\n self.end_headers()\n while (1):\n buffer = of.read(8192)\n if (not buffer):\n break\n self.wfile.write(buffer)\n # Now cleanup\n 
self.manipulator.cleanup()\n except IIIFError as e:\n if (self.debug):\n e.text += \"\\nRequest parameters:\\n\" + str(self.iiif)\n self.error_response(e.code, str(e))\n except Exception as ue:\n # Anything else becomes a 500 Internal Server Error\n e = IIIFError(code=500, text=\"Something went wrong... %s ---- %s.\\n\" %\n (str(ue), traceback.format_exc()))\n if (self.debug):\n e.text += \"\\nRequest parameters:\\n\" + str(self.iiif)\n self.error_response(e.code, str(e))", "response": "Implement the HTTP GET method."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef do_GET_body(self):\n iiif = self.iiif\n if (len(self.path) > 1024):\n raise IIIFError(code=414,\n text=\"URI Too Long: Max 1024 chars, got %d\\n\" % len(self.path))\n try:\n # self.path has leading / then identifier/params...\n self.path = self.path.lstrip('/')\n sys.stderr.write(\"path = %s\" % (self.path))\n iiif.parse_url(self.path)\n except Exception as e:\n # Something completely unexpected => 500\n raise IIIFError(code=500,\n text=\"Internal Server Error: unexpected exception parsing request (\" + str(e) + \")\")\n # Now we have a full iiif request\n if (re.match('[\\w\\.\\-]+$', iiif.identifier)):\n file = os.path.join(TESTIMAGE_DIR, iiif.identifier)\n if (not os.path.isfile(file)):\n images_available = \"\"\n for image_file in os.listdir(TESTIMAGE_DIR):\n if (os.path.isfile(os.path.join(TESTIMAGE_DIR, image_file))):\n images_available += \" \" + image_file + \"\\n\"\n raise IIIFError(code=404, parameter=\"identifier\",\n text=\"Image resource '\" + iiif.identifier + \"' not found. Local image files available:\\n\" + images_available)\n else:\n raise IIIFError(code=404, parameter=\"identifier\",\n text=\"Image resource '\" + iiif.identifier + \"' not found. 
Only local test images and http: URIs for images are supported.\\n\")\n # Now know image is OK\n manipulator = IIIFRequestHandler.manipulator_class()\n # Stash manipulator object so we can cleanup after reading file\n self.manipulator = manipulator\n self.compliance_uri = manipulator.compliance_uri\n if (iiif.info):\n # get size\n manipulator.srcfile = file\n manipulator.do_first()\n # most of info.json comes from config, a few things\n # specific to image\n i = IIIFInfo()\n i.identifier = self.iiif.identifier\n i.width = manipulator.width\n i.height = manipulator.height\n import io\n return(io.StringIO(i.as_json()), \"application/json\")\n else:\n (outfile, mime_type) = manipulator.derive(file, iiif)\n return(open(outfile, 'r'), mime_type)", "response": "Create the GET body of the IIIF request."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset a random cookie prefix unless one is specified.", "response": "def set_cookie_prefix(self, cookie_prefix=None):\n \"\"\"Set a random cookie prefix unless one is specified.\n\n In order to run multiple demonstration auth services on the\n same server we need to have different cookie names for each\n auth domain. 
Unless cookie_prefix is set, generate a random\n one.\n \"\"\"\n if (cookie_prefix is None):\n self.cookie_prefix = \"%06d_\" % int(random.random() * 1000000)\n else:\n self.cookie_prefix = cookie_prefix"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_services(self, info):\n if (self.login_uri):\n svc = self.login_service_description()\n svcs = []\n if (self.logout_uri):\n svcs.append(self.logout_service_description())\n if (self.client_id_uri):\n svcs.append(self.client_id_service_description())\n if (self.access_token_uri):\n svcs.append(self.access_token_service_description())\n # Add one as direct child of service property, else array for >1\n if (len(svcs) == 1):\n svc['service'] = svcs[0]\n elif (len(svcs) > 1):\n svc['service'] = svcs\n info.add_service(svc)", "response": "Add auth services to an IIIFInfo object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the login service description.", "response": "def login_service_description(self):\n \"\"\"Login service description.\n\n The login service description _MUST_ include the token service\n description. The authentication pattern is indicated via the\n profile URI which is built using self.auth_pattern.\n \"\"\"\n label = 'Login to ' + self.name\n if (self.auth_type):\n label = label + ' (' + self.auth_type + ')'\n desc = {\"@id\": self.login_uri,\n \"profile\": self.profile_base + self.auth_pattern,\n \"label\": label}\n if (self.header):\n desc['header'] = self.header\n if (self.description):\n desc['description'] = self.description\n return desc"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the access token response structure.", "response": "def access_token_response(self, token, message_id=None):\n \"\"\"Access token response structure.\n\n Success if token is set, otherwise (None, empty string) give error\n response. 
If message_id is set then an extra messageId attribute is\n set in the response to handle postMessage() responses.\n \"\"\"\n if (token):\n data = {\"accessToken\": token,\n \"expiresIn\": self.access_token_lifetime}\n if (message_id):\n data['messageId'] = message_id\n else:\n data = {\"error\": \"client_unauthorized\",\n \"description\": \"No authorization details received\"}\n return data"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef scheme_host_port_prefix(self, scheme='http', host='host',\n port=None, prefix=None):\n \"\"\"Return URI composed of scheme, server, port, and prefix.\"\"\"\n uri = scheme + '://' + host\n if (port and not ((scheme == 'http' and port == 80) or\n (scheme == 'https' and port == 443))):\n uri += ':' + str(port)\n if (prefix):\n uri += '/' + prefix\n return uri", "response": "Return URI composed of scheme server port and prefix."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate a random cookie or token string not in container.", "response": "def _generate_random_string(self, container, length=20):\n \"\"\"Generate a random cookie or token string not in container.\n\n The cookie or token should be secure in the sense that it should not\n be likely to be able guess a value. 
Because it is not derived from\n anything else, there is no vulnerability of the token from computation,\n or possible leakage of information from the token.\n \"\"\"\n while True:\n s = ''.join([random.SystemRandom().choice(string.digits + string.ascii_letters)\n for n in range(length)])\n if (s not in container):\n break\n return s"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef access_cookie(self, account):\n if (self.account_allowed(account)):\n cookie = self._generate_random_string(self.access_cookies)\n self.access_cookies[cookie] = int(time.time())\n return cookie\n else:\n return None", "response": "Make and store an access cookie for a given account."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef access_cookie_valid(self, cookie, log_msg):\n if (cookie in self.access_cookies):\n age = int(time.time()) - self.access_cookies[cookie]\n if (age <= (self.access_cookie_lifetime + 1)):\n self.logger.info(log_msg + \" \" + cookie +\n \" ACCEPTED COOKIE (%ds old)\" % age)\n return True\n # Expired...\n self.logger.info(log_msg + \" \" + cookie +\n \" EXPIRED COOKIE (%ds old > %ds)\" %\n (age, self.access_cookie_lifetime))\n # Keep cookie for 2x lifetim in order to generate\n # helpful expired message\n if (age > (self.access_cookie_lifetime * 2)):\n del self.access_cookies[cookie]\n return False\n else:\n self.logger.info(log_msg + \" \" + cookie + \" REJECTED COOKIE\")\n return False", "response": "Check if the access cookie is valid."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate and store an access token for the access cookie. Return the access token.", "response": "def access_token(self, cookie):\n \"\"\"Make and store access token as proxy for the access cookie.\n\n Create an access token to act as a proxy for access cookie, add it to\n the dict of accepted access tokens with (cookie, current timestamp)\n as the value. 
Return the access token. Return None if cookie is not set.\n \"\"\"\n if (cookie):\n token = self._generate_random_string(self.access_tokens)\n self.access_tokens[token] = (cookie, int(time.time()))\n return token\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks token validity. Returns True if the token is valid False otherwise.", "response": "def access_token_valid(self, token, log_msg):\n \"\"\"Check token validity.\n\n Returns true if the token is valid. The set of allowed access tokens\n is stored in self.access_tokens.\n\n Uses log_msg as prefix to info level log message of acceptance or\n rejection.\n \"\"\"\n if (token in self.access_tokens):\n (cookie, issue_time) = self.access_tokens[token]\n age = int(time.time()) - issue_time\n if (age <= (self.access_token_lifetime + 1)):\n self.logger.info(log_msg + \" \" + token +\n \" ACCEPTED TOKEN (%ds old)\" % age)\n return True\n # Expired...\n self.logger.info(log_msg + \" \" + token +\n \" EXPIRED TOKEN (%ds old > %ds)\" %\n (age, self.access_token_lifetime))\n # Keep token for 2x lifetim in order to generate\n # helpful expired message\n if (age > (self.access_token_lifetime * 2)):\n del self.access_tokens[token]\n return False\n else:\n self.logger.info(log_msg + \" \" + token + \" REJECTED TOKEN\")\n return False"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef info_authn(self):\n authz_header = request.headers.get('Authorization', '[none]')\n if (not authz_header.startswith('Bearer ')):\n return False\n token = authz_header[7:]\n return self.access_token_valid(\n token, \"info_authn: Authorization header\")", "response": "Check if user is authenticated for info. 
json."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef image_authn(self):\n authn_cookie = request.cookies.get(\n self.access_cookie_name, default='[none]')\n return self.access_cookie_valid(authn_cookie, \"image_authn: auth cookie\")", "response": "Check if user is authenticated for image requests."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget access token based on cookie sent with this request. This handler deals with two cases: 1) Non-browser client (indicated by no messageId set in request) where the response is a simple JSON response. 2) Browser client (indicate by messageId setin request) where the request must be made from a an iFrame and the response is sent as JSON wrapped in HTML containing a postMessage() script that conveys the access token to the viewer.", "response": "def access_token_handler(self, **args):\n \"\"\"Get access token based on cookie sent with this request.\n\n This handler deals with two cases:\n\n 1) Non-browser client (indicated by no messageId set in request)\n where the response is a simple JSON response.\n\n 2) Browser client (indicate by messageId setin request) where\n the request must be made from a an iFrame and the response is\n sent as JSON wrapped in HTML containing a postMessage() script\n that conveys the access token to the viewer.\n \"\"\"\n message_id = request.args.get('messageId', default=None)\n origin = request.args.get('origin', default='unknown_origin')\n self.logger.info(\"access_token_handler: origin = \" + origin)\n account = request.cookies.get(self.account_cookie_name, default='')\n token = self.access_token(account)\n\n # Build JSON response\n data_str = json.dumps(self.access_token_response(token, message_id))\n ct = \"application/json\"\n\n # If message_id is set the wrap in HTML with postMessage JavaScript\n # for a browser client\n if (message_id is not None):\n data_str = \"\"\"\n\n
<html>\n<head>\n<title>postMessage ACCESS TOKEN %s</title>\n<script type=\"text/javascript\">\nwindow.parent.postMessage(%s, '%s');\n</script>\n</head>\n<body></body>\n</html>
\n\n\n\n\"\"\" % (token, data_str, origin)\n ct = \"text/html\"\n\n # Send response along with cookie\n response = make_response(data_str, 200, {'Content-Type': ct})\n if (token):\n self.logger.info(\n \"access_token_handler: setting access token = \" + token)\n # Set the cookie for the image content\n cookie = self.access_cookie(token)\n self.logger.info(\n \"access_token_handler: setting access cookie = \" + cookie)\n response.set_cookie(self.access_cookie_name, cookie)\n else:\n self.logger.info(\n \"access_token_handler: auth failed, sending error\")\n response.headers['Access-control-allow-origin'] = '*'\n return response"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef find_binaries(cls, tmpdir=None, shellsetup=None, pnmdir=None):\n cls.tmpdir = ('/tmp' if (tmpdir is None) else tmpdir)\n # Shell setup command (e.g set library path)\n cls.shellsetup = ('' if (shellsetup is None) else shellsetup)\n if (pnmdir is None):\n cls.pnmdir = '/usr/bin'\n for dir in ('/usr/local/bin', '/sw/bin'):\n if (os.path.isfile(os.path.join(dir, 'pngtopnm'))):\n cls.pnmdir = dir\n else:\n cls.pnmdir = pnmdir\n # Recklessly assume everything else under cls.pnmdir\n cls.pngtopnm = os.path.join(cls.pnmdir, 'pngtopnm')\n cls.jpegtopnm = os.path.join(cls.pnmdir, 'jpegtopnm')\n cls.pnmfile = os.path.join(cls.pnmdir, 'pnmfile')\n cls.pnmcut = os.path.join(cls.pnmdir, 'pnmcut')\n cls.pnmscale = os.path.join(cls.pnmdir, 'pnmscale')\n cls.pnmrotate = os.path.join(cls.pnmdir, 'pnmrotate')\n cls.pnmflip = os.path.join(cls.pnmdir, 'pnmflip')\n cls.pnmtopng = os.path.join(cls.pnmdir, 'pnmtopng')\n cls.ppmtopgm = os.path.join(cls.pnmdir, 'ppmtopgm')\n cls.pnmtotiff = os.path.join(cls.pnmdir, 'pnmtotiff')\n cls.pnmtojpeg = os.path.join(cls.pnmdir, 'pnmtojpeg')\n cls.pamditherbw = os.path.join(cls.pnmdir, 'pamditherbw')\n # Need djatoka to get jp2 output\n cls.djatoka_comp = '/Users/simeon/packages/adore-djatoka-1.1/bin/compress.sh'", 
"response": "Find all binary locations under the given directory."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating PNM file from input image file.", "response": "def do_first(self):\n \"\"\"Create PNM file from input image file.\"\"\"\n pid = os.getpid()\n self.basename = os.path.join(self.tmpdir, 'iiif_netpbm_' + str(pid))\n outfile = self.basename + '.pnm'\n # Convert source file to pnm\n filetype = self.file_type(self.srcfile)\n if (filetype == 'png'):\n if (self.shell_call(self.pngtopnm + ' ' + self.srcfile + ' > ' + outfile)):\n raise IIIFError(text=\"Oops... got error from pngtopnm.\")\n elif (filetype == 'jpg'):\n if (self.shell_call(self.jpegtopnm + ' ' + self.srcfile + ' > ' + outfile)):\n raise IIIFError(text=\"Oops... got error from jpegtopnm.\")\n else:\n raise IIIFError(code='501',\n text='bad input file format (only know how to read png/jpeg)')\n self.tmpfile = outfile\n # Get size\n (self.width, self.height) = self.image_size(self.tmpfile)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\napplying rotation and mirroring.", "response": "def do_rotation(self, mirror, rot):\n \"\"\"Apply rotation and/or mirroring.\"\"\"\n infile = self.tmpfile\n outfile = self.basename + '.rot'\n # NOTE: pnmrotate: angle must be between -90 and 90 and\n # rotations is CCW not CW per IIIF spec\n #\n # BUG in pnmrotate: +90 and -90 rotations the output image\n # size may be off. 
See for example a 1000x1000 image becoming\n # 1004x1000:\n #\n # simeon@RottenApple iiif>file testimages/67352ccc-d1b0-11e1-89ae-279075081939.png\n # testimages/67352ccc-d1b0-11e1-89ae-279075081939.png: PNG image data, 1000 x 1000, 8-bit/color RGB, non-interlaced\n # simeon@RottenApple iiif>cat testimages/67352ccc-d1b0-11e1-89ae-279075081939.png | pngtopnm | pnmrotate -90 | pnmtopng > a.png; file a.png; rm a.png\n # a.png: PNG image data, 1004 x 1000, 8-bit/color RGB, non-interlaced\n # simeon@RottenApple iiif>cat testimages/67352ccc-d1b0-11e1-89ae-279075081939.png | pngtopnm | pnmrotate 90 | pnmtopng > a.png; file a.png; rm a.png\n # a.png: PNG image data, 1004 x 1000, 8-bit/color RGB, non-interlaced\n #\n # WORKAROUND is to add a pnmscale for the 90degree case, some\n # simeon@RottenApple iiif>cat testimages/67352ccc-d1b0-11e1-89ae-279075081939.png | pngtopnm | pnmrotate -90| pnmscale -width 1000 -height 1000 | pnmtopng > a.png; file a.png; rm a.png\n # a.png: PNG image data, 1000 x 1000, 8-bit/color RGB, non-interlaced\n #\n # FIXME - add mirroring\n #\n if (rot == 0.0):\n # print \"rotation: no rotation\"\n self.tmpfile = infile\n elif (rot <= 90.0 or rot >= 270.0):\n if (rot >= 270.0):\n rot -= 360.0\n # print \"rotation: by %f degrees clockwise\" % (rot)\n if (self.shell_call('cat ' + infile + ' | ' + self.pnmrotate + ' -background=#FFF ' + str(-rot) + ' > ' + outfile)):\n raise IIIFError(\n text=\"Oops... got nonzero output from pnmrotate.\")\n self.tmpfile = outfile\n else:\n # Between 90 and 270 = flip and then -90 to 90\n rot -= 180.0\n # print \"rotation: by %f degrees clockwise\" % (rot)\n if (self.shell_call('cat ' + infile + ' | ' + self.pnmflip + ' -rotate180 | ' + self.pnmrotate + ' ' + str(-rot) + ' > ' + outfile)):\n raise IIIFError(\n text=\"Oops... 
got nonzero output from pnmrotate.\")\n self.tmpfile = outfile\n # Fixup size for 90s\n if (abs(rot % 180.0 - 90.0) < 0.001):\n outfile2 = self.basename + '.rot2'\n if (self.shell_call('cat ' + self.tmpfile + ' | ' + self.pnmscale + ' -width ' + str(self.height) + ' -height ' + str(self.width) + ' > ' + outfile2)):\n raise IIIFError(\n text=\"Oops... failed to fixup size after pnmrotate.\")\n self.tmpfile = outfile2"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef do_quality(self, quality):\n infile = self.tmpfile\n outfile = self.basename + '.col'\n # Quality (bit-depth):\n if (quality == 'grey' or quality == 'gray'):\n if (self.shell_call('cat ' + infile + ' | ' + self.ppmtopgm + ' > ' + outfile)):\n raise IIIFError(\n text=\"Oops... got nonzero output from ppmtopgm.\")\n self.tmpfile = outfile\n elif (quality == 'bitonal'):\n if (self.shell_call('cat ' + infile + ' | ' + self.ppmtopgm + ' | ' + self.pamditherbw + ' > ' + outfile)):\n raise IIIFError(\n text=\"Oops... got nonzero output from ppmtopgm.\")\n self.tmpfile = outfile\n elif ((quality == 'native' and self.api_version < '2.0') or\n (quality == 'default' and self.api_version >= '2.0') or\n quality == 'color'):\n self.tmpfile = infile\n else:\n raise IIIFError(code=400, parameter='quality',\n text=\"Unknown quality parameter value requested.\")", "response": "Apply value of quality parameter."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef file_type(self, file):\n try:\n magic_text = magic.from_file(file)\n if (isinstance(magic_text, bytes)):\n # In python2 and travis python3 (?!) 
decode to get unicode string\n magic_text = magic_text.decode('utf-8')\n except (TypeError, IOError):\n return\n if (re.search('PNG image data', magic_text)):\n return('png')\n elif (re.search('JPEG image data', magic_text)):\n return('jpg')\n # failed\n return", "response": "Use python - magic to determine file type."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef image_size(self, pnmfile):\n pout = os.popen(self.shellsetup + self.pnmfile + ' ' + pnmfile, 'r')\n pnmfileout = pout.read(200)\n pout.close()\n m = re.search(', (\\d+) by (\\d+) ', pnmfileout)\n if (m is None):\n raise IIIFError(\n text=\"Bad output from pnmfile when trying to get size.\")\n w = int(m.group(1))\n h = int(m.group(2))\n # print \"pnmfile output = %s\" % (pnmfileout)\n # print \"image size = %d,%d\" % (w,h)\n return(w, h)", "response": "Get width and height of an image pnmfile."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef shell_call(self, shellcmd):\n return(subprocess.call(self.shellsetup + shellcmd, shell=True))", "response": "Shell call with necessary setup first."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef cleanup(self):\n for file in glob.glob(self.basename + '*'):\n os.unlink(file)", "response": "Clean up any temporary files."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef image_server_response(self, api_version=None):\n headers = dict(self.headers)\n if (api_version < '1.1'):\n headers['Content-Type'] = 'text/xml'\n response = self.as_xml()\n else:\n headers['Content-Type'] = 'text/plain'\n response = self.as_txt()\n return(response, self.code, headers)", "response": "Return the response code and headers for image server error response."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function 
to\nreturn the error as an XML document.", "response": "def as_xml(self):\n \"\"\"XML representation of the error to be used in HTTP response.\n\n This XML format follows the IIIF Image API v1.0 specification,\n see \n \"\"\"\n # Build tree\n spacing = (\"\\n\" if (self.pretty_xml) else \"\")\n root = Element('error', {'xmlns': I3F_NS})\n root.text = spacing\n e_parameter = Element('parameter', {})\n e_parameter.text = self.parameter\n e_parameter.tail = spacing\n root.append(e_parameter)\n if (self.text):\n e_text = Element('text', {})\n e_text.text = self.text\n e_text.tail = spacing\n root.append(e_text)\n # Write out as XML document to return\n tree = ElementTree(root)\n xml_buf = io.BytesIO()\n if (sys.version_info < (2, 7)):\n tree.write(xml_buf, encoding='UTF-8')\n else:\n tree.write(xml_buf, encoding='UTF-8',\n xml_declaration=True, method='xml')\n return(xml_buf.getvalue().decode('utf-8'))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef as_txt(self):\n s = \"IIIF Image Server Error\\n\\n\"\n s += self.text if (self.text) else 'UNKNOWN_ERROR'\n s += \"\\n\\n\"\n if (self.parameter):\n s += \"parameter=%s\\n\" % self.parameter\n if (self.code):\n s += \"code=%d\\n\\n\" % self.code\n for header in sorted(self.headers):\n s += \"header %s=%s\\n\" % (header, self.headers[header])\n return s", "response": "Return a string representation of the error response."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef login_handler(self, config=None, prefix=None, **args):\n params = {\n 'response_type': 'code',\n 'client_id': self.google_api_client_id,\n 'redirect_uri': self.scheme_host_port_prefix(\n 'http', config.host, config.port, prefix) + '/home',\n 'scope': self.google_api_scope,\n 'state': self.request_args_get('next', default=''),\n }\n url = self.google_oauth2_url + 'auth?' 
+ urlencode(params)\n return self.login_handler_redirect(url)", "response": "OAuth starts here redirect user to Google."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmake request to Google API to get token.", "response": "def google_get_token(self, config, prefix):\n \"\"\"Make request to Google API to get token.\"\"\"\n params = {\n 'code': self.request_args_get(\n 'code',\n default=''),\n 'client_id': self.google_api_client_id,\n 'client_secret': self.google_api_client_secret,\n 'redirect_uri': self.scheme_host_port_prefix(\n 'http', config.host, config.port, prefix) + '/home',\n 'grant_type': 'authorization_code',\n }\n payload = urlencode(params).encode('utf-8')\n url = self.google_oauth2_url + 'token'\n req = Request(url, payload)\n json_str = urlopen(req).read()\n return json.loads(json_str.decode('utf-8'))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmakes request to Google API to get profile data for the user.", "response": "def google_get_data(self, config, response):\n \"\"\"Make request to Google API to get profile data for the user.\"\"\"\n params = {\n 'access_token': response['access_token'],\n }\n payload = urlencode(params)\n url = self.google_api_url + 'userinfo?' 
+ payload\n req = Request(url)\n json_str = urlopen(req).read()\n return json.loads(json_str.decode('utf-8'))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef compliance_uri(self):\n if (self.api_version == '1.0'):\n uri_pattern = r'http://library.stanford.edu/iiif/image-api/compliance.html#level%d'\n elif (self.api_version == '1.1'):\n uri_pattern = r'http://library.stanford.edu/iiif/image-api/1.1/compliance.html#level%d'\n elif (self.api_version == '2.0' or\n self.api_version == '2.1'):\n uri_pattern = r'http://iiif.io/api/image/2/level%d.json'\n else:\n return\n if (self.compliance_level is None):\n return\n return(uri_pattern % self.compliance_level)", "response": "Returns the compliance URI based on api_version and compliance_level."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef derive(self, srcfile=None, request=None, outfile=None):\n # set if specified\n if (srcfile is not None):\n self.srcfile = srcfile\n if (request is not None):\n self.request = request\n if (outfile is not None):\n self.outfile = outfile\n if (self.outfile is not None):\n # create path to output dir if necessary\n dir = os.path.dirname(self.outfile)\n if (not os.path.exists(dir)):\n os.makedirs(dir)\n #\n self.do_first()\n (x, y, w, h) = self.region_to_apply()\n self.do_region(x, y, w, h)\n (w, h) = self.size_to_apply()\n self.do_size(w, h)\n (mirror, rot) = self.rotation_to_apply(no_mirror=True)\n self.do_rotation(mirror, rot)\n (quality) = self.quality_to_apply()\n self.do_quality(quality)\n self.do_format(self.request.format)\n self.do_last()\n return(self.outfile, self.mime_type)", "response": "Derive an image from one source image file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef do_region(self, x, y, w, h):\n if (x is not None):\n raise IIIFError(code=501, parameter=\"region\",\n text=\"Null manipulator supports only 
region=/full/.\")", "response": "Null implementation of region selection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef do_rotation(self, mirror, rot):\n if (mirror):\n raise IIIFError(code=501, parameter=\"rotation\",\n text=\"Null manipulator does not support mirroring.\")\n if (rot != 0.0):\n raise IIIFError(code=501, parameter=\"rotation\",\n text=\"Null manipulator supports only rotation=(0|360).\")", "response": "Null implementation of mirror and/or rotate."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef do_quality(self, quality):\n if (self.api_version >= '2.0'):\n if (quality != \"default\"):\n raise IIIFError(code=501, parameter=\"default\",\n text=\"Null manipulator supports only quality=default.\")\n else: # versions 1.0 and 1.1\n if (quality != \"native\"):\n raise IIIFError(code=501, parameter=\"native\",\n text=\"Null manipulator supports only quality=native.\")", "response": "Null implementation of quality."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nnulls implementation of format selection.", "response": "def do_format(self, format):\n \"\"\"Null implementation of format selection.\n\n This is the last step, this null implementation does not accept any\n specification of a format because we don't even know what the input\n format is.\n \"\"\"\n if (format is not None):\n raise IIIFError(code=415, parameter=\"format\",\n text=\"Null manipulator does not support specification of output format.\")\n #\n if (self.outfile is None):\n self.outfile = self.srcfile\n else:\n try:\n shutil.copyfile(self.srcfile, self.outfile)\n except IOError as e:\n raise IIIFError(code=500,\n text=\"Failed to copy file (%s).\" % (str(e)))\n self.mime_type = None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
region_to_apply(self):\n if (self.request.region_full or\n (self.request.region_pct and\n self.request.region_xywh == (0, 0, 100, 100))):\n return(None, None, None, None)\n # Cannot do anything else unless we know size (in self.width and\n # self.height)\n if (self.width <= 0 or self.height <= 0):\n raise IIIFError(code=501, parameter='region',\n text=\"Region parameters require knowledge of image size which is not implemented.\")\n if (self.request.region_square):\n if (self.width <= self.height):\n y_offset = (self.height - self.width) / 2\n return(0, y_offset, self.width, self.width)\n else: # self.width>self.height\n x_offset = (self.width - self.height) / 2\n return(x_offset, 0, self.height, self.height)\n # pct or explicit pixel sizes\n pct = self.request.region_pct\n (x, y, w, h) = self.request.region_xywh\n # Convert pct to pixels based on actual size\n if (pct):\n x = int((x / 100.0) * self.width + 0.5)\n y = int((y / 100.0) * self.height + 0.5)\n w = int((w / 100.0) * self.width + 0.5)\n h = int((h / 100.0) * self.height + 0.5)\n # Check if boundary extends beyond image and truncate\n if ((x + w) > self.width):\n w = self.width - x\n if ((y + h) > self.height):\n h = self.height - y\n # Final check to see if we have the whole image\n if (w == 0 or h == 0):\n raise IIIFZeroSizeError(code=400, parameter='region',\n text=\"Region parameters would result in zero size result image.\")\n if (x == 0 and y == 0 and w == self.width and h == self.height):\n return(None, None, None, None)\n return(x, y, w, h)", "response": "Return the x y w h parameters to extract given image width and height."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncalculating size of image scaled using size parameters.", "response": "def size_to_apply(self):\n \"\"\"Calculate size of image scaled using size parameters.\n\n Assumes current image width and height are available in self.width and\n self.height, and self.request is IIIFRequest object.\n\n Formats 
are: w, ,h w,h pct:p !w,h full max\n\n Returns (None,None) if no scaling is required.\n\n If max is requested and neither max_area or max_width are\n specified then this is the same as full. Otherwise the limits\n are used to determine the size.\n \"\"\"\n if (self.request.size_full or self.request.size_pct == 100.0):\n # full size\n return(None, None)\n # Not trivially full size, look at possibilities in turn\n w = self.width\n h = self.height\n if (self.request.size_max):\n # use size limits if present, else full\n if (self.max_area and self.max_area < (w * h)):\n scale = (float(self.max_area) / float(w * h)) ** 0.5\n w = int(w * scale + 0.5)\n h = int(h * scale + 0.5)\n if (self.max_width):\n max_height = self.max_height if self.max_height is not None else self.max_width\n if (self.max_width < w):\n # calculate wrt original width, height rather than\n # w, h to avoid compounding rounding issues\n scale = float(self.max_width) / float(self.width)\n w = int(self.width * scale + 0.5)\n h = int(self.height * scale + 0.5)\n if (max_height < h):\n scale = float(max_height) / float(self.height)\n w = int(self.width * scale + 0.5)\n h = int(self.height * scale + 0.5)\n elif (self.request.size_pct is not None):\n w = int(self.width * self.request.size_pct / 100.0 + 0.5)\n h = int(self.height * self.request.size_pct / 100.0 + 0.5)\n elif (self.request.size_bang):\n # Have \"!w,h\" form\n (mw, mh) = self.request.size_wh\n # Pick smaller fraction and then work from that...\n frac = min((float(mw) / float(self.width)),\n (float(mh) / float(self.height)))\n w = int(self.width * frac + 0.5)\n h = int(self.height * frac + 0.5)\n else:\n # Must now be \"w,h\", \"w,\" or \",h\". 
If both are specified then this will be the size,\n # otherwise find other to keep aspect ratio\n (w, h) = self.request.size_wh\n if (w is None):\n w = int(self.width * h / self.height + 0.5)\n elif (h is None):\n h = int(self.height * w / self.width + 0.5)\n # Now have w,h, sanity check and return\n if (w == 0 or h == 0):\n raise IIIFZeroSizeError(\n code=400, parameter='size',\n text=\"Size parameter would result in zero size result image (%d,%d).\" % (w, h))\n # Below would be test for scaling up image size, this is allowed by spec\n # if ( w>self.width or h>self.height ):\n # raise IIIFError(code=400,parameter='size',\n # text=\"Size requests scaling up image to larger than original.\")\n if (w == self.width and h == self.height):\n return(None, None)\n return(w, h)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncheck and interpret rotation.", "response": "def rotation_to_apply(self, only90s=False, no_mirror=False):\n \"\"\"Check and interpret rotation.\n\n Returns a truth value as to whether to mirror, and a floating point\n number 0 <= angle < 360 (degrees).\n \"\"\"\n rotation = self.request.rotation_deg\n if (no_mirror and self.request.rotation_mirror):\n raise IIIFError(code=501, parameter=\"rotation\",\n text=\"This implementation does not support mirroring.\")\n if (only90s and (rotation != 0.0 and rotation != 90.0 and\n rotation != 180.0 and rotation != 270.0)):\n raise IIIFError(code=501, parameter=\"rotation\",\n text=\"This implementation supports only 0,90,180,270 degree rotations.\")\n return(self.request.rotation_mirror, rotation)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef quality_to_apply(self):\n if (self.request.quality is None):\n if (self.api_version <= '1.1'):\n return('native')\n else:\n return('default')\n return(self.request.quality)", "response": "Value of quality parameter to use in processing request."} {"SOURCE": "codesearchnet", "instruction": "How 
would you implement a function in Python 3 that\nreturns a set of scale factors for given tile and window size.", "response": "def scale_factors(self, tile_width, tile_height=None):\n \"\"\"Return a set of scale factors for given tile and window size.\n\n Gives a set of scale factors, starting at 1, and in multiples\n of 2. Largest scale_factor is so that one tile will cover the\n entire image (self.width,self.height).\n\n If tile_height is not specified then tiles are assumed to be\n squares of tile_width pixels.\n \"\"\"\n if (not tile_height):\n tile_height = tile_width\n sf = 1\n scale_factors = [sf]\n for j in range(30): # limit of 2^30, should be enough!\n sf = 2 * sf\n if (tile_width * sf > self.width and\n tile_height * sf > self.height):\n break\n scale_factors.append(sf)\n return scale_factors"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef pixel(self, x, y, size=None):\n if (size is None):\n size = self.sz\n # Have we got to the smallest element?\n if (size <= 3):\n if (_not_diagonal(x, y)):\n return None\n else:\n return (0, 0, 0)\n divisor = size // 3\n if (_not_diagonal(x // divisor, y // divisor)):\n return None\n return self.pixel(x % divisor, y % divisor, divisor)", "response": "Return color for a pixel."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncoloring of pixel that reached limit after n iterations.", "response": "def color(self, n):\n \"\"\"Color of pixel that reached limit after n iterations.\n\n Returns a color tuple for use with PIL, tending toward\n red as we tend toward self.max_iter iterations.\n \"\"\"\n red = int(n * self.shade_factor)\n if (red > 255):\n red = 255\n return (red, 50, 100)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef mpixel(self, z, n=0):\n z = z * z + self.c\n if (abs(z) > 2.0):\n return self.color(n)\n n += 1\n if (n > self.max_iter):\n return None\n return 
self.mpixel(z, n)", "response": "Iteration in Mandelbrot coordinate z."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef pixel(self, ix, iy):\n x = (ix - self.xoffset + 0.5) / self.scale\n y = (iy - self.yoffset + 0.5) / self.scale\n z = complex(x, y)\n self.set_c(z)\n return self.mpixel(z)", "response": "Return the color at the specified pixel."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses extra request parameters to IIIFRequest object.", "response": "def parse_extra(self, extra):\n \"\"\"Parse extra request parameters to IIIFRequest object.\"\"\"\n if extra.startswith('/'):\n extra = extra[1:]\n r = IIIFRequest(identifier='dummy',\n api_version=self.api_version)\n r.parse_url(extra)\n if (r.info):\n raise IIIFStaticError(\"Attempt to specify Image Information in extras.\")\n return(r)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget appropriate portion of config.", "response": "def get_osd_config(self, osd_version):\n \"\"\"Select appropriate portion of config.\n\n If the version requested is not supported then raise an exception with\n a helpful error message listing the versions supported.\n \"\"\"\n if (osd_version in self.osd_config):\n return(self.osd_config[osd_version])\n else:\n raise IIIFStaticError(\"OpenSeadragon version %s not supported, available versions are %s\" %\n (osd_version, ', '.join(sorted(self.osd_config.keys()))))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef generate(self, src=None, identifier=None):\n self.src = src\n self.identifier = identifier\n # Get image details and calculate tiles\n im = self.manipulator_klass()\n im.srcfile = self.src\n im.set_max_image_pixels(self.max_image_pixels)\n im.do_first()\n width = im.width\n height = im.height\n scale_factors = im.scale_factors(self.tilesize)\n # Setup destination and IIIF 
identifier\n self.setup_destination()\n # Write out images\n for (region, size) in static_partial_tile_sizes(width, height, self.tilesize, scale_factors):\n self.generate_tile(region, size)\n sizes = []\n for size in static_full_sizes(width, height, self.tilesize):\n # See https://github.com/zimeon/iiif/issues/9\n sizes.append({'width': size[0], 'height': size[1]})\n self.generate_tile('full', size)\n for request in self.extras:\n request.identifier = self.identifier\n if (request.is_scaled_full_image()):\n sizes.append({'width': request.size_wh[0],\n 'height': request.size_wh[1]})\n self.generate_file(request)\n # Write info.json\n qualities = ['default'] if (self.api_version > '1.1') else ['native']\n info = IIIFInfo(level=0, server_and_prefix=self.prefix, identifier=self.identifier,\n width=width, height=height, scale_factors=scale_factors,\n tile_width=self.tilesize, tile_height=self.tilesize,\n formats=['jpg'], qualities=qualities, sizes=sizes,\n api_version=self.api_version)\n json_file = os.path.join(self.dst, self.identifier, 'info.json')\n if (self.dryrun):\n self.logger.warning(\n \"dryrun mode, would write the following files:\")\n self.logger.warning(\"%s / %s/%s\" %\n (self.dst, self.identifier, 'info.json'))\n else:\n with open(json_file, 'w') as f:\n f.write(info.as_json())\n f.close()\n self.logger.info(\"%s / %s/%s\" %\n (self.dst, self.identifier, 'info.json'))\n self.logger.debug(\"Written %s\" % (json_file))", "response": "Generate static files for one source image."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef generate_tile(self, region, size):\n r = IIIFRequest(identifier=self.identifier,\n api_version=self.api_version)\n if (region == 'full'):\n r.region_full = True\n else:\n r.region_xywh = region # [rx,ry,rw,rh]\n r.size_wh = size # [sw,sh]\n r.format = 'jpg'\n self.generate_file(r, True)", "response": "Generate one tile for this given region size of this image."} 
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef generate_file(self, r, undistorted=False):\n use_canonical = self.get_osd_config(self.osd_version)['use_canonical']\n height = None\n if (undistorted and use_canonical):\n height = r.size_wh[1]\n r.size_wh = [r.size_wh[0], None] # [sw,sh] -> [sw,]\n path = r.url()\n # Generate...\n if (self.dryrun):\n self.logger.info(\"%s / %s\" % (self.dst, path))\n else:\n m = self.manipulator_klass(api_version=self.api_version)\n try:\n m.derive(srcfile=self.src, request=r,\n outfile=os.path.join(self.dst, path))\n self.logger.info(\"%s / %s\" % (self.dst, path))\n except IIIFZeroSizeError:\n self.logger.info(\"%s / %s - zero size, skipped\" %\n (self.dst, path))\n return # done if zero size\n if (r.region_full and use_canonical and height is not None):\n # In v2.0 of the spec, the canonical URI form `w,` for scaled\n # images of the full region was introduced. This is somewhat at\n # odds with the requirement for `w,h` specified in `sizes` to\n # be available, and has problems of precision with tall narrow\n # images. Hopefully will be fixed in 3.0 but for now symlink\n # the `w,h` form to the `w,` dirs so that might use the specified\n # `w,h` also work. 
See\n # \n #\n # FIXME - This is ugly because we duplicate code in\n # iiif.request.url to construct the partial URL\n region_dir = os.path.join(r.quote(r.identifier), \"full\")\n wh_dir = \"%d,%d\" % (r.size_wh[0], height)\n wh_path = os.path.join(region_dir, wh_dir)\n wc_dir = \"%d,\" % (r.size_wh[0])\n wc_path = os.path.join(region_dir, wc_dir)\n if (not self.dryrun):\n ln = os.path.join(self.dst, wh_path)\n if (os.path.exists(ln)):\n os.remove(ln)\n os.symlink(wc_dir, ln)\n self.logger.info(\"%s / %s -> %s\" % (self.dst, wh_path, wc_path))", "response": "Generate file for IIIFRequest object r."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nwriting HTML test page using OpenSeadragon.", "response": "def write_html(self, html_dir='/tmp', include_osd=False,\n osd_width=500, osd_height=500):\n \"\"\"Write HTML test page using OpenSeadragon for the tiles generated.\n\n Assumes that the generate(..) method has already been called to set up\n identifier etc. Parameters:\n html_dir - output directory for HTML files, will be created if it\n does not already exist\n include_osd - true to include OpenSeadragon code\n osd_width - width of OpenSeadragon pane in pixels\n osd_height - height of OpenSeadragon pane in pixels\n \"\"\"\n osd_config = self.get_osd_config(self.osd_version)\n osd_base = osd_config['base']\n osd_dir = osd_config['dir'] # relative to base\n osd_js = os.path.join(osd_dir, osd_config['js'])\n osd_images = os.path.join(osd_dir, osd_config['images'])\n if (os.path.isdir(html_dir)):\n # Exists, fine\n pass\n elif (os.path.isfile(html_dir)):\n raise IIIFStaticError(\n \"Can't write to directory %s: a file of that name exists\" % html_dir)\n else:\n os.makedirs(html_dir)\n self.logger.info(\"Writing HTML to %s\" % (html_dir))\n with open(os.path.join(self.template_dir, 'static_osd.html'), 'r') as f:\n template = f.read()\n outfile = self.identifier + '.html'\n outpath = os.path.join(html_dir, outfile)\n with open(outpath, 'w') as 
f:\n info_json_uri = '/'.join([self.identifier, 'info.json'])\n if (self.prefix):\n info_json_uri = '/'.join([self.prefix, info_json_uri])\n d = dict(identifier=self.identifier,\n api_version=self.api_version,\n osd_version=self.osd_version,\n osd_uri=osd_js,\n osd_images_prefix=osd_images,\n osd_height=osd_width,\n osd_width=osd_height,\n info_json_uri=info_json_uri)\n f.write(Template(template).safe_substitute(d))\n self.logger.info(\"%s / %s\" % (html_dir, outfile))\n # Do we want to copy OSD in there too? If so, do it only if\n # we haven't already\n if (include_osd):\n if (self.copied_osd):\n self.logger.info(\"OpenSeadragon already copied\")\n else:\n # Make directory, copy JavaScript and icons (from osd_images)\n osd_path = os.path.join(html_dir, osd_dir)\n if (not os.path.isdir(osd_path)):\n os.makedirs(osd_path)\n shutil.copyfile(os.path.join(osd_base, osd_js),\n os.path.join(html_dir, osd_js))\n self.logger.info(\"%s / %s\" % (html_dir, osd_js))\n osd_images_path = os.path.join(html_dir, osd_images)\n if (os.path.isdir(osd_images_path)):\n self.logger.warning(\n \"OpenSeadragon images directory (%s) already exists, skipping\"\n % osd_images_path)\n else:\n shutil.copytree(os.path.join(osd_base, osd_images),\n osd_images_path)\n self.logger.info(\"%s / %s/*\" % (html_dir, osd_images))\n self.copied_osd = True"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_value(key, obj, default=missing):\n if isinstance(key, int):\n return _get_value_for_key(key, obj, default)\n return _get_value_for_keys(key.split('.'), obj, default)", "response": "Helper for pulling a value off various types of objects"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _handle_load_unknown(self, data, original):\n for key, val in original.items():\n if key not in self.fields:\n data[key] = val\n return data", "response": "Preserve unknown keys during 
deserialization."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _handle_dump_unknown(self, data, original):\n for key, val in original.items():\n if key not in self.fields:\n data[key] = val\n return data", "response": "Preserve unknown keys during serialization."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef validate_headers(spec, data):\n validated_data = {\n spec.VERSION: data[spec.VERSION],\n spec.KIND: data[spec.KIND],\n }\n\n if data.get(spec.LOGGING):\n validated_data[spec.LOGGING] = LoggingConfig.from_dict(\n data[spec.LOGGING])\n\n if data.get(spec.TAGS):\n validated_data[spec.TAGS] = data[spec.TAGS]\n\n if data.get(spec.HP_TUNING):\n validated_data[spec.HP_TUNING] = HPTuningConfig.from_dict(\n data[spec.HP_TUNING])\n\n return validated_data", "response": "Validates the headers data and creates the config objects"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validate(spec, data):\n data = copy.deepcopy(data)\n validated_data = {}\n\n def validate_keys(section, config, section_data):\n if not isinstance(section_data, dict) or section == spec.MODEL:\n return\n\n extra_args = [key for key in section_data.keys() if key not in config.SCHEMA().fields]\n if extra_args:\n raise PolyaxonfileError('Extra arguments passed for `{}`: {}'.format(\n section, extra_args))\n\n def add_validated_section(section, config):\n if data.get(section):\n section_data = data[section]\n validate_keys(section=section, config=config, section_data=section_data)\n validated_data[section] = config.from_dict(section_data)\n\n add_validated_section(spec.ENVIRONMENT, spec.ENVIRONMENT_CONFIG)\n add_validated_section(spec.BUILD, BuildConfig)\n add_validated_section(spec.RUN, RunConfig)\n add_validated_section(spec.MODEL, ModelConfig)\n add_validated_section(spec.TRAIN, TrainConfig)\n 
add_validated_section(spec.EVAL, EvalConfig)\n\n return validated_data", "response": "Validates the data and creates the config objects"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nvalidating distributed experiment replicas", "response": "def validate_replicas(self, data):\n \"\"\"Validate distributed experiment\"\"\"\n environment = data.get('environment')\n if environment and environment.replicas:\n validate_replicas(data.get('framework'), environment.replicas)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_experiment_spec(self, matrix_declaration):\n parsed_data = Parser.parse(self, self._data, matrix_declaration)\n del parsed_data[self.HP_TUNING]\n validator.validate(spec=self, data=parsed_data)\n return ExperimentSpecification(values=[parsed_data, {'kind': self._EXPERIMENT}])", "response": "Returns an experiment spec for this group spec and the given matrix declaration."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_build_spec(self):\n if BaseSpecification.BUILD not in self._data:\n return None\n return BuildConfig.from_dict(self._data[BaseSpecification.BUILD])", "response": "Returns a BuildConfig object for this group spec."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate_matrix(self, data):\n is_grid_search = (\n data.get('grid_search') is not None or\n (data.get('grid_search') is None and\n data.get('random_search') is None and\n data.get('hyperband') is None and\n data.get('bo') is None)\n )\n is_bo = data.get('bo') is not None\n validate_matrix(data.get('matrix'), is_grid_search=is_grid_search, is_bo=is_bo)", "response": "Validates matrix data and creates the config objects"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef anchor(self):\n 
if not self._anchor:\n self._anchor = str(uuid.uuid4())\n return self._anchor", "response": "Generate a html anchor name"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef html(self):\n cases = [x.html() for x in self.cases]\n\n return \"\"\"\n
\n \n
\n
Test Class: {name}
\n
\n {cases}\n
\n
\n
\n \"\"\".format(anchor=self.anchor(),\n name=tag.text(self.name),\n count=len(cases),\n cases=\"\".join(cases))", "response": "Render this test class as html"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef html(self):\n return \"\"\"\n
{name}
\n
{value}
\n \"\"\".format(name=tag.text(self.name), value=tag.text(self.value))", "response": "Render those properties as html"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef html(self):\n failure = \"\"\n skipped = None\n stdout = tag.text(self.stdout)\n stderr = tag.text(self.stderr)\n\n if self.skipped:\n skipped = \"\"\"\n
\n
Skipped: {msg}
\n
{skip}
\n
\n \"\"\".format(msg=tag.text(self.skipped_msg),\n skip=tag.text(self.skipped))\n\n if self.failed():\n failure = \"\"\"\n
\n
Failed: {msg}
\n
{fail}
\n
\n \"\"\".format(msg=tag.text(self.failure_msg),\n fail=tag.text(self.failure))\n\n properties = [x.html() for x in self.properties]\n\n return \"\"\"\n \n
\n
\n {testname}
\n {testclassname}
\n Time Taken: {duration}s\n
\n {skipped}\n {failure}\n
\n {properties}\n
Stdout
\n
{stdout}
\n
\n
Stderr
\n
{stderr}
\n
\n
\n \"\"\".format(anchor=self.anchor(),\n testname=self.name,\n testclassname=self.testclass.name,\n duration=self.duration,\n failure=failure,\n skipped=skipped,\n properties=\"\".join(properties),\n stdout=stdout,\n stderr=stderr)", "response": "Render this test case as HTML."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning all testcases in this instance", "response": "def all(self):\n \"\"\"\n Return all testcases\n :return:\n \"\"\"\n tests = list()\n for testclass in self.classes:\n tests.extend(self.classes[testclass].cases)\n return tests"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn all the passing testcases", "response": "def passed(self):\n \"\"\"\n Return all the passing testcases\n :return:\n \"\"\"\n return [test for test in self.all() if not test.failed() and not test.skipped()]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef toc(self):\n fails = \"\"\n skips = \"\"\n\n if len(self.failed()):\n faillist = list()\n for failure in self.failed():\n faillist.append(\n \"\"\"\n
  • \n {name}\n
  • \n \"\"\".format(anchor=failure.anchor(),\n name=tag.text(\n failure.testclass.name + '.' + failure.name)))\n\n fails = \"\"\"\n
  • Failures\n
      {faillist}
    \n
  • \n \"\"\".format(faillist=\"\".join(faillist))\n\n if len(self.skipped()):\n skiplist = list()\n for skipped in self.skipped():\n skiplist.append(\n \"\"\"\n
  • \n {name}\n
  • \n \"\"\".format(anchor=skipped.anchor(),\n name=tag.text(\n skipped.testclass.name + skipped.name)))\n\n skips = \"\"\"\n
  • Skipped\n
      {skiplist}
    \n
  • \n \"\"\".format(skiplist=\"\".join(skiplist))\n\n classlist = list()\n for classname in self.classes:\n testclass = self.classes[classname]\n\n cases = list()\n for testcase in testclass.cases:\n if \"pkcs11\" in testcase.name:\n assert True\n\n cases.append(\n \"\"\"\n
  • \n {name}\n
  • \n \"\"\".format(anchor=testcase.anchor(),\n name=tag.text(testcase.name)))\n\n classlist.append(\"\"\"\n
  • \n {name}\n
      \n {cases}\n
    \n
  • \n \"\"\".format(anchor=testclass.anchor(),\n name=testclass.name,\n cases=\"\".join(cases)))\n\n return \"\"\"\n
      \n {failed}\n {skips}\n
    • All Test Classes\n
        {classlist}
      \n
    • \n
    \n \"\"\".format(failed=fails,\n skips=skips,\n classlist=\"\".join(classlist))", "response": "Return a html table of contents of the current object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef html(self):\n classes = list()\n\n package = \"\"\n if self.package is not None:\n package = \"Package: \" + self.package + \"
    \"\n\n for classname in self.classes:\n classes.append(self.classes[classname].html())\n\n errs = \"\"\n for error in self.errors:\n if not len(errs):\n errs += \"Errors\"\n for part in [\"type\", \"message\", \"text\"]:\n if part in error:\n errs += \"{}
    {}
    \".format(\n part,\n tag.text(error[part]))\n\n stdio = \"\"\n if self.stderr or self.stdout:\n stdio += \"Output\"\n if self.stderr:\n stdio += \"Stderr
    {}
    \".format(\n tag.text(self.stderr))\n if self.stdout:\n stdio += \"Stdout
    {}
    \".format(\n tag.text(self.stdout))\n\n props = \"\"\n if len(self.properties):\n props += \"\"\n propnames = sorted(self.properties)\n for prop in propnames:\n props += \"\".format(prop, self.properties[prop])\n props += \"
    {}{}
    \"\n\n return \"\"\"\n
    \n

    Test Suite: {name}

    \n {package}\n {properties}\n \n \n \n \n {errs}\n {stdio}\n
    Duration{duration} sec
    Test Cases{count}
    Failures{fails}
    \n
    \n

    Results Index

    \n {toc}\n
    \n

    Test Results

    \n
    \n {classes}\n
    \n
    \n \"\"\".format(name=tag.text(self.name),\n anchor=self.anchor(),\n duration=self.duration,\n errs=errs,\n stdio=stdio,\n toc=self.toc(),\n package=package,\n properties=props,\n classes=\"\".join(classes),\n count=len(self.all()),\n fails=len(self.failed()))", "response": "Render this as html."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the content of the css file", "response": "def get_css(self):\n \"\"\"\n Return the content of the css file\n :return:\n \"\"\"\n thisdir = os.path.dirname(os.path.abspath(__file__))\n with open(os.path.join(thisdir, self.css), \"r\") as cssfile:\n return cssfile.read()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npopulating the report from the xml xml", "response": "def process(self):\n \"\"\"\n populate the report from the xml\n :return:\n \"\"\"\n suites = None\n if isinstance(self.tree, ET.Element):\n root = self.tree\n else:\n root = self.tree.getroot()\n\n if root.tag == \"testrun\":\n root = root[0]\n\n if root.tag == \"testsuite\":\n suites = [root]\n\n if root.tag == \"testsuites\":\n suites = [x for x in root]\n\n assert suites, \"could not find test suites in results xml\"\n\n for suite in suites:\n cursuite = Suite()\n self.suites.append(cursuite)\n cursuite.name = suite.attrib[\"name\"]\n if \"package\" in suite.attrib:\n cursuite.package = suite.attrib[\"package\"]\n cursuite.duration = float(suite.attrib.get(\"time\", '0').replace(',',''))\n\n for element in suite:\n if element.tag == \"error\":\n # top level error?\n errtag = {\n \"message\": element.attrib.get(\"message\", \"\"),\n \"type\": element.attrib.get(\"type\", \"\"),\n \"text\": element.text\n }\n cursuite.errors.append(errtag)\n if element.tag == \"system-out\":\n cursuite.stdout = element.text\n if element.tag == \"system-err\":\n cursuite.stderr = element.text\n\n if element.tag == \"properties\":\n for prop in element:\n if prop.tag == \"property\":\n 
cursuite.properties[prop.attrib[\"name\"]] = prop.attrib[\"value\"]\n\n if element.tag == \"testcase\":\n testcase = element\n\n if not testcase.attrib.get(\"classname\", None):\n testcase.attrib[\"classname\"] = NO_CLASSNAME\n\n if testcase.attrib[\"classname\"] not in cursuite:\n testclass = Class()\n testclass.name = testcase.attrib[\"classname\"]\n cursuite[testclass.name] = testclass\n\n testclass = cursuite[testcase.attrib[\"classname\"]]\n newcase = Case()\n newcase.name = testcase.attrib[\"name\"]\n newcase.testclass = testclass\n newcase.duration = float(testcase.attrib.get(\"time\", '0').replace(',',''))\n testclass.cases.append(newcase)\n\n # does this test case have any children?\n for child in testcase:\n if child.tag == \"skipped\":\n newcase.skipped = child.text\n if \"message\" in child.attrib:\n newcase.skipped_msg = child.attrib[\"message\"]\n elif child.tag == \"system-out\":\n newcase.stdout = child.text\n elif child.tag == \"system-err\":\n newcase.stderr = child.text\n elif child.tag == \"failure\":\n newcase.failure = child.text\n if \"message\" in child.attrib:\n newcase.failure_msg = child.attrib[\"message\"]\n elif child.tag == \"error\":\n newcase.failure = child.text\n if \"message\" in child.attrib:\n newcase.failure_msg = child.attrib[\"message\"]\n elif child.tag == \"properties\":\n for property in child:\n newproperty = Property()\n newproperty.name = property.attrib[\"name\"]\n newproperty.value = property.attrib[\"value\"]\n newcase.properties.append(newproperty)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmakes a table of contents listing each suite", "response": "def toc(self):\n \"\"\"\n If this report has multiple suite results, make a table of contents listing each suite\n :return:\n \"\"\"\n if len(self.suites) > 1:\n tochtml = \"
<ul>\"\n for suite in self.suites:\n tochtml += '<li><a href=\"#{anchor}\">{name}</a></li>'.format(\n anchor=suite.anchor(),\n name=tag.text(suite.name))\n tochtml += \"</ul>
\"\n return tochtml\n else:\n return \"\""} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nrenders the test suite as an HTML report with links to errors first.", "response": "def html(self):\n \"\"\"\n Render the test suite as an HTML report with links to errors first.\n :return:\n \"\"\"\n\n page = self.get_html_head()\n page += \"<body><h1>Test Report</h1>
    \"\n page += self.toc()\n for suite in self.suites:\n page += suite.html()\n page += \"\"\n\n return page"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run(args):\n (opts, args) = PARSER.parse_args(args) if args else PARSER.parse_args()\n if not len(args):\n PARSER.print_usage()\n sys.exit(1)\n\n outfilename = args[0] + \".html\"\n if len(args) > 1:\n outfilename = args[1]\n\n report = parser.Junit(args[0])\n html = report.html()\n\n with open(outfilename, \"wb\") as outfile:\n outfile.write(html.encode('utf-8'))", "response": "Run this tool\n :param args:\n :return:"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates a series of remarks that the user should consider when interpreting the summary statistics.", "response": "def _generate_remark_str(self, end_of_line = '\\n'):\n \"\"\"\n Generate a series of remarks that the user should consider\n when interpreting the summary statistics.\n \"\"\"\n warnings = {}\n msg = '{}'.format(end_of_line)\n\n # generate warnings for continuous variables\n if self._continuous:\n # highlight far outliers\n outlier_mask = self.cont_describe.far_outliers > 1\n outlier_vars = list(self.cont_describe.far_outliers[outlier_mask].dropna(how='all').index)\n if outlier_vars:\n warnings[\"Warning, Tukey test indicates far outliers in\"] = outlier_vars\n\n # highlight possible multimodal distributions using hartigan's dip test\n # -1 values indicate NaN\n modal_mask = (self.cont_describe.diptest >= 0) & (self.cont_describe.diptest <= 0.05)\n modal_vars = list(self.cont_describe.diptest[modal_mask].dropna(how='all').index)\n if modal_vars:\n warnings[\"Warning, Hartigan's Dip Test reports possible multimodal distributions for\"] = modal_vars\n\n # highlight non normal distributions\n # -1 values indicate NaN\n modal_mask = (self.cont_describe.normaltest >= 0) & (self.cont_describe.normaltest <= 0.001)\n modal_vars = 
list(self.cont_describe.normaltest[modal_mask].dropna(how='all').index)\n if modal_vars:\n warnings[\"Warning, test for normality reports non-normal distributions for\"] = modal_vars\n\n\n # create the warning string\n for n,k in enumerate(sorted(warnings)):\n msg += '[{}] {}: {}.{}'.format(n+1,k,', '.join(warnings[k]), end_of_line)\n\n return msg"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _detect_categorical_columns(self,data):\n # assume all non-numerical and date columns are categorical\n numeric_cols = set(data._get_numeric_data().columns.values)\n date_cols = set(data.select_dtypes(include=[np.datetime64]).columns)\n likely_cat = set(data.columns) - numeric_cols\n likely_cat = list(likely_cat - date_cols)\n # check proportion of unique values if numerical\n for var in data._get_numeric_data().columns:\n likely_flag = 1.0 * data[var].nunique()/data[var].count() < 0.05\n if likely_flag:\n likely_cat.append(var)\n return likely_cat", "response": "Detect categorical columns if they are not specified."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _std(self,x):\n return np.nanstd(x.values,ddof=self._ddof)", "response": "Compute standard deviation of a set of nan values."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _tukey(self,x,threshold):\n vals = x.values[~np.isnan(x.values)]\n try:\n q1, q3 = np.percentile(vals, [25, 75])\n iqr = q3 - q1\n low_bound = q1 - (iqr * threshold)\n high_bound = q3 + (iqr * threshold)\n outliers = np.where((vals > high_bound) | (vals < low_bound))\n except:\n outliers = []\n return outliers", "response": "Count outliers according to Tukey s rule."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncompute the number of outliers in a given time series", "response": "def _outliers(self,x):\n \"\"\"\n Compute number of 
outliers\n \"\"\"\n outliers = self._tukey(x, threshold = 1.5)\n return np.size(outliers)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncomputes the number of far out outliers for a given time series", "response": "def _far_outliers(self,x):\n \"\"\"\n Compute number of \"far out\" outliers\n \"\"\"\n outliers = self._tukey(x, threshold = 3.0)\n return np.size(outliers)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncompute the median [IQR] or mean (SD) summary for the input series.", "response": "def _t1_summary(self,x):\n \"\"\"\n Compute median [IQR] or mean (Std) for the input series.\n\n Parameters\n ----------\n x : pandas Series\n Series of values to be summarised.\n \"\"\"\n # set decimal places\n if isinstance(self._decimals,int):\n n = self._decimals\n elif isinstance(self._decimals,dict):\n try:\n n = self._decimals[x.name]\n except:\n n = 1\n else:\n n = 1\n warnings.warn('The decimals arg must be an int or dict. 
Defaulting to {} d.p.'.format(n))\n\n if x.name in self._nonnormal:\n f = '{{:.{}f}} [{{:.{}f}},{{:.{}f}}]'.format(n,n,n)\n return f.format(np.nanmedian(x.values),\n np.nanpercentile(x.values,25), np.nanpercentile(x.values,75))\n else:\n f = '{{:.{}f}} ({{:.{}f}})'.format(n,n)\n return f.format(np.nanmean(x.values),\n np.nanstd(x.values,ddof=self._ddof))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates the DataFrame that describe the continuous data.", "response": "def _create_cont_describe(self,data):\n \"\"\"\n Describe the continuous data.\n\n Parameters\n ----------\n data : pandas DataFrame\n The input dataset.\n\n Returns\n ----------\n df_cont : pandas DataFrame\n Summarise the continuous variables.\n \"\"\"\n aggfuncs = [pd.Series.count,np.mean,np.median,self._std,\n self._q25,self._q75,min,max,self._t1_summary,self._diptest,\n self._outliers,self._far_outliers,self._normaltest]\n\n # coerce continuous data to numeric\n cont_data = data[self._continuous].apply(pd.to_numeric, errors='coerce')\n # check all data in each continuous column is numeric\n bad_cols = cont_data.count() != data[self._continuous].count()\n bad_cols = cont_data.columns[bad_cols]\n if len(bad_cols)>0:\n raise InputError(\"\"\"The following continuous column(s) have non-numeric values: {}.\n Either specify the column(s) as categorical or remove the non-numeric values.\"\"\".format(bad_cols.values))\n\n # check for coerced column containing all NaN to warn user\n for column in cont_data.columns[cont_data.count() == 0]:\n self._non_continuous_warning(column)\n\n if self._groupby:\n # add the groupby column back\n cont_data = cont_data.merge(data[[self._groupby]],\n left_index=True, right_index=True)\n\n # group and aggregate data\n df_cont = pd.pivot_table(cont_data,\n columns=[self._groupby],\n aggfunc=aggfuncs)\n else:\n # if no groupby, just add single group column\n df_cont = cont_data.apply(aggfuncs).T\n df_cont.columns.name = 
'overall'\n df_cont.columns = pd.MultiIndex.from_product([df_cont.columns,\n ['overall']])\n\n df_cont.index.rename('variable',inplace=True)\n\n # remove prefix underscore from column names (e.g. _std -> std)\n agg_rename = df_cont.columns.levels[0]\n agg_rename = [x[1:] if x[0]=='_' else x for x in agg_rename]\n df_cont.columns.set_levels(agg_rename, level=0, inplace=True)\n\n return df_cont"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _create_cat_describe(self,data):\n group_dict = {}\n\n for g in self._groupbylvls:\n if self._groupby:\n d_slice = data.loc[data[self._groupby] == g, self._categorical]\n else:\n d_slice = data[self._categorical].copy()\n\n # create a dataframe with freq, proportion\n df = d_slice.copy()\n\n # convert type to string to avoid int converted to boolean, avoiding nans\n for column in df.columns:\n df[column] = [str(row) if not pd.isnull(row) else None for row in df[column].values]\n\n df = df.melt().groupby(['variable','value']).size().to_frame(name='freq')\n df.index.set_names('level', level=1, inplace=True)\n df['percent'] = df['freq'].div(df.freq.sum(level=0),level=0).astype(float)* 100\n\n # set number of decimal places for percent\n if isinstance(self._decimals,int):\n n = self._decimals\n f = '{{:.{}f}}'.format(n)\n df['percent'] = df['percent'].astype(float).map(f.format)\n elif isinstance(self._decimals,dict):\n df.loc[:,'percent'] = df.apply(self._format_cat, axis=1)\n else:\n n = 1\n f = '{{:.{}f}}'.format(n)\n df['percent'] = df['percent'].astype(float).map(f.format)\n\n # add n column, listing total non-null values for each variable\n ct = d_slice.count().to_frame(name='n')\n ct.index.name = 'variable'\n df = df.join(ct)\n\n # add null count\n nulls = d_slice.isnull().sum().to_frame(name='isnull')\n nulls.index.name = 'variable'\n # only save null count to the first category for each variable\n # do this by extracting the first category from the df row index\n 
levels = df.reset_index()[['variable','level']].groupby('variable').first()\n # add this category to the nulls table\n nulls = nulls.join(levels)\n nulls.set_index('level', append=True, inplace=True)\n # join nulls to categorical\n df = df.join(nulls)\n\n # add summary column\n df['t1_summary'] = df.freq.map(str) + ' (' + df.percent.map(str) + ')'\n\n # add to dictionary\n group_dict[g] = df\n\n df_cat = pd.concat(group_dict,axis=1)\n # ensure the groups are the 2nd level of the column index\n if df_cat.columns.nlevels>1:\n df_cat = df_cat.swaplevel(0, 1, axis=1).sort_index(axis=1,level=0)\n\n return df_cat", "response": "Create a dataframe that describe the categorical data."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a table containing p - values for significance tests.", "response": "def _create_significance_table(self,data):\n \"\"\"\n Create a table containing p-values for significance tests. Add features of\n the distributions and the p-values to the dataframe.\n\n Parameters\n ----------\n data : pandas DataFrame\n The input dataset.\n\n Returns\n ----------\n df : pandas DataFrame\n A table containing the p-values, test name, etc.\n \"\"\"\n # list features of the variable e.g. 
matched, paired, n_expected\n df=pd.DataFrame(index=self._continuous+self._categorical,\n columns=['continuous','nonnormal','min_observed','pval','ptest'])\n\n df.index.rename('variable', inplace=True)\n df['continuous'] = np.where(df.index.isin(self._continuous),True,False)\n df['nonnormal'] = np.where(df.index.isin(self._nonnormal),True,False)\n\n # list values for each variable, grouped by groupby levels\n for v in df.index:\n is_continuous = df.loc[v]['continuous']\n is_categorical = ~df.loc[v]['continuous']\n is_normal = ~df.loc[v]['nonnormal']\n\n # if continuous, group data into list of lists\n if is_continuous:\n catlevels = None\n grouped_data = []\n for s in self._groupbylvls:\n lvl_data = data.loc[data[self._groupby]==s, v]\n # coerce to numeric and drop non-numeric data\n lvl_data = lvl_data.apply(pd.to_numeric, errors='coerce').dropna()\n # append to overall group data\n grouped_data.append(lvl_data.values)\n min_observed = len(min(grouped_data,key=len))\n # if categorical, create contingency table\n elif is_categorical:\n catlevels = sorted(data[v].astype('category').cat.categories)\n grouped_data = pd.crosstab(data[self._groupby].rename('_groupby_var_'),data[v])\n min_observed = grouped_data.sum(axis=1).min()\n\n # minimum number of observations across all levels\n df.loc[v,'min_observed'] = min_observed\n\n # compute pvalues\n df.loc[v,'pval'],df.loc[v,'ptest'] = self._p_test(v,\n grouped_data,is_continuous,is_categorical,\n is_normal,min_observed,catlevels)\n\n return df"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _create_cont_table(self,data):\n # remove the t1_summary level\n table = self.cont_describe[['t1_summary']].copy()\n table.columns = table.columns.droplevel(level=0)\n\n # add a column of null counts as 1-count() from previous function\n nulltable = data[self._continuous].isnull().sum().to_frame(name='isnull')\n try:\n table = table.join(nulltable)\n except TypeError: 
# if columns form a CategoricalIndex, need to convert to string first\n table.columns = table.columns.astype(str)\n table = table.join(nulltable)\n\n # add an empty level column, for joining with cat table\n table['level'] = ''\n table.set_index([table.index,'level'],inplace=True)\n\n # add pval column\n if self._pval and self._pval_adjust:\n table = table.join(self._significance_table[['pval (adjusted)','ptest']])\n elif self._pval:\n table = table.join(self._significance_table[['pval','ptest']])\n\n return table", "response": "Create tableone for continuous data."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _create_cat_table(self,data):\n table = self.cat_describe['t1_summary'].copy()\n # add the total count of null values across all levels\n isnull = data[self._categorical].isnull().sum().to_frame(name='isnull')\n isnull.index.rename('variable', inplace=True)\n try:\n table = table.join(isnull)\n except TypeError: # if columns form a CategoricalIndex, need to convert to string first\n table.columns = table.columns.astype(str)\n table = table.join(isnull)\n\n # add pval column\n if self._pval and self._pval_adjust:\n table = table.join(self._significance_table[['pval (adjusted)','ptest']])\n elif self._pval:\n table = table.join(self._significance_table[['pval','ptest']])\n\n return table", "response": "Create table one for categorical data."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _create_tableone(self,data):\n if self._continuous and self._categorical:\n\n # support pandas<=0.22\n try: \n table = pd.concat([self.cont_table,self.cat_table],sort=False)\n except: \n table = pd.concat([self.cont_table,self.cat_table])\n elif self._continuous:\n table = self.cont_table\n elif self._categorical:\n table = self.cat_table\n\n # round pval column and convert to string\n if self._pval and self._pval_adjust:\n table['pval (adjusted)'] = 
table['pval (adjusted)'].apply('{:.3f}'.format).astype(str)\n table.loc[table['pval (adjusted)'] == '0.000', 'pval (adjusted)'] = '<0.001'\n elif self._pval:\n table['pval'] = table['pval'].apply('{:.3f}'.format).astype(str)\n table.loc[table['pval'] == '0.000', 'pval'] = '<0.001'\n\n # sort the table rows\n table.reset_index().set_index(['variable','level'], inplace=True)\n if self._sort:\n # alphabetical\n new_index = sorted(table.index.values)\n else:\n # sort by the columns argument\n new_index = sorted(table.index.values,key=lambda x: self._columns.index(x[0]))\n table = table.reindex(new_index)\n\n # if a limit has been set on the number of categorical variables\n # then re-order the variables by frequency\n if self._limit:\n levelcounts = data[self._categorical].nunique()\n levelcounts = levelcounts[levelcounts >= self._limit]\n for v,_ in levelcounts.iteritems():\n count = data[v].value_counts().sort_values(ascending=False)\n new_index = [(v, i) for i in count.index]\n # restructure to match orig_index\n new_index_array=np.empty((len(new_index),), dtype=object)\n new_index_array[:]=[tuple(i) for i in new_index]\n orig_index = table.index.values.copy()\n orig_index[table.index.get_loc(v)] = new_index_array\n table = table.reindex(orig_index)\n\n # inserts n row\n n_row = pd.DataFrame(columns = ['variable','level','isnull'])\n n_row.set_index(['variable','level'], inplace=True)\n n_row.loc['n', ''] = None\n\n # support pandas<=0.22\n try:\n table = pd.concat([n_row,table],sort=False)\n except:\n table = pd.concat([n_row,table])\n\n if self._groupbylvls == ['overall']:\n table.loc['n','overall'] = len(data.index)\n else:\n for g in self._groupbylvls:\n ct = data[self._groupby][data[self._groupby]==g].count()\n table.loc['n',g] = ct\n\n # only display data in first level row\n dupe_mask = table.groupby(level=[0]).cumcount().ne(0)\n dupe_columns = ['isnull']\n optional_columns = ['pval','pval (adjusted)','ptest']\n for col in optional_columns:\n if col in 
table.columns.values:\n dupe_columns.append(col)\n\n table[dupe_columns] = table[dupe_columns].mask(dupe_mask).fillna('')\n\n # remove empty column added above\n table.drop([''], axis=1, inplace=True)\n\n # remove isnull column if not needed\n if not self._isnull:\n table.drop('isnull',axis=1,inplace=True)\n\n # replace nans with empty strings\n table.fillna('',inplace=True)\n\n # add column index\n if not self._groupbylvls == ['overall']:\n # rename groupby variable if requested\n c = self._groupby\n if self._alt_labels:\n if self._groupby in self._alt_labels:\n c = self._alt_labels[self._groupby]\n\n c = 'Grouped by {}'.format(c)\n table.columns = pd.MultiIndex.from_product([[c], table.columns])\n\n # display alternative labels if assigned\n table.rename(index=self._create_row_labels(), inplace=True, level=0)\n\n # if a limit has been set on the number of categorical variables\n # limit the number of categorical variables that are displayed\n if self._limit:\n table = table.groupby('variable').head(self._limit)\n\n # re-order the columns in a consistent fashion\n if self._groupby:\n cols = table.columns.levels[1].values\n else:\n cols = table.columns.values\n\n if 'isnull' in cols:\n cols = ['isnull'] + [x for x in cols if x != 'isnull']\n\n # iterate through each optional column\n # if they exist, put them at the end of the dataframe\n # ensures the last 3 columns will be in the same order as optional_columns\n for col in optional_columns:\n if col in cols:\n cols = [x for x in cols if x != col] + [col]\n\n if self._groupby:\n table = table.reindex(cols, axis=1, level=1)\n else:\n table = table.reindex(cols, axis=1)\n\n return table", "response": "Create a table 1 by combining continuous and categorical tables."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _create_row_labels(self):\n # start with the original column names\n labels = {}\n for c in self._columns:\n labels[c] = c\n \n # 
replace column names with alternative names if provided\n if self._alt_labels:\n for k in self._alt_labels.keys():\n labels[k] = self._alt_labels[k]\n \n # append the label suffix\n if self._label_suffix:\n for k in labels.keys():\n if k in self._nonnormal:\n labels[k] = \"{}, {}\".format(labels[k],\"median [Q1,Q3]\")\n elif k in self._categorical:\n labels[k] = \"{}, {}\".format(labels[k],\"n (%)\")\n else:\n labels[k] = \"{}, {}\".format(labels[k],\"mean (SD)\")\n \n return labels", "response": "Create the original labels for rows."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef dip_pval_tabinterpol(dip, N):\n '''\n dip - dip value computed from dip_from_cdf\n N - number of observations\n '''\n\n # if qDiptab_df is None:\n # raise DataError(\"Tabulated p-values not available. See installation instructions.\")\n\n if np.isnan(N) or N < 10:\n return np.nan\n\n qDiptab_dict = {'0': {4: 0.125,\n 5: 0.1,\n 6: 0.0833333333333333,\n 7: 0.0714285714285714,\n 8: 0.0625,\n 9: 0.0555555555555556,\n 10: 0.05,\n 15: 0.0341378172277919,\n 20: 0.033718563622065004,\n 30: 0.0262674485075642,\n 50: 0.0218544781364545,\n 100: 0.0164852597438403,\n 200: 0.0111236388849688,\n 500: 0.007554885975761959,\n 1000: 0.00541658127872122,\n 2000: 0.0039043999745055702,\n 5000: 0.00245657785440433,\n 10000: 0.00174954269199566,\n 20000: 0.00119458814106091,\n 40000: 0.000852415648011777,\n 72000: 0.000644400053256997},\n '0.01': {4: 0.125,\n 5: 0.1,\n 6: 0.0833333333333333,\n 7: 0.0714285714285714,\n 8: 0.0625,\n 9: 0.0613018090298924,\n 10: 0.0610132555623269,\n 15: 0.0546284208048975,\n 20: 0.0474333740698401,\n 30: 0.0395871890405749,\n 50: 0.0314400501999916,\n 100: 0.022831985803043,\n 200: 0.0165017735429825,\n 500: 0.0106403461127515,\n 1000: 0.0076028674530018705,\n 2000: 0.0054166418179658294,\n 5000: 0.0034480928223332603,\n 10000: 0.00244595133885302,\n 20000: 0.00173435346896287,\n 40000: 0.00122883479310665,\n 
72000: 0.000916872204484283},\n '0.02': {4: 0.125,\n 5: 0.1,\n 6: 0.0833333333333333,\n 7: 0.0714285714285714,\n 8: 0.0656911994503283,\n 9: 0.0658615858179315,\n 10: 0.0651627333214016,\n 15: 0.0572191260231815,\n 20: 0.0490891387627092,\n 30: 0.0414574606741673,\n 50: 0.0329008160470834,\n 100: 0.0238917486442849,\n 200: 0.0172594157992489,\n 500: 0.0111255573208294,\n 1000: 0.00794987834644799,\n 2000: 0.0056617138625232296,\n 5000: 0.00360473943713036,\n 10000: 0.00255710802275612,\n 20000: 0.0018119443458468102,\n 40000: 0.0012846930445701802,\n 72000: 0.0009579329467655321},\n '0.05': {4: 0.125,\n 5: 0.1,\n 6: 0.0833333333333333,\n 7: 0.0725717816250742,\n 8: 0.0738651136071762,\n 9: 0.0732651142535317,\n 10: 0.0718321619656165,\n 15: 0.0610087367689692,\n 20: 0.052719998201553,\n 30: 0.0444462614069956,\n 50: 0.0353023819040016,\n 100: 0.0256559537977579,\n 200: 0.0185259426032926,\n 500: 0.0119353655328931,\n 1000: 0.0085216518343594,\n 2000: 0.00607120971135229,\n 5000: 0.0038632654801084897,\n 10000: 0.00273990955227265,\n 20000: 0.00194259470485893,\n 40000: 0.0013761765052555301,\n 72000: 0.00102641863872347},\n '0.1': {4: 0.125,\n 5: 0.1,\n 6: 0.0833333333333333,\n 7: 0.0817315478539489,\n 8: 0.0820045917762512,\n 9: 0.0803941629593475,\n 10: 0.077966212182459,\n 15: 0.0642657137330444,\n 20: 0.0567795509056742,\n 30: 0.0473998525042686,\n 50: 0.0377279973102482,\n 100: 0.0273987414570948,\n 200: 0.0197917612637521,\n 500: 0.0127411306411808,\n 1000: 0.00909775605533253,\n 2000: 0.0064762535755248,\n 5000: 0.00412089506752692,\n 10000: 0.0029225480567908,\n 20000: 0.00207173719623868,\n 40000: 0.0014675150200632301,\n 72000: 0.0010949515421800199},\n '0.2': {4: 0.125,\n 5: 0.1,\n 6: 0.0924514470941933,\n 7: 0.0940590181922527,\n 8: 0.0922700601131892,\n 9: 0.0890432420913848,\n 10: 0.0852835359834564,\n 15: 0.0692234107989591,\n 20: 0.0620134674468181,\n 30: 0.0516677370374349,\n 50: 0.0410699984399582,\n 100: 0.0298109370830153,\n 200: 
0.0215233745778454,\n 500: 0.0138524542751814,\n 1000: 0.00988924521014078,\n 2000: 0.00703573098590029,\n 5000: 0.00447640050137479,\n 10000: 0.00317374638422465,\n 20000: 0.00224993202086955,\n 40000: 0.00159376453672466,\n 72000: 0.00118904090369415},\n '0.3': {4: 0.125,\n 5: 0.1,\n 6: 0.103913431059949,\n 7: 0.10324449080087102,\n 8: 0.0996737189599363,\n 9: 0.0950811420297928,\n 10: 0.0903204173707099,\n 15: 0.0745462114365167,\n 20: 0.0660163872069048,\n 30: 0.0551037519001622,\n 50: 0.0437704598622665,\n 100: 0.0317771496530253,\n 200: 0.0229259769870428,\n 500: 0.0147536004288476,\n 1000: 0.0105309297090482,\n 2000: 0.007494212545892991,\n 5000: 0.00476555693102276,\n 10000: 0.00338072258533527,\n 20000: 0.00239520831473419,\n 40000: 0.00169668445506151,\n 72000: 0.00126575197699874},\n '0.4': {4: 0.125,\n 5: 0.10872059357632902,\n 6: 0.113885220640212,\n 7: 0.110964599995697,\n 8: 0.10573353180273701,\n 9: 0.0999380897811046,\n 10: 0.0943334983745117,\n 15: 0.0792030878981762,\n 20: 0.0696506075066401,\n 30: 0.058265005347492994,\n 50: 0.0462925642671299,\n 100: 0.0336073821590387,\n 200: 0.024243848341112,\n 500: 0.0155963185751048,\n 1000: 0.0111322726797384,\n 2000: 0.007920878896017329,\n 5000: 0.005037040297500721,\n 10000: 0.0035724387653598205,\n 20000: 0.00253036792824665,\n 40000: 0.0017925341833790601,\n 72000: 0.00133750966361506},\n '0.5': {4: 0.125,\n 5: 0.12156379802641401,\n 6: 0.123071187137781,\n 7: 0.11780784650433501,\n 8: 0.11103512984770501,\n 9: 0.10415356007586801,\n 10: 0.0977817630384725,\n 15: 0.083621033469191,\n 20: 0.0733437740592714,\n 30: 0.0614510857304343,\n 50: 0.048851155289608,\n 100: 0.0354621760592113,\n 200: 0.025584358256487003,\n 500: 0.0164519238025286,\n 1000: 0.0117439009052552,\n 2000: 0.008355737247680059,\n 5000: 0.0053123924740821294,\n 10000: 0.00376734715752209,\n 20000: 0.00266863168718114,\n 40000: 0.00189061261635977,\n 72000: 0.00141049709228472},\n '0.6': {4: 0.125,\n 5: 0.134318918697053,\n 6: 
0.13186973390253,\n 7: 0.124216086833531,\n 8: 0.11592005574998801,\n 9: 0.10800780236193198,\n 10: 0.102180866696628,\n 15: 0.0881198482202905,\n 20: 0.0776460662880254,\n 30: 0.0649164408053978,\n 50: 0.0516145897865757,\n 100: 0.0374805844550272,\n 200: 0.0270252129816288,\n 500: 0.017383057902553,\n 1000: 0.012405033293814,\n 2000: 0.00882439333812351,\n 5000: 0.00560929919359959,\n 10000: 0.00397885007249132,\n 20000: 0.0028181999035216,\n 40000: 0.00199645471886179,\n 72000: 0.00148936709298802},\n '0.7': {4: 0.13255954878268902,\n 5: 0.14729879897625198,\n 6: 0.140564796497941,\n 7: 0.130409013968317,\n 8: 0.120561479262465,\n 9: 0.112512617124951,\n 10: 0.10996094814295099,\n 15: 0.093124666680253,\n 20: 0.0824558407118372,\n 30: 0.0689178762425442,\n 50: 0.0548121932066019,\n 100: 0.0398046179116599,\n 200: 0.0286920262150517,\n 500: 0.0184503949887735,\n 1000: 0.0131684179320803,\n 2000: 0.009367858207170609,\n 5000: 0.00595352728377949,\n 10000: 0.00422430013176233,\n 20000: 0.00299137548142077,\n 40000: 0.00211929748381704,\n 72000: 0.00158027541945626},\n '0.8': {4: 0.15749736904023498,\n 5: 0.161085025702604,\n 6: 0.14941924112913002,\n 7: 0.136639642123068,\n 8: 0.125558759034845,\n 9: 0.12291503348081699,\n 10: 0.11884476721158699,\n 15: 0.0996694393390689,\n 20: 0.08834462700173701,\n 30: 0.0739249074078291,\n 50: 0.0588230482851366,\n 100: 0.0427283846799166,\n 200: 0.0308006766341406,\n 500: 0.0198162679782071,\n 1000: 0.0141377942603047,\n 2000: 0.01005604603884,\n 5000: 0.00639092280563517,\n 10000: 0.00453437508148542,\n 20000: 0.00321024899920135,\n 40000: 0.0022745769870358102,\n 72000: 0.00169651643860074},\n '0.9': {4: 0.18740187880755899,\n 5: 0.176811998476076,\n 6: 0.159137064572627,\n 7: 0.144240669035124,\n 8: 0.141841067033899,\n 9: 0.136412639387084,\n 10: 0.130462149644819,\n 15: 0.11008749690090598,\n 20: 0.0972346018122903,\n 30: 0.0814791379390127,\n 50: 0.0649136324046767,\n 100: 0.047152783315718,\n 200: 0.0339967814293504,\n 
500: 0.0218781313182203,\n 1000: 0.0156148055023058,\n 2000: 0.0111019116837591,\n 5000: 0.00705566126234625,\n 10000: 0.00500178808402368,\n 20000: 0.00354362220314155,\n 40000: 0.00250999080890397,\n 72000: 0.0018730618472582602},\n '0.95': {4: 0.20726978858735998,\n 5: 0.18639179602794398,\n 6: 0.164769608513302,\n 7: 0.159903395678336,\n 8: 0.153978303998561,\n 9: 0.14660378495401902,\n 10: 0.139611395137099,\n 15: 0.118760769203664,\n 20: 0.105130218270636,\n 30: 0.0881689143126666,\n 50: 0.0702737877191269,\n 100: 0.0511279442868827,\n 200: 0.0368418413878307,\n 500: 0.0237294742633411,\n 1000: 0.0169343970067564,\n 2000: 0.0120380990328341,\n 5000: 0.0076506368153935,\n 10000: 0.00542372242836395,\n 20000: 0.00384330190244679,\n 40000: 0.00272375073486223,\n 72000: 0.00203178401610555},\n '0.98': {4: 0.22375580462922195,\n 5: 0.19361253363045,\n 6: 0.17917654739278197,\n 7: 0.17519655327122302,\n 8: 0.16597856724751,\n 9: 0.157084065653166,\n 10: 0.150961728882481,\n 15: 0.128890475210055,\n 20: 0.11430970428125302,\n 30: 0.0960564383013644,\n 50: 0.0767095886079179,\n 100: 0.0558022052195208,\n 200: 0.0402729850316397,\n 500: 0.025919578977657003,\n 1000: 0.018513067368104,\n 2000: 0.0131721010552576,\n 5000: 0.00836821687047215,\n 10000: 0.00592656681022859,\n 20000: 0.00420258799378253,\n 40000: 0.00298072958568387,\n 72000: 0.00222356097506054},\n '0.99': {4: 0.231796258864192,\n 5: 0.19650913979884502,\n 6: 0.191862827995563,\n 7: 0.184118659121501,\n 8: 0.172988528276759,\n 9: 0.164164643657217,\n 10: 0.159684158858235,\n 15: 0.13598356863636,\n 20: 0.120624043335821,\n 30: 0.101478558893837,\n 50: 0.0811998415355918,\n 100: 0.059024132304226,\n 200: 0.0426864799777448,\n 500: 0.0274518022761997,\n 1000: 0.0196080260483234,\n 2000: 0.0139655122281969,\n 5000: 0.00886357892854914,\n 10000: 0.00628034732880374,\n 20000: 0.00445774902155711,\n 40000: 0.00315942194040388,\n 72000: 0.00235782814777627},\n '0.995': {4: 0.23726374382677898,\n 5: 
0.198159967287576,\n 6: 0.20210197104296804,\n 7: 0.19101439617430602,\n 8: 0.179010413496374,\n 9: 0.172821674582338,\n 10: 0.16719524735674,\n 15: 0.14245248368127697,\n 20: 0.126552378036739,\n 30: 0.10650487144103,\n 50: 0.0852854646662134,\n 100: 0.0620425065165146,\n 200: 0.044958959158761,\n 500: 0.0288986369564301,\n 1000: 0.0206489568587364,\n 2000: 0.0146889122204488,\n 5000: 0.00934162787186159,\n 10000: 0.00661030641550873,\n 20000: 0.00469461513212743,\n 40000: 0.0033273652798148,\n 72000: 0.00248343580127067},\n '0.998': {4: 0.241992892688593,\n 5: 0.19924427936243302,\n 6: 0.213015781111186,\n 7: 0.198216795232182,\n 8: 0.186504388711178,\n 9: 0.182555283567818,\n 10: 0.175419540856082,\n 15: 0.15017281653074202,\n 20: 0.13360135382395,\n 30: 0.112724636524262,\n 50: 0.0904847827490294,\n 100: 0.0658016011466099,\n 200: 0.0477643873749449,\n 500: 0.0306813505050163,\n 1000: 0.0219285176765082,\n 2000: 0.0156076779647454,\n 5000: 0.009932186363240291,\n 10000: 0.00702254699967648,\n 20000: 0.004994160691291679,\n 40000: 0.00353988965698579,\n 72000: 0.00264210826339498},\n '0.999': {4: 0.244369839049632,\n 5: 0.199617527406166,\n 6: 0.219518627282415,\n 7: 0.20234101074826102,\n 8: 0.19448404115794,\n 9: 0.188658833121906,\n 10: 0.180611195797351,\n 15: 0.15545613369632802,\n 20: 0.138569903791767,\n 30: 0.117164140184417,\n 50: 0.0940930106666244,\n 100: 0.0684479731118028,\n 200: 0.0497198001867437,\n 500: 0.0320170996823189,\n 1000: 0.0228689168972669,\n 2000: 0.0162685615996248,\n 5000: 0.0103498795291629,\n 10000: 0.0073182262815645795,\n 20000: 0.00520917757743218,\n 40000: 0.00369400045486625,\n 72000: 0.0027524322157581},\n '0.9995': {4: 0.245966625504691,\n 5: 0.19980094149902802,\n 6: 0.22433904739444602,\n 7: 0.205377566346832,\n 8: 0.200864297005026,\n 9: 0.19408912076824603,\n 10: 0.18528641605039603,\n 15: 0.160896499106958,\n 20: 0.14336916123968,\n 30: 0.12142585990898701,\n 50: 0.0974904344916743,\n 100: 0.0709169443994193,\n 200: 
0.0516114611801451,\n 500: 0.0332452747332959,\n 1000: 0.023738710122235003,\n 2000: 0.0168874937789415,\n 5000: 0.0107780907076862,\n 10000: 0.0076065423418208,\n 20000: 0.005403962359243721,\n 40000: 0.00383345715372182,\n 72000: 0.0028608570740143},\n '0.9998': {4: 0.24743959723326198,\n 5: 0.19991708183427104,\n 6: 0.22944933215424101,\n 7: 0.208306562526874,\n 8: 0.20884999705022897,\n 9: 0.19915700809389003,\n 10: 0.19120308390504398,\n 15: 0.16697940794624802,\n 20: 0.148940116394883,\n 30: 0.126733051889401,\n 50: 0.10228420428399698,\n 100: 0.0741183486081263,\n 200: 0.0540543978864652,\n 500: 0.0348335698576168,\n 1000: 0.0248334158891432,\n 2000: 0.0176505093388153,\n 5000: 0.0113184316868283,\n 10000: 0.00795640367207482,\n 20000: 0.00564540201704594,\n 40000: 0.0040079346963469605,\n 72000: 0.00298695044508003},\n '0.9999': {4: 0.24823065965663801,\n 5: 0.19995902909307503,\n 6: 0.232714530449602,\n 7: 0.209866047852379,\n 8: 0.212556040406219,\n 9: 0.20288159843655804,\n 10: 0.19580515933918397,\n 15: 0.17111793515551002,\n 20: 0.152832538183622,\n 30: 0.131198578897542,\n 50: 0.104680624334611,\n 100: 0.0762579402903838,\n 200: 0.0558704526182638,\n 500: 0.0359832389317461,\n 1000: 0.0256126573433596,\n 2000: 0.0181944265400504,\n 5000: 0.0117329446468571,\n 10000: 0.0082270524584354,\n 20000: 0.00580460792299214,\n 40000: 0.00414892737222885,\n 72000: 0.00309340092038059},\n '0.99995': {4: 0.248754269146416,\n 5: 0.19997839537608197,\n 6: 0.236548128358969,\n 7: 0.21096757693345103,\n 8: 0.21714917413729898,\n 9: 0.205979795735129,\n 10: 0.20029398089673,\n 15: 0.17590050570443203,\n 20: 0.15601016361897102,\n 30: 0.133691739483444,\n 50: 0.107496694235039,\n 100: 0.0785735967934979,\n 200: 0.0573877056330228,\n 500: 0.0369051995840645,\n 1000: 0.0265491336936829,\n 2000: 0.0186226037818523,\n 5000: 0.0119995948968375,\n 10000: 0.00852240989786251,\n 20000: 0.00599774739593151,\n 40000: 0.0042839159079761,\n 72000: 0.00319932767198801},\n '0.99998': 
{4: 0.24930203997425898,\n 5: 0.199993151405815,\n 6: 0.2390887911995,\n 7: 0.212233348558702,\n 8: 0.22170007640450304,\n 9: 0.21054115498898,\n 10: 0.20565108964621898,\n 15: 0.18185667601316602,\n 20: 0.16131922583934502,\n 30: 0.137831637950694,\n 50: 0.11140887547015,\n 100: 0.0813458356889133,\n 200: 0.0593365901653878,\n 500: 0.0387221159256424,\n 1000: 0.027578430100535997,\n 2000: 0.0193001796565433,\n 5000: 0.0124410052027886,\n 10000: 0.00892863905540303,\n 20000: 0.00633099254378114,\n 40000: 0.0044187010443287895,\n 72000: 0.00332688234611187},\n '0.99999': {4: 0.24945965232322498,\n 5: 0.199995525025673,\n 6: 0.24010356643629502,\n 7: 0.21266103831250602,\n 8: 0.225000835357532,\n 9: 0.21180033095039003,\n 10: 0.209682048785853,\n 15: 0.185743454151004,\n 20: 0.165568255916749,\n 30: 0.14155750962435099,\n 50: 0.113536607717411,\n 100: 0.0832963013755522,\n 200: 0.0607646310473911,\n 500: 0.039930259057650005,\n 1000: 0.0284430733108,\n 2000: 0.0196241518040617,\n 5000: 0.0129467396733128,\n 10000: 0.009138539330002129,\n 20000: 0.00656987109386762,\n 40000: 0.00450818604569179,\n 72000: 0.00339316094477355},\n '1': {4: 0.24974836247845,\n 5: 0.199999835639211,\n 6: 0.24467288361776798,\n 7: 0.21353618608817,\n 8: 0.23377291968768302,\n 9: 0.21537991431762502,\n 10: 0.221530282182963,\n 15: 0.19224056333056197,\n 20: 0.175834459522789,\n 30: 0.163833046059817,\n 50: 0.11788671686531199,\n 100: 0.0926780423096737,\n 200: 0.0705309107882395,\n 500: 0.0431448163617178,\n 1000: 0.0313640941982108,\n 2000: 0.0213081254074584,\n 5000: 0.014396063834027,\n 10000: 0.00952234579566773,\n 20000: 0.006858294480462271,\n 40000: 0.00513477467565583,\n 72000: 0.00376331697005859}}\n\n qDiptab_df = pd.DataFrame(qDiptab_dict)\n\n diptable = np.array(qDiptab_df)\n ps = np.array(qDiptab_df.columns).astype(float)\n Ns = np.array(qDiptab_df.index)\n\n if N >= Ns[-1]:\n dip = transform_dip_to_other_nbr_pts(dip, N, Ns[-1]-0.1)\n N = Ns[-1]-0.1\n\n iNlow = np.nonzero(Ns < 
N)[0][-1]\n qN = (N-Ns[iNlow])/(Ns[iNlow+1]-Ns[iNlow])\n dip_sqrtN = np.sqrt(N)*dip\n dip_interpol_sqrtN = (\n np.sqrt(Ns[iNlow])*diptable[iNlow, :] + qN*(\n np.sqrt(Ns[iNlow+1])*diptable[iNlow+1, :]-np.sqrt(Ns[iNlow])*diptable[iNlow, :]))\n\n if not (dip_interpol_sqrtN < dip_sqrtN).any():\n return 1\n\n iplow = np.nonzero(dip_interpol_sqrtN < dip_sqrtN)[0][-1]\n if iplow == len(dip_interpol_sqrtN) - 1:\n return 0\n\n qp = (dip_sqrtN-dip_interpol_sqrtN[iplow])/(dip_interpol_sqrtN[iplow+1]-dip_interpol_sqrtN[iplow])\n p_interpol = ps[iplow] + qp*(ps[iplow+1]-ps[iplow])\n\n return 1 - p_interpol", "response": "Tabulated p - values for a given set of observations."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef dip_and_closest_unimodal_from_cdf(xF, yF, plotting=False, verbose=False, eps=1e-12):\n '''\n Dip computed as distance between empirical distribution function (EDF) and\n cumulative distribution function for the unimodal distribution with\n smallest such distance. The optimal unimodal distribution is found by\n the algorithm presented in\n\n Hartigan (1985): Computation of the dip statistic to test for\n unimodaliy. Applied Statistics, vol. 34, no. 3\n\n If the plotting option is enabled the optimal unimodal distribution\n function is plotted along with (xF, yF-dip) and (xF, yF+dip)\n\n xF - x-coordinates for EDF\n yF - y-coordinates for EDF\n\n '''\n\n ## TODO! 
Preprocess xF and yF so that yF increasing and xF does\n ## not have more than two copies of each x-value.\n\n if (xF[1:]-xF[:-1] < -eps).any():\n raise ValueError('Need sorted x-values to compute dip')\n if (yF[1:]-yF[:-1] < -eps).any():\n raise ValueError('Need sorted y-values to compute dip')\n\n # if plotting:\n # Nplot = 5\n # bfig = plt.figure(figsize=(12, 3))\n # i = 1 # plot index\n\n D = 0 # lower bound for dip*2\n\n # [L, U] is interval where we still need to find unimodal function,\n # the modal interval\n L = 0\n U = len(xF) - 1\n\n # iGfin are the indices of xF where the optimal unimodal distribution is greatest\n # convex minorant to (xF, yF+dip)\n # iHfin are the indices of xF where the optimal unimodal distribution is least\n # concave majorant to (xF, yF-dip)\n iGfin = L\n iHfin = U\n\n while 1:\n\n iGG = greatest_convex_minorant_sorted(xF[L:(U+1)], yF[L:(U+1)])\n iHH = least_concave_majorant_sorted(xF[L:(U+1)], yF[L:(U+1)])\n iG = np.arange(L, U+1)[iGG]\n iH = np.arange(L, U+1)[iHH]\n\n # Interpolate. First and last point are in both and does not need\n # interpolation. 
Might cause trouble if included due to possiblity\n # of infinity slope at beginning or end of interval.\n if iG[0] != iH[0] or iG[-1] != iH[-1]:\n raise ValueError('Convex minorant and concave majorant should start and end at same points.')\n hipl = np.interp(xF[iG[1:-1]], xF[iH], yF[iH])\n gipl = np.interp(xF[iH[1:-1]], xF[iG], yF[iG])\n hipl = np.hstack([yF[iH[0]], hipl, yF[iH[-1]]])\n gipl = np.hstack([yF[iG[0]], gipl, yF[iG[-1]]])\n #hipl = lin_interpol_sorted(xF[iG], xF[iH], yF[iH])\n #gipl = lin_interpol_sorted(xF[iH], xF[iG], yF[iG])\n\n # Find largest difference between GCM and LCM.\n gdiff = hipl - yF[iG]\n hdiff = yF[iH] - gipl\n imaxdiffg = np.argmax(gdiff)\n imaxdiffh = np.argmax(hdiff)\n d = max(gdiff[imaxdiffg], hdiff[imaxdiffh])\n\n # # Plot current GCM and LCM.\n # if plotting:\n # if i > Nplot:\n # bfig = plt.figure(figsize=(12, 3))\n # i = 1\n # bax = bfig.add_subplot(1, Nplot, i)\n # bax.plot(xF, yF, color='red')\n # bax.plot(xF, yF-d/2, color='black')\n # bax.plot(xF, yF+d/2, color='black')\n # bax.plot(xF[iG], yF[iG]+d/2, color='blue')\n # bax.plot(xF[iH], yF[iH]-d/2, color='blue')\n\n # if d <= D:\n # if verbose:\n # print(\"Difference in modal interval smaller than current dip\")\n # break\n\n # Find new modal interval so that largest difference is at endpoint\n # and set d to largest distance between current GCM and LCM.\n if gdiff[imaxdiffg] > hdiff[imaxdiffh]:\n L0 = iG[imaxdiffg]\n U0 = iH[iH >= L0][0]\n else:\n U0 = iH[imaxdiffh]\n L0 = iG[iG <= U0][-1]\n # Add points outside the modal interval to the final GCM and LCM.\n iGfin = np.hstack([iGfin, iG[(iG <= L0)*(iG > L)]])\n iHfin = np.hstack([iH[(iH >= U0)*(iH < U)], iHfin])\n\n # # Plot new modal interval\n # if plotting:\n # ymin, ymax = bax.get_ylim()\n # bax.axvline(xF[L0], ymin, ymax, color='orange')\n # bax.axvline(xF[U0], ymin, ymax, color='red')\n # bax.set_xlim(xF[L]-.1*(xF[U]-xF[L]), xF[U]+.1*(xF[U]-xF[L]))\n\n # Compute new lower bound for dip*2\n # i.e. 
largest difference outside modal interval\n gipl = np.interp(xF[L:(L0+1)], xF[iG], yF[iG])\n D = max(D, np.amax(yF[L:(L0+1)] - gipl))\n hipl = np.interp(xF[U0:(U+1)], xF[iH], yF[iH])\n D = max(D, np.amax(hipl - yF[U0:(U+1)]))\n\n if xF[U0]-xF[L0] < eps:\n if verbose:\n print(\"Modal interval zero length\")\n break\n\n # if plotting:\n # mxpt = np.argmax(yF[L:(L0+1)] - gipl)\n # bax.plot([xF[L:][mxpt], xF[L:][mxpt]], [yF[L:][mxpt]+d/2, \n # gipl[mxpt]+d/2], '+', color='red')\n # mxpt = np.argmax(hipl - yF[U0:(U+1)])\n # bax.plot([xF[U0:][mxpt], xF[U0:][mxpt]], [yF[U0:][mxpt]-d/2, \n # hipl[mxpt]-d/2], '+', color='red')\n # i += 1\n\n # Change modal interval\n L = L0\n U = U0\n\n if d <= D:\n if verbose:\n print(\"Difference in modal interval smaller than new dip\")\n break\n\n # if plotting:\n\n # # Add modal interval to figure\n # bax.axvline(xF[L0], ymin, ymax, color='green', linestyle='dashed')\n # bax.axvline(xF[U0], ymin, ymax, color='green', linestyle='dashed')\n\n # ## Plot unimodal function (not distribution function)\n # bfig = plt.figure()\n # bax = bfig.add_subplot(1, 1, 1)\n # bax.plot(xF, yF, color='red')\n # bax.plot(xF, yF-D/2, color='black')\n # bax.plot(xF, yF+D/2, color='black')\n\n # Find string position in modal interval\n iM = np.arange(iGfin[-1], iHfin[0]+1)\n yM_lower = yF[iM]-D/2\n yM_lower[0] = yF[iM[0]]+D/2\n iMM_concave = least_concave_majorant_sorted(xF[iM], yM_lower)\n iM_concave = iM[iMM_concave]\n #bax.plot(xF[iM], yM_lower, color='orange')\n #bax.plot(xF[iM_concave], yM_lower[iMM_concave], color='red')\n lcm_ipl = np.interp(xF[iM], xF[iM_concave], yM_lower[iMM_concave])\n try:\n mode = iM[np.nonzero(lcm_ipl > yF[iM]+D/2)[0][-1]]\n #bax.axvline(xF[mode], color='green', linestyle='dashed')\n except IndexError:\n iM_convex = np.zeros(0, dtype='i')\n else:\n after_mode = iM_concave > mode\n iM_concave = iM_concave[after_mode]\n iMM_concave = iMM_concave[after_mode]\n iM = iM[iM <= mode]\n iM_convex = 
iM[greatest_convex_minorant_sorted(xF[iM], yF[iM])]\n\n # if plotting:\n\n # bax.plot(xF[np.hstack([iGfin, iM_convex, iM_concave, iHfin])],\n # np.hstack([yF[iGfin] + D/2, yF[iM_convex] + D/2,\n # yM_lower[iMM_concave], yF[iHfin] - D/2]), color='blue')\n # #bax.plot(xF[iM], yM_lower, color='orange')\n\n # ## Plot unimodal distribution function\n # bfig = plt.figure()\n # bax = bfig.add_subplot(1, 1, 1)\n # bax.plot(xF, yF, color='red')\n # bax.plot(xF, yF-D/2, color='black')\n # bax.plot(xF, yF+D/2, color='black')\n\n # Find string position in modal interval\n iM = np.arange(iGfin[-1], iHfin[0]+1)\n yM_lower = yF[iM]-D/2\n yM_lower[0] = yF[iM[0]]+D/2\n iMM_concave = least_concave_majorant_sorted(xF[iM], yM_lower)\n iM_concave = iM[iMM_concave]\n #bax.plot(xF[iM], yM_lower, color='orange')\n #bax.plot(xF[iM_concave], yM_lower[iMM_concave], color='red')\n lcm_ipl = np.interp(xF[iM], xF[iM_concave], yM_lower[iMM_concave])\n try:\n mode = iM[np.nonzero(lcm_ipl > yF[iM]+D/2)[0][-1]]\n #bax.axvline(xF[mode], color='green', linestyle='dashed')\n except IndexError:\n iM_convex = np.zeros(0, dtype='i')\n else:\n after_mode = iM_concave > mode\n iM_concave = iM_concave[after_mode]\n iMM_concave = iMM_concave[after_mode]\n iM = iM[iM <= mode]\n iM_convex = iM[greatest_convex_minorant_sorted(xF[iM], yF[iM])]\n\n # Closest unimodal curve\n xU = xF[np.hstack([iGfin[:-1], iM_convex, iM_concave, iHfin[1:]])]\n yU = np.hstack([yF[iGfin[:-1]] + D/2, yF[iM_convex] + D/2,\n yM_lower[iMM_concave], yF[iHfin[1:]] - D/2])\n # Add points so unimodal curve goes from 0 to 1\n k_start = (yU[1]-yU[0])/(xU[1]-xU[0]+1e-5)\n xU_start = xU[0] - yU[0]/(k_start+1e-5)\n k_end = (yU[-1]-yU[-2])/(xU[-1]-xU[-2]+1e-5)\n xU_end = xU[-1] + (1-yU[-1])/(k_end+1e-5)\n xU = np.hstack([xU_start, xU, xU_end])\n yU = np.hstack([0, yU, 1])\n\n # if plotting:\n # bax.plot(xU, yU, color='blue')\n # #bax.plot(xF[iM], yM_lower, color='orange')\n # plt.show()\n\n return D/2, (xU, yU)", "response": "Compute the dip and 
closest unimodal from the original empirical distribution function."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the factor for one - dimensional plug - in bandwidth selection.", "response": "def bandwidth_factor(nbr_data_pts, deriv_order=0):\n '''\n Scale factor for one-dimensional plug-in bandwidth selection.\n '''\n if deriv_order == 0:\n return (3.0*nbr_data_pts/4)**(-1.0/5)\n\n if deriv_order == 2:\n return (7.0*nbr_data_pts/4)**(-1.0/9)\n\n raise ValueError('Not implemented for derivative of order {}'.format(deriv_order))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef make_html_tag(tag, text=None, **params):\n params_string = \"\"\n\n # Parameters are passed. Turn the dict into a string like \"a=1 b=2 c=3\" string.\n for key, value in sorted(params.items()):\n # Strip off a leading underscore from the attribute's key to allow attributes like '_class'\n # to be used as a CSS class specification instead of the reserved Python keyword 'class'.\n key = key.lstrip(\"_\")\n\n params_string += u' {0}=\"{1}\"'.format(key, value)\n\n # Create the tag string\n tag_string = u\"<{0}{1}>\".format(tag, params_string)\n\n # Add text and closing tag if required.\n if text:\n tag_string += u\"{0}</{1}>\".format(text, tag)\n\n return tag_string", "response": "Create an HTML tag string."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a string that can be used to render a pager of the current page.", "response": "def pager(\n self,\n format=\"~2~\",\n url=None,\n show_if_single_page=False,\n separator=\" \",\n symbol_first=\"<<\",\n symbol_last=\">>\",\n symbol_previous=\"<\",\n symbol_next=\">\",\n link_attr=None,\n curpage_attr=None,\n dotdot_attr=None,\n link_tag=None,\n ):\n \"\"\"\n Return string with links to other pages (e.g. '1 .. 5 6 7 [8] 9 10 11 .. 50').\n\n format:\n Format string that defines how the pager is rendered. 
The string\n can contain the following $-tokens that are substituted by the\n string.Template module:\n\n - $first_page: number of first reachable page\n - $last_page: number of last reachable page\n - $page: number of currently selected page\n - $page_count: number of reachable pages\n - $items_per_page: maximal number of items per page\n - $first_item: index of first item on the current page\n - $last_item: index of last item on the current page\n - $item_count: total number of items\n - $link_first: link to first page (unless this is first page)\n - $link_last: link to last page (unless this is last page)\n - $link_previous: link to previous page (unless this is first page)\n - $link_next: link to next page (unless this is last page)\n\n To render a range of pages the token '~3~' can be used. The\n number sets the radius of pages around the current page.\n Example for a range with radius 3:\n\n '1 .. 5 6 7 [8] 9 10 11 .. 50'\n\n Default: '~2~'\n\n url\n The URL that page links will point to. Make sure it contains the string\n $page which will be replaced by the actual page number.\n Must be given unless a url_maker is specified to __init__, in which\n case this parameter is ignored.\n\n symbol_first\n String to be displayed as the text for the $link_first link above.\n\n Default: '<<' (<<)\n\n symbol_last\n String to be displayed as the text for the $link_last link above.\n\n Default: '>>' (>>)\n\n symbol_previous\n String to be displayed as the text for the $link_previous link above.\n\n Default: '<' (<)\n\n symbol_next\n String to be displayed as the text for the $link_next link above.\n\n Default: '>' (>)\n\n separator:\n String that is used to separate page links/numbers in the above range of pages.\n\n Default: ' '\n\n show_if_single_page:\n if True the navigator will be shown even if there is only one page.\n\n Default: False\n\n link_attr (optional)\n A dictionary of attributes that get added to A-HREF links pointing to other pages. 
Can\n be used to define a CSS style or class to customize the look of links.\n\n Example: { 'style':'border: 1px solid green' }\n Example: { 'class':'pager_link' }\n\n curpage_attr (optional)\n A dictionary of attributes that get added to the current page number in the pager (which\n is obviously not a link). If this dictionary is not empty then the elements will be\n wrapped in a SPAN tag with the given attributes.\n\n Example: { 'style':'border: 3px solid blue' }\n Example: { 'class':'pager_curpage' }\n\n dotdot_attr (optional)\n A dictionary of attributes that get added to the '..' string in the pager (which is\n obviously not a link). If this dictionary is not empty then the elements will be wrapped\n in a SPAN tag with the given attributes.\n\n Example: { 'style':'color: #808080' }\n Example: { 'class':'pager_dotdot' }\n\n link_tag (optional)\n A callable that accepts single argument `page` (page link information)\n and generates string with html that represents the link for specific page.\n Page objects are supplied from `link_map()` so the keys are the same.\n\n\n \"\"\"\n link_attr = link_attr or {}\n curpage_attr = curpage_attr or {}\n dotdot_attr = dotdot_attr or {}\n self.curpage_attr = curpage_attr\n self.separator = separator\n self.link_attr = link_attr\n self.dotdot_attr = dotdot_attr\n self.url = url\n self.link_tag = link_tag or self.default_link_tag\n\n # Don't show navigator if there is no more than one page\n if self.page_count == 0 or (self.page_count == 1 and not show_if_single_page):\n return \"\"\n\n regex_res = re.search(r\"~(\\d+)~\", format)\n if regex_res:\n radius = regex_res.group(1)\n else:\n radius = 2\n radius = int(radius)\n self.radius = radius\n link_map = self.link_map(\n format=format,\n url=url,\n show_if_single_page=show_if_single_page,\n separator=separator,\n symbol_first=symbol_first,\n symbol_last=symbol_last,\n symbol_previous=symbol_previous,\n symbol_next=symbol_next,\n link_attr=link_attr,\n 
curpage_attr=curpage_attr,\n dotdot_attr=dotdot_attr,\n )\n links_markup = self._range(link_map, radius)\n\n # Replace ~...~ in token format by range of pages\n result = re.sub(r\"~(\\d+)~\", links_markup, format)\n\n link_first = (\n self.page > self.first_page and self.link_tag(link_map[\"first_page\"]) or \"\"\n )\n link_last = (\n self.page < self.last_page and self.link_tag(link_map[\"last_page\"]) or \"\"\n )\n link_previous = (\n self.previous_page and self.link_tag(link_map[\"previous_page\"]) or \"\"\n )\n link_next = self.next_page and self.link_tag(link_map[\"next_page\"]) or \"\"\n # Interpolate '$' variables\n result = Template(result).safe_substitute(\n {\n \"first_page\": self.first_page,\n \"last_page\": self.last_page,\n \"page\": self.page,\n \"page_count\": self.page_count,\n \"items_per_page\": self.items_per_page,\n \"first_item\": self.first_item,\n \"last_item\": self.last_item,\n \"item_count\": self.item_count,\n \"link_first\": link_first,\n \"link_last\": link_last,\n \"link_previous\": link_previous,\n \"link_next\": link_next,\n }\n )\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a map with links to other pages.", "response": "def link_map(\n self,\n format=\"~2~\",\n url=None,\n show_if_single_page=False,\n separator=\" \",\n symbol_first=\"<<\",\n symbol_last=\">>\",\n symbol_previous=\"<\",\n symbol_next=\">\",\n link_attr=None,\n curpage_attr=None,\n dotdot_attr=None,\n ):\n \"\"\" Return map with links to other pages if default pager() function is not suitable solution.\n format:\n Format string that defines how the pager would be normally rendered rendered. 
Uses same arguments as pager()\n method, but returns a simple dictionary in form of:\n {'current_page': {'attrs': {},\n 'href': 'http://example.org/foo/page=1',\n 'value': 1},\n 'first_page': {'attrs': {},\n 'href': 'http://example.org/foo/page=1',\n 'type': 'first_page',\n 'value': 1},\n 'last_page': {'attrs': {},\n 'href': 'http://example.org/foo/page=8',\n 'type': 'last_page',\n 'value': 8},\n 'next_page': {'attrs': {}, 'href': 'HREF', 'type': 'next_page', 'value': 2},\n 'previous_page': None,\n 'range_pages': [{'attrs': {},\n 'href': 'http://example.org/foo/page=1',\n 'type': 'current_page',\n 'value': 1},\n ....\n {'attrs': {}, 'href': '', 'type': 'span', 'value': '..'}]}\n\n\n The string can contain the following $-tokens that are substituted by the\n string.Template module:\n\n - $first_page: number of first reachable page\n - $last_page: number of last reachable page\n - $page: number of currently selected page\n - $page_count: number of reachable pages\n - $items_per_page: maximal number of items per page\n - $first_item: index of first item on the current page\n - $last_item: index of last item on the current page\n - $item_count: total number of items\n - $link_first: link to first page (unless this is first page)\n - $link_last: link to last page (unless this is last page)\n - $link_previous: link to previous page (unless this is first page)\n - $link_next: link to next page (unless this is last page)\n\n To render a range of pages the token '~3~' can be used. The\n number sets the radius of pages around the current page.\n Example for a range with radius 3:\n\n '1 .. 5 6 7 [8] 9 10 11 .. 50'\n\n Default: '~2~'\n\n url\n The URL that page links will point to. 
Make sure it contains the string\n $page which will be replaced by the actual page number.\n Must be given unless a url_maker is specified to __init__, in which\n case this parameter is ignored.\n\n symbol_first\n String to be displayed as the text for the $link_first link above.\n\n Default: '<<' (<<)\n\n symbol_last\n String to be displayed as the text for the $link_last link above.\n\n Default: '>>' (>>)\n\n symbol_previous\n String to be displayed as the text for the $link_previous link above.\n\n Default: '<' (<)\n\n symbol_next\n String to be displayed as the text for the $link_next link above.\n\n Default: '>' (>)\n\n separator:\n String that is used to separate page links/numbers in the above range of pages.\n\n Default: ' '\n\n show_if_single_page:\n if True the navigator will be shown even if there is only one page.\n\n Default: False\n\n link_attr (optional)\n A dictionary of attributes that get added to A-HREF links pointing to other pages. Can\n be used to define a CSS style or class to customize the look of links.\n\n Example: { 'style':'border: 1px solid green' }\n Example: { 'class':'pager_link' }\n\n curpage_attr (optional)\n A dictionary of attributes that get added to the current page number in the pager (which\n is obviously not a link). If this dictionary is not empty then the elements will be\n wrapped in a SPAN tag with the given attributes.\n\n Example: { 'style':'border: 3px solid blue' }\n Example: { 'class':'pager_curpage' }\n\n dotdot_attr (optional)\n A dictionary of attributes that get added to the '..' string in the pager (which is\n obviously not a link). 
If this dictionary is not empty then the elements will be wrapped\n in a SPAN tag with the given attributes.\n\n Example: { 'style':'color: #808080' }\n Example: { 'class':'pager_dotdot' }\n \"\"\"\n link_attr = link_attr or {}\n curpage_attr = curpage_attr or {}\n dotdot_attr = dotdot_attr or {}\n self.curpage_attr = curpage_attr\n self.separator = separator\n self.link_attr = link_attr\n self.dotdot_attr = dotdot_attr\n self.url = url\n\n regex_res = re.search(r\"~(\\d+)~\", format)\n if regex_res:\n radius = regex_res.group(1)\n else:\n radius = 2\n radius = int(radius)\n self.radius = radius\n\n # Compute the first and last page number within the radius\n # e.g. '1 .. 5 6 [7] 8 9 .. 12'\n # -> leftmost_page = 5\n # -> rightmost_page = 9\n leftmost_page = (\n max(self.first_page, (self.page - radius)) if self.first_page else None\n )\n rightmost_page = (\n min(self.last_page, (self.page + radius)) if self.last_page else None\n )\n\n nav_items = {\n \"first_page\": None,\n \"last_page\": None,\n \"previous_page\": None,\n \"next_page\": None,\n \"current_page\": None,\n \"radius\": self.radius,\n \"range_pages\": [],\n }\n\n if leftmost_page is None or rightmost_page is None:\n return nav_items\n\n nav_items[\"first_page\"] = {\n \"type\": \"first_page\",\n \"value\": unicode(symbol_first),\n \"attrs\": self.link_attr,\n \"number\": self.first_page,\n \"href\": self.url_maker(self.first_page),\n }\n\n # Insert dots if there are pages between the first page\n # and the currently displayed page range\n if leftmost_page - self.first_page > 1:\n # Wrap in a SPAN tag if dotdot_attr is set\n nav_items[\"range_pages\"].append(\n {\n \"type\": \"span\",\n \"value\": \"..\",\n \"attrs\": self.dotdot_attr,\n \"href\": \"\",\n \"number\": None,\n }\n )\n\n for thispage in range(leftmost_page, rightmost_page + 1):\n # Highlight the current page number and do not use a link\n if thispage == self.page:\n # Wrap in a SPAN tag if curpage_attr is set\n 
nav_items[\"range_pages\"].append(\n {\n \"type\": \"current_page\",\n \"value\": unicode(thispage),\n \"number\": thispage,\n \"attrs\": self.curpage_attr,\n \"href\": self.url_maker(thispage),\n }\n )\n nav_items[\"current_page\"] = {\n \"value\": thispage,\n \"attrs\": self.curpage_attr,\n \"type\": \"current_page\",\n \"href\": self.url_maker(thispage),\n }\n # Otherwise create just a link to that page\n else:\n nav_items[\"range_pages\"].append(\n {\n \"type\": \"page\",\n \"value\": unicode(thispage),\n \"number\": thispage,\n \"attrs\": self.link_attr,\n \"href\": self.url_maker(thispage),\n }\n )\n\n # Insert dots if there are pages between the displayed\n # page numbers and the end of the page range\n if self.last_page - rightmost_page > 1:\n # Wrap in a SPAN tag if dotdot_attr is set\n nav_items[\"range_pages\"].append(\n {\n \"type\": \"span\",\n \"value\": \"..\",\n \"attrs\": self.dotdot_attr,\n \"href\": \"\",\n \"number\": None,\n }\n )\n\n # Create a link to the very last page (unless we are on the last\n # page or there would be no need to insert '..' 
spacers)\n nav_items[\"last_page\"] = {\n \"type\": \"last_page\",\n \"value\": unicode(symbol_last),\n \"attrs\": self.link_attr,\n \"href\": self.url_maker(self.last_page),\n \"number\": self.last_page,\n }\n\n nav_items[\"previous_page\"] = {\n \"type\": \"previous_page\",\n \"value\": unicode(symbol_previous),\n \"attrs\": self.link_attr,\n \"number\": self.previous_page or self.first_page,\n \"href\": self.url_maker(self.previous_page or self.first_page),\n }\n\n nav_items[\"next_page\"] = {\n \"type\": \"next_page\",\n \"value\": unicode(symbol_next),\n \"attrs\": self.link_attr,\n \"number\": self.next_page or self.last_page,\n \"href\": self.url_maker(self.next_page or self.last_page),\n }\n\n return nav_items"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a range of linked pages to substiture placeholder in pattern", "response": "def _range(self, link_map, radius):\n \"\"\"\n Return range of linked pages to substiture placeholder in pattern\n \"\"\"\n\n leftmost_page = max(self.first_page, (self.page - radius))\n rightmost_page = min(self.last_page, (self.page + radius))\n\n nav_items = []\n # Create a link to the first page (unless we are on the first page\n # or there would be no need to insert '..' spacers)\n if self.page != self.first_page and self.first_page < leftmost_page:\n page = link_map[\"first_page\"].copy()\n page[\"value\"] = unicode(page[\"number\"])\n nav_items.append(self.link_tag(page))\n\n for item in link_map[\"range_pages\"]:\n nav_items.append(self.link_tag(item))\n\n # Create a link to the very last page (unless we are on the last\n # page or there would be no need to insert '..' 
spacers)\n if self.page != self.last_page and rightmost_page < self.last_page:\n page = link_map[\"last_page\"].copy()\n page[\"value\"] = unicode(page[\"number\"])\n nav_items.append(self.link_tag(page))\n\n return self.separator.join(nav_items)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef default_link_tag(item):\n text = item[\"value\"]\n target_url = item[\"href\"]\n\n if not item[\"href\"] or item[\"type\"] in (\"span\", \"current_page\"):\n if item[\"attrs\"]:\n text = make_html_tag(\"span\", **item[\"attrs\"]) + text + \"</span>\"\n return text\n\n return make_html_tag(\"a\", text=text, href=target_url, **item[\"attrs\"])", "response": "Create an A - HREF tag that points to another page."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef tag(self, corpus, tokenize=True):\n '''Tags a string `corpus`.'''\n # Assume untokenized corpus has \\n between sentences and ' ' between words\n s_split = SentenceTokenizer().tokenize if tokenize else lambda t: t.split('\\n')\n w_split = WordTokenizer().tokenize if tokenize else lambda s: s.split()\n def split_sents(corpus):\n for s in s_split(corpus):\n yield w_split(s)\n\n prev, prev2 = self.START\n tokens = []\n for words in split_sents(corpus):\n context = self.START + [self._normalize(w) for w in words] + self.END\n for i, word in enumerate(words):\n tag = self.tagdict.get(word)\n if not tag:\n features = self._get_features(i, word, context, prev, prev2)\n tag = self.model.predict(features)\n tokens.append((word, tag))\n prev2 = prev\n prev = tag\n return tokens", "response": "Tags a string corpus."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef train(self, sentences, save_loc=None, nr_iter=5):\n '''Train a model from sentences, and save it at ``save_loc``. 
``nr_iter``\n controls the number of Perceptron training iterations.\n\n :param sentences: A list of (words, tags) tuples.\n :param save_loc: If not ``None``, saves a pickled model in this location.\n :param nr_iter: Number of training iterations.\n '''\n self._make_tagdict(sentences)\n self.model.classes = self.classes\n for iter_ in range(nr_iter):\n c = 0\n n = 0\n for words, tags in sentences:\n prev, prev2 = self.START\n context = self.START + [self._normalize(w) for w in words] \\\n + self.END\n for i, word in enumerate(words):\n guess = self.tagdict.get(word)\n if not guess:\n feats = self._get_features(i, word, context, prev, prev2)\n guess = self.model.predict(feats)\n self.model.update(tags[i], guess, feats)\n prev2 = prev\n prev = guess\n c += guess == tags[i]\n n += 1\n random.shuffle(sentences)\n logging.info(\"Iter {0}: {1}/{2}={3}\".format(iter_, c, n, _pc(c, n)))\n self.model.average_weights()\n # Pickle as a binary file\n if save_loc is not None:\n pickle.dump((self.model.weights, self.tagdict, self.classes),\n open(save_loc, 'wb'), -1)\n return None", "response": "Train a model from sentences and save it at save_loc."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef load(self, loc):\n '''Load a pickled model.'''\n try:\n w_td_c = pickle.load(open(loc, 'rb'))\n except IOError:\n msg = (\"Missing trontagger.pickle file.\")\n raise MissingCorpusError(msg)\n self.model.weights, self.tagdict, self.classes = w_td_c\n self.model.classes = self.classes\n return None", "response": "Load a pickled model."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _normalize(self, word):\n '''Normalization used in pre-processing.\n\n - All words are lower cased\n - Digits in the range 1800-2100 are represented as !YEAR;\n - Other digits are represented as !DIGITS\n\n :rtype: str\n '''\n if '-' in word and word[0] != '-':\n return '!HYPHEN'\n elif word.isdigit() and len(word) 
== 4:\n return '!YEAR'\n elif word[0].isdigit():\n return '!DIGITS'\n else:\n return word.lower()", "response": "Normalization used in pre-processing."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_features(self, i, word, context, prev, prev2):\n '''Map tokens into a feature representation, implemented as a\n {hashable: float} dict. If the features change, a new model must be\n trained.\n '''\n def add(name, *args):\n features[' '.join((name,) + tuple(args))] += 1\n\n i += len(self.START)\n features = defaultdict(int)\n # It's useful to have a constant feature, which acts sort of like a prior\n add('bias')\n add('i suffix', word[-3:])\n add('i pref1', word[0])\n add('i-1 tag', prev)\n add('i-2 tag', prev2)\n add('i tag+i-2 tag', prev, prev2)\n add('i word', context[i])\n add('i-1 tag+i word', prev, context[i])\n add('i-1 word', context[i-1])\n add('i-1 suffix', context[i-1][-3:])\n add('i-2 word', context[i-2])\n add('i+1 word', context[i+1])\n add('i+1 suffix', context[i+1][-3:])\n add('i+2 word', context[i+2])\n return features", "response": "Map tokens into a feature representation, implemented as a {hashable: float} dict."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmake a tag dictionary for single-tag words.", "response": "def _make_tagdict(self, sentences):\n '''Make a tag dictionary for single-tag words.'''\n counts = defaultdict(lambda: defaultdict(int))\n for words, tags in sentences:\n for word, tag in zip(words, tags):\n counts[word][tag] += 1\n self.classes.add(tag)\n freq_thresh = 20\n ambiguity_thresh = 0.97\n for word, tag_freqs in counts.items():\n tag, mode = max(tag_freqs.items(), key=lambda item: item[1])\n n = sum(tag_freqs.values())\n # Don't add rare words to the tag dictionary\n # Only add quite unambiguous words\n if n >= freq_thresh and (float(mode) / n) >= ambiguity_thresh:\n self.tagdict[word] = tag"} {"SOURCE": "codesearchnet", "instruction": "Write 
a Python 3 script for\ntraining an averaged perceptron model on the given examples.", "response": "def train(nr_iter, examples):\n '''Return an averaged perceptron model trained on ``examples`` for\n ``nr_iter`` iterations.\n '''\n model = AveragedPerceptron()\n for i in range(nr_iter):\n random.shuffle(examples)\n for features, class_ in examples:\n scores = model.predict(features)\n guess, score = max(scores.items(), key=lambda i: i[1])\n if guess != class_:\n model.update(class_, guess, features)\n model.average_weights()\n return model"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef encode(self, V, P, X, CC, seqNum, M, PT, SSRC, payload):\n timestamp = int(time())\n header = bytearray(HEADER_SIZE)\n # Fill the header bytearray with RTP header fields\n # ...\n header[0] = header[0] | V << 6; \n header[0] = header[0] | P << 5; \n header[0] = header[0] | X << 4; \n header[0] = header[0] | CC; \n header[1] = header[1] | M << 7; \n header[1] = header[1] | PT; \n header[2] = (seqNum >> 8) & 0xFF; \n header[3] = seqNum & 0xFF; \n header[4] = (timestamp >> 24) & 0xFF; \n header[5] = (timestamp >> 16) & 0xFF;\n header[6] = (timestamp >> 8) & 0xFF;\n header[7] = timestamp & 0xFF;\n header[8] = (SSRC >> 24) & 0xFF; \n header[9] = (SSRC >> 16) & 0xFF;\n header[10] = (SSRC >> 8) & 0xFF;\n header[11] = SSRC & 0xFF\n\n self.header = header\n \n # Get the payload\n # ...\n self.payload = payload", "response": "Encode the RTP packet with header fields and payload."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndecoding the RTP packet.", "response": "def decode(self, byteStream):\n \"\"\"Decode the RTP packet.\"\"\"\n self.header = bytearray(byteStream[:HEADER_SIZE])\n self.payload = byteStream[HEADER_SIZE:]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef timestamp(self):\n timestamp = self.header[4] << 24 | 
self.header[5] << 16 | self.header[6] << 8 | self.header[7]\n return int(timestamp)", "response": "Return the timestamp of the record."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef preview_stream(stream):\n # together with waitkeys later, helps to close the video window effectively\n _cv2.startWindowThread()\n \n for frame in stream.frame_generator():\n if frame is not None:\n _cv2.imshow('Video', frame)\n _cv2.moveWindow('Video',5,5)\n else:\n break\n key = _cv2.waitKey(1) & 0xFF\n if key == ord(\"q\"):\n break\n _cv2.waitKey(1)\n _cv2.destroyAllWindows()\n _cv2.waitKey(1)", "response": "Display a stream in an OpenCV window until q key is pressed."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a list of available resources from the RTSP - DESCRIBE response.", "response": "def get_resources(connection):\n \"\"\" Do an RTSP-DESCRIBE request, then parse out available resources from the response \"\"\"\n resp = connection.describe(verbose=False).split('\\r\\n')\n resources = [x.replace('a=control:','') for x in resp if (x.find('control:') != -1 and x[-1] != '*' )]\n return resources"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef setup(self,resource_path = \"track1\"):\n self.rtsp_seq += 1\n\n uri = '/'.join([s.strip('/') for s in (self.server,self.stream_path,resource_path)])\n \n ## example: SETUP rtsp://example.com/foo/bar/baz.rm RTSP/2.0\n request = f\"SETUP rtsp://{uri} {RTSP_VER}\\r\\n\"\n request+= f\"CSeq: {self.rtsp_seq}\\r\\n\"\n request+= f\"Transport: RTP/AVP;unicast;client_port=5000-5001\\r\\n\"#,RTP/SAVPF,RTP/AVP;unicast;client_port=5000-5001,RTP/AVP/UDP;unicast;client_port=5000-5001\\r\\n\"\n #request+= f\"Accept-Ranges: npt, smpte, clock\\r\\n\"\n request+= f\"User-Agent: python\\r\\n\\r\\n\"\n\n reply = self.sendMessage(request.encode('UTF-8'))\n return reply", "response": "Send SETUP 
request for the given resource path."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfetch a single frame using FFMPEG. Convert to PIL Image. Slow.", "response": "def fetch_image(self,rtsp_server_uri = _source,timeout_secs = 15):\n \"\"\" Fetch a single frame using FFMPEG. Convert to PIL Image. Slow. \"\"\"\n \n self._check_ffmpeg()\n cmd = \"ffmpeg -rtsp_transport tcp -i {} -loglevel quiet -frames 1 -f image2pipe -\".format(rtsp_server_uri)\n\n #stdout = _sp.check_output(ffmpeg_cmd,timeout = timeout_secs)\n with _sp.Popen(cmd, shell=True, stdout=_sp.PIPE) as process:\n try:\n stdout,stderr = process.communicate(timeout=timeout_secs)\n except _sp.TimeoutExpired as e:\n process.kill()\n raise TimeoutError(\"Connection to {} timed out\".format(rtsp_server_uri),e)\n \n return _Image.open(_io.BytesIO(stdout))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nexecute a command in the current thread.", "response": "def exec_cmd(rtsp,cmd):\n '''\u6839\u636e\u547d\u4ee4\u6267\u884c\u64cd\u4f5c'''\n global CUR_RANGE,CUR_SCALE\n if cmd in ('exit','teardown'):\n rtsp.do_teardown()\n elif cmd == 'pause':\n CUR_SCALE = 1; CUR_RANGE = 'npt=now-'\n rtsp.do_pause()\n elif cmd == 'help':\n PRINT(play_ctrl_help())\n elif cmd == 'forward':\n if CUR_SCALE < 0: CUR_SCALE = 1\n CUR_SCALE *= 2; CUR_RANGE = 'npt=now-'\n elif cmd == 'backward':\n if CUR_SCALE > 0: CUR_SCALE = -1\n CUR_SCALE *= 2; CUR_RANGE = 'npt=now-'\n elif cmd == 'begin':\n CUR_SCALE = 1; CUR_RANGE = 'npt=beginning-'\n elif cmd == 'live':\n CUR_SCALE = 1; CUR_RANGE = 'npt=end-'\n elif cmd.startswith('play'):\n m = re.search(r'range[:\\s]+(?P<range>[^\\s]+)',cmd)\n if m: CUR_RANGE = m.group('range')\n m = re.search(r'scale[:\\s]+(?P<scale>[\\d\\.]+)',cmd)\n if m: CUR_SCALE = int(m.group('scale'))\n\n if cmd not in ('pause','exit','teardown','help'):\n rtsp.do_play(CUR_RANGE,CUR_SCALE)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function 
for\nparsing url and return ip port target", "response": "def _parse_url(self,url):\n '''\u89e3\u6790url,\u8fd4\u56de(ip,port,target)\u4e09\u5143\u7ec4'''\n (ip,port,target) = ('',DEFAULT_SERVER_PORT,'')\n m = re.match(r'[rtspRTSP:/]+(?P<ip>(\\d{1,3}\\.){3}\\d{1,3})(:(?P<port>\\d+))?(?P<target>.*)',url)\n if m is not None:\n ip = m.group('ip')\n port = int(m.group('port'))\n target = m.group('target')\n #PRINT('ip: %s, port: %d, target: %s'%(ip,port,target), GREEN)\n return ip,port,target"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconnect to the server and return the ID.", "response": "def _connect_server(self):\n '''\u8fde\u63a5\u670d\u52a1\u5668,\u5efa\u7acbsocket'''\n try:\n self._sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM)\n self._sock.connect((self._server_ip,self._server_port))\n #PRINT('Connect [%s:%d] success!'%(self._server_ip,self._server_port), GREEN)\n except socket.error, e:\n sys.stderr.write('ERROR: %s[%s:%d]'%(e,self._server_ip,self._server_port))\n traceback.print_exc()\n sys.exit(1)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupdating the DEST_IP global variable.", "response": "def _update_dest_ip(self):\n '''\u5982\u679c\u672a\u6307\u5b9aDEST_IP,\u9ed8\u8ba4\u4e0eRTSP\u4f7f\u7528\u76f8\u540cIP'''\n global DEST_IP\n if not DEST_IP:\n DEST_IP = self._sock.getsockname()[0]\n PRINT('DEST_IP: %s\\n'%DEST_IP, CYAN)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreceives a message from the socket.", "response": "def recv_msg(self):\n '''\u6536\u53d6\u4e00\u4e2a\u5b8c\u6574\u54cd\u5e94\u6d88\u606f\u6216ANNOUNCE\u901a\u77e5\u6d88\u606f'''\n try:\n while True:\n if HEADER_END_STR in self._recv_buf: break\n more = self._sock.recv(2048)\n if not more: break\n self._recv_buf += more\n except socket.error, e:\n PRINT('Receive data error: %s'%e,RED)\n sys.exit(-1)\n\n msg = ''\n if self._recv_buf:\n (msg,self._recv_buf) = 
self._recv_buf.split(HEADER_END_STR,1)\n content_length = self._get_content_length(msg)\n msg += HEADER_END_STR + self._recv_buf[:content_length]\n self._recv_buf = self._recv_buf[content_length:]\n return msg"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the content length from the message.", "response": "def _get_content_length(self,msg):\n '''\u4ece\u6d88\u606f\u4e2d\u89e3\u6790Content-length'''\n m = re.search(r'[Cc]ontent-length:\\s?(?P<len>\\d+)',msg,re.S)\n return (m and int(m.group('len'))) or 0"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nprocess a response from the server.", "response": "def _process_response(self,msg):\n '''\u5904\u7406\u54cd\u5e94\u6d88\u606f'''\n status,headers,body = self._parse_response(msg)\n rsp_cseq = int(headers['cseq'])\n if self._cseq_map[rsp_cseq] != 'GET_PARAMETER':\n PRINT(self._get_time_str() + '\\n' + msg)\n if status == 302:\n self.location = headers['location']\n if status != 200:\n self.do_teardown()\n if self._cseq_map[rsp_cseq] == 'DESCRIBE':\n track_id_str = self._parse_track_id(body)\n self.do_setup(track_id_str)\n elif self._cseq_map[rsp_cseq] == 'SETUP':\n self._session_id = headers['session']\n self.do_play(CUR_RANGE,CUR_SCALE)\n self.send_heart_beat_msg()\n elif self._cseq_map[rsp_cseq] == 'PLAY':\n self.playing = True"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprocess an announcing message.", "response": "def _process_announce(self,msg):\n '''\u5904\u7406ANNOUNCE\u901a\u77e5\u6d88\u606f'''\n global CUR_RANGE,CUR_SCALE\n PRINT(msg)\n headers = self._parse_header_params(msg.splitlines()[1:])\n x_notice_val = int(headers['x-notice'])\n if x_notice_val in (X_NOTICE_EOS,X_NOTICE_BOS):\n CUR_SCALE = 1\n self.do_play(CUR_RANGE,CUR_SCALE)\n elif x_notice_val == X_NOTICE_CLOSE:\n self.do_teardown()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function 
does\ndef _parse_response(self,msg):\n '''\u89e3\u6790\u54cd\u5e94\u6d88\u606f'''\n header,body = msg.split(HEADER_END_STR)[:2]\n header_lines = header.splitlines()\n version,status = header_lines[0].split(None,2)[:2]\n headers = self._parse_header_params(header_lines[1:])\n return int(status),headers,body", "response": "Parse the response from the server."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses the header parameter lines and return a dict of the header keys and values.", "response": "def _parse_header_params(self,header_param_lines):\n '''\u89e3\u6790\u5934\u90e8\u53c2\u6570'''\n headers = {}\n for line in header_param_lines:\n if line.strip(): # \u8df3\u8fc7\u7a7a\u884c\n key,val = line.split(':', 1)\n headers[key.lower()] = val.strip()\n return headers"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses the track ID from the SDP", "response": "def _parse_track_id(self,sdp):\n '''\u4ecesdp\u4e2d\u89e3\u6790trackID=2\u5f62\u5f0f\u7684\u5b57\u7b26\u4e32'''\n m = re.search(r'a=control:(?P<trackid>[\\w=\\d]+)',sdp,re.S)\n return (m and m.group('trackid')) or ''"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _sendmsg(self,method,url,headers):\n '''\u53d1\u9001\u6d88\u606f'''\n msg = '%s %s %s'%(method,url,RTSP_VERSION)\n headers['User-Agent'] = DEFAULT_USERAGENT\n cseq = self._next_seq()\n self._cseq_map[cseq] = method\n headers['CSeq'] = str(cseq)\n if self._session_id: headers['Session'] = self._session_id\n for (k,v) in headers.items():\n msg += LINE_SPLIT_STR + '%s: %s'%(k,str(v))\n msg += HEADER_END_STR # End headers\n if method != 'GET_PARAMETER' or 'x-RetransSeq' in headers:\n PRINT(self._get_time_str() + LINE_SPLIT_STR + msg)\n try:\n self._sock.send(msg)\n except socket.error, e:\n PRINT('Send msg error: %s'%e, RED)", "response": "Send a message to the server."} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 script for\ngetting the transport type", "response": "def _get_transport_type(self):\n '''\u83b7\u53d6SETUP\u65f6\u9700\u8981\u7684Transport\u5b57\u7b26\u4e32\u53c2\u6570'''\n transport_str = ''\n ip_type = 'unicast' #if IPAddress(DEST_IP).is_unicast() else 'multicast'\n for t in TRANSPORT_TYPE_LIST:\n if t not in TRANSPORT_TYPE_MAP:\n PRINT('Error param: %s'%t,RED)\n sys.exit(1)\n if t.endswith('tcp'):\n transport_str += TRANSPORT_TYPE_MAP[t]%ip_type\n else:\n transport_str += TRANSPORT_TYPE_MAP[t]%(ip_type,DEST_IP,CLIENT_PORT_RANGE)\n return transport_str"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsend a heartbeat message to the server", "response": "def send_heart_beat_msg(self):\n '''\u5b9a\u65f6\u53d1\u9001GET_PARAMETER\u6d88\u606f\u4fdd\u6d3b'''\n if self.running:\n self.do_get_parameter()\n threading.Timer(HEARTBEAT_INTERVAL, self.send_heart_beat_msg).start()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef updateMovie(self, imageFile):\n try:\n photo = ImageTk.PhotoImage(Image.open(imageFile)) #stuck here !!!!!!\n except:\n print(\"photo error\")\n print('-'*60)\n traceback.print_exc(file=sys.stdout)\n print('-'*60)\n\n self.label.configure(image = photo, height=288)\n self.label.image = photo", "response": "Update the image file as video frame in the GUI."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconnect to the Server.", "response": "def connectToServer(self):\n \"\"\"Connect to the Server. Start a new RTSP/TCP session.\"\"\"\n self.rtspSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n try:\n self.rtspSocket.connect((self.serverAddr, self.serverPort))\n except:\n tkinter.messagebox.showwarning('Connection Failed', 'Connection to \\'%s\\' failed.' 
%self.serverAddr)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sendRtspRequest(self, requestCode):\n #-------------\n # TO COMPLETE\n #-------------\n\n # Setup request\n if requestCode == self.SETUP and self.state == self.INIT:\n threading.Thread(target=self.recvRtspReply).start()\n # Update RTSP sequence number.\n # ...\n self.rtspSeq = 1\n\n # Write the RTSP request to be sent.\n # request = ...\n request = \"SETUP \" + str(self.fileName) + \"\\n\" + str(self.rtspSeq) + \"\\n\" + \" RTSP/1.0 RTP/UDP \" + str(self.rtpPort)\n\n self.rtspSocket.send(request)\n # Keep track of the sent request.\n # self.requestSent = ...\n self.requestSent = self.SETUP\n\n # Play request\n elif requestCode == self.PLAY and self.state == self.READY:\n # Update RTSP sequence number.\n # ...\n self.rtspSeq = self.rtspSeq + 1\n # Write the RTSP request to be sent.\n # request = ...\n request = \"PLAY \" + \"\\n\" + str(self.rtspSeq)\n\n self.rtspSocket.send(request)\n print('-'*60 + \"\\nPLAY request sent to Server...\\n\" + '-'*60)\n # Keep track of the sent request.\n # self.requestSent = ...\n self.requestSent = self.PLAY\n\n # Pause request\n elif requestCode == self.PAUSE and self.state == self.PLAYING:\n # Update RTSP sequence number.\n # ...\n self.rtspSeq = self.rtspSeq + 1\n # Write the RTSP request to be sent.\n # request = ...\n request = \"PAUSE \" + \"\\n\" + str(self.rtspSeq)\n self.rtspSocket.send(request)\n print('-'*60 + \"\\nPAUSE request sent to Server...\\n\" + '-'*60)\n # Keep track of the sent request.\n # self.requestSent = ...\n self.requestSent = self.PAUSE\n\n # Resume request\n\n\n # Teardown request\n elif requestCode == self.TEARDOWN and not self.state == self.INIT:\n # Update RTSP sequence number.\n # ...\n self.rtspSeq = self.rtspSeq + 1\n # Write the RTSP request to be sent.\n # request = ...\n request = \"TEARDOWN \" + \"\\n\" + str(self.rtspSeq)\n self.rtspSocket.send(request)\n 
print('-'*60 + \"\\nTEARDOWN request sent to Server...\\n\" + '-'*60)\n # Keep track of the sent request.\n # self.requestSent = ...\n self.requestSent = self.TEARDOWN\n\n else:\n return", "response": "Send RTSP request to the server."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreceive RTSP reply from the server.", "response": "def recvRtspReply(self):\n \"\"\"Receive RTSP reply from the server.\"\"\"\n while True:\n reply = self.rtspSocket.recv(1024)\n\n if reply:\n self.parseRtspReply(reply)\n\n # Close the RTSP socket upon requesting Teardown\n if self.requestSent == self.TEARDOWN:\n self.rtspSocket.shutdown(socket.SHUT_RDWR)\n self.rtspSocket.close()\n break"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parseRtspReply(self, data):\n print(\"Parsing Received Rtsp data...\")\n\n \"\"\"Parse the RTSP reply from the server.\"\"\"\n lines = data.split('\\n')\n seqNum = int(lines[1].split(' ')[1])\n\n # Process only if the server reply's sequence number is the same as the request's\n if seqNum == self.rtspSeq:\n session = int(lines[2].split(' ')[1])\n # New RTSP session ID\n if self.sessionId == 0:\n self.sessionId = session\n\n # Process only if the session ID is the same\n if self.sessionId == session:\n if int(lines[0].split(' ')[1]) == 200:\n if self.requestSent == self.SETUP:\n #-------------\n # TO COMPLETE\n #-------------\n # Update RTSP state.\n print(\"Updating RTSP state...\")\n # self.state = ...\n self.state = self.READY\n # Open RTP port.\n #self.openRtpPort()\n print(\"Setting Up RtpPort for Video Stream\")\n self.openRtpPort()\n\n elif self.requestSent == self.PLAY:\n self.state = self.PLAYING\n print('-'*60 + \"\\nClient is PLAYING...\\n\" + '-'*60)\n elif self.requestSent == self.PAUSE:\n self.state = self.READY\n\n # The play thread exits. 
A new thread is created on resume.\n self.playEvent.set()\n\n elif self.requestSent == self.TEARDOWN:\n # self.state = ...\n\n # Flag the teardownAcked to close the socket.\n self.teardownAcked = 1", "response": "Parse the RTSP reply from the server."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nopening a datagram socket binded to a specified port.", "response": "def openRtpPort(self):\n \"\"\"Open RTP socket binded to a specified port.\"\"\"\n #-------------\n # TO COMPLETE\n #-------------\n # Create a new datagram socket to receive RTP packets from the server\n # self.rtpSocket = ...\n\n\n # Set the timeout value of the socket to 0.5sec\n # ...\n self.rtpSocket.settimeout(0.5)\n# try:\n # Bind the socket to the address using the RTP port given by the client user\n # ...\n# except:\n# tkMessageBox.showwarning('Unable to Bind', 'Unable to bind PORT=%d' %self.rtpPort)\n\n try:\n #self.rtpSocket.connect(self.serverAddr,self.rtpPort)\n self.rtpSocket.bind((self.serverAddr,self.rtpPort)) # WATCH OUT THE ADDRESS FORMAT!!!!! rtpPort# should be bigger than 1024\n #self.rtpSocket.listen(5)\n print(\"Bind RtpPort Success\")\n\n except:\n tkinter.messagebox.showwarning('Connection Failed', 'Connection to rtpServer failed...')"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nread the image from the camera.", "response": "def read(self):\n \"\"\"https://picamera.readthedocs.io/en/release-1.13/recipes1.html#capturing-to-a-pil-image\"\"\"\n stream = BytesIO()\n self.cam.capture(stream, format='png')\n # \"Rewind\" the stream to the beginning so we can read its content\n stream.seek(0)\n return Image.open(stream)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nblock function. Opens OpenCV window to display stream.", "response": "def preview(self):\n \"\"\" Blocking function. Opens OpenCV window to display stream. 
\"\"\"\n win_name = 'Camera'\n cv2.namedWindow(win_name, cv2.WINDOW_AUTOSIZE)\n cv2.moveWindow(win_name,20,20)\n self.open()\n while(self.isOpened()):\n cv2.imshow(win_name,self._stream.read()[1])\n if cv2.waitKey(25) & 0xFF == ord('q'):\n break\n cv2.waitKey()\n cv2.destroyAllWindows()\n cv2.waitKey()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef createWidgets(self):\n # Create Setup button\n self.setup = Button(self.master, width=20, padx=3, pady=3)\n self.setup[\"text\"] = \"Setup\"\n self.setup[\"command\"] = self.setupMovie\n self.setup.grid(row=1, column=0, padx=2, pady=2)\n\n # Create Play button\n self.start = Button(self.master, width=20, padx=3, pady=3)\n self.start[\"text\"] = \"Play\"\n self.start[\"command\"] = self.playMovie\n self.start.grid(row=1, column=1, padx=2, pady=2)\n\n # Create Pause button\n self.pause = Button(self.master, width=20, padx=3, pady=3)\n self.pause[\"text\"] = \"Pause\"\n self.pause[\"command\"] = self.pauseMovie\n self.pause.grid(row=1, column=2, padx=2, pady=2)\n\n # Create Teardown button\n self.teardown = Button(self.master, width=20, padx=3, pady=3)\n self.teardown[\"text\"] = \"Teardown\"\n self.teardown[\"command\"] = self.exitClient\n self.teardown.grid(row=1, column=3, padx=2, pady=2)\n\n # Create a label to display the movie\n self.label = Label(self.master, height=19)\n self.label.grid(row=0, column=0, columnspan=4, sticky=W+E+N+S, padx=5, pady=5)", "response": "Create the GUI for the movie\n ."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef writeFrame(self, data):\n\n cachename = CACHE_FILE_NAME + str(self.sessionId) + CACHE_FILE_EXT\n\n try:\n file = open(cachename, \"wb\")\n except:\n print \"file open error\"\n\n try:\n file.write(data)\n except:\n print \"file write error\"\n\n file.close()\n\n return cachename", "response": "Write the received frame to a temp image file. 
Return the image file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef connectToServer(self):\n self.rtspSocket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n try:\n self.rtspSocket.connect((self.serverAddr, self.serverPort))\n except:\n tkMessageBox.showwarning('Connection Failed', 'Connection to \\'%s\\' failed.' %self.serverAddr)", "response": "Connect to the Server."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef openRtpPort(self):\n #-------------\n # TO COMPLETE\n #-------------\n # Create a new datagram socket to receive RTP packets from the server\n # self.rtpSocket = ...\n\n\n # Set the timeout value of the socket to 0.5sec\n # ...\n self.rtpSocket.settimeout(0.5)\n# try:\n # Bind the socket to the address using the RTP port given by the client user\n # ...\n# except:\n# tkMessageBox.showwarning('Unable to Bind', 'Unable to bind PORT=%d' %self.rtpPort)\n\n try:\n #self.rtpSocket.connect(self.serverAddr,self.rtpPort)\n self.rtpSocket.bind((self.serverAddr,self.rtpPort)) # WATCH OUT THE ADDRESS FORMAT!!!!! 
rtpPort# should be bigger than 1024\n #self.rtpSocket.listen(5)\n print \"Bind RtpPort Success\"\n\n except:\n tkMessageBox.showwarning('Connection Failed', 'Connection to rtpServer failed...')", "response": "Open a datagram socket binded to a specified port."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef handler(self):\n self.pauseMovie()\n if tkMessageBox.askokcancel(\"Quit?\", \"Are you sure you want to quit?\"):\n self.exitClient()\n else: # When the user presses cancel, resume playing.\n #self.playMovie()\n print \"Playing Movie\"\n threading.Thread(target=self.listenRtp).start()\n #self.playEvent = threading.Event()\n #self.playEvent.clear()\n self.sendRtspRequest(self.PLAY)", "response": "Handler on explicitly closing the GUI window."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef encode(self, version, padding, extension, cc, seqnum, marker, pt, ssrc, payload):\n\n\t\ttimestamp = int(time())\n\t\tprint(\"timestamp: \" + str(timestamp))\n\t\tself.header = bytearray(HEADER_SIZE)\n\t\t#--------------\n\t\t# TO COMPLETE\n\t\t#--------------\n\t\t# Fill the header bytearray with RTP header fields\n\n\t\t#RTP-version filed(V), must set to 2\n\t\t#padding(P),extension(X),number of contributing sources(CC) and marker(M) fields all set to zero in this lab\n\n\t\t#Because we have no other contributing sources(field CC == 0),the CSRC-field does not exist\n\t\t#Thus the length of the packet header is therefore 12 bytes\n\n\n\t\t\t#Above all done in ServerWorker.py\n\n\t\t# ...\n\t\t#header[] =\n\n\t\t#header[0] = version + padding + extension + cc + seqnum + marker + pt + ssrc\n\t\tself.header[0] = version << 6\n\t\tself.header[0] = self.header[0] | padding << 5\n\t\tself.header[0] = self.header[0] | extension << 4\n\t\tself.header[0] = self.header[0] | cc\n\t\tself.header[1] = marker << 7\n\t\tself.header[1] = self.header[1] | pt\n\n\t\tself.header[2] = seqnum >> 
8\n\t\tself.header[3] = seqnum\n\n\t\tself.header[4] = (timestamp >> 24) & 0xFF\n\t\tself.header[5] = (timestamp >> 16) & 0xFF\n\t\tself.header[6] = (timestamp >> 8) & 0xFF\n\t\tself.header[7] = timestamp & 0xFF\n\n\t\tself.header[8] = ssrc >> 24\n\t\tself.header[9] = ssrc >> 16\n\t\tself.header[10] = ssrc >> 8\n\t\tself.header[11] = ssrc\n\n\n\t\t# Get the payload from the argument\n\t\t# self.payload = ...\n\t\tself.payload = payload", "response": "Encode the RTP packet with header fields and payload."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nskip through the input iterator without recording the content.", "response": "def skip(self):\n \"\"\"\n Eats through the input iterator without recording the content.\n \"\"\"\n for pos, element in self.element_iter:\n tag, class_attr = _tag_and_class_attr(element)\n if tag == \"div\" and \"thread\" in class_attr and pos == \"end\":\n break"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nparse an HTML element and stores the relevant attributes.", "response": "def _process_element(self, pos, e):\n \"\"\"\n Parses an incoming HTML element/node for data.\n\n pos -- the part of the element being parsed\n (start/end)\n e -- the element being parsed\n \"\"\"\n tag, class_attr = _tag_and_class_attr(e)\n\n start_of_message = tag == 'div' and class_attr == 'message' and pos == 'start'\n end_of_thread = tag == 'div' and 'thread' in class_attr and pos == 'end'\n\n if start_of_message and not self.messages_started:\n self.messages_started = True\n elif tag == \"span\" and pos == \"end\":\n if \"user\" in class_attr:\n self.current_sender = self.name_resolver.resolve(e.text)\n elif \"meta\" in class_attr:\n self.current_timestamp =\\\n parse_timestamp(e.text, self.use_utc, self.timezone_hints)\n elif tag == 'p' and pos == 'end':\n # This is only necessary because of accidental double
    nesting on\n # Facebook's end. Clearly, QA and testing is one of Facebook's strengths ;)\n if not self.current_text:\n self.current_text = e.text.strip() if e.text else ''\n elif tag == 'img' and pos == 'start':\n self.current_text = '(image reference: {})'.format(e.attrib['src'])\n elif (start_of_message or end_of_thread) and self.messages_started:\n if not self.current_timestamp:\n # This is the typical error when the new Facebook format is\n # used with the legacy parser.\n raise UnsuitableParserError\n if not self.current_sender:\n if not self.no_sender_warning_status:\n sys.stderr.write(\n \"\\rWARNING: The sender was missing in one or more parsed messages. \"\n \"This is an error on Facebook's end that unfortunately cannot be \"\n \"recovered from. Some or all messages in the output may show the \"\n \"sender as 'Unknown' within each thread.\\n\")\n self.no_sender_warning_status = True\n self.current_sender = \"Unknown\"\n\n cm = ChatMessage(timestamp=self.current_timestamp,\n sender=self.current_sender,\n content=self.current_text or '',\n seq_num=self.seq_num)\n self.messages += [cm]\n\n self.seq_num -= 1\n self.current_sender, self.current_timestamp, self.current_text = None, None, None\n\n return end_of_thread"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef should_record_thread(self, participants):\n if not self.thread_filter:\n return True\n if len(participants) != len(self.thread_filter):\n return False\n participants = [[p.lower()] + p.lower().split(\" \")\n for p in participants]\n matches = defaultdict(set)\n for e, p in enumerate(participants):\n for f in self.thread_filter:\n if f in p:\n matches[f].add(e)\n matched = set()\n for f in matches:\n if len(matches[f]) == 0:\n return False\n matched |= matches[f]\n return len(matched) == len(participants)", "response": "Determines if the thread should be parsed based on the filter given."} {"SOURCE": "codesearchnet", "instruction": 
"Given the following Python 3 function, write the documentation\ndef parse_thread(self, participants, element_iter, require_flush):\n\n # Very rarely threads may lack information on who the\n # participants are. We will consider those threads corrupted\n # and skip them.\n participants_text = _truncate(', '.join(participants), 60)\n if participants:\n skip_thread = not self.should_record_thread(participants)\n participants_text = yellow(\"[%s]\" % participants_text)\n else:\n participants_text = \"unknown participants\"\n skip_thread = True\n if skip_thread:\n line = \"\\rSkipping chat thread with %s...\" % \\\n yellow(participants_text)\n else:\n participants_key = \", \".join(participants)\n if participants_key in self.chat_threads:\n thread_current_len = len(self.chat_threads[participants_key])\n line = \"\\rContinuing chat thread with %s %s...\" \\\n % (yellow(participants_text), magenta(\"<@%d messages>\" % thread_current_len))\n else:\n line = \"\\rDiscovered chat thread with %s...\" \\\n % yellow(participants_text)\n if self.progress_output:\n sys.stderr.write(line.ljust(self.last_line_len))\n sys.stderr.flush()\n self.last_line_len = len(line)\n\n parser = ChatThreadParser(\n element_iter, self.timezone_hints, self.use_utc, self.name_resolver,\n self.no_sender_warning, self.seq_num)\n\n if skip_thread:\n if require_flush:\n parser.skip()\n else:\n self.no_sender_warning, thread = parser.parse(participants)\n return thread", "response": "Parses a thread with appropriate CLI feedback."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nclear the output of the current process.", "response": "def _clear_output(self):\n \"\"\"\n Clears progress output (if any) that was written to the screen.\n \"\"\"\n # If progress output was being written, clear it from the screen.\n if self.progress_output:\n sys.stderr.write(\"\\r\".ljust(self.last_line_len))\n sys.stderr.write(\"\\r\")\n sys.stderr.flush()"} {"SOURCE": "codesearchnet", 
"instruction": "How would you implement a function in Python 3 that\nparses the HTML content as a stream.", "response": "def parse_impl(self):\n \"\"\"\n Parses the HTML content as a stream. This is far less memory\n intensive than loading the entire HTML file into memory, like\n BeautifulSoup does.\n \"\"\"\n\n # Cast to str to ensure not unicode under Python 2, as the parser\n # doesn't like that.\n parser = XMLParser(encoding=str('UTF-8'))\n element_iter = ET.iterparse(self.handle, events=(\"start\", \"end\"), parser=parser)\n for pos, element in element_iter:\n tag, class_attr = _tag_and_class_attr(element)\n if tag == \"h1\" and pos == \"end\":\n if not self.user:\n self.user = element.text.strip()\n elif tag == \"div\" and \"thread\" in class_attr and pos == \"start\":\n participants = self.parse_participants(element)\n thread = self.parse_thread(participants, element_iter, True)\n self.save_thread(thread)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef messages(path, thread, fmt, nocolor, timezones, utc, noprogress, resolve, directory):\n with colorize_output(nocolor):\n try:\n chat_history = _process_history(\n path=path, thread=thread, timezones=timezones,\n utc=utc, noprogress=noprogress, resolve=resolve)\n except ProcessingFailure:\n return\n if directory:\n set_all_color(enabled=False)\n write(fmt, chat_history, directory or sys.stdout)", "response": "Writes the chat history to a file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef stats(path, fmt, nocolor, timezones, utc, noprogress, most_common, resolve, length):\n with colorize_output(nocolor):\n try:\n chat_history = _process_history(\n path=path, thread='', timezones=timezones,\n utc=utc, noprogress=noprogress, resolve=resolve)\n except ProcessingFailure:\n return\n statistics = ChatHistoryStatistics(\n chat_history, most_common=None if most_common < 0 else most_common)\n if fmt == 
'text':\n statistics.write_text(sys.stdout, -1 if length < 0 else length)\n elif fmt == 'json':\n statistics.write_json(sys.stdout)\n elif fmt == 'pretty-json':\n statistics.write_json(sys.stdout, pretty=True)\n elif fmt == 'yaml':\n statistics.write_yaml(sys.stdout)", "response": "Analysis of Facebook chat history."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset sys. stdout and sys. stderr to stream.", "response": "def set_stream_color(stream, disabled):\n \"\"\"\n Remember what our original streams were so that we\n can colorize them separately, which colorama doesn't\n seem to natively support.\n \"\"\"\n original_stdout = sys.stdout\n original_stderr = sys.stderr\n\n init(strip=disabled)\n\n if stream != original_stdout:\n sys.stdout = original_stdout\n sys.stderr = BinaryStreamWrapper(stream, sys.stderr)\n if stream != original_stderr:\n sys.stderr = original_stderr\n sys.stdout = BinaryStreamWrapper(stream, sys.stdout)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _manual_lookup(self, facebook_id, facebook_id_string):\n resp = self._session.get(\n 'https://www.facebook.com/%s' % facebook_id,\n allow_redirects=True, timeout=10\n )\n # No point in trying to get this using BeautifulSoup. 
The HTML here\n # is the very epitome of what it is to be invalid...\n m = _MANUAL_NAME_MATCHER.search(resp.text)\n if m:\n name = m.group(1)\n else:\n name = facebook_id_string\n self._cached_profiles[facebook_id] = name\n return name", "response": "Get the user s name from Facebook."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_timestamp(raw_timestamp, use_utc, hints):\n global FACEBOOK_TIMESTAMP_FORMATS\n timestamp_string, offset = raw_timestamp.rsplit(\" \", 1)\n if \"UTC+\" in offset or \"UTC-\" in offset:\n if offset[3] == '-':\n offset = [-1 * int(x) for x in offset[4:].split(':')]\n else:\n offset = [int(x) for x in offset[4:].split(':')]\n else:\n offset_hint = hints.get(offset, None)\n if not offset_hint:\n if offset not in TIMEZONE_MAP:\n raise UnexpectedTimeFormatError(raw_timestamp)\n elif len(TIMEZONE_MAP[offset]) > 1:\n raise AmbiguousTimeZoneError(offset, TIMEZONE_MAP[offset])\n offset = list(TIMEZONE_MAP[offset].keys())[0][:2]\n else:\n offset = offset_hint\n\n if len(offset) == 1:\n # Timezones without minute offset may be formatted\n # as UTC+X (e.g UTC+8)\n offset += [0]\n\n delta = dt_timedelta(hours=offset[0], minutes=offset[1])\n\n # Facebook changes the format depending on whether the user is using\n # 12-hour or 24-hour clock settings.\n for number, date_parser in enumerate(_LOCALIZED_DATE_PARSERS):\n timestamp = date_parser.parse(timestamp_string)\n if timestamp is None:\n continue\n # Re-orient the list to ensure that the one that worked is tried first next time.\n if number > 0:\n del FACEBOOK_TIMESTAMP_FORMATS[number]\n FACEBOOK_TIMESTAMP_FORMATS = [date_parser] + FACEBOOK_TIMESTAMP_FORMATS\n break\n else:\n raise UnexpectedTimeFormatError(raw_timestamp)\n if use_utc:\n timestamp -= delta\n return timestamp.replace(tzinfo=pytz.utc)\n else:\n return timestamp.replace(tzinfo=TzInfoByOffset(delta))", "response": "Parses a string into a FACEBOOK_TIMESTAMP_FORMATS 
dictionary."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nregister an exporter to use for a specific type of object.", "response": "def register_exporter(name, base_class):\n \"\"\"\n Register an exporter to use for a :class:`Part `,\n :class:`Assembly `, or both\n (with :class:`Component `).\n\n Registration is necessary to use with\n :meth:`Component.exporter() `.\n\n :param name: name (or 'key') of exporter\n :type name: :class:`str`\n :param base_class: class of :class:`Component ` to export\n :type base_class: :class:`type`\n\n .. doctest::\n\n >>> from cqparts import Part\n >>> from cqparts.codec import Exporter, register_exporter\n\n >>> @register_exporter('my_type', Part)\n ... class MyExporter(Exporter):\n ... def __call__(self, filename='out.mytype'):\n ... print(\"export %r to %s\" % (self.obj, filename))\n\n >>> from cqparts_misc.basic.primatives import Sphere\n >>> thing = Sphere(radius=5)\n >>> thing.exporter('my_type')('some-file.mytype')\n export to some-file.mytype\n\n \"\"\"\n # Verify params\n if not isinstance(name, str) or (not name):\n raise TypeError(\"invalid name: %r\" % name)\n if not issubclass(base_class, Component):\n raise TypeError(\"invalid base_class: %r, must be a %r subclass\" % (base_class, Component))\n\n def decorator(cls):\n # --- Verify\n # Can only be registered once\n if base_class in exporter_index[name]:\n raise TypeError(\"'%s' exporter type %r has already been registered\" % (\n name, base_class\n ))\n\n # Verify class hierarchy will not conflict\n # (so you can't have an exporter for a Component, and a Part. 
must be\n # an Assembly, and a Part, respectively)\n for key in exporter_index[name].keys():\n if issubclass(key, base_class) or issubclass(base_class, key):\n raise TypeError(\"'%s' exporter type %r is in conflict with %r\" % (\n name, base_class, key,\n ))\n\n # --- Index\n exporter_index[name][base_class] = cls\n\n return cls\n return decorator"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget an exporter instance for the given object", "response": "def get_exporter(obj, name):\n \"\"\"\n Get an exporter for the\n\n :param obj: object to export\n :type obj: :class:`Component `\n :param name: registered name of exporter\n :type name: :class:`str`\n :return: an exporter instance of the given type\n :rtype: :class:`Exporter`\n :raises TypeError: if exporter cannot be found\n \"\"\"\n if name not in exporter_index:\n raise TypeError(\n (\"exporter type '%s' is not registered: \" % name) +\n (\"registered types: %r\" % sorted(exporter_index.keys()))\n )\n\n for base_class in exporter_index[name]:\n if isinstance(obj, base_class):\n return exporter_index[name][base_class](obj)\n\n raise TypeError(\"exporter type '%s' for a %r is not registered\" % (\n name, type(obj)\n ))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_importer(cls, name):\n if name not in importer_index:\n raise TypeError(\n (\"importer type '%s' is not registered: \" % name) +\n (\"registered types: %r\" % sorted(importer_index.keys()))\n )\n\n for base_class in importer_index[name]:\n if issubclass(cls, base_class):\n return importer_index[name][base_class](cls)\n\n raise TypeError(\"importer type '%s' for a %r is not registered\" % (\n name, cls\n ))", "response": "Returns an importer instance of the given type."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing pagenated list of products", "response": "def parse(self, response):\n \"\"\"Parse pagenated 
list of products\"\"\"\n\n # Check if page is out of range\n no_more_products = re.search(\n r'No matching products were found',\n response.css('div.paged-results').extract_first(),\n flags=re.I\n )\n if no_more_products:\n pass # no more pages to populate, stop scraping\n\n else:\n # Scrape products list\n for product in response.css('article.product-list__item'):\n product_url = product.css('a::attr(\"href\")').extract_first()\n yield response.follow(product_url, self.parse_detail)\n\n (base, params) = split_url(response.url)\n params.update({'page': int(params.get('page', '1')) + 1})\n next_page_url = join_url(base, params)\n self.logger.info(next_page_url)\n yield response.follow(next_page_url, self.parse)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses individual product s detail", "response": "def parse_detail(self, response):\n \"\"\"Parse individual product's detail\"\"\"\n # Product Information (a start)\n product_data = {\n 'url': response.url,\n 'name': response.css('div.page-title h1::text').extract_first(),\n }\n\n # Inventory Number\n inventory_number = re.search(\n r'(?P\\d+)$',\n response.css('span.product-in::text').extract_first(),\n ).group('inv_num')\n product_data.update({'in': inventory_number})\n\n # Specifications (arbitrary key:value pairs)\n specs_table = response.css('#tab-specs dl')\n for row in specs_table.css('div.spec-row'):\n keys = row.css('dt::text').extract()\n values = row.css('dd::text').extract()\n product_data.update({\n key: value\n for (key, value) in zip(keys, values)\n })\n\n self.logger.info(product_data['name'])\n yield product_data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef solver(constraints, coord_sys=None):\n\n if coord_sys is None:\n coord_sys = CoordSystem() # default\n\n # Verify list contains constraints\n for constraint in constraints:\n if not isinstance(constraint, Constraint):\n raise 
ValueError(\"{!r} is not a constraint\".format(constraint))\n\n solved_count = 0\n\n indexed = list(constraints)\n\n # Continue running solver until no solution is found\n while indexed:\n indexes_solved = []\n for (i, constraint) in enumerate(indexed):\n\n # Fixed\n if isinstance(constraint, Fixed):\n indexes_solved.append(i)\n yield (\n constraint.mate.component,\n coord_sys + constraint.world_coords + (CoordSystem() - constraint.mate.local_coords)\n )\n\n # Coincident\n elif isinstance(constraint, Coincident):\n try:\n relative_to = constraint.to_mate.world_coords\n except ValueError:\n relative_to = None\n\n if relative_to is not None:\n indexes_solved.append(i)\n # note: relative_to are world coordinates; adding coord_sys is not necessary\n yield (\n constraint.mate.component,\n relative_to + (CoordSystem() - constraint.mate.local_coords)\n )\n\n if not indexes_solved: # no solutions found\n # At least 1 solution must be found each iteration.\n # if not, we'll just be looping forever.\n break\n else:\n # remove constraints from indexed list (so they're not solved twice)\n for j in reversed(indexes_solved):\n del indexed[j]\n\n if indexed:\n raise ValueError(\"not all constraints could be solved\")", "response": "Solve constraints. 
Solutions pair :class:`Constraint `\n instances with their suitable :class:`CoordSystem `\n world coordinates.\n\n :param constraints: constraints to solve\n :type constraints: iterable of :class:`Constraint `\n :param coord_sys: coordinate system to offset solutions (default: no offset)\n :type coord_sys: :class:`CoordSystem `\n\n :return: generator of (:class:`Component `,\n :class:`CoordSystem `) tuples."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef solid(solid_in):\n if isinstance(solid_in, cadquery.Solid):\n return solid_in\n elif isinstance(solid_in, cadquery.CQ):\n return solid_in.val()\n\n raise CastingError(\n \"Cannot cast object type of {!r} to a solid\".format(solid_in)\n )", "response": "Casts a solid instance to cadquery. Solid instance."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef vector(vect_in):\n if isinstance(vect_in, cadquery.Vector):\n return vect_in\n elif isinstance(vect_in, (tuple, list)):\n return cadquery.Vector(vect_in)\n\n raise CastingError(\n \"Cannot cast object type of {!r} to a vector\".format(vect_in)\n )", "response": "Converts a single object to a cadquery. 
Vector instance."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating solid to cut the fastener s into the head of the fastener s material.", "response": "def make_cutter(self):\n \"\"\"\n Create solid to subtract from material to make way for the fastener's\n head (just the head)\n \"\"\"\n return cadquery.Workplane('XY') \\\n .circle(self.access_diameter / 2) \\\n .extrude(self.access_height)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nverifying version consistency and return the correct classifier", "response": "def version_classifier(version_str):\n \"\"\"\n Verify version consistency:\n version number must correspond to the correct \"Development Status\" classifier\n :raises: ValueError if error found, but ideally this function does nothing\n \"\"\"\n # cast version\n version = LooseVersion(version_str)\n\n for (test_ver, classifier) in reversed(sorted(VERSION_CLASSIFIER_MAP, key=lambda x: x[0])):\n if version >= test_ver:\n return classifier\n\n raise ValueError(\"could not find valid 'Development Status' classifier for v{}\".format(version_str))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntopping of the stator", "response": "def mate_top(self):\n \" top of the stator\"\n return Mate(self, CoordSystem(\n origin=(0, 0, self.length/2),\n xDir=(0, 1, 0),\n normal=(0, 0, 1)\n ))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbottoms of the stator", "response": "def mate_bottom(self):\n \" bottom of the stator\"\n return Mate(self, CoordSystem(\n origin=(0, 0, -self.length/2),\n xDir=(1, 0, 0),\n normal=(0, 0, -1)\n ))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the names of all class parameters.", "response": "def class_param_names(cls, hidden=True):\n \"\"\"\n Return the names of all class parameters.\n\n :param hidden: if ``False``, excludes parameters with a ``_`` 
prefix.\n :type hidden: :class:`bool`\n :return: set of parameter names\n :rtype: :class:`set`\n \"\"\"\n param_names = set(\n k for (k, v) in cls.__dict__.items()\n if isinstance(v, Parameter)\n )\n for parent in cls.__bases__:\n if hasattr(parent, 'class_param_names'):\n param_names |= parent.class_param_names(hidden=hidden)\n\n if not hidden:\n param_names = set(n for n in param_names if not n.startswith('_'))\n return param_names"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a dictionary of all class parameters and their values.", "response": "def class_params(cls, hidden=True):\n \"\"\"\n Gets all class parameters, and their :class:`Parameter` instances.\n\n :return: dict of the form: ``{: , ... }``\n :rtype: :class:`dict`\n\n .. note::\n\n The :class:`Parameter` instances returned do not have a value, only\n a default value.\n\n To get a list of an **instance's** parameters and values, use\n :meth:`params` instead.\n\n \"\"\"\n param_names = cls.class_param_names(hidden=hidden)\n return dict(\n (name, getattr(cls, name))\n for name in param_names\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets all instance parameters and their cast values.", "response": "def params(self, hidden=True):\n \"\"\"\n Gets all instance parameters, and their *cast* values.\n\n :return: dict of the form: ``{: , ... 
}``\n :rtype: :class:`dict`\n \"\"\"\n param_names = self.class_param_names(hidden=hidden)\n return dict(\n (name, getattr(self, name))\n for name in param_names\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef serialize(self):\n return {\n # Encode library information (future-proofing)\n 'lib': {\n 'name': 'cqparts',\n 'version': __version__,\n },\n # class & name record, for automated import when decoding\n 'class': {\n 'module': type(self).__module__,\n 'name': type(self).__name__,\n },\n 'params': self.serialize_parameters(),\n }", "response": "Serialize a cqpart object to an object that can be serialized by the json module."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the parameter data in its serialized form.", "response": "def serialize_parameters(self):\n \"\"\"\n Get the parameter data in its serialized form.\n\n Data is serialized by each parameter's :meth:`Parameter.serialize`\n implementation.\n\n :return: serialized parameter data in the form: ``{: , ...}``\n :rtype: :class:`dict`\n \"\"\"\n\n # Get parameter data\n class_params = self.class_params()\n instance_params = self.params()\n\n # Serialize each parameter\n serialized = {}\n for name in class_params.keys():\n param = class_params[name]\n value = instance_params[name]\n serialized[name] = param.serialize(value)\n return serialized"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates instance from serial data", "response": "def deserialize(data):\n \"\"\"\n Create instance from serial data\n \"\"\"\n # Import module & get class\n try:\n module = import_module(data.get('class').get('module'))\n cls = getattr(module, data.get('class').get('name'))\n except ImportError:\n raise ImportError(\"No module named: %r\" % data.get('class').get('module'))\n except AttributeError:\n raise ImportError(\"module %r does not contain class %r\" % (\n 
data.get('class').get('module'),\n data.get('class').get('name')\n ))\n\n # Deserialize parameters\n class_params = cls.class_params(hidden=True)\n params = dict(\n (name, class_params[name].deserialize(value))\n for (name, value) in data.get('params').items()\n )\n\n # Instantiate new instance\n return cls(**params)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nclassing decorator to add :class:`Part ` or :class:`Assembly ` to the ``cqparts`` search index: .. testcode:: import cqparts from cqparts.params import * # Created Part or Assembly @cqparts.search.register( type='motor', current_class='dc', part_number='ABC123X', ) class SomeMotor(cqparts.Assembly): shaft_diam = PositiveFloat(5) def make_components(self): return {} # build assembly content motor_class = cqparts.search.find(part_number='ABC123X') motor = motor_class(shaft_diam=6.0) Then use :meth:`find` &/or :meth:`search` to instantiate it. .. warning:: Multiple classes *can* be registered with identical criteria, but should be avoided. If multiple classes share the same criteria, :meth:`find` will never yield the part you want. Try adding unique criteria, such as *make*, *model*, *part number*, *library name*, &/or *author*. To avoid this, learn more in :ref:`tutorial_component-index`.", "response": "def register(**criteria):\n \"\"\"\n class decorator to add :class:`Part ` or\n :class:`Assembly ` to the ``cqparts`` search index:\n\n .. testcode::\n\n import cqparts\n from cqparts.params import *\n\n # Created Part or Assembly\n @cqparts.search.register(\n type='motor',\n current_class='dc',\n part_number='ABC123X',\n )\n class SomeMotor(cqparts.Assembly):\n shaft_diam = PositiveFloat(5)\n def make_components(self):\n return {} # build assembly content\n\n motor_class = cqparts.search.find(part_number='ABC123X')\n motor = motor_class(shaft_diam=6.0)\n\n Then use :meth:`find` &/or :meth:`search` to instantiate it.\n\n .. 
warning::\n\n Multiple classes *can* be registered with identical criteria, but\n should be avoided.\n\n If multiple classes share the same criteria, :meth:`find` will never\n yield the part you want.\n\n Try adding unique criteria, such as *make*, *model*, *part number*,\n *library name*, &/or *author*.\n\n To avoid this, learn more in :ref:`tutorial_component-index`.\n \"\"\"\n\n def inner(cls):\n # Add class references to search index\n class_list.add(cls)\n for (category, value) in criteria.items():\n index[category][value].add(cls)\n\n # Retain search criteria\n _entry = dict((k, set([v])) for (k, v) in criteria.items())\n if cls not in class_criteria:\n class_criteria[cls] = _entry\n else:\n for key in _entry.keys():\n class_criteria[cls][key] = class_criteria[cls].get(key, set()) | _entry[key]\n\n # Return class\n return cls\n\n return inner"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsearches registered component classes matching the given criteria.", "response": "def search(**criteria):\n \"\"\"\n Search registered *component* classes matching the given criteria.\n\n :param criteria: search criteria of the form: ``a='1', b='x'``\n :return: parts registered with the given criteria\n :rtype: :class:`set`\n\n Will return an empty :class:`set` if nothing is found.\n\n ::\n\n from cqparts.search import search\n import cqparts_motors # example of a 3rd party lib\n\n # Get all DC motor classes\n dc_motors = search(type='motor', current_class='dc')\n\n # For more complex queries:\n air_cooled = search(cooling='air')\n non_aircooled_dcmotors = dc_motors - air_cooled\n # will be all DC motors that aren't air-cooled\n \"\"\"\n # Find all parts that match the given criteria\n results = copy(class_list) # start with full list\n for (category, value) in criteria.items():\n results &= index[category][value]\n\n return results"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 
that\nfinds a single component class with the given criteria.", "response": "def find(**criteria):\n \"\"\"\n Find a single *component* class with the given criteria.\n\n Finds classes indexed with :meth:`register`\n\n :raises SearchMultipleFoundError: if more than one result found\n :raises SearchNoneFoundError: if nothing found\n\n ::\n\n from cqparts.search import find\n import cqparts_motors # example of a 3rd party lib\n\n # get a specific motor class\n motor_class = find(type='motor', part_number='ABC123X')\n motor = motor_class(shaft_diameter=6.0)\n \"\"\"\n # Find all parts that match the given criteria\n results = search(**criteria)\n\n # error cases\n if len(results) > 1:\n raise SearchMultipleFoundError(\"%i results found\" % len(results))\n elif not results:\n raise SearchNoneFoundError(\"%i results found\" % len(results))\n\n # return found Part|Assembly class\n return results.pop()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwrapping a function to always call with the given ``common`` named parameters. :property common: criteria common to your function call :return: decorator function :rtype: :class:`function` .. doctest:: >>> import cqparts >>> from cqparts.search import register, search, find >>> from cqparts.search import common_criteria >>> # Somebody elses (boring) library may register with... >>> @register(a='one', b='two') ... class BoringThing(cqparts.Part): ... pass >>> # But your library is awesome; only registering with unique criteria... >>> lib_criteria = { ... 'author': 'your_name', ... 'libname': 'awesome_things', ... } >>> awesome_register = common_criteria(**lib_criteria)(register) >>> @awesome_register(a='one', b='two') # identical to BoringThing ... class AwesomeThing(cqparts.Part): ... 
pass >>> # So lets try a search >>> len(search(a='one', b='two')) # doctest: +SKIP 2 >>> # oops, that returned both classes >>> # To narrow it down, we add something unique: >>> len(search(a='one', b='two', libname='awesome_things')) # finds only yours # doctest: +SKIP 1 >>> # or, we could use common_criteria again... >>> awesome_search = common_criteria(**lib_criteria)(search) >>> awesome_find = common_criteria(**lib_criteria)(find) >>> len(awesome_search(a='one', b='two')) # doctest: +SKIP 1 >>> awesome_find(a='one', b='two').__name__ 'AwesomeThing' A good universal way to apply unique criteria is with .. testcode:: import cadquery, cqparts from cqparts.search import register, common_criteria _register = common_criteria(module=__name__)(register) @_register(shape='cube', scale='unit') class Cube(cqparts.Part): # just an example... def make(self): return cadquery.Workplane('XY').box(1, 1, 1)", "response": "def common_criteria(**common):\n \"\"\"\n Wrap a function to always call with the given ``common`` named parameters.\n\n :property common: criteria common to your function call\n :return: decorator function\n :rtype: :class:`function`\n\n .. doctest::\n\n >>> import cqparts\n >>> from cqparts.search import register, search, find\n >>> from cqparts.search import common_criteria\n\n >>> # Somebody elses (boring) library may register with...\n >>> @register(a='one', b='two')\n ... class BoringThing(cqparts.Part):\n ... pass\n\n >>> # But your library is awesome; only registering with unique criteria...\n >>> lib_criteria = {\n ... 'author': 'your_name',\n ... 'libname': 'awesome_things',\n ... }\n\n >>> awesome_register = common_criteria(**lib_criteria)(register)\n\n >>> @awesome_register(a='one', b='two') # identical to BoringThing\n ... class AwesomeThing(cqparts.Part):\n ... 
pass\n\n >>> # So lets try a search\n >>> len(search(a='one', b='two')) # doctest: +SKIP\n 2\n >>> # oops, that returned both classes\n\n >>> # To narrow it down, we add something unique:\n >>> len(search(a='one', b='two', libname='awesome_things')) # finds only yours # doctest: +SKIP\n 1\n\n >>> # or, we could use common_criteria again...\n >>> awesome_search = common_criteria(**lib_criteria)(search)\n >>> awesome_find = common_criteria(**lib_criteria)(find)\n >>> len(awesome_search(a='one', b='two')) # doctest: +SKIP\n 1\n >>> awesome_find(a='one', b='two').__name__\n 'AwesomeThing'\n\n A good universal way to apply unique criteria is with\n\n .. testcode::\n\n import cadquery, cqparts\n from cqparts.search import register, common_criteria\n _register = common_criteria(module=__name__)(register)\n\n @_register(shape='cube', scale='unit')\n class Cube(cqparts.Part):\n # just an example...\n def make(self):\n return cadquery.Workplane('XY').box(1, 1, 1)\n\n \"\"\"\n def decorator(func):\n def inner(*args, **kwargs):\n merged_kwargs = copy(common)\n merged_kwargs.update(kwargs)\n return func(*args, **merged_kwargs)\n return inner\n return decorator"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncombines bounding boxes to result in a single BoundBox that encloses all of them.", "response": "def merge_boundboxes(*bb_list):\n \"\"\"\n Combine bounding boxes to result in a single BoundBox that encloses\n all of them.\n\n :param bb_list: List of bounding boxes\n :type bb_list: :class:`list` of :class:`cadquery.BoundBox`\n \"\"\"\n # Verify types\n if not all(isinstance(x, cadquery.BoundBox) for x in bb_list):\n raise TypeError(\n \"parameters must be cadquery.BoundBox instances: {!r}\".format(bb_list)\n )\n\n if len(bb_list) <= 1:\n return bb_list[0] # if only 1, nothing to merge; simply return it\n\n # Find the smallest bounding box to enclose each of those given\n min_params = list(min(*vals) for vals in zip( # minimum for 
each axis\n *((bb.xmin, bb.ymin, bb.zmin) for bb in bb_list)\n ))\n max_params = list(max(*vals) for vals in zip( # maximum for each axis\n *((bb.xmax, bb.ymax, bb.zmax) for bb in bb_list)\n ))\n\n #__import__('ipdb').set_trace()\n\n # Create new object with combined parameters\n WrappedType = type(bb_list[0].wrapped) # assuming they're all the same\n wrapped_bb = WrappedType(*(min_params + max_params))\n return cadquery.BoundBox(wrapped_bb)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a new instance of a new cqterm object with the same attributes as the given plane", "response": "def from_plane(cls, plane):\n \"\"\"\n :param plane: cadquery plane instance to base coordinate system on\n :type plane: :class:`cadquery.Plane`\n :return: duplicate of the given plane, in this class\n :rtype: :class:`CoordSystem`\n\n usage example:\n\n .. doctest::\n\n >>> import cadquery\n >>> from cqparts.utils.geometry import CoordSystem\n >>> obj = cadquery.Workplane('XY').circle(1).extrude(5)\n >>> plane = obj.faces(\">Z\").workplane().plane\n >>> isinstance(plane, cadquery.Plane)\n True\n >>> coord_sys = CoordSystem.from_plane(plane)\n >>> isinstance(coord_sys, CoordSystem)\n True\n >>> coord_sys.origin.z\n 5.0\n \"\"\"\n return cls(\n origin=plane.origin.toTuple(),\n xDir=plane.xDir.toTuple(),\n normal=plane.zDir.toTuple(),\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef from_transform(cls, matrix):\n # Create reference points at origin\n offset = FreeCAD.Vector(0, 0, 0)\n x_vertex = FreeCAD.Vector(1, 0, 0) # vertex along +X-axis\n z_vertex = FreeCAD.Vector(0, 0, 1) # vertex along +Z-axis\n\n # Transform reference points\n offset = matrix.multiply(offset)\n x_vertex = matrix.multiply(x_vertex)\n z_vertex = matrix.multiply(z_vertex)\n\n # Get axis vectors (relative to offset vertex)\n x_axis = x_vertex - offset\n z_axis = z_vertex - offset\n\n # Return new instance\n vect_tuple = lambda 
v: (v.x, v.y, v.z)\n return cls(\n origin=vect_tuple(offset),\n xDir=vect_tuple(x_axis),\n normal=vect_tuple(z_axis),\n )", "response": "r Returns a unit zero offset coordinate system transformed by the given affine transform."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a randomized coordinate system.", "response": "def random(cls, span=1, seed=None):\n \"\"\"\n Creates a randomized coordinate system.\n\n Useful for confirming that an *assembly* does not rely on its\n origin coordinate system to remain intact.\n\n For example, the :class:`CoordSysIndicator` *assembly* aligns 3 boxes\n along each of the :math:`XYZ` axes.\n Positioning it randomly by setting its ``world_coords`` shows that each\n box is always positioned orthogonally to the other two.\n\n .. doctest::\n\n from cqparts_misc.basic.indicators import CoordSysIndicator\n from cqparts.display import display\n from cqparts.utils import CoordSystem\n\n cs = CoordSysIndicator()\n cs.world_coords = CoordSystem.random()\n\n display(cs) # doctest: +SKIP\n\n\n :param span: origin of return will be :math:`\\pm span` per axis\n :param seed: if supplied, return is psudorandom (repeatable)\n :type seed: hashable object\n :return: randomized coordinate system\n :rtype: :class:`CoordSystem`\n \"\"\"\n if seed is not None:\n random.seed(seed)\n\n def rand_vect(min, max):\n return (\n random.uniform(min, max),\n random.uniform(min, max),\n random.uniform(min, max),\n )\n\n while True:\n try:\n return cls(\n origin=rand_vect(-span, span),\n xDir=rand_vect(-1, 1),\n normal=rand_vect(-1, 1),\n )\n except RuntimeError: # Base.FreeCADError inherits from RuntimeError\n # Raised if xDir & normal vectors are parallel.\n # (the chance is very low, but it could happen)\n continue"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert a function to a cqparts. 
Part instance.", "response": "def as_part(func):\n \"\"\"\n Converts a function to a :class:`Part ` instance.\n\n So the conventionally defined *part*::\n\n import cadquery\n from cqparts import Part\n from cqparts.params import Float\n\n class Box(Part):\n x = Float(1)\n y = Float(2)\n z = Float(4)\n def make(self):\n return cadquery.Workplane('XY').box(self.x, self.y, self.z)\n\n box = Box(x=6, y=3, z=1)\n\n May also be written as::\n\n import cadquery\n from cqparts.utils.wrappers import as_part\n\n @as_part\n def make_box(x=1, y=2, z=4):\n return cadquery.Workplane('XY').box(x, y, z)\n\n box = make_box(x=6, y=3, z=1)\n\n In both cases, ``box`` is a :class:`Part ` instance.\n \"\"\"\n from .. import Part\n \n def inner(*args, **kwargs):\n part_class = type(func.__name__, (Part,), {\n 'make': lambda self: func(*args, **kwargs),\n })\n return part_class()\n inner.__doc__ = func.__doc__\n return inner"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef find(self, *args, **kwargs):\n result = self.search(*args, **kwargs)\n\n if len(result) == 0:\n raise SearchNoneFoundError(\"nothing found\")\n elif len(result) > 1:\n raise SearchMultipleFoundError(\"more than one result found\")\n\n return result[0]", "response": "Performs the same action as : meth : search but asserts a single result."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add(self, id, obj, criteria={}, force=False, _check_id=True):\n # Verify component\n if not isinstance(obj, Component):\n raise TypeError(\"can only add(%r), component is a %r\" % (\n Component, type(obj)\n ))\n\n # Serialize object\n obj_data = obj.serialize()\n\n # Add to database\n q = tinydb.Query()\n if (force or _check_id) and self.items.count(q.id == id):\n if force:\n self.items.remove(q.id == id)\n else:\n raise ValueError(\"entry with id '%s' already exists\" % (id))\n\n index = 
self.items.insert({\n 'id': id, # must be unique\n 'criteria': criteria,\n 'obj': obj_data,\n })\n\n return index", "response": "Add a new entry to the catalogue."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef start_point(self):\n edge = self.result.wire().val().Edges()[0]\n return edge.Vertices()[0].Center()", "response": "Start vertex of effect\n\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the coordinate system at the start of the effect.", "response": "def start_coordsys(self):\n \"\"\"\n Coordinate system at start of effect.\n\n All axes are parallel to the original vector evaluation location, with\n the origin moved to this effect's start point.\n\n :return: coordinate system at start of effect\n :rtype: :class:`CoordSys`\n \"\"\"\n coordsys = copy(self.location)\n coordsys.origin = self.start_point\n return coordsys"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef end_coordsys(self):\n coordsys = copy(self.location)\n coordsys.origin = self.end_point\n return coordsys", "response": "Returns the coordinate system at the end of the effect."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the planar distance of origin from self. 
location along Z axis", "response": "def origin_displacement(self):\n \"\"\"\n planar distance of start point from self.location along :math:`-Z` axis\n \"\"\"\n return self.start_point.sub(self.location.origin).dot(-self.location.zDir)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef max_effect_length(self):\n # Method: using each solid's bounding box:\n # - get vector from start to bounding box center\n # - get vector from bounding box center to any corner\n # - add the length of both vectors\n # - return the maximum of these from all solids\n def max_length_iter():\n for part in self.parts:\n if part.local_obj.findSolid():\n bb = part.local_obj.findSolid().BoundingBox()\n yield abs(bb.center - self.location.origin) + (bb.DiagonalLength / 2)\n try:\n return max(max_length_iter())\n except ValueError as e:\n # if iter returns before yielding anything\n return 0", "response": "Returns the maximum length of the longest possible effect vector."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef perform_evaluation(self):\n # Create effect vector (with max length)\n if not self.max_effect_length:\n # no effect is possible, return an empty list\n return []\n edge = cadquery.Edge.makeLine(\n self.location.origin,\n self.location.origin + (self.location.zDir * -(self.max_effect_length + 1)) # +1 to avoid rounding errors\n )\n wire = cadquery.Wire.assembleEdges([edge])\n wp = cadquery.Workplane('XY').newObject([wire])\n\n effect_list = [] # list of self.effect_class instances\n for part in self.parts:\n solid = part.world_obj.translate((0, 0, 0))\n intersection = solid.intersect(copy(wp))\n effect = self.effect_class(\n location=self.location,\n part=part,\n result=intersection,\n )\n if effect:\n effect_list.append(effect)\n\n return sorted(effect_list)", "response": "Compute the effect vector for the given set of parts and return the list of effects on the given 
parts."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndisplays the given component based on the environment it s run from.", "response": "def display(component, **kwargs):\n \"\"\"\n Display the given component based on the environment it's run from.\n See :class:`DisplayEnvironment `\n documentation for more details.\n\n :param component: component to display\n :type component: :class:`Component `\n\n Additional parameters may be used by the chosen\n :class:`DisplayEnvironment `\n \"\"\"\n\n disp_env = get_display_environment()\n if disp_env is None:\n raise LookupError('valid display environment could not be found')\n\n disp_env.display(component, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef render_props(**kwargs):\n # Pop named args\n template = kwargs.pop('template', 'default')\n doc = kwargs.pop('doc', \"render properties\")\n\n params = {}\n\n # Template parameters\n if template in TEMPLATE:\n params.update(TEMPLATE[template])\n\n # override template with any additional params\n params.update(kwargs)\n\n # return parameter instance\n return RenderParam(params, doc=doc)", "response": "Return a valid property for cleaner referencing in cqparts. 
Part "} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\napplying the indentation into a workplane.", "response": "def apply(self, workplane, world_coords=CoordSystem()):\n    \"\"\"\n    Application of screwdrive indentation into a workplane\n    (centred on the given world coordinates).\n\n    :param workplane: workplane with solid to alter\n    :type workplane: :class:`cadquery.Workplane`\n    :param world_coords: coordinate system relative to ``workplane`` to move\n        cutter before it's cut from the ``workplane``\n    :type world_coords: :class:`CoordSystem`\n    \"\"\"\n\n    self.world_coords = world_coords\n    return workplane.cut(self.world_obj)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding the parameter lines to the list of documentation lines that are used in the parametric object.", "response": "def add_parametric_object_params(prepend=False, hide_private=True):\n    \"\"\"\n    Add :class:`ParametricObject ` parameters\n    in a list to the *docstring*.\n\n    This is only intended to be used with *sphinx autodoc*.\n\n    In your *sphinx* ``config.py`` file::\n\n        from cqparts.utils.sphinx import add_parametric_object_params\n        def setup(app):\n            app.connect(\"autodoc-process-docstring\", add_parametric_object_params())\n\n    Then, when documenting your :class:`Part ` or\n    :class:`Assembly ` the\n    :class:`ParametricObject ` parameters\n    will also be documented in the output.\n\n    :param prepend: if True, parameters are added to the beginning of the *docstring*.\n        otherwise, they're appended at the end.\n    :type prepend: :class:`bool`\n    :param hide_private: if True, parameters with a ``_`` prefix are not documented.\n    :type hide_private: :class:`bool`\n    \"\"\"\n\n    from ..params import ParametricObject\n\n    def param_lines(app, obj):\n        params = obj.class_params(hidden=(not hide_private))\n\n        # Header\n        doc_lines = []\n        if params:  # only add a header if it's relevant\n            doc_lines += [\n                \":class:`ParametricObject ` constructor parameters:\",\n                
\"\",\n ]\n\n for (name, param) in sorted(params.items(), key=lambda x: x[0]): # sort by name\n doc_lines.append(':param {name}: {doc}'.format(\n name=name, doc=param._param(),\n ))\n doc_lines.append(':type {name}: {doc}'.format(\n name=name, doc=param._type(),\n ))\n\n return doc_lines\n\n # Conditions for running above `param_lines` function (in order)\n conditions = [ # (all conditions must be met)\n lambda o: type(o) == type,\n lambda o: o is not ParametricObject,\n lambda o: issubclass(o, ParametricObject),\n ]\n\n def callback(app, what, name, obj, options, lines):\n # sphinx callback\n # (this method is what actually gets sent to the sphinx runtime)\n if all(c(obj) for c in conditions):\n new_lines = param_lines(app, obj)\n _add_lines(lines, new_lines, prepend=prepend)\n\n return callback"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding the search index criteria to the output.", "response": "def add_search_index_criteria(prepend=False):\n \"\"\"\n Add the search criteria used when calling :meth:`register() `\n on a :class:`Component ` as a table to the *docstring*.\n\n This is only intended to be used with *sphinx autodoc*.\n\n In your *sphinx* ``config.py`` file::\n\n from cqparts.utils.sphinx import add_search_index_criteria\n def setup(app):\n app.connect(\"autodoc-process-docstring\", add_search_index_criteria())\n\n Then, when documenting your :class:`Part ` or\n :class:`Assembly ` the\n search criteria will also be documented in the output.\n\n :param prepend: if True, table is added to the beginning of the *docstring*.\n otherwise, it's appended at the end.\n :type prepend: :class:`bool`\n \"\"\"\n\n from ..search import class_criteria\n from .. 
import Component\n\n COLUMN_INFO = [\n # (, <width>, <method>),\n ('Key', 50, lambda k, v: \"``%s``\" % k),\n ('Value', 10, lambda k, v: ', '.join(\"``%s``\" % w for w in v)),\n ] # note: last column width is irrelevant\n\n def param_lines(app, obj):\n doc_lines = []\n\n criteria = class_criteria.get(obj, {})\n row_seperator = ' '.join(('=' * w) for (_, w, _) in COLUMN_INFO)\n\n # Header\n if criteria: # only add a header if it's relevant\n doc_lines += [\n \"**Search Criteria:**\",\n \"\",\n \"This object can be found with :meth:`find() <cqparts.search.find>` \",\n \"and :meth:`search() <cqparts.search.search>` using the following \",\n \"search criteria.\",\n \"\",\n row_seperator,\n ' '.join(((\"%%-%is\" % w) % t) for (t, w, _) in COLUMN_INFO),\n row_seperator,\n ]\n\n # Add criteria\n for (key, value) in sorted(criteria.items(), key=lambda x: x[0]):\n doc_lines.append(' '.join(\n (\"%%-%is\" % w) % m(key, value)\n for (_, w, m) in COLUMN_INFO\n ))\n\n # Footer\n if criteria:\n doc_lines += [\n row_seperator,\n \"\",\n ]\n\n return doc_lines\n\n # Conditions for running above `param_lines` function (in order)\n conditions = [ # (all conditions must be met)\n lambda o: type(o) == type,\n lambda o: o is not Component,\n lambda o: issubclass(o, Component),\n ]\n\n def callback(app, what, name, obj, options, lines):\n # sphinx callback\n # (this method is what actually gets sent to the sphinx runtime)\n if all(c(obj) for c in conditions):\n new_lines = param_lines(app, obj)\n _add_lines(lines, new_lines, prepend=prepend)\n\n return callback"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\niterating through list and indicate which item is the last", "response": "def indicate_last(items):\n \"\"\"\n iterate through list and indicate which item is the last, intended to\n assist tree displays of hierarchical content.\n\n :return: yielding (<bool>, <item>) where bool is True only on last entry\n :rtype: generator\n \"\"\"\n last_index = 
len(items) - 1\n for (i, item) in enumerate(items):\n yield (i == last_index, item)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchanges working directory within a context", "response": "def working_dir(path):\n \"\"\"\n Change working directory within a context::\n\n >>> import os\n >>> from cqparts.utils import working_dir\n\n >>> print(os.getcwd())\n /home/myuser/temp\n\n >>> with working_dir('..'):\n ... print(os.getcwd())\n ...\n /home/myuser\n\n :param path: working path to use while in context\n :type path: :class:`str`\n \"\"\"\n initial_path = os.getcwd()\n os.chdir(path)\n yield\n os.chdir(initial_path)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef profile_to_cross_section(profile, lefthand=False, start_count=1, min_vertices=20):\n # verify parameter(s)\n if not isinstance(profile, cadquery.Workplane):\n raise TypeError(\"profile %r must be a %s instance\" % (profile, cadquery.Workplane))\n if not isinstance(min_vertices, (int, list, tuple)):\n raise TypeError(\"min_vertices %r must be an int, list, or tuple\" % (min_vertices))\n\n # get wire from Workplane\n wire = profile.val() # cadquery.Wire\n if not isinstance(wire, cadquery.Wire):\n raise TypeError(\"a valid profile Wire type could not be found in the given Workplane\")\n\n profile_bb = wire.BoundingBox()\n pitch = profile_bb.zmax - profile_bb.zmin\n lead = pitch * start_count\n\n # determine vertices count per edge\n edges = wire.Edges()\n vertices_count = None\n if isinstance(min_vertices, int):\n # evenly spread vertices count along profile wire\n # (weighted by the edge's length)\n vertices_count = [\n int(ceil(round(e.Length() / wire.Length(), 7) * min_vertices))\n for e in edges\n ]\n # rounded for desired contrived results\n # (trade-off: an error of 1 is of no great consequence)\n else:\n # min_vertices is defined per edge (already what we want)\n if len(min_vertices) != 
len(edges):\n            raise ValueError(\n                \"min_vertices list size does not match number of profile edges: \"\n                \"len(%r) != %i\" % (min_vertices, len(edges))\n            )\n        vertices_count = min_vertices\n\n    # Utilities for building cross-section\n    def get_xz(vertex):\n        if isinstance(vertex, cadquery.Vector):\n            vertex = vertex.wrapped  # TODO: remove this, it's messy\n            # where isinstance(vertex, FreeCAD.Base.Vector)\n        return (vertex.x, vertex.z)\n\n    def cart2polar(x, z, z_offset=0):\n        \"\"\"\n        Convert cartesian coordinates to polar coordinates.\n        Uses thread's lead height to give full 360deg translation.\n        \"\"\"\n        radius = x\n        angle = ((z + z_offset) / lead) * (2 * pi)  # radians\n        if not lefthand:\n            angle = -angle\n        return (radius, angle)\n\n    def transform(vertex, z_offset=0):\n        # where isinstance(vertex, FreeCAD.Base.Vector)\n        \"\"\"\n        Transform profile vertex on the XZ plane to its equivalent on\n        the cross-section's XY plane\n        \"\"\"\n        (radius, angle) = cart2polar(*get_xz(vertex), z_offset=z_offset)\n        return (radius * cos(angle), radius * sin(angle))\n\n    # Conversion methods\n    def apply_spline(wp, edge, vert_count, z_offset=0):\n        \"\"\"\n        Trace along edge and create a spline from the transformed vertices.\n        \"\"\"\n        curve = edge.wrapped.Curve  # FreeCADPart.Geom* (depending on type)\n        if edge.geomType() == 'CIRCLE':\n            iter_dist = edge.wrapped.ParameterRange[1] / vert_count\n        else:\n            iter_dist = edge.Length() / vert_count\n        points = []\n        for j in range(vert_count):\n            dist = (j + 1) * iter_dist\n            vert = curve.value(dist)\n            points.append(transform(vert, z_offset))\n        return wp.spline(points)\n\n    def apply_arc(wp, edge, z_offset=0):\n        \"\"\"\n        Create an arc using edge's midpoint and endpoint.\n        Only intended for use for vertical lines on the given profile.\n        \"\"\"\n        return wp.threePointArc(\n            point1=transform(edge.wrapped.valueAt(edge.Length() / 2), z_offset),\n            point2=transform(edge.wrapped.valueAt(edge.Length()), z_offset),\n        )\n\n    def apply_radial_line(wp, edge, z_offset=0):\n        
\"\"\"\n Create a straight radial line\n \"\"\"\n return wp.lineTo(*transform(edge.endPoint(), z_offset))\n\n # Build cross-section\n start_v = edges[0].startPoint().wrapped\n cross_section = cadquery.Workplane(\"XY\") \\\n .moveTo(*transform(start_v))\n\n for i in range(start_count):\n z_offset = i * pitch\n for (j, edge) in enumerate(wire.Edges()):\n # where: isinstance(edge, cadquery.Edge)\n if (edge.geomType() == 'LINE') and (edge.startPoint().x == edge.endPoint().x):\n # edge is a vertical line, plot a circular arc\n cross_section = apply_arc(cross_section, edge, z_offset)\n elif (edge.geomType() == 'LINE') and (edge.startPoint().z == edge.endPoint().z):\n # edge is a horizontal line, plot a radial line\n cross_section = apply_radial_line(cross_section, edge, z_offset)\n else:\n # create bezier spline along transformed points (default)\n cross_section = apply_spline(cross_section, edge, vertices_count[j], z_offset)\n\n return cross_section.close()", "response": "r Converts a thread profile to a cross - section."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef profile(self):\n if self._profile is None:\n self._profile = self.build_profile()\n return self._profile", "response": "Return the profile of the user."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_radii(self):\n bb = self.profile.val().BoundingBox()\n return (bb.xmin, bb.xmax)", "response": "Get the inner and outer radii of the thread."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef make_simple(self):\n (inner_radius, outer_radius) = self.get_radii()\n radius = (inner_radius + outer_radius) / 2\n return cadquery.Workplane('XY') \\\n .circle(radius).extrude(self.length)", "response": "Return a simple cylinder with the thread s average radius & length."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a 
brief explanation for the following Python 3 code\ndef make_pilothole_cutter(self):\n # get pilothole ratio\n # note: not done in .initialize_parameters() because this would cause\n # the thread's profile to be created at initialisation (by default).\n pilothole_radius = self.pilothole_radius\n if pilothole_radius is None:\n (inner_radius, outer_radius) = self.get_radii()\n pilothole_radius = inner_radius + self.pilothole_ratio * (outer_radius - inner_radius)\n\n return cadquery.Workplane('XY') \\\n .circle(pilothole_radius) \\\n .extrude(self.length)", "response": "Make a solid to subtract from an interfacing solid to bore a pilot - hole."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef components(self):\n if self._components is None:\n self.build(recursive=False)\n return self._components", "response": "Returns full dict of named component instances after\n a successful build."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns full list of constraints after a successful build.", "response": "def constraints(self):\n \"\"\"\n Returns full :class:`list` of :class:`Constraint <cqparts.constraint.Constraint>` instances, after\n a successful :meth:`build`\n\n :return: list of named :class:`Constraint <cqparts.constraint.Constraint>` instances\n :rtype: :class:`list`\n\n For more information read about the :ref:`parts_assembly-build-cycle` .\n \"\"\"\n if self._constraints is None:\n self.build(recursive=False)\n return self._constraints"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef solve(self):\n if self.world_coords is None:\n log.warning(\"solving for Assembly without world coordinates set: %r\", self)\n\n for (component, world_coords) in solver(self.constraints, self.world_coords):\n component.world_coords = world_coords", "response": "Solve the Assembly for the given set of world coordinates."} {"SOURCE": 
"codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef verify_components(components):\n    # verify returned type from user-defined function\n    if not isinstance(components, dict):\n        raise ValueError(\n            \"invalid type returned by make_components(): %r (must be a dict)\" % components\n        )\n\n    # check types for (name, component) pairs in dict\n    for (name, component) in components.items():\n        # name is a string\n        if not isinstance(name, str):\n            raise ValueError((\n                \"invalid name from make_components(): (%r, %r) \"\n                \"(must be a (str, Component))\"\n            ) % (name, component))\n\n        # component is a Component instance\n        if not isinstance(component, Component):\n            raise ValueError((\n                \"invalid component type from make_components(): (%r, %r) \"\n                \"(must be a (str, Component))\"\n            ) % (name, component))\n\n        # check component name validity\n        invalid_chars = set(name) - VALID_NAME_CHARS\n        if invalid_chars:\n            raise ValueError(\n                \"component name {!r} invalid; cannot include {!r}\".format(\n                    name, invalid_chars\n                )\n            )", "response": "Verify that the components returned from a function are valid."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef verify_constraints(constraints):\n    # verify return is a list\n    if not isinstance(constraints, list):\n        raise ValueError(\n            \"invalid type returned by make_constraints: %r (must be a list)\" % constraints\n        )\n\n    # verify each list element is a Constraint instance\n    for constraint in constraints:\n        if not isinstance(constraint, Constraint):\n            raise ValueError(\n                \"invalid constraint type: %r (must be a Constraint)\" % constraint\n            )", "response": "Verify that the value returned by the make_constraints method is a list of constraints. 
Raises ValueError if verification fails."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef build(self, recursive=True):\n\n    # initialize values\n    self._components = {}\n    self._constraints = []\n\n    def genwrap(obj, name, iter_type=None):\n        # Force obj to act like a generator.\n        # this wrapper will always yield at least once.\n        if isinstance(obj, GeneratorType):\n            for i in obj:\n                if (iter_type is not None) and (not isinstance(i, iter_type)):\n                    raise TypeError(\"%s must yield a %r\" % (name, iter_type))\n                yield i\n        else:\n            if (iter_type is not None) and (not isinstance(obj, iter_type)):\n                raise TypeError(\"%s must return a %r\" % (name, iter_type))\n            yield obj\n\n    # Make Components\n    components_iter = genwrap(self.make_components(), \"make_components\", dict)\n    new_components = next(components_iter)\n    self.verify_components(new_components)\n    self._components.update(new_components)\n\n    # Make Constraints\n    constraints_iter = genwrap(self.make_constraints(), \"make_constraints\", list)\n    new_constraints = next(constraints_iter)\n    self.verify_constraints(new_constraints)\n    self._constraints += new_constraints\n\n    # Run solver : sets components' world coordinates\n    self.solve()\n\n    # Make Alterations\n    alterations_iter = genwrap(self.make_alterations(), \"make_alterations\")\n    next(alterations_iter)  # return value is ignored\n\n    while True:\n        (s1, s2, s3) = (True, True, True)  # stages\n        # Make Components\n        new_components = None\n        try:\n            new_components = next(components_iter)\n            self.verify_components(new_components)\n            self._components.update(new_components)\n        except StopIteration:\n            s1 = False\n\n        # Make Constraints\n        new_constraints = None\n        try:\n            new_constraints = next(constraints_iter)\n            self.verify_constraints(new_constraints)\n            self._constraints += new_constraints\n        except StopIteration:\n            s2 = False\n\n        # Run solver : sets components' world coordinates\n        if new_components or new_constraints:\n            self.solve()\n\n        # Make 
Alterations\n try:\n next(alterations_iter) # return value is ignored\n except StopIteration:\n s3 = False\n\n # end loop when all iters are finished\n if not any((s1, s2, s3)):\n break\n\n if recursive:\n for (name, component) in self._components.items():\n component.build(recursive=recursive)", "response": "Builds an assembly buffer of the current set of components and constraints."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfinds an entry in the Assembly by a list of keys.", "response": "def find(self, keys, _index=0):\n \"\"\"\n :param keys: key path. ``'a.b'`` is equivalent to ``['a', 'b']``\n :type keys: :class:`str` or :class:`list`\n\n Find a nested :class:`Component` by a \"`.`\" separated list of names.\n for example::\n\n >>> motor.find('bearing.outer_ring')\n\n would return the Part instance of the motor bearing's outer ring.\n\n ::\n\n >>> bearing = motor.find('bearing')\n >>> ring = bearing.find('inner_ring') # equivalent of 'bearing.inner_ring'\n\n the above code does much the same thing, ``bearing`` is an :class:`Assembly`,\n and ``ring`` is a :class:`Part`.\n\n .. 
note::\n\n        For a key path of ``a.b.c`` the ``c`` key can reference any\n        :class:`Component` type.\n\n        Everything prior (in this case ``a`` and ``b``) must reference an\n        :class:`Assembly`.\n    \"\"\"\n\n    if isinstance(keys, six.string_types):\n        keys = re.split(r'[\\.-]+', keys)\n    if _index >= len(keys):\n        return self\n\n    key = keys[_index]\n    if key in self.components:\n        component = self.components[key]\n        if isinstance(component, Assembly):\n            return component.find(keys, _index=(_index + 1))\n        elif _index == len(keys) - 1:\n            # this is the last search key; component is a leaf, return it\n            return component\n        else:\n            raise AssemblyFindError(\n                \"could not find '%s' (invalid type at [%i]: %r)\" % (\n                    '.'.join(keys), _index, component\n                )\n            )\n    else:\n        raise AssemblyFindError(\n            \"could not find '%s', '%s' is not a component of %r\" % (\n                '.'.join(keys), key, self\n            )\n        )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting a neat relative path to files relative to the CWD", "response": "def _relative_path_to(path_list, filename):\n    \"\"\"Get a neat relative path to files relative to the CWD\"\"\"\n    return os.path.join(\n        os.path.relpath(os.path.join(*path_list), os.getcwd()),\n        filename\n    )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nbuilding a single tooth template for the current site.", "response": "def _make_tooth_template(self):\n    \"\"\"\n    Builds a single tooth including the cylinder with tooth faces\n    tangential to its circumference.\n    \"\"\"\n    # parameters\n    period_arc = (2 * pi) / self.tooth_count\n    tooth_arc = period_arc * self.spacing_ratio  # the arc between faces at effective_radius\n    outer_radius = self.effective_radius + (self.tooth_height / 2)\n    face_angle_rad = radians(self.face_angle)\n\n    # cartesian isosceles trapezoid dimensions\n    side_angle = face_angle_rad - (tooth_arc / 2)\n    side_tangent_radius = sin(face_angle_rad) * self.effective_radius\n    extra_side_angle = side_angle + 
acos(side_tangent_radius / outer_radius)\n\n tooth = cadquery.Workplane('XY', origin=(0, 0, -self.width / 2)) \\\n .moveTo(\n side_tangent_radius * cos(side_angle),\n side_tangent_radius * sin(side_angle)\n )\n opposite_point = (\n -side_tangent_radius * cos(side_angle),\n side_tangent_radius * sin(side_angle)\n )\n if self.face_angle:\n tooth = tooth.lineTo(*opposite_point)\n #tooth = tooth.threePointArc(\n # (0, -side_tangent_radius),\n # opposite_point\n #)\n tooth = tooth.lineTo(\n -cos(extra_side_angle) * outer_radius,\n sin(extra_side_angle) * outer_radius\n )\n opposite_point = (\n cos(extra_side_angle) * outer_radius,\n sin(extra_side_angle) * outer_radius\n )\n if self.flat_top:\n tooth = tooth.lineTo(*opposite_point)\n else:\n tooth = tooth.threePointArc((0, outer_radius), opposite_point)\n tooth = tooth.close().extrude(self.width)\n\n return tooth"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_mate_center(self, angle=0):\n return Mate(self, CoordSystem.from_plane(\n cadquery.Plane(\n origin=(0, 0, self.width / 2),\n xDir=(1, 0, 0),\n normal=(0, 0, 1),\n ).rotated((0, 0, angle)) # rotate about z-axis\n ))", "response": "Return a Mate at ring s center rotated about z - axis."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_max_ballcount(cls, ball_diam, rolling_radius, min_gap=0.):\n\n min_arc = asin(((ball_diam + min_gap) / 2) / rolling_radius) * 2\n return int((2 * pi) / min_arc)", "response": "Returns the maximum number of balls given the rolling_radius and ball_diam."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef display_callback(self, component, **kwargs):\n if type(self) is DisplayEnvironment:\n raise RuntimeError(\n (\"%r is not a functional display environment, \" % (type(self))) +\n \"it's meant to be inherited by an implemented environment\"\n )\n raise 
NotImplementedError(\n \"display_callback function not overridden by %r\" % (type(self))\n )", "response": "Display given component in this environment."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_vertex(self, x, y, z):\n self.vert_data.write(\n struct.pack('<f', x) +\n struct.pack('<f', y) +\n struct.pack('<f', z)\n )\n\n # retain min/max values\n self.vert_min = _list3_min(self.vert_min, (x, y, z))\n self.vert_max = _list3_max(self.vert_max, (x, y, z))", "response": "Add a vertex to the internal buffer."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_poly_index(self, i, j, k):\n self.idx_data.write(\n struct.pack(self.idx_fmt, i) +\n struct.pack(self.idx_fmt, j) +\n struct.pack(self.idx_fmt, k)\n )", "response": "Add 3 - Scalar of uint to the idx_data buffer."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\niterate through chunks of the vertices and indices buffers seamlessly.", "response": "def buffer_iter(self, block_size=1024):\n \"\"\"\n Iterate through chunks of the vertices, and indices buffers seamlessly.\n\n .. note::\n\n To see a usage example, look at the :class:`ShapeBuffer` description.\n \"\"\"\n streams = (\n self.vert_data,\n self.idx_data,\n )\n\n # Chain streams seamlessly\n for stream in streams:\n stream.seek(0)\n while True:\n chunk = stream.read(block_size)\n if chunk:\n yield chunk\n else:\n break"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreads the key - value pairs from the buffer.", "response": "def read(self):\n \"\"\"\n Read buffer out as a single stream.\n\n .. warning::\n\n Avoid using this function!\n\n **Why?** This is a *convenience* function; it doesn't encourage good\n memory management.\n\n All memory required for a mesh is duplicated, and returned as a\n single :class:`str`. 
So at best, using this function will double\n the memory required for a single model.\n\n **Instead:** Wherever possible, please use :meth:`buffer_iter`.\n \"\"\"\n buffer = BytesIO()\n for chunk in self.buffer_iter():\n log.debug('buffer.write(%r)', chunk)\n buffer.write(chunk)\n buffer.seek(0)\n return buffer.read()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef world_coords(self):\n if self.component is None:\n # no component, world == local\n return copy(self.local_coords)\n else:\n cmp_origin = self.component.world_coords\n if cmp_origin is None:\n raise ValueError(\n \"mate's component does not have world coordinates; \"\n \"cannot get mate's world coordinates\"\n )\n\n return cmp_origin + self.local_coords", "response": "world coordinates of mate."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates and return a simple representation of the current export entry.", "response": "def make_simple(self):\n \"\"\"\n Create and return *simplified* solid part.\n\n The simplified representation of a ``Part`` is to lower the export\n quality of an ``Assembly`` or ``Part`` for rendering.\n\n Overriding this is optional, but highly recommended.\n\n The default behaviour returns the full complexity object's bounding box.\n But to do this, theh full complexity object must be generated first.\n\n There are 2 main problems with this:\n\n #. building the full complexity part is not efficient.\n #. 
a bounding box may not be a good representation of the part.\n\n **Bolts**\n\n A good example of this is a bolt.\n\n * building a bolt's thread is not a trivial task;\n it can take some time to generate.\n * a box is not a good visual representation of a bolt\n\n So for the ``Fastener`` parts, all ``make_simple`` methods are overridden\n to provide 2 cylinders, one for the bolt's head, and another for the thread.\n \"\"\"\n complex_obj = self.make()\n bb = complex_obj.findSolid().BoundingBox()\n simple_obj = cadquery.Workplane('XY', origin=(bb.xmin, bb.ymin, bb.zmin)) \\\n .box(bb.xlen, bb.ylen, bb.zlen, centered=(False, False, False))\n return simple_obj"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the local object of the current object.", "response": "def local_obj(self):\n \"\"\"\n Buffered result of :meth:`make` which is (probably) a\n :class:`cadquery.Workplane` instance. If ``_simple`` is ``True``, then\n :meth:`make_simple` is returned instead.\n\n .. 
note::\n This is usually the correct way to get your part's object\n for rendering, exporting, or measuring.\n\n Only call :meth:`cqparts.Part.make` directly if you explicitly intend\n to re-generate the model from scratch, then dispose of it.\n \"\"\"\n if self._local_obj is None:\n # Simplified or Complex\n if self._simple:\n value = self.make_simple()\n else:\n value = self.make()\n # Verify type\n if not isinstance(value, cadquery.CQ):\n raise MakeError(\"invalid object type returned by make(): %r\" % value)\n # Buffer object\n self._local_obj = value\n return self._local_obj"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef world_obj(self):\n if self._world_obj is None:\n local_obj = self.local_obj\n world_coords = self.world_coords\n if (local_obj is not None) and (world_coords is not None):\n # Copy local object, apply transform to move to its new home.\n self._world_obj = world_coords + local_obj\n return self._world_obj", "response": "The world object in the current component."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef bounding_box(self):\n if self.world_coords:\n return self.world_obj.findSolid().BoundingBox()\n return self.local_obj.findSolid().BoundingBox()", "response": "Generate a bounding box based on the full complexity part."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_cutter(self):\n # head\n obj = self.head.make_cutter()\n\n # neck\n if self.neck_length:\n # neck cut diameter (if thread is larger than the neck, thread must fit through)\n (inner_radius, outer_radius) = self.thread.get_radii()\n neck_cut_radius = max(outer_radius, self.neck_diam / 2)\n\n neck = cadquery.Workplane(\n 'XY', origin=(0, 0, -self.neck_length)\n ).circle(neck_cut_radius).extrude(self.neck_length)\n obj = obj.union(neck)\n\n # thread (pilot hole)\n pilot_hole = self.thread.make_pilothole_cutter() \\\n 
.translate((0, 0, -self.length))\n obj = obj.union(pilot_hole)\n\n return obj", "response": "Makes a cutter for the current neck and thread."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the cutout for the shaft", "response": "def get_cutout(self, clearance=0):\n \" get the cutout for the shaft\"\n return cq.Workplane('XY', origin=(0, 0, 0)) \\\n .circle((self.diam / 2) + clearance) \\\n .extrude(10)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef mate_bottom(self):\n \" connect to the bottom of the cup\"\n return Mate(self, CoordSystem(\\\n origin=(0, 0, -self.height),\\\n xDir=(1, 0, 0),\\\n normal=(0, 0, 1)))", "response": "connect to the bottom of the cup"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef as_parameter(nullable=True, strict=True):\n def decorator(cls):\n base_class = Parameter if nullable else NonNullParameter\n\n return type(cls.__name__, (base_class,), {\n # Preserve text for documentation\n '__name__': cls.__name__,\n '__doc__': cls.__doc__,\n '__module__': cls.__module__,\n # Sphinx doc type string\n '_doc_type': \":class:`{class_name} <{module}.{class_name}>`\".format(\n class_name=cls.__name__, module=__name__\n ),\n #\n 'type': lambda self, value: cls(**value)\n })\n\n\n return decorator", "response": "A class decorator that creates a functional container class for a parametric object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning valid base API path based on version given", "response": "def _construct_api_path(self, version):\n \"\"\"Returns valid base API path based on version given\n\n The base API path for the URL is different depending on UniFi server version.\n Default returns correct path for latest known stable working versions.\n\n \"\"\"\n\n V2_PATH = 'api/'\n V3_PATH = 'api/s/' + self.site_id + '/'\n\n if(version == 'v2'):\n return 
V2_PATH\n if(version == 'v3'):\n return V3_PATH\n if(version == 'v4'):\n return V3_PATH\n if(version == 'v5'):\n return V3_PATH\n else:\n return V2_PATH"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_alerts_unarchived(self):\n\n js = json.dumps({'_sort': '-time', 'archived': False})\n params = urllib.urlencode({'json': js})\n return self._read(self.api_url + 'list/alarm', params)", "response": "Return a list of unarchived alerts."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_statistics_24h(self, endtime):\n\n js = json.dumps(\n {'attrs': [\"bytes\", \"num_sta\", \"time\"], 'start': int(endtime - 86400) * 1000, 'end': int(endtime - 3600) * 1000})\n params = urllib.urlencode({'json': js})\n return self._read(self.api_url + 'stat/report/hourly.system', params)", "response": "Return statistical data for the 24 hours preceding the given end time."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a list of all APs with significant information about each.", "response": "def get_aps(self):\n \"\"\"Return a list of all AP:s, with significant information about each.\"\"\"\n\n #Set test to 0 instead of NULL\n params = json.dumps({'_depth': 2, 'test': 0})\n return self._read(self.api_url + 'stat/device', params)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrestart an access point by name.", "response": "def restart_ap_name(self, name):\n \"\"\"Restart an access point (by name).\n\n Arguments:\n name -- the name of the AP to restart.\n\n \"\"\"\n\n if not name:\n raise APIError('%s is not a valid name' % str(name))\n for ap in self.get_aps():\n if ap.get('state', 0) == 1 and ap.get('name', None) == name:\n self.restart_ap(ap['mac'])"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nask controller to create a backup archive file", "response": "def 
create_backup(self):\n \"\"\"Ask controller to create a backup archive file, response contains the path to the backup file.\n\n Warning: This process puts significant load on the controller and may\n render it partially unresponsive for other requests.\n \"\"\"\n\n js = json.dumps({'cmd': 'backup'})\n params = urllib.urlencode({'json': js})\n answer = self._read(self.api_url + 'cmd/system', params)\n\n return answer[0].get('url')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_backup(self, target_file='unifi-backup.unf'):\n download_path = self.create_backup()\n\n opener = self.opener.open(self.url + download_path)\n unifi_archive = opener.read()\n\n backupfile = open(target_file, 'w')\n backupfile.write(unifi_archive)\n backupfile.close()", "response": "Get a backup archive from a controller."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef authorize_guest(self, guest_mac, minutes, up_bandwidth=None, down_bandwidth=None, byte_quota=None, ap_mac=None):\n cmd = 'authorize-guest'\n js = {'mac': guest_mac, 'minutes': minutes}\n\n if up_bandwidth:\n js['up'] = up_bandwidth\n if down_bandwidth:\n js['down'] = down_bandwidth\n if byte_quota:\n js['bytes'] = byte_quota\n if ap_mac and self.version != 'v2':\n js['ap_mac'] = ap_mac\n\n return self._run_command(cmd, params=js)", "response": "Authorize a guest based on their MAC address."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef unauthorize_guest(self, guest_mac):\n cmd = 'unauthorize-guest'\n js = {'mac': guest_mac}\n\n return self._run_command(cmd, params=js)", "response": "Unauthorize a guest based on their MAC address."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef download(gfile, wks_name=None, col_names=False,
row_names=False,\n credentials=None, start_cell = 'A1'):\n \"\"\"\n Download Google Spreadsheet and convert it to Pandas DataFrame\n\n :param gfile: path to Google Spreadsheet or gspread ID\n :param wks_name: worksheet name\n :param col_names: assign top row to column names for Pandas DataFrame\n :param row_names: assign left column to row names for Pandas DataFrame\n :param credentials: provide own credentials\n :param start_cell: specify where to start capturing of the DataFrame; default is A1\n :type gfile: str\n :type wks_name: str\n :type col_names: bool\n :type row_names: bool\n :type credentials: class 'oauth2client.client.OAuth2Credentials'\n :type start_cell: str\n :returns: Pandas DataFrame\n :rtype: class 'pandas.core.frame.DataFrame'\n\n :Example:\n\n >>> from df2gspread import gspread2df as g2d\n >>> df = g2d.download(gfile=\"1U-kSDyeD-...\", col_names=True, row_names=True)\n >>> df\n col1 col2\n field1 1 2\n field2 3 4\n \"\"\"\n\n # access credentials\n credentials = get_credentials(credentials)\n # auth for gspread\n gc = gspread.authorize(credentials)\n\n try:\n # if gfile is file_id\n gc.open_by_key(gfile).__repr__()\n gfile_id = gfile\n except:\n # else look for file_id in drive\n gfile_id = get_file_id(credentials, gfile)\n\n if gfile_id is None:\n raise RuntimeError(\n \"Trying to open non-existent or inaccessible spreadsheet\")\n\n wks = get_worksheet(gc, gfile_id, wks_name)\n\n if wks is None:\n raise RuntimeError(\n \"Trying to open non-existent or inaccessible worksheet\")\n\n raw_data = wks.get_all_values()\n\n if not raw_data:\n raise ValueError(\n 'Worksheet is empty or invalid.'\n )\n\n start_row_int, start_col_int = gspread.utils.a1_to_rowcol(start_cell)\n\n rows, cols = np.shape(raw_data)\n if start_col_int > cols or (row_names and start_col_int + 1 > cols):\n raise RuntimeError(\n \"Start col (%s) out of the table columns(%s)\" % (start_col_int +\n row_names, cols))\n\n if start_row_int > rows or (col_names and start_row_int + 1 > 
rows):\n raise RuntimeError(\n \"Start row (%s) out of the table rows(%s)\" % (start_row_int +\n col_names, rows))\n\n raw_data = [row[start_col_int-1:] for row in raw_data[start_row_int-1:]]\n\n if row_names and col_names:\n row_names = [row[0] for row in raw_data[1:]]\n col_names = raw_data[0][1:]\n raw_data = [row[1:] for row in raw_data[1:]]\n elif row_names:\n row_names = [row[0] for row in raw_data]\n col_names = np.arange(len(raw_data[0]) - 1)\n raw_data = [row[1:] for row in raw_data]\n elif col_names:\n row_names = np.arange(len(raw_data) - 1)\n col_names = raw_data[0]\n raw_data = raw_data[1:]\n else:\n row_names = np.arange(len(raw_data))\n col_names = np.arange(len(raw_data[0]))\n\n df = pd.DataFrame([pd.Series(row) for row in raw_data], index=row_names)\n df.columns = col_names\n\n return df", "response": "Download a file from Google Spreadsheet and convert it to Pandas DataFrame."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning valid credentials object.", "response": "def get_credentials(credentials=None, client_secret_file=CLIENT_SECRET_FILE, refresh_token=None):\n \"\"\"Consistently returns valid credentials object.\n\n See Also:\n https://developers.google.com/drive/web/quickstart/python\n\n Args:\n client_secret_file (str): path to client secrets file, defaults to .gdrive_private\n refresh_token (str): path to a user provided refresh token that is already\n pre-authenticated\n credentials (`~oauth2client.client.OAuth2Credentials`, optional): handle direct\n input of credentials, which will check credentials for valid type and\n return them\n\n Returns:\n `~oauth2client.client.OAuth2Credentials`: google credentials object\n\n \"\"\"\n\n # if the utility was provided credentials just return those\n if credentials:\n if _is_valid_credentials(credentials):\n # auth for gspread\n return credentials\n else:\n print(\"Invalid credentials supplied. 
Will generate from default token.\")\n\n token = refresh_token or DEFAULT_TOKEN\n dir_name = os.path.dirname(DEFAULT_TOKEN)\n try:\n os.makedirs(dir_name)\n except OSError:\n if not os.path.isdir(dir_name):\n raise\n store = file.Storage(token)\n credentials = store.get()\n\n try:\n import argparse\n flags = argparse.ArgumentParser(\n parents=[tools.argparser]).parse_known_args()[0]\n except ImportError:\n flags = None\n logr.error(\n 'Unable to parse oauth2client args; `pip install argparse`')\n\n if not credentials or credentials.invalid:\n\n flow = client.flow_from_clientsecrets(\n client_secret_file, SCOPES)\n flow.redirect_uri = client.OOB_CALLBACK_URN\n if flags:\n credentials = tools.run_flow(flow, store, flags)\n else: # Needed only for compatibility with Python 2.6\n credentials = tools.run(flow, store)\n logr.info('Storing credentials to ' + DEFAULT_TOKEN)\n\n return credentials"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_service_credentials(private_key_file=None, client_email=None,\n client_secret_file=CLIENT_SECRET_FILE):\n \"\"\"Create credentials from service account information.\n\n See Also:\n https://developers.google.com/api-client-library/python/auth/service-accounts\n\n Args:\n client_secret_file (str): path to json file with just the client_email when\n providing the `private_key_file` separately, or this file can have both the\n `client_email` and `private_key` contained in it.
Defaults to .gdrive_private\n client_email (str): service email account\n private_key_file (str): path to the p12 private key, defaults to same name of file\n used for regular authentication\n\n Returns:\n `~oauth2client.client.OAuth2Credentials`: google credentials object\n\n \"\"\"\n if private_key_file is not None:\n with open(os.path.expanduser(private_key_file)) as f:\n private_key = f.read()\n else:\n private_key = None\n\n if client_email is None:\n with open(os.path.expanduser(client_secret_file)) as client_file:\n client_data = json.load(client_file)\n\n if 'installed' in client_data:\n\n # handle regular json format where key is separate\n client_email = client_data['installed']['client_id']\n if private_key is None:\n raise RuntimeError('You must have the private key file \\\n with the regular json file. Try creating a new \\\n public/private key pair and downloading as json.')\n else:\n # handle newer case where json file has everything in it\n client_email = client_data['client_email']\n private_key = client_data['private_key']\n\n if client_email is None or private_key is None:\n raise RuntimeError(\n 'Client email and/or private key not provided by inputs.')\n\n credentials = client.SignedJwtAssertionCredentials(\n client_email, private_key, SCOPES)\n\n return credentials", "response": "Create credentials from service account information."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_file_id(credentials, gfile, write_access=False):\n # auth for apiclient\n http = credentials.authorize(Http())\n service = discovery.build('drive', 'v3', http=http, cache_discovery=False)\n\n file_id = service.files().get(fileId='root', fields='id').execute().get('id')\n\n # folder/folder/folder/spreadsheet\n pathway = gfile.strip('/').split('/')\n\n for idx, name in enumerate(pathway):\n files = service.files().list(\n q=\"name = '{}' and trashed = false and '{}' in parents\".format(name, 
file_id)).execute()['files']\n if len(files) > 0:\n # Why do you ever need to use several folders with the same name?!\n file_id = files[0].get('id')\n elif write_access == True:\n body = {\n 'mimeType': 'application/vnd.google-apps.' + ('spreadsheet' if idx == len(pathway)-1 else 'folder'),\n 'name': name,\n 'parents': [file_id]\n }\n file_id = service.files().create(body=body, fields='id').execute().get('id')\n else:\n return None\n return file_id", "response": "Get file ID by provided path."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a single worksheet from a spreadsheet.", "response": "def get_worksheet(gc, gfile_id, wks_name, write_access=False, new_sheet_dimensions=(1000, 100)):\n \"\"\"DOCS...\"\"\"\n\n spsh = gc.open_by_key(gfile_id)\n\n # if worksheet name is not provided , take first worksheet\n if wks_name is None:\n wks = spsh.sheet1\n # if worksheet name provided and exist in given spreadsheet\n else:\n try:\n wks = spsh.worksheet(wks_name)\n except:\n #rows, cols = new_sheet_dimensions\n wks = spsh.add_worksheet(\n wks_name, *new_sheet_dimensions) if write_access == True else None\n\n return wks"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes a file from the cache.", "response": "def delete_file(credentials, file_id):\n \"\"\"DOCS...\"\"\"\n try:\n http = credentials.authorize(Http())\n service = discovery.build(\n 'drive', 'v3', http=http, cache_discovery=False)\n service.files().delete(fileId=file_id).execute()\n except errors.HttpError as e:\n logr.error(e)\n raise"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef upload(df, gfile=\"/New Spreadsheet\", wks_name=None,\n col_names=True, row_names=True, clean=True, credentials=None,\n start_cell = 'A1', df_size = False, new_sheet_dimensions = (1000,100)):\n '''\n Upload given Pandas DataFrame to Google Drive and returns\n gspread Worksheet 
object\n\n :param df: Pandas DataFrame\n :param gfile: path to Google Spreadsheet or gspread ID\n :param wks_name: worksheet name\n :param col_names: passing top row to column names for Pandas DataFrame\n :param row_names: passing left column to row names for Pandas DataFrame\n :param clean: clean all data in worksheet before uploading\n :param credentials: provide own credentials\n :param start_cell: specify where to insert the DataFrame; default is A1\n :param df_size:\n -If True and worksheet name does NOT exist, will create\n a new worksheet that is the size of the df; otherwise, by default,\n creates sheet of 1000x100 cells.\n -If True and worksheet does exist, will resize larger or smaller to\n fit new dataframe.\n -If False and dataframe is larger than existing sheet, will resize\n the sheet larger.\n -If False and dataframe is smaller than existing sheet, does not resize.\n :param new_sheet_dimensions: tuple of (row, cols) for size of a new sheet\n :type df: class 'pandas.core.frame.DataFrame'\n :type gfile: str\n :type wks_name: str\n :type col_names: bool\n :type row_names: bool\n :type clean: bool\n :type credentials: class 'oauth2client.client.OAuth2Credentials'\n :type start_cell: str\n :type df_size: bool\n :type new_sheet_dimensions: tuple\n :returns: gspread Worksheet\n :rtype: class 'gspread.models.Worksheet'\n\n :Example:\n\n >>> from df2gspread import df2gspread as d2g\n >>> import pandas as pd\n >>> df = pd.DataFrame([1, 2, 3])\n >>> wks = d2g.upload(df, wks_name='Example worksheet')\n >>> wks.title\n 'Example worksheet'\n '''\n # access credentials\n credentials = get_credentials(credentials)\n # auth for gspread\n gc = gspread.authorize(credentials)\n\n try:\n gc.open_by_key(gfile).__repr__()\n gfile_id = gfile\n except:\n gfile_id = get_file_id(credentials, gfile, write_access=True)\n\n # Tuple of rows, cols in the dataframe.\n # If user did not explicitly specify to resize sheet to dataframe size\n # then for new sheets set it to 
new_sheet_dimensions, which is by default 1000x100\n if df_size:\n new_sheet_dimensions = (len(df), len(df.columns))\n wks = get_worksheet(gc, gfile_id, wks_name, write_access=True,\n new_sheet_dimensions=new_sheet_dimensions)\n if clean:\n wks = clean_worksheet(wks, gfile_id, wks_name, credentials)\n\n start_col = re.split(r'(\\d+)',start_cell)[0].upper()\n start_row = re.split(r'(\\d+)',start_cell)[1]\n start_row_int, start_col_int = gspread.utils.a1_to_rowcol(start_cell)\n\n # find last index and column name (A B ... Z AA AB ... AZ BA)\n num_rows = len(df.index) + 1 if col_names else len(df.index)\n last_idx_adjust = start_row_int - 1\n last_idx = num_rows + last_idx_adjust\n\n num_cols = len(df.columns) + 1 if row_names else len(df.columns)\n last_col_adjust = start_col_int - 1\n last_col_int = num_cols + last_col_adjust\n last_col = re.split(r'(\\d+)',(gspread.utils.rowcol_to_a1(1, last_col_int)))[0].upper()\n\n # If user requested to resize sheet to fit dataframe, go ahead and\n # resize larger or smaller to better match new size of pandas dataframe.\n # Otherwise, leave it the same size unless the sheet needs to be expanded\n # to accommodate a larger dataframe.\n if df_size:\n wks.resize(rows=len(df.index) + col_names, cols=len(df.columns) + row_names)\n if len(df.index) + col_names + last_idx_adjust > wks.row_count:\n wks.add_rows(len(df.index) - wks.row_count + col_names + last_idx_adjust)\n if len(df.columns) + row_names + last_col_adjust > wks.col_count:\n wks.add_cols(len(df.columns) - wks.col_count + row_names + last_col_adjust)\n\n # Define first cell for rows and columns\n first_col = re.split(r'(\\d+)',(gspread.utils.rowcol_to_a1(1, start_col_int + 1)))[0].upper() if row_names else start_col\n first_row = str(start_row_int + 1) if col_names else start_row\n\n # Addition of col names\n if col_names:\n cell_list = wks.range('%s%s:%s%s' % (first_col, start_row, last_col, start_row))\n for idx, cell in enumerate(cell_list):\n cell.value = 
df.columns.astype(str)[idx]\n wks.update_cells(cell_list)\n\n # Addition of row names\n if row_names:\n cell_list = wks.range('%s%s:%s%d' % (\n start_col, first_row, start_col, last_idx))\n for idx, cell in enumerate(cell_list):\n cell.value = df.index.astype(str)[idx]\n wks.update_cells(cell_list)\n\n\n # convert df values to string\n df = df.applymap(str)\n # Addition of cell values\n cell_list = wks.range('%s%s:%s%d' % (\n first_col, first_row, last_col, last_idx))\n for j, idx in enumerate(df.index):\n for i, col in enumerate(df.columns.values):\n if not pd.isnull(df[col][idx]):\n cell_list[i + j * len(df.columns.values)].value = df[col][idx]\n\n wks.update_cells(cell_list)\n return wks", "response": "Uploads a Pandas DataFrame to Google Drive and returns a new gspread Worksheet object with the ID of the new object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef clean_worksheet(wks, gfile_id, wks_name, credentials):\n\n values = wks.get_all_values()\n if values:\n df_ = pd.DataFrame(index=range(len(values)),\n columns=range(len(values[0])))\n df_ = df_.fillna('')\n wks = upload(df_, gfile_id, wks_name=wks_name,\n col_names=False, row_names=False, clean=False,\n credentials=credentials)\n return wks", "response": "Clean the contents of a single worksheet."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbuilds an aggregation concordance dict for a given list of countries and a list of aggregates.", "response": "def agg_conc(original_countries,\n aggregates,\n missing_countries='test',\n merge_multiple_string='_&_',\n log_missing_countries=None,\n log_merge_multiple_strings=None,\n coco=None,\n as_dataframe='sparse',\n original_countries_class=None):\n \"\"\" Builds an aggregation concordance dict, vec or matrix\n\n Parameters\n ----------\n\n original_countries: list or str\n List of countries to aggregated, also accepts and valid column name of\n 
CountryConverter.data\n\n aggregates: list of dict or str\n List of aggregation information. This can either be dict mapping the\n names of 'original_countries' to aggregates, or a valid column name of\n CountryConverter.data Aggregation happens in order given in this\n parameter. Thus, country assigned to an aggregate are not re-assigned\n by the following aggregation information.\n\n missing_countries: str, boolean, None\n Entry to fill in for countries in 'original_countries' which do not\n appear in 'aggregates'. str: Use the given name for all missing\n countries True: Use the name in original_countries for missing\n countries False: Skip these countries None: Use None for these\n countries\n\n merge_multiple_string: str or None, optional\n If multiple correspondance entries are given in one of the aggregates\n join them with the given string (default: '_&_'. To skip these enries,\n pass None.\n\n log_missing_countries: function, optional\n This function is called with country is country is in\n 'original_countries' but missing in all 'aggregates'.\n For example, pass\n lambda x: logging.error('Country {} missing'.format(x))\n to log errors for such countries. Default: do nothing\n\n log_merge_multiple_strings: function, optional\n Function to call for logging multiple strings, see\n log_missing_countries Default: do nothing\n\n coco: instance of CountryConverter, optional\n CountryConverter instance used for the conversion. Pass a custom one\n if additional data is needed in addition to the custom country\n converter file. If None (default), the bare CountryConverter is used\n\n as_dataframe: boolean or st, optional\n If False, output as OrderedDict. If True or str, output as pandas\n dataframe. If str and 'full', output as a full matrix, otherwise only\n two collumns with the original and aggregated names are returned.\n\n original_countries_class: str, optional\n Valid column name of CountryConverter.data. 
This parameter is needed\n if a list of countries is passed to 'orginal_countries' and strings\n corresponding to data in CountryConverter.data are used subsequently.\n Can be omitted otherwise.\n\n Returns\n -------\n\n OrderedDict or DataFrame (defined by 'as_dataframe')\n\n \"\"\"\n\n if coco is None:\n coco = CountryConverter()\n\n if type(original_countries) is str:\n original_countries_class = original_countries\n original_countries = coco.data[original_countries].values\n else:\n original_countries_class = (original_countries_class or\n coco._get_input_format_from_name(\n original_countries[0]))\n\n if type(aggregates) is not list:\n aggregates = [aggregates]\n\n correspond = OrderedDict.fromkeys(original_countries)\n\n for agg in aggregates:\n if type(agg) is str:\n agg = coco.get_correspondance_dict(original_countries_class,\n agg)\n for country in original_countries:\n if correspond.get(country) is None:\n try:\n entry = agg[country]\n except KeyError:\n entry = None\n\n if type(entry) is list:\n if 1 < len(entry):\n if merge_multiple_string:\n entry = merge_multiple_string.join([\n str(e) for e in entry])\n else:\n entry = None\n if log_merge_multiple_strings:\n log_merge_multiple_strings(country)\n else:\n entry = entry[0]\n\n correspond[country] = entry\n\n for country in original_countries:\n if correspond.get(country) is None:\n if missing_countries is True:\n correspond[country] = country\n elif missing_countries is False:\n del correspond[country]\n else:\n correspond[country] = missing_countries\n if log_missing_countries:\n log_missing_countries(country)\n\n if as_dataframe:\n correspond = pd.DataFrame.from_dict(\n correspond, orient='index').reset_index()\n correspond.columns = ['original', 'aggregated']\n if ((type(as_dataframe) is str) and\n (as_dataframe[0].lower() == 'f')):\n _co_list = correspond.original\n correspond['val'] = 1\n correspond = correspond.set_index(\n ['original', 'aggregated']).unstack().fillna(0)['val']\n correspond = 
correspond.loc[_co_list]\n\n return correspond"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef match(list_a, list_b, not_found='not_found', enforce_sublist=False,\n country_data=COUNTRY_DATA_FILE, additional_data=None):\n \"\"\" Matches the country names given in two lists into a dictionary.\n\n This function matches names given in list_a to the one provided in list_b\n using regular expressions defined in country_data.\n\n Parameters\n ----------\n list_a : list\n Names of countries to identify\n list_b : list\n Master list of names for countries\n\n not_found : str, optional\n Fill in value for not found entries. If None, keep the input value\n (default: 'not found')\n\n enforce_sublist : boolean, optional\n If True, all entries in both list are list.\n If False(default), only multiple matches are list, rest are strings\n\n country_data : Pandas DataFrame or path to data file (optional)\n This is by default set to COUNTRY_DATA_FILE - the standard (tested)\n country list for coco.\n\n additional_data: (list of) Pandas DataFrames or data files (optional)\n Additional data to include for a specific analysis.\n This must be given in the same format as specified in the\n country_data file. (utf-8 encoded tab separated data, same\n column headers in all files)\n\n Returns\n -------\n dict:\n A dictionary with a key for every entry in list_a. The value\n correspond to the matching entry in list_b if found. 
If there is\n a 1:1 correspondence, the value is a str (if enforce_sublist is False),\n otherwise multiple entries as list.\n\n \"\"\"\n if isinstance(list_a, str):\n list_a = [list_a]\n if isinstance(list_b, str):\n list_b = [list_b]\n if isinstance(list_a, tuple):\n list_a = list(list_a)\n if isinstance(list_b, tuple):\n list_b = list(list_b)\n\n coco = CountryConverter(country_data, additional_data)\n\n name_dict_a = dict()\n match_dict_a = dict()\n\n for name_a in list_a:\n\n name_dict_a[name_a] = []\n match_dict_a[name_a] = []\n\n for regex in coco.regexes:\n if regex.search(name_a):\n match_dict_a[name_a].append(regex)\n\n if len(match_dict_a[name_a]) == 0:\n logging.warning('Could not identify {} in list_a'.format(name_a))\n _not_found_entry = name_a if not not_found else not_found\n name_dict_a[name_a].append(_not_found_entry)\n if not enforce_sublist:\n name_dict_a[name_a] = name_dict_a[name_a][0]\n continue\n\n if len(match_dict_a[name_a]) > 1:\n logging.warning(\n 'Multiple matches for name {} in list_a'.format(name_a))\n\n for match_case in match_dict_a[name_a]:\n b_matches = 0\n for name_b in list_b:\n if match_case.search(name_b):\n b_matches += 1\n name_dict_a[name_a].append(name_b)\n\n if b_matches == 0:\n logging.warning(\n 'Could not find any '\n 'correspondence for {} in list_b'.format(name_a))\n _not_found_entry = name_a if not not_found else not_found\n name_dict_a[name_a].append(_not_found_entry)\n\n if b_matches > 1:\n logging.warning('Multiple matches for '\n 'name {} in list_b'.format(name_a))\n\n if not enforce_sublist and (len(name_dict_a[name_a]) == 1):\n name_dict_a[name_a] = name_dict_a[name_a][0]\n\n return name_dict_a", "response": "This function matches the names given in list_a to list_b and returns a dictionary that maps the names of the countries in list_a to the ones in list_b."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwraps around CountryConverter. 
convert", "response": "def convert(*args, **kargs):\n \"\"\" Wrapper around CountryConverter.convert()\n\n Uses the same parameters. This function has the same performance as\n CountryConverter.convert for one call; for multiple calls it is better to\n instantiate a common CountryConverter (this avoid loading the source data\n file multiple times).\n\n Note\n ----\n A lot of the functionality can also be done directly in Pandas DataFrames.\n For example:\n cc = CountryConverter()\n names = ['USA', 'SWZ', 'PRI']\n cc.data[cc.data['ISO3'].isin(names)][['ISO2', 'continent']]\n\n Parameters\n ----------\n names : str or list like\n Countries in 'src' classification to convert to 'to' classification\n\n src : str, optional\n Source classification\n\n to : str, optional\n Output classification (valid str for an index of the\n country data file), default: name_short\n\n enforce_list : boolean, optional\n If True, enforces the output to be list (if only one name was passed)\n or to be a list of lists (if multiple names were passed). If False\n (default), the output will be a string (if only one name was passed) or\n a list of str and/or lists (str if a one to one matching, list\n otherwise).\n\n not_found : str, optional\n Fill in value for not found entries. If None, keep the input value\n (default: 'not found')\n\n country_data : Pandas DataFrame or path to data file (optional)\n This is by default set to COUNTRY_DATA_FILE - the standard (tested)\n country list for coco.\n\n additional_data: (list of) Pandas DataFrames or data files (optional)\n Additional data to include for a specific analysis.\n This must be given in the same format as specified in the\n country_data_file. 
(utf-8 encoded tab separated data, same\n column headers as in the general country data file)\n\n Returns\n -------\n list or str, depending on enforce_list\n\n \"\"\"\n init = {'country_data': COUNTRY_DATA_FILE,\n 'additional_data': None,\n 'only_UNmember': False,\n 'include_obsolete': False}\n init.update({kk: kargs.get(kk) for kk in init.keys() if kk in kargs})\n coco = CountryConverter(**init)\n kargs = {kk: ii for kk, ii in kargs.items() if kk not in init.keys()}\n return coco.convert(*args, **kargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\na command line parser for coco", "response": "def _parse_arg(valid_classifications):\n \"\"\" Command line parser for coco\n\n Parameters\n ----------\n\n valid_classifications: list\n Available classifications, used for checking input parameters.\n\n Returns\n -------\n\n args : ArgumentParser namespace\n \"\"\"\n\n parser = argparse.ArgumentParser(\n description=('The country converter (coco): a Python package for '\n 'converting country names between '\n 'different classification schemes. '\n 'Version: {}'.format(__version__)\n ), prog='coco', usage=('%(prog)s --names --src --to'))\n\n parser.add_argument('names',\n help=('List of countries to convert '\n '(space separated, country names consisting of '\n 'multiple words must be put in quotation marks). '\n 'Possible classifications: ' +\n ', '.join(valid_classifications) +\n '; NB: long, official and short are provided '\n 'as shortcuts for the names classifications'\n ), nargs='*')\n parser.add_argument('-s', '--src', '--source', '-f', '--from',\n help=('Classification of the names given '\n '(default: inferred from names)'))\n parser.add_argument('-t', '--to',\n help=('Required classification of the passed names '\n '(default: \"ISO3\")'))\n parser.add_argument('-o', '--output_sep',\n help=('Separator for output names '\n '(default: space), e.g. 
\",\" '))\n parser.add_argument('-n', '--not_found',\n default='not found',\n help=('Fill in value for none found entries. '\n 'If \"None\" (string), keep the input value '\n '(default: not found)'))\n parser.add_argument('-a', '--additional_data',\n help=('Data file with additional country data'\n '(Same format as the original data file - '\n 'utf-8 encoded tab separated data, same '\n 'column headers as in the general country '\n 'data file; default: not found)'))\n\n args = parser.parse_args()\n args.src = args.src or None\n args.to = args.to or 'ISO3'\n args.not_found = args.not_found if args.not_found != 'None' else None\n args.output_sep = args.output_sep or ' '\n\n return args"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef main():\n args = _parse_arg(CountryConverter().valid_class)\n coco = CountryConverter(additional_data=args.additional_data)\n converted_names = coco.convert(\n names=args.names,\n src=args.src,\n to=args.to,\n enforce_list=False,\n not_found=args.not_found)\n\n print(args.output_sep.join(\n [str(etr) for etr in converted_names] if\n isinstance(converted_names, list) else [str(converted_names)]))", "response": "Entry point for the base class call."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _separate_exclude_cases(name, exclude_prefix):\n\n excluder = re.compile('|'.join(exclude_prefix))\n split_entries = excluder.split(name)\n return {'clean_name': split_entries[0],\n 'excluded_countries': split_entries[1:]}", "response": "Splits the excluded_countries and name into a single dictionary with the name and the excluded_countries as keys and the excluded_countries as values."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef convert(self, names, src=None, to='ISO3', enforce_list=False,\n not_found='not found',\n exclude_prefix=['excl\\\\w.*', 'without', 
'w/o']):\n \"\"\" Convert names from a list to another list.\n\n Note\n ----\n A lot of the functionality can also be done directly in Pandas\n DataFrames.\n For example:\n coco = CountryConverter()\n names = ['USA', 'SWZ', 'PRI']\n coco.data[coco.data['ISO3'].isin(names)][['ISO2', 'continent']]\n\n Parameters\n ----------\n names : str or list like\n Countries in 'src' classification to convert\n to 'to' classification\n\n src : str, optional\n Source classification. If None (default), each passed name is\n checked if it is a number (assuming UNnumeric) or 2 (ISO2) or\n 3 (ISO3) characters long; for longer names 'regex' is assumed.\n\n to : str, optional\n Output classification (valid index of the country_data file),\n default: ISO3\n\n enforce_list : boolean, optional\n If True, enforces the output to be list (if only one name was\n passed) or to be a list of lists (if multiple names were passed).\n If False (default), the output will be a string (if only one name\n was passed) or a list of str and/or lists (str if a one to one\n matching, list otherwise).\n\n not_found : str, optional\n Fill in value for none found entries. If None, keep the input value\n (default: 'not found')\n\n exclude_prefix : list of valid regex strings\n List of indicators which negate the subsequent country/region.\n These prefixes and everything following will not be converted.\n E.g. 
'Asia excluding China' becomes 'Asia' and\n 'China excluding Hong Kong' becomes 'China' prior to conversion\n Default: ['excl\\\\w.*', 'without', 'w/o'])\n\n Returns\n -------\n list or str, depending on enforce_list\n\n \"\"\"\n # The list to tuple conversion is necessary for Matlab interface\n names = list(names) if (\n isinstance(names, tuple) or\n isinstance(names, set)) else names\n\n names = names if isinstance(names, list) else [names]\n\n names = [str(n) for n in names]\n\n outlist = names.copy()\n\n to = [self._validate_input_para(to, self.data.columns)]\n\n exclude_split = {name: self._separate_exclude_cases(name,\n exclude_prefix)\n for name in names}\n\n for ind_names, current_name in enumerate(names):\n spec_name = exclude_split[current_name]['clean_name']\n\n if src is None:\n src_format = self._get_input_format_from_name(spec_name)\n else:\n src_format = self._validate_input_para(src, self.data.columns)\n\n if src_format.lower() == 'regex':\n result_list = []\n for ind_regex, ccregex in enumerate(self.regexes):\n if ccregex.search(spec_name):\n result_list.append(\n self.data.ix[ind_regex, to].values[0])\n if len(result_list) > 1:\n logging.warning('More then one regular expression '\n 'match for {}'.format(spec_name))\n\n else:\n _match_col = self.data[src_format].astype(\n str).str.replace('\\\\..*', '')\n\n result_list = [etr[0] for etr in\n self.data[_match_col.str.contains(\n '^' + spec_name + '$', flags=re.IGNORECASE,\n na=False)][to].values]\n\n if len(result_list) == 0:\n logging.warning(\n '{} not found in {}'.format(spec_name, src_format))\n _fillin = not_found or spec_name\n outlist[ind_names] = [_fillin] if enforce_list else _fillin\n else:\n outlist[ind_names] = []\n for etr in result_list:\n try:\n conv_etr = int(etr)\n except ValueError:\n conv_etr = etr\n outlist[ind_names].append(conv_etr)\n\n if len(outlist[ind_names]) == 1 and enforce_list is False:\n outlist[ind_names] = outlist[ind_names][0]\n\n if (len(outlist) == 1) and not 
enforce_list:\n return outlist[0]\n else:\n return outlist", "response": "Convert names from one list to another list."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the EU28 countries in the specified classification", "response": "def EU28as(self, to='name_short'):\n \"\"\"\n Return EU28 countries in the specified classification\n\n Parameters\n ----------\n to : str, optional\n Output classification (valid str for an index of\n country_data file), default: name_short\n\n Returns\n -------\n Pandas DataFrame\n\n \"\"\"\n if type(to) is str:\n to = [to]\n return self.data[self.data.EU < 2015][to]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef EU27as(self, to='name_short'):\n if isinstance(to, str):\n to = [to]\n return self.data[self.data.EU < 2013][to]", "response": "Return the EU27 countries in the specified classification"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef OECDas(self, to='name_short'):\n if isinstance(to, str):\n to = [to]\n return self.data[self.data.OECD > 0][to]", "response": "Return the OECD member states in the specified classification"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the UN member states in the specified classification", "response": "def UNas(self, to='name_short'):\n \"\"\"\n Return UN member states in the specified classification\n\n Parameters\n ----------\n to : str, optional\n Output classification (valid str for an index of\n country_data file), default: name_short\n\n Returns\n -------\n Pandas DataFrame\n\n \"\"\"\n if isinstance(to, str):\n to = [to]\n return self.data[self.data.UNmember > 0][to]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a DataFrame containing obsolete countries in the specified classification", "response": "def obsoleteas(self, to='name_short'):\n \"\"\"\n Return obsolete 
countries in the specified classification\n\n Parameters\n ----------\n to : str, optional\n Output classification (valid str for an index of\n country_data file), default: name_short\n\n Returns\n -------\n Pandas DataFrame\n\n \"\"\"\n if isinstance(to, str):\n to = [to]\n return self.data[self.data.obsolete > 0][to]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a dictionary of the correspoding entries between classA and classB.", "response": "def get_correspondance_dict(self, classA, classB,\n restrict=None,\n replace_numeric=True):\n \"\"\" Returns a correspondance between classification A and B as dict\n\n Parameters\n ----------\n\n classA: str\n Valid classification (column name of data)\n\n classB: str\n Valid classification (column name of data).\n\n restrict: boolean vector of size cc.data, optional\n where cc is the name of the CountryConverter instance. Used to\n restrict the data sheet if necessary. E.g. to convert to countries\n which were OECD members before 1970 use\n cc.get_correspondance_dict('ISO3', 'OECD', restrict=cc.data.OECD <\n 1970)\n\n replace_numeric: boolean, optional\n If True (default) replace numeric values with the column header.\n This can be used if get a correspondance to, for example, 'OECD'\n instead of to the OECD membership years. 
Set to False if the actual\n numbers are required (as for UNcode).\n\n Returns\n -------\n dict with\n keys: based on classA\n items: list of correspoding entries in classB or None\n\n \"\"\"\n result = {nn: None for nn in self.data[classA].values}\n\n if restrict is None:\n df = self.data.copy()\n else:\n df = self.data[restrict].copy()\n\n if replace_numeric and df[classB].dtype.kind in 'bifc':\n df.loc[~df[classB].isnull(), classB] = classB\n df.loc[df[classB].isnull(), classB] = None\n\n result.update(df.groupby(classA)\n .aggregate(lambda x: list(x.unique()))\n .to_dict()[classB])\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _validate_input_para(self, para, column_names):\n lower_case_valid_class = [et.lower() for et in self.valid_class]\n\n alt_valid_names = {\n 'name_short': ['short', 'short_name', 'name', 'names'],\n 'name_official': ['official', 'long_name', 'long'],\n 'UNcode': ['un', 'unnumeric'],\n 'ISOnumeric': ['isocode'],\n }\n\n for item in alt_valid_names.items():\n if para.lower() in item[1]:\n para = item[0]\n\n try:\n validated_para = self.valid_class[\n lower_case_valid_class.index(para.lower())]\n except ValueError:\n raise KeyError(\n '{} is not a valid country classification'.format(para))\n\n return validated_para", "response": "Convert the input classificaton para to the correct df column name"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_input_format_from_name(self, name):\n try:\n int(name)\n src_format = 'ISOnumeric'\n except ValueError:\n if len(name) == 2:\n src_format = 'ISO2'\n elif len(name) == 3:\n src_format = 'ISO3'\n else:\n src_format = 'regex'\n return src_format", "response": "Determines the input format based on the given name."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef Bezier(points, at):\n at = np.asarray(at)\n at_flat = 
at.ravel()\n N = len(points)\n curve = np.zeros((at_flat.shape[0], 2))\n for ii in range(N):\n curve += np.outer(Bernstein(N - 1, ii)(at_flat), points[ii])\n return curve.reshape(at.shape + (2,))", "response": "Build B\u00e9zier curve from points."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads data from the pty in a thread.", "response": "def _read_in_thread(address, pty, blocking):\n \"\"\"Read data from the pty in a thread.\n \"\"\"\n client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)\n client.connect(address)\n\n while 1:\n data = pty.read(4096, blocking=blocking)\n\n if not data and not pty.isalive():\n while not data and not pty.iseof():\n data += pty.read(4096, blocking=blocking)\n\n if not data:\n try:\n client.send(b'')\n except socket.error:\n pass\n break\n try:\n client.send(data)\n except socket.error:\n break\n\n client.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nstarts a new child process in a pseudo terminal.", "response": "def spawn(cls, argv, cwd=None, env=None, dimensions=(24, 80)):\n \"\"\"Start the given command in a child process in a pseudo terminal.\n\n This does all the setting up of the pty, and returns an instance of\n PtyProcess.\n\n Dimensions of the pseudoterminal used for the subprocess can be\n specified as a tuple (rows, cols), or the default (24, 80) will be\n used.\n \"\"\"\n if isinstance(argv, str):\n argv = shlex.split(argv, posix=False)\n\n if not isinstance(argv, (list, tuple)):\n raise TypeError(\"Expected a list or tuple for argv, got %r\" % argv)\n\n # Shallow copy of argv so we can modify it\n argv = argv[:]\n command = argv[0]\n env = env or os.environ\n\n path = env.get('PATH', os.defpath)\n command_with_path = which(command, path=path)\n if command_with_path is None:\n raise FileNotFoundError(\n 'The command was not found or was not ' +\n 'executable: %s.' 
% command\n )\n command = command_with_path\n argv[0] = command\n cmdline = ' ' + subprocess.list2cmdline(argv[1:])\n cwd = cwd or os.getcwd()\n\n proc = PTY(dimensions[1], dimensions[0])\n\n # Create the environment string.\n envStrs = []\n for (key, value) in env.items():\n envStrs.append('%s=%s' % (key, value))\n env = '\\0'.join(envStrs) + '\\0'\n\n if PY2:\n command = _unicode(command)\n cwd = _unicode(cwd)\n cmdline = _unicode(cmdline)\n env = _unicode(env)\n\n if len(argv) == 1:\n proc.spawn(command, cwd=cwd, env=env)\n else:\n proc.spawn(command, cwd=cwd, env=env, cmdline=cmdline)\n\n inst = cls(proc)\n inst._winsize = dimensions\n\n # Set some informational attributes\n inst.argv = argv\n if env is not None:\n inst.env = env\n if cwd is not None:\n inst.launch_dir = cwd\n\n return inst"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef close(self, force=False):\n if not self.closed:\n self.pty.close()\n self.fileobj.close()\n self._server.close()\n # Give kernel time to update process status.\n time.sleep(self.delayafterclose)\n if self.isalive():\n if not self.terminate(force):\n raise IOError('Could not terminate the child.')\n self.fd = -1\n self.closed = True\n del self.pty\n self.pty = None", "response": "This closes the connection with the child application."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nread and return at most size characters from the pty.", "response": "def read(self, size=1024):\n \"\"\"Read and return at most ``size`` characters from the pty.\n\n Can block if there is nothing to read. 
Raises :exc:`EOFError` if the\n terminal was closed.\n \"\"\"\n data = self.fileobj.recv(size)\n if not data:\n self.flag_eof = True\n raise EOFError('Pty is closed')\n\n return self.decoder.decode(data, final=False)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef readline(self):\n buf = []\n while 1:\n try:\n ch = self.read(1)\n except EOFError:\n return ''.join(buf)\n buf.append(ch)\n if ch == '\\n':\n return ''.join(buf)", "response": "Read one line from the pseudoterminal as bytes."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwrites the string s to the pseudoterminal. Returns the number of bytes written.", "response": "def write(self, s):\n \"\"\"Write the string ``s`` to the pseudoterminal.\n\n Returns the number of bytes written.\n \"\"\"\n if not self.isalive():\n raise EOFError('Pty is closed')\n if PY2:\n s = _unicode(s)\n\n success, nbytes = self.pty.write(s)\n if not success:\n raise IOError('Write failed')\n return nbytes"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef setwinsize(self, rows, cols):\n self._winsize = (rows, cols)\n self.pty.set_size(cols, rows)", "response": "Set the terminal window size of the child tty."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef read(self, length=1000, blocking=False):\n size_p = PLARGE_INTEGER(LARGE_INTEGER(0))\n if not blocking:\n windll.kernel32.GetFileSizeEx(self.conout_pipe, size_p)\n size = size_p[0]\n length = min(size, length)\n data = ctypes.create_string_buffer(length)\n if length > 0:\n num_bytes = PLARGE_INTEGER(LARGE_INTEGER(0))\n ReadFile(self.conout_pipe, data, length, num_bytes, None)\n return data.value", "response": "Read length bytes from current process output stream."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nwrite string 
data to current process input stream.", "response": "def write(self, data):\n \"\"\"Write string data to current process input stream.\"\"\"\n data = data.encode('utf-8')\n data_p = ctypes.create_string_buffer(data)\n num_bytes = PLARGE_INTEGER(LARGE_INTEGER(0))\n bytes_to_write = len(data)\n success = WriteFile(self.conin_pipe, data_p,\n bytes_to_write, num_bytes, None)\n return success, num_bytes[0]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef close(self):\n windll.kernel32.CloseHandle(self.conout_pipe)\n windll.kernel32.CloseHandle(self.conin_pipe)", "response": "Close all communication process streams."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks if current process streams are still open.", "response": "def iseof(self):\n \"\"\"Check if current process streams are still open.\"\"\"\n succ = windll.kernel32.PeekNamedPipe(\n self.conout_pipe, None, None, None, None, None\n )\n return not bool(succ)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nscrape an instagram profile page", "response": "def instagram_scrap_profile(username):\n \"\"\"\n Scrape an instagram profile page\n :param username:\n :return:\n \"\"\"\n try:\n url = \"https://www.instagram.com/{}/\".format(username)\n page = requests.get(url)\n # Raise error for 404 caused by a bad profile name\n page.raise_for_status()\n return html.fromstring(page.content)\n except HTTPError:\n logging.exception('user profile \"{}\" not found'.format(username))\n except (ConnectionError, socket_error) as e:\n logging.exception(\"instagram.com unreachable\")"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nretrieves the script tags from the parsed page.", "response": "def instagram_profile_js(username):\n \"\"\"\n Retrieve the script tags from the parsed page.\n :param username:\n :return:\n \"\"\"\n try:\n tree = 
instagram_scrap_profile(username)\n return tree.xpath('//script')\n except AttributeError:\n logging.exception(\"scripts not found\")\n return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the JSON data string from the scripts.", "response": "def instagram_profile_json(username):\n \"\"\"\n Get the JSON data string from the scripts.\n :param username:\n :return:\n \"\"\"\n scripts = instagram_profile_js(username)\n source = None\n\n if scripts:\n for script in scripts:\n if script.text:\n if script.text[0:SCRIPT_JSON_PREFIX] == \"window._sharedData\":\n source = script.text[SCRIPT_JSON_DATA_INDEX:-1]\n\n return source"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_profile_media(profile, page = 0):\n try:\n edges = profile['entry_data']['ProfilePage'][page]['graphql']['user']['edge_owner_to_timeline_media']['edges']\n return [edge['node'] for edge in edges]\n except KeyError:\n logging.exception(\"path to profile media not found\")", "response": "Parse a generated media object"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a node that represents the most recent media of a user.", "response": "def instagram_user_recent_media(parser, token):\n \"\"\"\n Tag for getting data about recent media of an user.\n :param parser:\n :param token:\n :return:\n \"\"\"\n try:\n tagname, username = token.split_contents()\n\n return InstagramUserRecentMediaNode(username)\n except ValueError:\n raise template.TemplateSyntaxError(\n \"%r tag requires a single argument\" % token.contents.split()[0]\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfulling - length binary representation of the IP address.", "response": "def bin(self):\n \"\"\"Full-length binary representation of the IP address.\n\n >>> ip = IP(\"127.0.0.1\")\n >>> print(ip.bin())\n 01111111000000000000000000000001\n \"\"\"\n bits = self.v == 4 
and 32 or 128\n return bin(self.ip).split('b')[1].rjust(bits, '0')"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef info(self):\n b = self.bin()\n for i in range(len(b), 0, -1):\n if b[:i] in self._range[self.v]:\n return self._range[self.v][b[:i]]\n return 'UNKNOWN'", "response": "Show IANA allocation information for the current IP address."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting dotquad or hextet to long.", "response": "def _dqtoi(self, dq):\n \"\"\"Convert dotquad or hextet to long.\"\"\"\n # hex notation\n if dq.startswith('0x'):\n return self._dqtoi_hex(dq)\n\n # IPv6\n if ':' in dq:\n return self._dqtoi_ipv6(dq)\n elif len(dq) == 32:\n # Assume full heximal notation\n self.v = 6\n return int(dq, 16)\n\n # IPv4\n if '.' in dq:\n return self._dqtoi_ipv4(dq)\n\n raise ValueError('Invalid address input')"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts long to dotquad or hextet.", "response": "def _itodq(self, n):\n \"\"\"Convert long to dotquad or hextet.\"\"\"\n if self.v == 4:\n return '.'.join(map(str, [\n (n >> 24) & 0xff,\n (n >> 16) & 0xff,\n (n >> 8) & 0xff,\n n & 0xff,\n ]))\n else:\n n = '%032x' % n\n return ':'.join(n[4 * x:4 * x + 4] for x in range(0, 8))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompress an IP address to its shortest possible compressed form.", "response": "def to_compressed(self):\n \"\"\"\n Compress an IP address to its shortest possible compressed form.\n\n >>> print(IP('127.0.0.1').to_compressed())\n 127.1\n >>> print(IP('127.1.0.1').to_compressed())\n 127.1.1\n >>> print(IP('127.0.1.1').to_compressed())\n 127.0.1.1\n >>> print(IP('2001:1234:0000:0000:0000:0000:0000:5678').to_compressed())\n 2001:1234::5678\n >>> print(IP('1234:0000:0000:beef:0000:0000:0000:5678').to_compressed())\n 1234:0:0:beef::5678\n >>> 
print(IP('0000:0000:0000:0000:0000:0000:0000:0001').to_compressed())\n ::1\n >>> print(IP('fe80:0000:0000:0000:0000:0000:0000:0000').to_compressed())\n fe80::\n \"\"\"\n if self.v == 4:\n quads = self.dq.split('.')\n try:\n zero = quads.index('0')\n if zero == 1 and quads.index('0', zero + 1):\n quads.pop(zero)\n quads.pop(zero)\n return '.'.join(quads)\n elif zero == 2:\n quads.pop(zero)\n return '.'.join(quads)\n except ValueError: # No zeroes\n pass\n\n return self.dq\n else:\n quads = map(lambda q: '%x' % (int(q, 16)), self.dq.split(':'))\n quadc = ':%s:' % (':'.join(quads),)\n zeros = [0, -1]\n\n # Find the largest group of zeros\n for match in re.finditer(r'(:[:0]+)', quadc):\n count = len(match.group(1)) - 1\n if count > zeros[0]:\n zeros = [count, match.start(1)]\n\n count, where = zeros\n if count:\n quadc = quadc[:where] + ':' + quadc[where + count:]\n\n quadc = re.sub(r'((^:)|(:$))', '', quadc)\n quadc = re.sub(r'((^:)|(:$))', '::', quadc)\n\n return quadc"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef to_ipv4(self):\n if self.v == 4:\n return self\n else:\n if self.bin().startswith('0' * 96):\n return IP(int(self), version=4)\n elif self.bin().startswith('0' * 80 + '1' * 16):\n return IP(int(self) & MAX_IPV4, version=4)\n elif int(self) & BASE_6TO4:\n return IP((int(self) - BASE_6TO4) >> 80, version=4)\n else:\n return ValueError('%s: IPv6 address is not IPv4 compatible or mapped, '\n 'nor an 6-to-4 IP' % self.dq)", "response": "Convert an IPv6 address to an IPv4 address if possible."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ninitializes a new network from binary notation.", "response": "def from_bin(cls, value):\n \"\"\"Initialize a new network from binary notation.\"\"\"\n value = value.lstrip('b')\n if len(value) == 32:\n return cls(int(value, 2))\n elif len(value) == 128:\n return cls(int(value, 2))\n else:\n return ValueError('%r: 
invalid binary notation' % (value,))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef from_hex(cls, value):\n if len(value) == 8:\n return cls(int(value, 16))\n elif len(value) == 32:\n return cls(int(value, 16))\n else:\n raise ValueError('%r: invalid hexadecimal notation' % (value,))", "response": "Initialize a new network from hexadecimal notation."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts an IPv4 address to an IPv6 address.", "response": "def to_ipv6(self, ip_type='6-to-4'):\n \"\"\"\n Convert (an IPv4) IP address to an IPv6 address.\n\n >>> ip = IP('192.0.2.42')\n >>> print(ip.to_ipv6())\n 2002:c000:022a:0000:0000:0000:0000:0000\n\n >>> print(ip.to_ipv6('compat'))\n 0000:0000:0000:0000:0000:0000:c000:022a\n\n >>> print(ip.to_ipv6('mapped'))\n 0000:0000:0000:0000:0000:ffff:c000:022a\n \"\"\"\n assert ip_type in ['6-to-4', 'compat', 'mapped'], 'Conversion ip_type not supported'\n if self.v == 4:\n if ip_type == '6-to-4':\n return IP(BASE_6TO4 | int(self) << 80, version=6)\n elif ip_type == 'compat':\n return IP(int(self), version=6)\n elif ip_type == 'mapped':\n return IP(0xffff << 32 | int(self), version=6)\n else:\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef to_reverse(self):\n if self.v == 4:\n return '.'.join(list(self.dq.split('.')[::-1]) + ['in-addr', 'arpa'])\n else:\n return '.'.join(list(self.hex())[::-1] + ['ip6', 'arpa'])", "response": "Convert the IP address to a PTR record."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns netmask as long.", "response": "def netmask_long(self):\n \"\"\"\n Network netmask derived from subnet size, as long.\n\n >>> localnet = Network('127.0.0.1/8')\n >>> print(localnet.netmask_long())\n 4278190080\n \"\"\"\n if self.version() == 4:\n return (MAX_IPV4 >> (32 - self.mask)) 
<< (32 - self.mask)\n else:\n return (MAX_IPV6 >> (128 - self.mask)) << (128 - self.mask)"}
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef broadcast_long(self):\n if self.version() == 4:\n return self.network_long() | (MAX_IPV4 - self.netmask_long())\n else:\n return self.network_long() \\\n | (MAX_IPV6 - self.netmask_long())", "response": "Return the broadcast address as long."}
{"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the first available host in this subnet.", "response": "def host_first(self):\n \"\"\"First available host in this subnet.\"\"\"\n if (self.version() == 4 and self.mask > 30) or \\\n (self.version() == 6 and self.mask > 126):\n return self\n else:\n return IP(self.network_long() + 1, version=self.version())"}
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef host_last(self):\n if (self.version() == 4 and self.mask == 32) or \\\n (self.version() == 6 and self.mask == 128):\n return self\n elif (self.version() == 4 and self.mask == 31) or \\\n (self.version() == 6 and self.mask == 127):\n return IP(int(self) + 1, version=self.version())\n else:\n return IP(self.broadcast_long() - 1, version=self.version())", "response": "Last available host in this subnet."}
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef check_collision(self, other):\n other = Network(other)\n return self.network_long() <= other.network_long() <= self.broadcast_long() or \\\n other.network_long() <= self.network_long() <= other.broadcast_long()", "response": "Check another network against the given network."}
{"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef init_with_context(self, context):\n # Apply the include/exclude patterns:\n listitems = self._visible_models(context['request'])\n\n # Convert to a similar data structure like the dashboard icons have.\n # This allows sorting the items identically.\n models = [\n {'name': model._meta.model_name,\n 'app_name': model._meta.app_label,\n 'title': capfirst(model._meta.verbose_name_plural),\n 'url': self._get_admin_change_url(model, context)\n }\n for model, perms in listitems if self.is_item_visible(model, perms)\n ]\n\n # Sort models.\n sort_cms_models(models)\n\n # Convert to items\n for model in models:\n self.children.append(items.MenuItem(title=model['title'], url=model['url']))", "response": "Initialize the menu with the given context."}
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef init_with_context(self, context):\n super(ReturnToSiteItem, self).init_with_context(context)\n\n # See if the current page is being edited, update URL accordingly.\n edited_model = self.get_edited_object(context['request'])\n if edited_model:\n try:\n url = edited_model.get_absolute_url()\n except (AttributeError, urls.NoReverseMatch) as e:\n pass\n else:\n if url:\n self.url = url", "response": "Initialize the return to site item based on the context."}
{"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the object which is currently being edited.", "response": "def get_edited_object(self, request):\n \"\"\"\n Return the object which is currently being edited.\n Returns ``None`` if the match could not be made.\n \"\"\"\n resolvermatch = urls.resolve(request.path_info)\n if resolvermatch.namespace == 'admin' and resolvermatch.url_name and resolvermatch.url_name.endswith('_change'):\n # In \"appname_modelname_change\" view of the admin.\n # Extract the appname and model from the url name.\n # For some custom views, url_name might not be filled in (e.g. django-polymorphic's subclass_view)\n match = RE_CHANGE_URL.match(resolvermatch.url_name)\n if not match:\n return None\n\n # object_id can be string (e.g. a country code as PK).\n try:\n object_id = resolvermatch.kwargs['object_id'] # Django 2.0+\n except KeyError:\n object_id = resolvermatch.args[0]\n\n return self.get_object_by_natural_key(match.group(1), match.group(2), object_id)\n return None"}
{"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_object_by_natural_key(self, app_label, model_name, object_id):\n try:\n model_type = ContentType.objects.get_by_natural_key(app_label, model_name)\n except ContentType.DoesNotExist:\n return None\n\n # Pointless to fetch the object, if there is no URL to generate\n # Avoid another database query.\n ModelClass = model_type.model_class()\n if not hasattr(ModelClass, 'get_absolute_url'):\n return None\n\n try:\n return model_type.get_object_for_this_type(pk=object_id)\n except ObjectDoesNotExist:\n return None", "response": "Return a model based on a natural key."}
{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_personal_module(self):\n return PersonalModule(\n layout='inline',\n draggable=False,\n deletable=False,\n collapsible=False,\n )", "response": "Instantiate the PersonalModule for use in the dashboard."}
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_application_modules(self):\n modules = []\n appgroups = get_application_groups()\n for title, kwargs in appgroups:\n AppListClass = get_class(kwargs.pop('module')) # e.g. CmsAppIconlist, AppIconlist, Applist\n modules.append(AppListClass(title, **kwargs))\n return modules", "response": "Instantiate all application modules."}
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_recent_actions_module(self):\n return modules.RecentActions(\n _('Recent Actions'),\n include_list=self.get_app_content_types(),\n limit=5,\n enabled=False,\n collapsible=False\n )", "response": "Return the recent actions module."}
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the application groups that are available in the system.", "response": "def get_application_groups():\n \"\"\"\n Return the applications of the system, organized in various groups.\n\n These groups are not connected with the application names,\n but rather with a pattern of applications.\n \"\"\"\n\n groups = []\n for title, groupdict in appsettings.FLUENT_DASHBOARD_APP_GROUPS:\n # Allow to pass all possible arguments to the DashboardModule class.\n module_kwargs = groupdict.copy()\n\n # However, the 'models' is treated special, to have catch-all support.\n if '*' in groupdict['models']:\n default_module = appsettings.FLUENT_DASHBOARD_DEFAULT_MODULE\n module_kwargs['exclude'] = ALL_KNOWN_APPS + list(module_kwargs.get('exclude', []))\n del module_kwargs['models']\n else:\n default_module = 'CmsAppIconList'\n\n # Get module to display, can be a alias for known variations.\n module = groupdict.get('module', default_module)\n if module in MODULE_ALIASES:\n module = MODULE_ALIASES[module]\n module_kwargs['module'] = module\n groups.append((title, module_kwargs),)\n\n return groups"}
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef sort_cms_models(cms_models):\n cms_models.sort(key=lambda model: (\n get_cms_model_order(model['name']) if is_cms_app(model['app_name']) else 999,\n model['app_name'],\n model['title']\n ))", "response": "Sort a set of CMS - related models in a custom order."}
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef is_cms_app(app_name):\n for pat in appsettings.FLUENT_DASHBOARD_CMS_APP_NAMES:\n if fnmatch(app_name, pat):\n return True\n\n return False", "response": "Returns whether the given application name is a CMS app"}
{"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a numeric ordering for a model name.", "response": "def get_cms_model_order(model_name):\n \"\"\"\n Return a numeric ordering for a model name.\n \"\"\"\n for (name, order) in iteritems(appsettings.FLUENT_DASHBOARD_CMS_MODEL_ORDER):\n if name in model_name:\n return order\n return 999"}
{"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef init_with_context(self, context):\n site_name = get_admin_site_name(context)\n\n self.children += [\n items.MenuItem(_('Dashboard'), reverse('{0}:index'.format(site_name))),\n items.Bookmarks(),\n ]\n\n for title, kwargs in get_application_groups():\n if kwargs.get('enabled', True):\n self.children.append(CmsModelList(title, **kwargs))\n\n self.children += [\n ReturnToSiteItem()\n ]", "response": "Initialize the menu items with the given context."}
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef init_with_context(self, context):\n super(PersonalModule, self).init_with_context(context)\n current_user = context['request'].user\n if django.VERSION < (1, 5):\n current_username = current_user.first_name or current_user.username\n else:\n current_username = current_user.get_short_name() or current_user.get_username()\n site_name = get_admin_site_name(context)\n\n # Personalize\n self.title = _('Welcome,') + ' ' + (current_username)\n\n # Expose links\n self.pages_link = None\n self.pages_title = None\n self.password_link = reverse('{0}:password_change'.format(site_name))\n self.logout_link = reverse('{0}:logout'.format(site_name))\n\n if self.cms_page_model:\n try:\n app_label, model_name = self.cms_page_model\n model = apps.get_model(app_label, model_name)\n pages_title = model._meta.verbose_name_plural.lower()\n pages_link = reverse('{site}:{app}_{model}_changelist'.format(site=site_name, app=app_label.lower(), model=model_name.lower()))\n except AttributeError:\n raise ImproperlyConfigured(\"The value {0} of FLUENT_DASHBOARD_CMS_PAGE_MODEL setting (or cms_page_model value) does not reffer to an existing model.\".format(self.cms_page_model))\n except NoReverseMatch:\n pass\n else:\n # Also check if the user has permission to view the module.\n # TODO: When there are modules that use Django 1.8's has_module_permission, add the support here.\n permission_name = 'change_{0}'.format(model._meta.model_name.lower())\n\n if current_user.has_perm('{0}.{1}'.format(model._meta.app_label, permission_name)):\n self.pages_title = pages_title\n self.pages_link = pages_link", "response": "Initializes the link list."}
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef init_with_context(self, context):\n super(AppIconList, self).init_with_context(context)\n apps = self.children\n\n # Standard model only has a title, change_url and add_url.\n # Restore the app_name and name, so icons can be matched.\n for app in apps:\n app_name = self._get_app_name(app)\n app['name'] = app_name\n\n for model in app['models']:\n try:\n model_name = self._get_model_name(model)\n model['name'] = model_name\n model['icon'] = self.get_icon_for_model(app_name, model_name) or appsettings.FLUENT_DASHBOARD_DEFAULT_ICON\n except ValueError:\n model['icon'] = appsettings.FLUENT_DASHBOARD_DEFAULT_ICON\n\n # Automatically add STATIC_URL before relative icon paths.\n model['icon'] = self.get_icon_url(model['icon'])\n model['app_name'] = app_name", "response": "Initializes the icon list with the given context."}
{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_model_name(self, modeldata):\n if 'change_url' in modeldata:\n return modeldata['change_url'].strip('/').split('/')[-1] # /foo/admin/appname/modelname\n elif 'add_url' in modeldata:\n return modeldata['add_url'].strip('/').split('/')[-2] # /foo/admin/appname/modelname/add\n else:\n raise ValueError(\"Missing attributes in modeldata to find the model name!\")", "response": "Extract the model name from the modeldata that django - admin - tools provides."}
{"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the icon for the given app and model.", "response": "def get_icon_for_model(self, app_name, model_name, default=None):\n \"\"\"\n Return the icon for the given model.\n It reads the :ref:`FLUENT_DASHBOARD_APP_ICONS` setting.\n \"\"\"\n key = \"{0}/{1}\".format(app_name, model_name)\n return appsettings.FLUENT_DASHBOARD_APP_ICONS.get(key, default)"}
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_icon_url(self, icon):\n if not icon.startswith('/') \\\n and not icon.startswith('http://') \\\n and not icon.startswith('https://'):\n if '/' in icon:\n return self.icon_static_root + icon\n else:\n return self.icon_theme_root + icon\n else:\n return icon", "response": "Returns the full usable URL for the given icon name."}
{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef init_with_context(self, context):\n super(CmsAppIconList, self).init_with_context(context)\n apps = self.children\n\n cms_apps = [a for a in apps if is_cms_app(a['name'])]\n non_cms_apps = [a for a in apps if a not in cms_apps]\n\n if cms_apps:\n # Group the models of all CMS apps in one group.\n cms_models = []\n for app in cms_apps:\n cms_models += app['models']\n\n sort_cms_models(cms_models)\n single_cms_app = {'name': \"Modules\", 'title': \"Modules\", 'url': \"\", 'models': cms_models}\n\n # Put remaining groups after the first CMS group.\n self.children = [single_cms_app] + non_cms_apps", "response": "Initializes the icon list with the given context."}
{"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef init_with_context(self, context):\n super(CacheStatusGroup, self).init_with_context(context)\n\n if 'dashboardmods' in settings.INSTALLED_APPS:\n import dashboardmods\n memcache_mods = dashboardmods.get_memcache_dash_modules()\n\n try:\n varnish_mods = dashboardmods.get_varnish_dash_modules()\n except (socket.error, KeyError) as e:\n # dashboardmods 2.2 throws KeyError for 'cache_misses' when the Varnish cache is empty.\n # Socket errors are also ignored, to work similar to the memcache stats.\n logger.exception(\"Unable to request Varnish stats: {0}\".format(str(e)))\n varnish_mods = []\n except ImportError:\n varnish_mods = []\n\n self.children = memcache_mods + varnish_mods", "response": "Initializes the status list with the given context."}
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_version():\n\n path = os.path.join(os.path.dirname(__file__), 'soupsieve')\n fp, pathname, desc = imp.find_module('__meta__', [path])\n try:\n meta = imp.load_module('__meta__', fp, pathname, desc)\n return meta.__version__, meta.__version_info__._get_dev_status()\n finally:\n fp.close()", "response": "Get version and version_info without importing the entire module."}
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_attribute_selector(self, sel, m, has_selector, quirks):\n\n inverse = False\n op = m.group('cmp')\n case = util.lower(m.group('case')) if m.group('case') else None\n parts = [css_unescape(a) for a in m.group('ns_attr').split('|')]\n ns = ''\n is_type = False\n pattern2 = None\n if len(parts) > 1:\n ns = parts[0]\n attr = parts[1]\n else:\n attr = parts[0]\n if case:\n flags = re.I if case == 'i' else 0\n elif util.lower(attr) == 'type':\n flags = re.I\n is_type = True\n else:\n flags = 0\n\n if op:\n if m.group('value').startswith(('\"', \"'\")) and not quirks:\n value = css_unescape(m.group('value')[1:-1], True)\n else:\n value = css_unescape(m.group('value'))\n else:\n value = None\n if not op:\n # Attribute name\n pattern = None\n elif op.startswith('^'):\n # Value start with\n pattern = re.compile(r'^%s.*' % re.escape(value), flags)\n elif op.startswith('$'):\n # Value ends with\n pattern = re.compile(r'.*?%s$' % re.escape(value), flags)\n elif op.startswith('*'):\n # Value contains\n pattern = re.compile(r'.*?%s.*' % re.escape(value), flags)\n elif op.startswith('~'):\n # Value contains word within space separated list\n # `~=` should match nothing if it is empty or contains whitespace,\n # so if either of these cases is present, use `[^\\s\\S]` which cannot be matched.\n value = r'[^\\s\\S]' if not value or RE_WS.search(value) else re.escape(value)\n pattern = re.compile(r'.*?(?:(?<=^)|(?<=[ \\t\\r\\n\\f]))%s(?=(?:[ \\t\\r\\n\\f]|$)).*' % value, flags)\n elif op.startswith('|'):\n # Value starts with word in dash separated list\n pattern = re.compile(r'^%s(?:-.*)?$' % re.escape(value), flags)\n elif op.startswith('!'):\n # Equivalent to `:not([attr=value])`\n pattern = re.compile(r'^%s(?:-.*)?$' % re.escape(value), flags)\n inverse = True\n else:\n # Value matches\n pattern = re.compile(r'^%s$' % re.escape(value), flags)\n if is_type and pattern:\n pattern2 = re.compile(pattern.pattern)\n\n # Append the attribute selector\n sel_attr = ct.SelectorAttribute(attr, ns, pattern, pattern2)\n if inverse:\n # If we are using `!=`, we need to nest the pattern under a `:not()`.\n sub_sel = _Selector()\n sub_sel.attributes.append(sel_attr)\n not_list = ct.SelectorList([sub_sel.freeze()], True, False)\n sel.selectors.append(not_list)\n else:\n sel.attributes.append(sel_attr)\n\n has_selector = True\n return has_selector", "response": "Parse the attribute selector from the regex match m."}
{"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_tag_pattern(self, sel, m, has_selector):\n\n parts = [css_unescape(x) for x in m.group(0).split('|')]\n if len(parts) > 1:\n prefix = parts[0]\n tag = parts[1]\n else:\n tag = parts[0]\n prefix = None\n sel.tag = ct.SelectorTag(tag, prefix)\n has_selector = True\n return has_selector", "response": "Parse tag pattern from regex match."}
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef parse_pseudo_class_custom(self, sel, m, has_selector):\n\n pseudo = util.lower(css_unescape(m.group('name')))\n selector = self.custom.get(pseudo)\n if selector is None:\n raise SelectorSyntaxError(\n \"Undefined custom selector '{}' found at postion {}\".format(pseudo, m.end(0)),\n self.pattern,\n m.end(0)\n )\n\n if not isinstance(selector, ct.SelectorList):\n self.custom[pseudo] = None\n selector = CSSParser(\n selector, custom=self.custom, flags=self.flags\n ).process_selectors(flags=FLG_PSEUDO)\n self.custom[pseudo] = selector\n\n sel.selectors.append(selector)\n has_selector = True\n return has_selector", "response": "Parse custom pseudo class alias."}
{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse_pseudo_open(self, sel, name, has_selector, iselector, index):\n\n flags = FLG_PSEUDO | FLG_OPEN\n if name == ':not':\n flags |= FLG_NOT\n if name == ':has':\n flags |= FLG_RELATIVE\n\n sel.selectors.append(self.parse_selectors(iselector, index, flags))\n has_selector = True\n return has_selector", "response": "Parse pseudo with opening bracket."}
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing has combinator tokens.", "response": "def parse_has_combinator(self, sel, m, has_selector, selectors, rel_type, index):\n \"\"\"Parse combinator tokens.\"\"\"\n\n combinator = m.group('relation').strip()\n if not combinator:\n combinator = WS_COMBINATOR\n if combinator == COMMA_COMBINATOR:\n if not has_selector:\n # If we've not captured any selector parts, the comma is either at the beginning of the pattern\n # or following another comma, both of which are unexpected. Commas must split selectors.\n raise SelectorSyntaxError(\n \"The combinator '{}' at postion {}, must have a selector before it\".format(combinator, index),\n self.pattern,\n index\n )\n sel.rel_type = rel_type\n selectors[-1].relations.append(sel)\n rel_type = \":\" + WS_COMBINATOR\n selectors.append(_Selector())\n else:\n if has_selector:\n # End the current selector and associate the leading combinator with this selector.\n sel.rel_type = rel_type\n selectors[-1].relations.append(sel)\n elif rel_type[1:] != WS_COMBINATOR:\n # It's impossible to have two whitespace combinators after each other as the patterns\n # will gobble up trailing whitespace. It is also impossible to have a whitespace\n # combinator after any other kind for the same reason. But we could have\n # multiple non-whitespace combinators. So if the current combinator is not a whitespace,\n # then we've hit the multiple combinator case, so we should fail.\n raise SelectorSyntaxError(\n 'The multiple combinators at position {}'.format(index),\n self.pattern,\n index\n )\n # Set the leading combinator for the next selector.\n rel_type = ':' + combinator\n sel = _Selector()\n\n has_selector = False\n return has_selector, sel, rel_type"}
{"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nparse HTML classes and ids.", "response": "def parse_class_id(self, sel, m, has_selector):\n \"\"\"Parse HTML classes and ids.\"\"\"\n\n selector = m.group(0)\n if selector.startswith('.'):\n sel.classes.append(css_unescape(selector[1:]))\n else:\n sel.ids.append(css_unescape(selector[1:]))\n has_selector = True\n return has_selector"}
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef selector_iter(self, pattern):\n\n # Ignore whitespace and comments at start and end of pattern\n m = RE_WS_BEGIN.search(pattern)\n index = m.end(0) if m else 0\n m = RE_WS_END.search(pattern)\n end = (m.start(0) - 1) if m else (len(pattern) - 1)\n\n if self.debug: # pragma: no cover\n if self.quirks:\n print('## QUIRKS MODE: Throwing out the spec!')\n print('## PARSING: {!r}'.format(pattern))\n while index <= end:\n m = None\n for v in self.css_tokens:\n if not v.enabled(self.flags): # pragma: no cover\n continue\n m = v.match(pattern, index)\n if m:\n name = v.get_name()\n if self.debug: # pragma: no cover\n print(\"TOKEN: '{}' --> {!r} at position {}\".format(name, m.group(0), m.start(0)))\n index = m.end(0)\n yield name, m\n break\n if m is None:\n c = pattern[index]\n # If the character represents the start of one of the known selector types,\n # throw an exception mentioning that the known selector type is in error;\n # otherwise, report the invalid character.\n if c == '[':\n msg = \"Malformed attribute selector at position {}\".format(index)\n elif c == '.':\n msg = \"Malformed class selector at position {}\".format(index)\n elif c == '#':\n msg = \"Malformed id selector at position {}\".format(index)\n elif c == ':':\n msg = \"Malformed pseudo-class selector at position {}\".format(index)\n else:\n msg = \"Invalid character {!r} position {}\".format(c, index)\n raise SelectorSyntaxError(msg, self.pattern, index)\n if self.debug: # pragma: no cover\n print('## END PARSING')", "response": "Iterate through the selector tokens in the pattern."}
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprocess selectors. We do our own selectors as BeautifulSoup4 has some annoying quirks, and we don't really need to do nth selectors or siblings or descendants etc.", "response": "def process_selectors(self, index=0, flags=0):\n \"\"\"\n Process selectors.\n\n We do our own selectors as BeautifulSoup4 has some annoying quirks,\n and we don't really need to do nth selectors or siblings or\n descendants etc.\n \"\"\"\n\n return self.parse_selectors(self.selector_iter(self.pattern), index, flags)"}
{"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfind a label in the list of available labeles.", "response": "def find_label(label, label_color, label_description):\n \"\"\"Find label.\"\"\"\n edit = None\n for name, values in label_list.items():\n color, description = values\n if isinstance(name, tuple):\n old_name = name[0]\n new_name = name[1]\n else:\n old_name = name\n new_name = name\n if label.lower() == old_name.lower():\n edit = LabelEdit(old_name, new_name, color, description)\n break\n return edit"}
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the labels in a tree.", "response": "def update_labels(repo):\n \"\"\"Update labels.\"\"\"\n updated = set()\n for label in repo.get_labels():\n edit = find_label(label.name, label.color, label.description)\n if edit is not None:\n print(' Updating {}: #{} \"{}\"'.format(edit.new, edit.color, edit.description))\n label.edit(edit.new, edit.color, edit.description)\n updated.add(edit.old)\n updated.add(edit.new)\n else:\n if DELETE_UNSPECIFIED:\n print(' Deleting {}: #{} \"{}\"'.format(label.name, label.color, label.description))\n label.delete()\n else:\n print(' Skipping {}: #{} \"{}\"'.format(label.name, label.color, label.description))\n updated.add(label.name)\n for name, values in label_list.items():\n color, description = values\n if isinstance(name, tuple):\n new_name = name[1]\n else:\n new_name = name\n if new_name not in updated:\n print(' Creating {}: #{} \"{}\"'.format(new_name, color, description))\n repo.create_label(new_name, color, description)"}
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef main():\n\n if len(sys.argv) > 1 and os.path.exists(sys.argv[1]):\n try:\n with open(sys.argv[1], 'r') as f:\n user_name, password = f.read().strip().split(':')\n git = Github(user_name, password)\n password = None\n except Exception:\n git = get_auth()\n else:\n git = get_auth()\n\n user = git.get_user()\n\n print('Finding repo...')\n for repo in user.get_repos():\n if repo.owner.name == user.name:\n if repo.name == REPO_NAME:\n print(repo.name)\n update_labels(repo)\n break", "response": "Main function for the\nador script."}
{"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_version(ver, pre=False):\n\n m = RE_VER.match(ver)\n\n # Handle major, minor, micro\n major = int(m.group('major'))\n minor = int(m.group('minor')) if m.group('minor') else 0\n micro = int(m.group('micro')) if m.group('micro') else 0\n\n # Handle pre releases\n if m.group('type'):\n release = PRE_REL_MAP[m.group('type')]\n pre = int(m.group('pre'))\n else:\n release = \"final\"\n pre = 0\n\n # Handle development releases\n dev = m.group('dev') if m.group('dev') else 0\n if m.group('dev'):\n dev = int(m.group('dev'))\n release = '.dev-' + release if pre else '.dev'\n else:\n dev = 0\n\n # Handle post\n post = int(m.group('post')) if m.group('post') else 0\n\n return Version(major, minor, micro, release, pre, post, dev)", "response": "Parse a version into a comparable Version tuple."}
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_canonical(self):\n\n # Assemble major, minor, micro version and append `pre`, `post`, or `dev` if needed..\n if self.micro == 0:\n ver = \"{}.{}\".format(self.major, self.minor)\n else:\n ver = \"{}.{}.{}\".format(self.major, self.minor, self.micro)\n if self._is_pre():\n ver += '{}{}'.format(REL_MAP[self.release], self.pre)\n if self._is_post():\n ver += \".post{}\".format(self.post)\n if self._is_dev():\n ver += \".dev{}\".format(self.dev)\n\n return ver", "response": "Assemble major minor micro version and append pre post or dev to the canonical output string."}
{"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef assert_valid_input(cls, tag):\n\n # Fail on unexpected types.\n if not cls.is_tag(tag):\n raise TypeError(\"Expected a BeautifulSoup 'Tag', but instead recieved type {}\".format(type(tag)))", "response": "Check if valid input tag or document."}
{"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks if element is an iframe.", "response": "def is_iframe(self, el):\n \"\"\"Check if element is an `iframe`.\"\"\"\n\n return ((el.name if self.is_xml_tree(el) else util.lower(el.name)) == 'iframe') and self.is_html_tag(el)"}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn whether the element is the root element.", "response": "def is_root(self, el):\n \"\"\"\n Return whether element is a root element.\n\n We check that the element is the root of the tree (which we have already pre-calculated),\n and we check if it is the root element under an `iframe`.\n \"\"\"\n\n root = self.root and self.root is el\n if not root:\n parent = self.get_parent(el)\n root = parent is not None and self.is_html and self.is_iframe(parent)\n return root"}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget contents or contents in reverse.", "response": "def get_contents(self, el, no_iframe=False):\n \"\"\"Get contents or contents in reverse.\"\"\"\n if not no_iframe or not self.is_iframe(el):\n for content in el.contents:\n yield content"}
{"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets children of an element.", "response": "def get_children(self, el, start=None, reverse=False, tags=True, no_iframe=False):\n \"\"\"Get children.\"\"\"\n\n if not no_iframe or not self.is_iframe(el):\n last = len(el.contents) - 1\n if start is None:\n index = last if reverse else 0\n else:\n index = start\n end = -1 if reverse else last + 1\n incr = -1 if reverse else 1\n\n if 0 <= index <= last:\n while index != end:\n node = el.contents[index]\n index += incr\n if not tags or self.is_tag(node):\n yield node"}
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget next sibling tag.", "response": "def get_next_tag(cls, el):\n \"\"\"Get next sibling tag.\"\"\"\n\n sibling = el.next_sibling\n while not cls.is_tag(sibling) and sibling is not None:\n sibling = sibling.next_sibling\n return sibling"}
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting previous sibling tag.", "response": "def get_previous_tag(cls, el):\n \"\"\"Get previous sibling tag.\"\"\"\n\n sibling = el.previous_sibling\n while not cls.is_tag(sibling) and sibling is not None:\n sibling = sibling.previous_sibling\n return sibling"}