{"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread and return the dataset contents as binary.", "response": "def read_as_binary(self):\n '''Read and return the dataset contents as binary.'''\n return self.workspace._rest.read_intermediate_dataset_contents_binary(\n self.workspace.workspace_id,\n self.experiment.experiment_id,\n self.node_id,\n self.port_name\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreading and return the dataset contents as text.", "response": "def read_as_text(self):\n '''Read and return the dataset contents as text.'''\n return self.workspace._rest.read_intermediate_dataset_contents_text(\n self.workspace.workspace_id,\n self.experiment.experiment_id,\n self.node_id,\n self.port_name\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread and return the dataset contents as a pandas DataFrame.", "response": "def _to_dataframe(self):\n \"\"\"Read and return the dataset contents as a pandas DataFrame.\"\"\"\n #TODO: figure out why passing in the opened stream directly gives invalid data\n data = self.read_as_binary()\n reader = BytesIO(data)\n return deserialize_dataframe(reader, self.data_type_id)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_intermediate_dataset(self, node_id, port_name, data_type_id):\n return IntermediateDataset(self.workspace, self, node_id, port_name, data_type_id)", "response": "Returns an intermediate dataset object."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrunning HTTP GET request to retrieve the list of experiments in a workspace.", "response": "def get_experiments(self, workspace_id):\r\n \"\"\"Runs HTTP GET request to retrieve the list of experiments.\"\"\"\r\n api_path = self.EXPERIMENTS_URI_FMT.format(workspace_id)\r\n return self._send_get_req(api_path)"} {"SOURCE": "codesearchnet", "instruction": 
"Can you create a Python 3 function that\nruns HTTP GET request to retrieve the list of datasets in a workspace.", "response": "def get_datasets(self, workspace_id):\r\n \"\"\"Runs HTTP GET request to retrieve the list of datasets.\"\"\"\r\n api_path = self.DATASOURCES_URI_FMT.format(workspace_id)\r\n return self._send_get_req(api_path)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_dataset(self, workspace_id, dataset_id):\r\n api_path = self.DATASOURCE_URI_FMT.format(workspace_id, dataset_id)\r\n return self._send_get_req(api_path)", "response": "Runs HTTP GET request to retrieve a single dataset."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\npublish a callable function or decorates a function to be published.", "response": "def publish(func_or_workspace_id, workspace_id_or_token = None, workspace_token_or_none = None, files=(), endpoint=None):\n '''publishes a callable function or decorates a function to be published. \n\nReturns a callable, iterable object. Calling the object will invoke the published service.\nIterating the object will give the API URL, API key, and API help url.\n \nTo define a function which will be published to Azure you can simply decorate it with\nthe @publish decorator. 
This will publish the service, and then future calls to the\nfunction will run against the operationalized version of the service in the cloud.\n\n>>> @publish(workspace_id, workspace_token)\n>>> def func(a, b): \n>>> return a + b\n\nAfter publishing you can then invoke the function using:\nfunc.service(1, 2)\n\nOr continue to invoke the function locally:\nfunc(1, 2)\n\nYou can also just call publish directly to publish a function:\n\n>>> def func(a, b): return a + b\n>>> \n>>> res = publish(func, workspace_id, workspace_token)\n>>> \n>>> url, api_key, help_url = res\n>>> res(2, 3)\n5\n>>> url, api_key, help_url = res.url, res.api_key, res.help_url\n\nThe returned result will be the published service.\n\nYou can specify a list of files which should be published along with the function.\nThe resulting files will be stored in a subdirectory called 'Script Bundle'. The\nlist of files can be one of:\n (('file1.txt', None), ) # file is read from disk\n (('file1.txt', b'contents'), ) # file contents are provided\n ('file1.txt', 'file2.txt') # files are read from disk, written with same filename\n ((('file1.txt', 'destname.txt'), None), ) # file is read from disk, written with different destination name\n\nThe various formats for each filename can be freely mixed and matched.\n'''\n if not callable(func_or_workspace_id):\n def do_publish(func):\n func.service = _publish_worker(func, files, func_or_workspace_id, workspace_id_or_token, endpoint)\n return func\n return do_publish\n\n return _publish_worker(func_or_workspace_id, files, workspace_id_or_token, workspace_token_or_none, endpoint)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef service(url, api_key, help_url = None):\n '''Marks a function as having been published and causes all invocations to go to the remote\noperationalized service.\n\n>>> @service(url, api_key)\n>>> def f(a, b):\n>>> pass\n'''\n def do_publish(func):\n return 
published(url, api_key, help_url, func, None)\n return do_publish", "response": "A decorator that creates a function that is published to the remote\noperationalized service."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nspecifying the types used for the arguments of a published service.", "response": "def types(**args):\n \"\"\"Specifies the types used for the arguments of a published service.\n\n@types(a=int, b = str)\ndef f(a, b):\n pass\n\"\"\"\n def l(func):\n if hasattr(func, '__annotations__'):\n func.__annotations__.update(args)\n else:\n func.__annotations__ = args\n return func\n return l"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef returns(type):\n def l(func):\n if hasattr(func, '__annotations__'):\n func.__annotations__['return'] = type\n else:\n func.__annotations__ = {'return': type}\n return func\n return l", "response": "Specifies the return type for a published service."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef attach(name, contents = None):\n def do_attach(func):\n if hasattr(func, '__attachments__'):\n func.__attachments__.append((name, contents))\n else:\n func.__attachments__ = [(name, contents)]\n return func\n return do_attach", "response": "attaches a file to the payload to be uploaded."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nwalking the byte code to find the variables which are actually globals", "response": "def find_globals(code):\n \"\"\"walks the byte code to find the variables which are actually globals\"\"\"\n cur_byte = 0\n byte_code = code.co_code\n \n names = set()\n while cur_byte < len(byte_code):\n op = ord(byte_code[cur_byte])\n\n if op >= dis.HAVE_ARGUMENT:\n if op == _LOAD_GLOBAL:\n oparg = ord(byte_code[cur_byte + 1]) + (ord(byte_code[cur_byte + 2]) << 8)\n name = code.co_names[oparg]\n names.add(name)\n\n cur_byte 
+= 2\n cur_byte += 1\n \n return names"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmaps the function onto multiple inputs.", "response": "def map(self, *args):\n \"\"\"maps the function onto multiple inputs. The input should be multiple sequences. The\nsequences will be zipped together forming the positional arguments for the call. This is\nequivalent to map(func, ...) but is executed with a single network call.\"\"\"\n call_args = [self._map_args(*cur_args) for cur_args in zip(*args)]\n r = self._invoke(call_args)\n\n ret_type = _get_annotation('return', self.func)\n output_name = getattr(self.func, '__output_name__', 'output1')\n return [_decode_response(\n r['Results'][output_name]['value'].get(\"ColumnNames\"), \n r['Results'][output_name]['value'].get(\"ColumnTypes\"), \n x, \n ret_type) \n for x in r['Results']['output1']['value']['Values']]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a copy of this pen.", "response": "def copy(self):\n \"\"\"Create a copy of this pen.\"\"\"\n pen = Pen()\n pen.__dict__ = self.__dict__.copy()\n return pen"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef lookup_color(c):\n import sys\n import gi\n gi.require_version('Gtk', '3.0')\n gi.require_version('PangoCairo', '1.0')\n\n from gi.repository import Gdk\n\n try:\n color = Gdk.color_parse(c)\n except ValueError:\n pass\n else:\n s = 1.0/65535.0\n r = color.red*s\n g = color.green*s\n b = color.blue*s\n a = 1.0\n return r, g, b, a\n\n try:\n dummy, scheme, index = c.split('/')\n r, g, b = brewer_colors[scheme][int(index)]\n except (ValueError, KeyError):\n pass\n else:\n s = 1.0/255.0\n r = r*s\n g = g*s\n b = b*s\n a = 1.0\n return r, g, b, a\n\n sys.stderr.write(\"warning: unknown color '%s'\\n\" % c)\n return None", "response": "Return RGBA values of color c"} {"SOURCE": "codesearchnet", "instruction": "Can you 
generate a brief explanation for the following Python 3 code\ndef draw(self, cr, highlight=False, bounding=None):\n if bounding is None or self._intersects(bounding):\n self._draw(cr, highlight, bounding)", "response": "Draw this shape with the given cairo context"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _cubic_bernstein_extrema(p0, p1, p2, p3):\n # compute coefficients of derivative\n a = 3.*(p3-p0+3.*(p1-p2))\n b = 6.*(p0+p2-2.*p1)\n c = 3.*(p1-p0)\n\n if a == 0:\n if b == 0:\n return () # constant\n return (-c / b,) # linear\n\n # quadratic\n # compute discriminant\n d = b*b - 4.*a*c\n if d < 0:\n return ()\n\n k = -2. * a\n if d == 0:\n return (b / k,)\n\n r = math.sqrt(d)\n return ((b + r) / k, (b - r) / k)", "response": "Compute extremas of a function of real domain defined by evaluating\n a cubic bernstein polynomial of given coefficients."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _cubic_bernstein(p0, p1, p2, p3, t):\n u = 1 - t\n return p0*(u**3) + 3*t*u*(p1*u + p2*t) + p3*(t**3)", "response": "Evaluate a polynomial of given bernstein coefficients."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nbuilds the choices list runtime using sitetree_tree tag", "response": "def _build_choices(self):\n \"\"\"Build choices list runtime using 'sitetree_tree' tag\"\"\"\n tree_token = u'sitetree_tree from \"%s\" template \"%s\"' % (self.tree, self.template)\n\n context_kwargs = {'current_app': 'admin'}\n context = template.Context(context_kwargs) if VERSION >= (1, 8) else template.Context(**context_kwargs)\n context.update({'request': object()})\n\n choices_str = sitetree_tree(\n Parser(None), Token(token_type=TOKEN_BLOCK, contents=tree_token)\n ).render(context)\n\n tree_choices = [(ITEMS_FIELD_ROOT_ID, self.root_title)]\n\n for line in choices_str.splitlines():\n if 
line.strip():\n splitted = line.split(':::')\n tree_choices.append((splitted[0], mark_safe(splitted[1])))\n\n return tree_choices"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_sitetree():\n sitetree = getattr(_THREAD_LOCAL, _THREAD_SITETREE, None)\n\n if sitetree is None:\n sitetree = SiteTree()\n setattr(_THREAD_LOCAL, _THREAD_SITETREE, sitetree)\n\n return sitetree", "response": "Returns a SiteTree object implementing utility methods."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef register_items_hook(func):\n global _ITEMS_PROCESSOR\n global _ITEMS_PROCESSOR_ARGS_LEN\n\n _ITEMS_PROCESSOR = func\n\n if func:\n args_len = len(getargspec(func).args)\n if args_len not in {2, 3}:\n raise SiteTreeError('`register_items_hook()` expects a function with two or three arguments.')\n _ITEMS_PROCESSOR_ARGS_LEN = args_len", "response": "Registers a function to process tree items right before they are passed to templates."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef register_dynamic_trees(trees, *args, **kwargs):\n global _DYNAMIC_TREES\n\n if _IDX_ORPHAN_TREES not in _DYNAMIC_TREES:\n _DYNAMIC_TREES[_IDX_ORPHAN_TREES] = {}\n\n if isinstance(trees, dict): # New `less-brackets` style registration.\n trees = [trees]\n trees.extend(args)\n\n for tree in trees or []:\n if tree is not None and tree['sitetrees'] is not None:\n if tree['tree'] is None:\n # Register trees as they are defined in app.\n for st in tree['sitetrees']:\n if st.alias not in _DYNAMIC_TREES[_IDX_ORPHAN_TREES]:\n _DYNAMIC_TREES[_IDX_ORPHAN_TREES][st.alias] = []\n _DYNAMIC_TREES[_IDX_ORPHAN_TREES][st.alias].append(st)\n else:\n # Register tree items as parts of existing trees.\n index = _IDX_TPL % (tree['tree'], tree['parent_item'])\n if index not in _DYNAMIC_TREES:\n _DYNAMIC_TREES[index] = []\n 
_DYNAMIC_TREES[index].extend(tree['sitetrees'])\n\n reset_cache = kwargs.get('reset_cache', False)\n if reset_cache:\n cache_ = get_sitetree().cache\n cache_.empty()\n cache_.reset()", "response": "Registers dynamic trees to be available for the sitetree runtime."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef compose_dynamic_tree(src, target_tree_alias=None, parent_tree_item_alias=None, include_trees=None):\n def result(sitetrees=src):\n if include_trees is not None:\n sitetrees = [tree for tree in sitetrees if tree.alias in include_trees]\n\n return {\n 'app': src,\n 'sitetrees': sitetrees,\n 'tree': target_tree_alias,\n 'parent_item': parent_tree_item_alias}\n\n if isinstance(src, six.string_types):\n # Considered to be an application name.\n try:\n module = import_app_sitetree_module(src)\n return None if module is None else result(getattr(module, 'sitetrees', None))\n\n except ImportError as e:\n if settings.DEBUG:\n warnings.warn('Unable to register dynamic sitetree(s) for `%s` application: %s. 
' % (src, e))\n return None\n\n return result()", "response": "Returns a structure describing a dynamic tree."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef init(self):\n\n # Drop cache flag set by .reset() method.\n cache.get('sitetrees_reset') and self.empty(init=False)\n\n self.cache = cache.get(\n 'sitetrees', {'sitetrees': {}, 'parents': {}, 'items_by_ids': {}, 'tree_aliases': {}})", "response": "Initializes local cache from Django cache."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef empty(self, **kwargs):\n cache.delete('sitetrees')\n cache.delete('sitetrees_reset')\n\n kwargs.get('init', True) and self.init()", "response": "Empties cached sitetree data."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the value of a cache entry parameter by its name.", "response": "def get_entry(self, entry_name, key):\n \"\"\"Returns cache entry parameter value by its name.\n\n :param str|unicode entry_name:\n :param key:\n :return:\n \"\"\"\n return self.cache[entry_name].get(key, False)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_entry_value(self, entry_name, key, value):\n if key not in self.cache[entry_name]:\n self.cache[entry_name][key] = {}\n\n self.cache[entry_name][key].update(value)", "response": "Updates the value of a cache entry parameter with new data."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_entry(self, entry_name, key, value):\n self.cache[entry_name][key] = value", "response": "Replaces entire cache entry parameter data by its name with new data."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ninitializing the sitetree to handle new request.", "response": "def init(self, context):\n \"\"\"Initializes sitetree to handle new request.\n\n 
:param Context|None context:\n \"\"\"\n self.cache = Cache()\n self.current_page_context = context\n self.current_request = context.get('request', None) if context else None\n self.current_lang = get_language()\n\n self._current_app_is_admin = None\n self._current_user_permissions = _UNSET\n self._items_urls = {} # Resolved urls are cache for a request.\n self._current_items = {}"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nresolves internationalized tree alias.", "response": "def resolve_tree_i18n_alias(self, alias):\n \"\"\"Resolves internationalized tree alias.\n Verifies whether a separate sitetree is available for currently active language.\n If so, returns i18n alias. If not, returns the initial alias.\n\n :param str|unicode alias:\n :rtype: str|unicode\n \"\"\"\n if alias not in _I18N_TREES:\n return alias\n\n current_language_code = self.current_lang\n i18n_tree_alias = '%s_%s' % (alias, current_language_code)\n trees_count = self.cache.get_entry('tree_aliases', i18n_tree_alias)\n\n if trees_count is False:\n trees_count = MODEL_TREE_CLASS.objects.filter(alias=i18n_tree_alias).count()\n self.cache.set_entry('tree_aliases', i18n_tree_alias, trees_count)\n\n if trees_count:\n alias = i18n_tree_alias\n\n return alias"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nattaching dynamic sitetrees items to an initial source items list.", "response": "def attach_dynamic_tree_items(tree_alias, src_tree_items):\n \"\"\"Attaches dynamic sitetrees items registered with `register_dynamic_trees()`\n to an initial (source) items list.\n\n :param str|unicode tree_alias:\n :param list src_tree_items:\n :rtype: list\n \"\"\"\n if not _DYNAMIC_TREES:\n return src_tree_items\n\n # This guarantees that a dynamic source stays intact,\n # no matter how dynamic sitetrees are attached.\n trees = deepcopy(_DYNAMIC_TREES)\n\n items = []\n if not src_tree_items:\n if _IDX_ORPHAN_TREES in trees and 
tree_alias in trees[_IDX_ORPHAN_TREES]:\n for tree in trees[_IDX_ORPHAN_TREES][tree_alias]:\n items.extend(tree.dynamic_items)\n else:\n\n # TODO Seems to be underoptimized %)\n\n # Tree item attachment by alias.\n for static_item in list(src_tree_items):\n items.append(static_item)\n if not static_item.alias:\n continue\n\n idx = _IDX_TPL % (tree_alias, static_item.alias)\n if idx not in trees:\n continue\n\n for tree in trees[idx]:\n tree.alias = tree_alias\n for dyn_item in tree.dynamic_items:\n if dyn_item.parent is None:\n dyn_item.parent = static_item\n # Unique IDs are required for the same trees attached\n # to different parents.\n dyn_item.id = generate_id_for(dyn_item)\n items.append(dyn_item)\n\n # Tree root attachment.\n idx = _IDX_TPL % (tree_alias, None)\n if idx in _DYNAMIC_TREES:\n trees = deepcopy(_DYNAMIC_TREES)\n for tree in trees[idx]:\n tree.alias = tree_alias\n items.extend(tree.dynamic_items)\n\n return items"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn True if current application is Admin contrib.", "response": "def current_app_is_admin(self):\n \"\"\"Returns boolean whether current application is Admin contrib.\n\n :rtype: bool\n \"\"\"\n is_admin = self._current_app_is_admin\n if is_admin is None:\n context = self.current_page_context\n\n current_app = getattr(\n # Try from request.resolver_match.app_name\n getattr(context.get('request', None), 'resolver_match', None), 'app_name',\n # Try from global context obj.\n getattr(context, 'current_app', None))\n\n if current_app is None: # Try from global context dict.\n current_app = context.get('current_app', '')\n\n is_admin = current_app == ADMIN_APP_NAME\n self._current_app_is_admin = is_admin\n\n return is_admin"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_sitetree(self, alias):\n cache_ = self.cache\n get_cache_entry = cache_.get_entry\n set_cache_entry = cache_.set_entry\n\n 
caching_required = False\n\n if not self.current_app_is_admin():\n # We do not need i18n for a tree rendered in Admin dropdown.\n alias = self.resolve_tree_i18n_alias(alias)\n\n sitetree = get_cache_entry('sitetrees', alias)\n\n if not sitetree:\n if DYNAMIC_ONLY:\n sitetree = []\n\n else:\n sitetree = (\n MODEL_TREE_ITEM_CLASS.objects.\n select_related('parent', 'tree').\n prefetch_related('access_permissions__content_type').\n filter(tree__alias__exact=alias).\n order_by('parent__sort_order', 'sort_order'))\n\n sitetree = self.attach_dynamic_tree_items(alias, sitetree)\n set_cache_entry('sitetrees', alias, sitetree)\n caching_required = True\n\n parents = get_cache_entry('parents', alias)\n if not parents:\n parents = defaultdict(list)\n for item in sitetree:\n parent = getattr(item, 'parent')\n parents[parent].append(item)\n set_cache_entry('parents', alias, parents)\n\n # Prepare items by ids cache if needed.\n if caching_required:\n # We need this extra pass to avoid future problems on items depth calculation.\n cache_update = cache_.update_entry_value\n for item in sitetree:\n cache_update('items_by_ids', alias, {item.id: item})\n\n url = self.url\n calculate_item_depth = self.calculate_item_depth\n\n for item in sitetree:\n if caching_required:\n item.has_children = False\n\n if not hasattr(item, 'depth'):\n item.depth = calculate_item_depth(alias, item.id)\n item.depth_range = range(item.depth)\n\n # Resolve item permissions.\n if item.access_restricted:\n permissions_src = (\n item.permissions if getattr(item, 'is_dynamic', False)\n else item.access_permissions.all())\n\n item.perms = set(\n ['%s.%s' % (perm.content_type.app_label, perm.codename) for perm in permissions_src])\n\n # Contextual properties.\n item.url_resolved = url(item)\n item.title_resolved = LazyTitle(item.title) if VARIABLE_TAG_START in item.title else item.title\n item.is_current = False\n item.in_current_branch = False\n\n # Get current item for the given sitetree.\n 
self.get_tree_current_item(alias)\n\n # Save sitetree data into cache if needed.\n if caching_required:\n cache_.save()\n\n return alias, sitetree", "response": "Gets site tree items from the given site tree alias."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef calculate_item_depth(self, tree_alias, item_id, depth=0):\n item = self.get_item_by_id(tree_alias, item_id)\n\n if hasattr(item, 'depth'):\n depth = item.depth + depth\n else:\n if item.parent is not None:\n depth = self.calculate_item_depth(tree_alias, item.parent.id, depth + 1)\n\n return depth", "response": "Calculates the depth of the item in the tree."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nresolve current tree item of tree_alias request path against URL of given tree item.", "response": "def get_tree_current_item(self, tree_alias):\n \"\"\"Resolves current tree item of 'tree_alias' tree matching current\n request path against URL of given tree item.\n\n :param str|unicode tree_alias:\n :rtype: TreeItemBase\n \"\"\"\n current_item = self._current_items.get(tree_alias, _UNSET)\n\n if current_item is not _UNSET:\n\n if current_item is not None:\n current_item.is_current = True # Could be reset by .get_sitetree()\n\n return current_item\n\n current_item = None\n\n if self.current_app_is_admin():\n self._current_items[tree_alias] = current_item\n return None\n\n # urlquote is an attempt to support non-ascii in url.\n current_url = self.current_request.path\n if isinstance(current_url, str):\n current_url = current_url.encode('UTF-8')\n if current_url:\n current_url = urlquote(current_url)\n\n for url_item, url in self._items_urls.items():\n # Iterate each as this dict may contains \"current\" items for various trees.\n if url != current_url:\n continue\n\n url_item.is_current = True\n if url_item.tree.alias == tree_alias:\n current_item = url_item\n\n if current_item is not None:\n 
self._current_items[tree_alias] = current_item\n\n return current_item"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nresolve item s URL.", "response": "def url(self, sitetree_item, context=None):\n \"\"\"Resolves item's URL.\n\n :param TreeItemBase sitetree_item: TreeItemBase heir object, 'url' property of which\n is processed as URL pattern or simple URL.\n\n :param Context context:\n\n :rtype: str|unicode\n \"\"\"\n context = context or self.current_page_context\n resolve_var = self.resolve_var\n\n if not isinstance(sitetree_item, MODEL_TREE_ITEM_CLASS):\n sitetree_item = resolve_var(sitetree_item, context)\n\n resolved_url = self._items_urls.get(sitetree_item)\n if resolved_url is not None:\n return resolved_url\n\n # Resolve only if item's URL is marked as pattern.\n if sitetree_item.urlaspattern:\n url = sitetree_item.url\n view_path = url\n all_arguments = []\n\n if ' ' in url:\n view_path = url.split(' ')\n # We should try to resolve URL parameters from site tree item.\n for view_argument in view_path[1:]:\n resolved = resolve_var(view_argument)\n # We enclose arg in double quotes as already resolved.\n all_arguments.append('\"%s\"' % resolved)\n\n view_path = view_path[0].strip('\"\\' ')\n\n url_pattern = \"'%s' %s\" % (view_path, ' '.join(all_arguments))\n\n else:\n url_pattern = '%s' % sitetree_item.url\n\n if sitetree_item.urlaspattern:\n # Form token to pass to Django 'url' tag.\n url_token = 'url %s as item.url_resolved' % url_pattern\n url_tag(\n Parser(None),\n Token(token_type=TOKEN_BLOCK, contents=url_token)\n ).render(context)\n\n resolved_url = context['item.url_resolved'] or UNRESOLVED_ITEM_MARKER\n\n else:\n resolved_url = url_pattern\n\n self._items_urls[sitetree_item] = resolved_url\n\n return resolved_url"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef init_tree(self, tree_alias, context):\n request = context.get('request', None)\n\n if request is None:\n 
raise SiteTreeError(\n 'Sitetree requires \"django.core.context_processors.request\" template context processor to be active. '\n 'If it is, check that your view pushes request data into the template.')\n\n if id(request) != id(self.current_request):\n self.init(context)\n\n # Resolve tree_alias from the context.\n tree_alias = self.resolve_var(tree_alias)\n tree_alias, sitetree_items = self.get_sitetree(tree_alias)\n\n if not sitetree_items:\n return None, None\n\n return tree_alias, sitetree_items", "response": "Initializes the site tree with the given alias and context."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn an arbitrary attribute of a sitetree item resolved as current for current page.", "response": "def get_current_page_attr(self, attr_name, tree_alias, context):\n \"\"\"Returns an arbitrary attribute of a sitetree item resolved as current for current page.\n\n :param str|unicode attr_name:\n :param str|unicode tree_alias:\n :param Context context:\n :rtype: str|unicode\n \"\"\"\n tree_alias, sitetree_items = self.init_tree(tree_alias, context)\n current_item = self.get_tree_current_item(tree_alias)\n\n if current_item is None:\n if settings.DEBUG and RAISE_ITEMS_ERRORS_ON_DEBUG:\n raise SiteTreeError(\n 'Unable to resolve current sitetree item to get a `%s` for current page. Check whether '\n 'there is an appropriate sitetree item defined for current URL.' 
% attr_name)\n\n return ''\n\n return getattr(current_item, attr_name, '')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the ancestor of level deep recursively", "response": "def get_ancestor_level(self, current_item, depth=1):\n \"\"\"Returns ancestor of level `deep` recursively\n\n :param TreeItemBase current_item:\n :param int depth:\n :rtype: TreeItemBase\n \"\"\"\n if current_item.parent is None:\n return current_item\n\n if depth <= 1:\n return current_item.parent\n\n return self.get_ancestor_level(current_item.parent, depth=depth-1)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef menu(self, tree_alias, tree_branches, context):\n tree_alias, sitetree_items = self.init_tree(tree_alias, context)\n\n if not sitetree_items:\n return ''\n\n tree_branches = self.resolve_var(tree_branches)\n\n parent_isnull = False\n parent_ids = []\n parent_aliases = []\n\n current_item = self.get_tree_current_item(tree_alias)\n self.tree_climber(tree_alias, current_item)\n\n # Support item addressing both through identifiers and aliases.\n for branch_id in tree_branches.split(','):\n branch_id = branch_id.strip()\n\n if branch_id == ALIAS_TRUNK:\n parent_isnull = True\n\n elif branch_id == ALIAS_THIS_CHILDREN and current_item is not None:\n branch_id = current_item.id\n parent_ids.append(branch_id)\n\n elif branch_id == ALIAS_THIS_ANCESTOR_CHILDREN and current_item is not None:\n branch_id = self.get_ancestor_item(tree_alias, current_item).id\n parent_ids.append(branch_id)\n\n elif branch_id == ALIAS_THIS_SIBLINGS and current_item is not None and current_item.parent is not None:\n branch_id = current_item.parent.id\n parent_ids.append(branch_id)\n\n elif branch_id == ALIAS_THIS_PARENT_SIBLINGS and current_item is not None:\n branch_id = self.get_ancestor_level(current_item, depth=2).id\n parent_ids.append(branch_id)\n\n elif branch_id.isdigit():\n parent_ids.append(int(branch_id))\n\n else:\n 
parent_aliases.append(branch_id)\n\n check_access = self.check_access\n\n menu_items = []\n for item in sitetree_items:\n if not item.hidden and item.inmenu and check_access(item, context):\n if item.parent is None:\n if parent_isnull:\n menu_items.append(item)\n else:\n if item.parent.id in parent_ids or item.parent.alias in parent_aliases:\n menu_items.append(item)\n\n menu_items = self.apply_hook(menu_items, 'menu')\n self.update_has_children(tree_alias, menu_items, 'menu')\n return menu_items", "response": "Builds and returns the menu structure for the sitetree_menu tag."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef apply_hook(self, items, sender):\n processor = _ITEMS_PROCESSOR\n\n if processor is None:\n return items\n\n if _ITEMS_PROCESSOR_ARGS_LEN == 2:\n return processor(tree_items=items, tree_sender=sender)\n\n return processor(tree_items=items, tree_sender=sender, context=self.current_page_context)", "response": "Applies item processing hook to items supplied and returns processed list."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking whether a current user has access to a certain item.", "response": "def check_access(self, item, context):\n \"\"\"Checks whether a current user has an access to a certain item.\n\n :param TreeItemBase item:\n :param Context context:\n :rtype: bool\n \"\"\"\n if hasattr(self.current_request.user.is_authenticated, '__call__'):\n authenticated = self.current_request.user.is_authenticated()\n else:\n authenticated = self.current_request.user.is_authenticated\n\n if item.access_loggedin and not authenticated:\n return False\n\n if item.access_guest and authenticated:\n return False\n\n if item.access_restricted:\n user_perms = self._current_user_permissions\n\n if user_perms is _UNSET:\n user_perms = set(context['user'].get_all_permissions())\n self._current_user_permissions = user_perms\n\n if item.access_perm_type == 
MODEL_TREE_ITEM_CLASS.PERM_TYPE_ALL:\n if len(item.perms) != len(item.perms.intersection(user_perms)):\n return False\n else:\n if not len(item.perms.intersection(user_perms)):\n return False\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef breadcrumbs(self, tree_alias, context):\n tree_alias, sitetree_items = self.init_tree(tree_alias, context)\n\n if not sitetree_items:\n return ''\n\n current_item = self.get_tree_current_item(tree_alias)\n\n breadcrumbs = []\n\n if current_item is not None:\n\n context_ = self.current_page_context\n check_access = self.check_access\n get_item_by_id = self.get_item_by_id\n\n def climb(base_item):\n \"\"\"Climbs up the site tree to build breadcrumb path.\n\n :param TreeItemBase base_item:\n \"\"\"\n if base_item.inbreadcrumbs and not base_item.hidden and check_access(base_item, context_):\n breadcrumbs.append(base_item)\n\n if hasattr(base_item, 'parent') and base_item.parent is not None:\n climb(get_item_by_id(tree_alias, base_item.parent.id))\n\n climb(current_item)\n breadcrumbs.reverse()\n\n items = self.apply_hook(breadcrumbs, 'breadcrumbs')\n self.update_has_children(tree_alias, items, 'breadcrumbs')\n\n return items", "response": "Builds and returns the breadcrumb trail structure for the site tree."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef tree(self, tree_alias, context):\n tree_alias, sitetree_items = self.init_tree(tree_alias, context)\n\n if not sitetree_items:\n return ''\n\n tree_items = self.filter_items(self.get_children(tree_alias, None), 'sitetree')\n tree_items = self.apply_hook(tree_items, 'sitetree')\n self.update_has_children(tree_alias, tree_items, 'sitetree')\n\n return tree_items", "response": "Builds and returns the tree structure for the given tree_alias and context."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 
3 function does\ndef children(self, parent_item, navigation_type, use_template, context):\n # Resolve parent item and current tree alias.\n parent_item = self.resolve_var(parent_item, context)\n tree_alias, tree_items = self.get_sitetree(parent_item.tree.alias)\n\n # Mark path to current item.\n self.tree_climber(tree_alias, self.get_tree_current_item(tree_alias))\n\n tree_items = self.get_children(tree_alias, parent_item)\n tree_items = self.filter_items(tree_items, navigation_type)\n tree_items = self.apply_hook(tree_items, '%s.children' % navigation_type)\n self.update_has_children(tree_alias, tree_items, navigation_type)\n\n my_template = get_template(use_template)\n\n context.push()\n context['sitetree_items'] = tree_items\n rendered = my_template.render(context.flatten() if _CONTEXT_FLATTEN else context)\n context.pop()\n\n return rendered", "response": "Builds and returns site tree item children structure for sitetree_children tag."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns item s children.", "response": "def get_children(self, tree_alias, item):\n \"\"\"Returns item's children.\n\n :param str|unicode tree_alias:\n :param TreeItemBase|None item:\n :rtype: list\n \"\"\"\n if not self.current_app_is_admin():\n # We do not need i18n for a tree rendered in Admin dropdown.\n tree_alias = self.resolve_tree_i18n_alias(tree_alias)\n\n return self.cache.get_entry('parents', tree_alias)[item]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef update_has_children(self, tree_alias, tree_items, navigation_type):\n get_children = self.get_children\n filter_items = self.filter_items\n apply_hook = self.apply_hook\n\n for tree_item in tree_items:\n children = get_children(tree_alias, tree_item)\n children = filter_items(children, navigation_type)\n children = apply_hook(children, '%s.has_children' % navigation_type)\n tree_item.has_children = len(children) > 0", 
"response": "Updates has_children attribute for tree items inplace."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfilters items by navigation type.", "response": "def filter_items(self, items, navigation_type=None):\n \"\"\"Filters sitetree item's children if hidden and by navigation type.\n\n NB: We do not apply any filters to sitetree in admin app.\n\n :param list items:\n :param str|unicode navigation_type: sitetree, breadcrumbs, menu\n :rtype: list\n \"\"\"\n if self.current_app_is_admin():\n return items\n\n items_filtered = []\n\n context = self.current_page_context\n check_access = self.check_access\n\n for item in items:\n if item.hidden:\n continue\n\n if not check_access(item, context):\n continue\n\n if not getattr(item, 'in%s' % navigation_type, True): # Hidden for current nav type\n continue\n\n items_filtered.append(item)\n\n return items_filtered"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_ancestor_item(self, tree_alias, base_item):\n parent = None\n\n if hasattr(base_item, 'parent') and base_item.parent is not None:\n parent = self.get_ancestor_item(tree_alias, self.get_item_by_id(tree_alias, base_item.parent.id))\n\n if parent is None:\n return base_item\n\n return parent", "response": "Climbs up the site tree to resolve root item for chosen one."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef tree_climber(self, tree_alias, base_item):\n if base_item is not None:\n base_item.in_current_branch = True\n if hasattr(base_item, 'parent') and base_item.parent is not None:\n self.tree_climber(tree_alias, self.get_item_by_id(tree_alias, base_item.parent.id))", "response": "Climbs up the site tree to mark items of current branch."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nresolves name as a variable in a given context.", "response": "def 
resolve_var(self, varname, context=None):\n \"\"\"Resolves name as a variable in a given context.\n\n If no context specified, current page context is considered as context.\n\n :param str|unicode varname:\n :param Context context:\n :return:\n \"\"\"\n context = context or self.current_page_context\n\n if isinstance(varname, FilterExpression):\n varname = varname.resolve(context)\n else:\n varname = varname.strip()\n\n try:\n varname = Variable(varname).resolve(context)\n except VariableDoesNotExist:\n varname = varname\n\n return varname"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sitetree_tree(parser, token):\n tokens = token.split_contents()\n use_template = detect_clause(parser, 'template', tokens)\n tokens_num = len(tokens)\n\n if tokens_num in (3, 5):\n tree_alias = parser.compile_filter(tokens[2])\n return sitetree_treeNode(tree_alias, use_template)\n else:\n raise template.TemplateSyntaxError(\n '%r tag requires two arguments. E.g. {%% sitetree_tree from \"mytree\" %%}.' 
% tokens[0])", "response": "Parses the sitetree tree from mytree template."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses the sitetree_children tag parameters.", "response": "def sitetree_children(parser, token):\n \"\"\"Parses sitetree_children tag parameters.\n\n Six arguments:\n {% sitetree_children of someitem for menu template \"sitetree/mychildren.html\" %}\n Used to render child items of specific site tree 'someitem'\n using template \"sitetree/mychildren.html\" for menu navigation.\n\n Basically template argument should contain path to current template itself.\n\n Allowed navigation types: 1) menu; 2) sitetree.\n\n \"\"\"\n tokens = token.split_contents()\n use_template = detect_clause(parser, 'template', tokens)\n tokens_num = len(tokens)\n\n clauses_in_places = (\n tokens_num == 5 and tokens[1] == 'of' and tokens[3] == 'for' and tokens[4] in ('menu', 'sitetree')\n )\n if clauses_in_places and use_template is not None:\n tree_item = tokens[2]\n navigation_type = tokens[4]\n return sitetree_childrenNode(tree_item, navigation_type, use_template)\n else:\n raise template.TemplateSyntaxError(\n '%r tag requires six arguments. '\n 'E.g. {%% sitetree_children of someitem for menu template \"sitetree/mychildren.html\" %%}.' % tokens[0])"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses the sitetree_breadcrumbs tag parameters.", "response": "def sitetree_breadcrumbs(parser, token):\n \"\"\"Parses sitetree_breadcrumbs tag parameters.\n\n Two notation types are possible:\n 1. Two arguments:\n {% sitetree_breadcrumbs from \"mytree\" %}\n Used to render breadcrumb path for \"mytree\" site tree.\n\n 2. 
Four arguments:\n {% sitetree_breadcrumbs from \"mytree\" template \"sitetree/mycrumb.html\" %}\n Used to render breadcrumb path for \"mytree\" site tree using specific\n template \"sitetree/mycrumb.html\"\n\n \"\"\"\n tokens = token.split_contents()\n use_template = detect_clause(parser, 'template', tokens)\n tokens_num = len(tokens)\n\n if tokens_num == 3:\n tree_alias = parser.compile_filter(tokens[2])\n return sitetree_breadcrumbsNode(tree_alias, use_template)\n else:\n raise template.TemplateSyntaxError(\n '%r tag requires two arguments. E.g. {%% sitetree_breadcrumbs from \"mytree\" %%}.' % tokens[0])"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef sitetree_menu(parser, token):\n tokens = token.split_contents()\n use_template = detect_clause(parser, 'template', tokens)\n tokens_num = len(tokens)\n\n if tokens_num == 5 and tokens[3] == 'include':\n tree_alias = parser.compile_filter(tokens[2])\n tree_branches = parser.compile_filter(tokens[4])\n return sitetree_menuNode(tree_alias, tree_branches, use_template)\n else:\n raise template.TemplateSyntaxError(\n '%r tag requires four arguments. '\n 'E.g. {%% sitetree_menu from \"mytree\" include \"trunk,1,level3\" %%}.' 
% tokens[0])", "response": "Parses the sitetree_menu tag parameters and returns a site tree menu."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef render(context, tree_items, use_template):\n context.push()\n context['sitetree_items'] = tree_items\n\n if isinstance(use_template, FilterExpression):\n use_template = use_template.resolve(context)\n\n content = get_template(use_template).render(context.flatten() if _CONTEXT_FLATTEN else context)\n context.pop()\n\n return content", "response": "Render given template with given tree items."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef for_tag(cls, parser, token, preposition, error_hint):\n tokens = token.split_contents()\n\n if len(tokens) >= 3 and tokens[1] == preposition:\n as_var = cls.get_as_var(tokens)\n tree_alias = parser.compile_filter(tokens[2])\n return cls(tree_alias, as_var)\n\n raise template.TemplateSyntaxError(\n '%r tag requires at least two arguments. E.g. {%% %s %%}.' 
% (tokens[0], error_hint))", "response": "Returns a node constructor to be used in tags."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_model_url_name(model_nfo, page, with_namespace=False):\n prefix = ''\n if with_namespace:\n prefix = 'admin:'\n return ('%s%s_%s' % (prefix, '%s_%s' % model_nfo, page)).lower()", "response": "Returns a URL for a given Tree admin page type."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nforce unregistration of tree admin class with following re - registration.", "response": "def _reregister_tree_admin():\n \"\"\"Forces unregistration of tree admin class with following re-registration.\"\"\"\n try:\n admin.site.unregister(MODEL_TREE_CLASS)\n except NotRegistered:\n pass\n admin.site.register(MODEL_TREE_CLASS, _TREE_ADMIN())"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef redirects_handler(*args, **kwargs):\n path = args[0].path\n shift = '../'\n\n if 'delete' in path:\n # Weird enough 'delete' is not handled by TreeItemAdmin::response_change().\n shift += '../'\n elif 'history' in path:\n if 'item_id' not in kwargs:\n # Encountered request from history page to return to tree layout page.\n shift += '../'\n\n return HttpResponseRedirect(path + shift)", "response": "Fixes Admin contrib redirects compatibility problems."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef response_add(self, request, obj, post_url_continue=None, **kwargs):\n if post_url_continue is None:\n post_url_continue = '../item_%s/' % obj.pk\n\n return self._redirect(request, super(TreeItemAdmin, self).response_add(request, obj, post_url_continue))", "response": "Redirects to the appropriate items continue page on item add."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what 
it does\ndef response_change(self, request, obj, **kwargs):\n return self._redirect(request, super(TreeItemAdmin, self).response_change(request, obj))", "response": "Redirect to the appropriate items add page on item change."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns modified form for TreeItem model.", "response": "def get_form(self, request, obj=None, **kwargs):\n \"\"\"Returns modified form for TreeItem model.\n 'Parent' field choices are built by sitetree itself.\n\n \"\"\"\n if obj is not None and obj.parent is not None:\n self.previous_parent = obj.parent\n previous_parent_id = self.previous_parent.id\n else:\n previous_parent_id = None\n\n my_choice_field = TreeItemChoiceField(self.tree, initial=previous_parent_id)\n form = super(TreeItemAdmin, self).get_form(request, obj, **kwargs)\n my_choice_field.label = form.base_fields['parent'].label\n my_choice_field.help_text = form.base_fields['parent'].help_text\n my_choice_field.widget = form.base_fields['parent'].widget\n # Replace 'parent' TreeItem field with new appropriate one\n form.base_fields['parent'] = my_choice_field\n\n # Try to resolve all currently registered url names including those in namespaces.\n if not getattr(self, 'known_url_names', False):\n self.known_url_names = []\n self.known_url_rules = []\n resolver = get_resolver(get_urlconf())\n for ns, (url_prefix, ns_resolver) in resolver.namespace_dict.items():\n if ns != 'admin':\n self._stack_known_urls(ns_resolver.reverse_dict, ns)\n self._stack_known_urls(resolver.reverse_dict)\n self.known_url_rules = sorted(self.known_url_rules)\n\n form.known_url_names_hint = _(\n 'You are seeing this warning because \"URL as Pattern\" option is active and pattern entered above '\n 'seems to be invalid. 
Currently registered URL pattern names and parameters: ')\n form.known_url_names = self.known_url_names\n form.known_url_rules = self.known_url_rules\n return form"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfetch a Tree for current or given TreeItem.", "response": "def get_tree(self, request, tree_id, item_id=None):\n \"\"\"Fetches Tree for current or given TreeItem.\"\"\"\n if tree_id is None:\n tree_id = self.get_object(request, item_id).tree_id\n self.tree = MODEL_TREE_CLASS._default_manager.get(pk=tree_id)\n self.tree.verbose_name_plural = self.tree._meta.verbose_name_plural\n self.tree.urls = _TREE_URLS\n return self.tree"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmoving item up or down by swapping sort_order field values of neighboring items.", "response": "def item_move(self, request, tree_id, item_id, direction):\n \"\"\"Moves item up or down by swapping 'sort_order' field values of neighboring items.\"\"\"\n current_item = MODEL_TREE_ITEM_CLASS._default_manager.get(pk=item_id)\n if direction == 'up':\n sort_order = 'sort_order'\n else:\n sort_order = '-sort_order'\n\n siblings = MODEL_TREE_ITEM_CLASS._default_manager.filter(\n parent=current_item.parent,\n tree=current_item.tree\n ).order_by(sort_order)\n\n previous_item = None\n for item in siblings:\n if item != current_item:\n previous_item = item\n else:\n break\n\n if previous_item is not None:\n current_item_sort_order = current_item.sort_order\n previous_item_sort_order = previous_item.sort_order\n\n current_item.sort_order = previous_item_sort_order\n previous_item.sort_order = current_item_sort_order\n\n current_item.save()\n previous_item.save()\n\n return HttpResponseRedirect('../../')"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef save_model(self, request, obj, form, change):\n if change:\n # No, you're not allowed to make item parent of itself\n if 
obj.parent is not None and obj.parent.id == obj.id:\n obj.parent = self.previous_parent\n messages.warning(\n request, _(\"Item's parent left unchanged. Item couldn't be parent to itself.\"), '', True)\n obj.tree = self.tree\n obj.save()", "response": "Saves a TreeItem model under certain Tree."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmanages not only TreeAdmin URLs but also TreeItemAdmin URLs.", "response": "def get_urls(self):\n \"\"\"Manages not only TreeAdmin URLs but also TreeItemAdmin URLs.\"\"\"\n urls = super(TreeAdmin, self).get_urls()\n\n prefix_change = 'change/' if DJANGO_POST_19 else ''\n\n sitetree_urls = [\n url(r'^change/$', redirects_handler, name=get_tree_item_url_name('changelist')),\n\n url(r'^((?P<tree_id>\d+)/)?%sitem_add/$' % prefix_change,\n self.admin_site.admin_view(self.tree_admin.item_add), name=get_tree_item_url_name('add')),\n\n url(r'^(?P<tree_id>\d+)/%sitem_(?P<item_id>\d+)/$' % prefix_change,\n self.admin_site.admin_view(self.tree_admin.item_edit), name=get_tree_item_url_name('change')),\n\n url(r'^%sitem_(?P<item_id>\d+)/$' % prefix_change,\n self.admin_site.admin_view(self.tree_admin.item_edit), name=get_tree_item_url_name('change')),\n\n url(r'^((?P<tree_id>\d+)/)?%sitem_(?P<item_id>\d+)/delete/$' % prefix_change,\n self.admin_site.admin_view(self.tree_admin.item_delete), name=get_tree_item_url_name('delete')),\n\n url(r'^((?P<tree_id>\d+)/)?%sitem_(?P<item_id>\d+)/history/$' % prefix_change,\n self.admin_site.admin_view(self.tree_admin.item_history), name=get_tree_item_url_name('history')),\n\n url(r'^(?P<tree_id>\d+)/%sitem_(?P<item_id>\d+)/move_(?P<direction>(up|down))/$' % prefix_change,\n self.admin_site.admin_view(self.tree_admin.item_move), name=get_tree_item_url_name('move')),\n ]\n\n if not DJANGO_POST_19:\n sitetree_urls = patterns_func('', *sitetree_urls)\n\n if SMUGGLER_INSTALLED:\n sitetree_urls += (url(r'^dump_all/$', self.admin_site.admin_view(self.dump_view), name='sitetree_dump'),)\n\n return sitetree_urls + urls"} {"SOURCE": "codesearchnet", 
"instruction": "How would you explain what the following Python 3 function does\ndef dump_view(cls, request):\n from smuggler.views import dump_to_response\n return dump_to_response(request, [MODEL_TREE, MODEL_TREE_ITEM], filename_prefix='sitetrees')", "response": "Dumps the items of the current tree using django - smuggler."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef tree(alias, title='', items=None, **kwargs):\n tree_obj = get_tree_model()(alias=alias, title=title, **kwargs)\n tree_obj.id = generate_id_for(tree_obj)\n tree_obj.is_dynamic = True\n\n if items is not None:\n tree_obj.dynamic_items = []\n def traverse(items):\n for item in items:\n item.tree = tree_obj\n tree_obj.dynamic_items.append(item)\n if hasattr(item, 'dynamic_children'):\n traverse(item.dynamic_children)\n\n traverse(items)\n return tree_obj", "response": "Dynamically creates and returns a tree object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef item(\n title, url, children=None, url_as_pattern=True, hint='', alias='', description='',\n in_menu=True, in_breadcrumbs=True, in_sitetree=True,\n access_loggedin=False, access_guest=False,\n access_by_perms=None, perms_mode_all=True, **kwargs):\n \"\"\"Dynamically creates and returns a sitetree item object.\n\n :param str|unicode title:\n\n :param str|unicode url:\n\n :param list, set children: a list of children for tree item. 
Children should also be created by `item` function.\n\n :param bool url_as_pattern: consider URL as a name of a named URL\n\n :param str|unicode hint: hints are usually shown to users\n\n :param str|unicode alias: item name to address it from templates\n\n :param str|unicode description: additional information on item (usually is not shown to users)\n\n :param bool in_menu: show this item in menus\n\n :param bool in_breadcrumbs: show this item in breadcrumbs\n\n :param bool in_sitetree: show this item in sitetrees\n\n :param bool access_loggedin: show item to logged in users only\n\n :param bool access_guest: show item to guest users only\n\n :param list|str|unicode|int|Permission access_by_perms: restrict access to users with these permissions.\n\n This can be set to one or a list of permission names, IDs or Permission instances.\n\n Permission names are more portable and should be in a form `<app_label>.<codename>`, e.g.:\n my_app.allow_save\n\n\n :param bool perms_mode_all: permissions set interpretation rule:\n True - user should have all the permissions;\n False - user should have any of chosen permissions.\n\n :rtype: TreeItemBase\n\n \"\"\"\n item_obj = get_tree_item_model()(\n title=title, url=url, urlaspattern=url_as_pattern,\n hint=hint, alias=alias, description=description, inmenu=in_menu,\n insitetree=in_sitetree, inbreadcrumbs=in_breadcrumbs,\n access_loggedin=access_loggedin, access_guest=access_guest,\n **kwargs)\n\n item_obj.id = generate_id_for(item_obj)\n item_obj.is_dynamic = True\n item_obj.dynamic_children = []\n\n cleaned_permissions = []\n if access_by_perms:\n # Make permissions a list if currently a single object\n if not isinstance(access_by_perms, list):\n access_by_perms = [access_by_perms]\n\n for perm in access_by_perms:\n if isinstance(perm, six.string_types):\n # Get permission object from string\n try:\n app, codename = perm.split('.')\n except ValueError:\n raise ValueError(\n 'Wrong permission string format: supplied - `%s`; '\n 'expected - `<app_label>.<codename>`.' 
% perm)\n\n try:\n perm = Permission.objects.get(codename=codename, content_type__app_label=app)\n except Permission.DoesNotExist:\n raise ValueError('Permission `%s.%s` does not exist.' % (app, codename))\n\n elif not isinstance(perm, (int, Permission)):\n raise ValueError('Permissions must be given as strings, ints, or `Permission` instances.')\n\n cleaned_permissions.append(perm)\n\n item_obj.permissions = cleaned_permissions or []\n item_obj.access_perm_type = item_obj.PERM_TYPE_ALL if perms_mode_all else item_obj.PERM_TYPE_ANY\n\n if item_obj.permissions:\n item_obj.access_restricted = True\n\n if children is not None:\n for child in children:\n child.parent = item_obj\n item_obj.dynamic_children.append(child)\n\n return item_obj", "response": "Dynamically creates and returns a tree item object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nimport sitetree module from a given app.", "response": "def import_app_sitetree_module(app):\n \"\"\"Imports sitetree module from a given app.\n\n :param str|unicode app: Application name\n :return: module|None\n \"\"\"\n module_name = settings.APP_MODULE_NAME\n module = import_module(app)\n try:\n sub_module = import_module('%s.%s' % (app, module_name))\n return sub_module\n except ImportError:\n if module_has_submodule(module, module_name):\n raise\n return None"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef import_project_sitetree_modules():\n from django.conf import settings as django_settings\n submodules = []\n for app in django_settings.INSTALLED_APPS:\n module = import_app_sitetree_module(app)\n if module is not None:\n submodules.append(module)\n return submodules", "response": "Imports sitetrees modules from packages."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn tuple with application and tree [ item model class names.", "response": "def get_app_n_model(settings_entry_name):\n 
\"\"\"Returns tuple with application and tree[item] model class names.\n\n :param str|unicode settings_entry_name:\n :rtype: tuple\n \"\"\"\n try:\n app_name, model_name = getattr(settings, settings_entry_name).split('.')\n except ValueError:\n raise ImproperlyConfigured(\n '`SITETREE_%s` must have the following format: `app_name.model_name`.' % settings_entry_name)\n return app_name, model_name"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a certain sitetree model as defined in the project settings.", "response": "def get_model_class(settings_entry_name):\n \"\"\"Returns a certain sitetree model as defined in the project settings.\n\n :param str|unicode settings_entry_name:\n :rtype: TreeItemBase|TreeBase\n \"\"\"\n app_name, model_name = get_app_n_model(settings_entry_name)\n\n try:\n model = apps_get_model(app_name, model_name)\n except (LookupError, ValueError):\n model = None\n\n if model is None:\n raise ImproperlyConfigured(\n '`SITETREE_%s` refers to model `%s` that has not been installed.' 
% (settings_entry_name, model_name))\n\n return model"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsends a message to the ASGI instance.", "response": "async def asgi_send(self, message: dict) -> None:\n \"\"\"Called by the ASGI instance to send a message.\"\"\"\n if message[\"type\"] == \"http.response.start\" and self.state == ASGIHTTPState.REQUEST:\n self.response = message\n elif message[\"type\"] == \"http.response.body\" and self.state in {\n ASGIHTTPState.REQUEST,\n ASGIHTTPState.RESPONSE,\n }:\n if self.state == ASGIHTTPState.REQUEST:\n headers = build_and_validate_headers(self.response[\"headers\"])\n headers.extend(self.response_headers())\n await self.asend(\n h11.Response(status_code=int(self.response[\"status\"]), headers=headers)\n )\n self.state = ASGIHTTPState.RESPONSE\n\n if (\n not suppress_body(self.scope[\"method\"], int(self.response[\"status\"]))\n and message.get(\"body\", b\"\") != b\"\"\n ):\n await self.asend(h11.Data(data=bytes(message[\"body\"])))\n\n if not message.get(\"more_body\", False):\n if self.state != ASGIHTTPState.CLOSED:\n await self.asend(h11.EndOfMessage())\n await self.asgi_put({\"type\": \"http.disconnect\"})\n self.state = ASGIHTTPState.CLOSED\n else:\n raise UnexpectedMessage(self.state, message[\"type\"])"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsending a message to the ASGI instance.", "response": "async def asgi_send(self, message: dict) -> None:\n \"\"\"Called by the ASGI instance to send a message.\"\"\"\n if message[\"type\"] == \"websocket.accept\" and self.state == ASGIWebsocketState.HANDSHAKE:\n headers = build_and_validate_headers(message.get(\"headers\", []))\n raise_if_subprotocol_present(headers)\n headers.extend(self.response_headers())\n await self.asend(\n AcceptConnection(\n extensions=[PerMessageDeflate()],\n extra_headers=headers,\n subprotocol=message.get(\"subprotocol\"),\n )\n )\n self.state = 
ASGIWebsocketState.CONNECTED\n self.config.access_logger.access(\n self.scope, {\"status\": 101, \"headers\": []}, time() - self.start_time\n )\n elif (\n message[\"type\"] == \"websocket.http.response.start\"\n and self.state == ASGIWebsocketState.HANDSHAKE\n ):\n self.response = message\n self.config.access_logger.access(self.scope, self.response, time() - self.start_time)\n elif message[\"type\"] == \"websocket.http.response.body\" and self.state in {\n ASGIWebsocketState.HANDSHAKE,\n ASGIWebsocketState.RESPONSE,\n }:\n await self._asgi_send_rejection(message)\n elif message[\"type\"] == \"websocket.send\" and self.state == ASGIWebsocketState.CONNECTED:\n data: Union[bytes, str]\n if message.get(\"bytes\") is not None:\n await self.asend(BytesMessage(data=bytes(message[\"bytes\"])))\n elif not isinstance(message[\"text\"], str):\n raise TypeError(f\"{message['text']} should be a str\")\n else:\n await self.asend(TextMessage(data=message[\"text\"]))\n elif message[\"type\"] == \"websocket.close\" and self.state == ASGIWebsocketState.HANDSHAKE:\n await self.send_http_error(403)\n self.state = ASGIWebsocketState.HTTPCLOSED\n elif message[\"type\"] == \"websocket.close\":\n await self.asend(CloseConnection(code=int(message[\"code\"])))\n self.state = ASGIWebsocketState.CLOSED\n else:\n raise UnexpectedMessage(self.state, message[\"type\"])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\nasync def serve(app: ASGIFramework, config: Config) -> None:\n if config.debug:\n warnings.warn(\"The config `debug` has no effect when using serve\", Warning)\n if config.workers != 1:\n warnings.warn(\"The config `workers` has no effect when using serve\", Warning)\n if config.worker_class != \"asyncio\":\n warnings.warn(\"The config `worker_class` has no effect when using serve\", Warning)\n\n await worker_serve(app, config)", "response": "Serve an ASGI framework app given the config."} {"SOURCE": 
"codesearchnet", "instruction": "Can you implement a function in Python 3 that\nserves an ASGI framework app given the config.", "response": "async def serve(\n app: ASGIFramework,\n config: Config,\n *,\n task_status: trio._core._run._TaskStatus = trio.TASK_STATUS_IGNORED,\n) -> None:\n \"\"\"Serve an ASGI framework app given the config.\n\n This allows for a programmatic way to serve an ASGI framework, it\n can be used via,\n\n .. code-block:: python\n\n trio.run(partial(serve, app, config))\n\n It is assumed that the event-loop is configured before calling\n this function, therefore configuration values that relate to loop\n setup or process setup are ignored.\n\n \"\"\"\n if config.debug:\n warnings.warn(\"The config `debug` has no effect when using serve\", Warning)\n if config.workers != 1:\n warnings.warn(\"The config `workers` has no effect when using serve\", Warning)\n if config.worker_class != \"asyncio\":\n warnings.warn(\"The config `worker_class` has no effect when using serve\", Warning)\n\n await worker_serve(app, config, task_status=task_status)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_mapping(\n cls: Type[\"Config\"], mapping: Optional[Mapping[str, Any]] = None, **kwargs: Any\n ) -> \"Config\":\n \"\"\"Create a configuration from a mapping.\n\n This allows either a mapping to be directly passed or as\n keyword arguments, for example,\n\n .. 
code-block:: python\n\n config = {'keep_alive_timeout': 10}\n Config.from_mapping(config)\n Config.from_mapping(keep_alive_timeout=10)\n\n Arguments:\n mapping: Optionally a mapping object.\n kwargs: Optionally a collection of keyword arguments to\n form a mapping.\n \"\"\"\n mappings: Dict[str, Any] = {}\n if mapping is not None:\n mappings.update(mapping)\n mappings.update(kwargs)\n config = cls()\n for key, value in mappings.items():\n try:\n setattr(config, key, value)\n except AttributeError:\n pass\n\n return config", "response": "Create a configuration object from a mapping object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_pyfile(cls: Type[\"Config\"], filename: FilePath) -> \"Config\":\n file_path = os.fspath(filename)\n spec = importlib.util.spec_from_file_location(\"module.name\", file_path)\n module = importlib.util.module_from_spec(spec)\n spec.loader.exec_module(module) # type: ignore\n return cls.from_object(module)", "response": "Create a configuration from a Python file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nloading the configuration values from a TOML formatted file.", "response": "def from_toml(cls: Type[\"Config\"], filename: FilePath) -> \"Config\":\n \"\"\"Load the configuration values from a TOML formatted file.\n\n This allows configuration to be loaded as so\n\n .. 
code-block:: python\n\n Config.from_toml('config.toml')\n\n Arguments:\n filename: The filename which gives the path to the file.\n \"\"\"\n file_path = os.fspath(filename)\n with open(file_path) as file_:\n data = toml.load(file_)\n return cls.from_mapping(data)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef from_object(cls: Type[\"Config\"], instance: Union[object, str]) -> \"Config\":\n if isinstance(instance, str):\n try:\n path, config = instance.rsplit(\".\", 1)\n except ValueError:\n path = instance\n instance = importlib.import_module(instance)\n else:\n module = importlib.import_module(path)\n instance = getattr(module, config)\n\n mapping = {\n key: getattr(instance, key)\n for key in dir(instance)\n if not isinstance(getattr(instance, key), types.ModuleType)\n }\n return cls.from_mapping(mapping)", "response": "Create a new configuration from a Python object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef emit_spans(self):\n\n # FIXME: Should have a single aggregate handler\n if self.firehose_handler:\n # FIXME: We need to allow different batching settings per handler\n self._emit_spans_with_span_sender(\n ZipkinBatchSender(self.firehose_handler,\n self.max_span_batch_size,\n self.encoder)\n )\n\n if not self.zipkin_attrs.is_sampled:\n self._get_tracer().clear()\n return\n\n span_sender = ZipkinBatchSender(self.transport_handler,\n self.max_span_batch_size,\n self.encoder)\n\n self._emit_spans_with_span_sender(span_sender)\n self._get_tracer().clear()", "response": "Main function to emit all the spans stored during the entire crawler request. 
It is called once at the end of the request to emit all the annotations stored during its lifetime."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_attrs_for_span(\n sample_rate=100.0,\n trace_id=None,\n span_id=None,\n use_128bit_trace_id=False,\n):\n \"\"\"Creates a set of zipkin attributes for a span.\n\n :param sample_rate: Float between 0.0 and 100.0 to determine sampling rate\n :type sample_rate: float\n :param trace_id: Optional 16-character hex string representing a trace_id.\n If this is None, a random trace_id will be generated.\n :type trace_id: str\n :param span_id: Optional 16-character hex string representing a span_id.\n If this is None, a random span_id will be generated.\n :type span_id: str\n :param use_128bit_trace_id: If true, generate 128-bit trace_ids\n :type use_128bit_trace_id: boolean\n \"\"\"\n # Calculate if this trace is sampled based on the sample rate\n if trace_id is None:\n if use_128bit_trace_id:\n trace_id = generate_random_128bit_string()\n else:\n trace_id = generate_random_64bit_string()\n if span_id is None:\n span_id = generate_random_64bit_string()\n if sample_rate == 0.0:\n is_sampled = False\n else:\n is_sampled = (random.random() * 100) < sample_rate\n\n return ZipkinAttrs(\n trace_id=trace_id,\n span_id=span_id,\n parent_span_id=None,\n flags='0',\n is_sampled=is_sampled,\n )", "response": "Creates a set of zipkin attributes for a single span."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates the HTTP headers for a new span.", "response": "def create_http_headers_for_new_span(context_stack=None, tracer=None):\n \"\"\"\n Generate the headers for a new zipkin span.\n\n .. 
note::\n\n If the method is not called from within a zipkin_trace context,\n an empty dict will be returned.\n\n :returns: dict containing (X-B3-TraceId, X-B3-SpanId, X-B3-ParentSpanId,\n X-B3-Flags and X-B3-Sampled) keys OR an empty dict.\n \"\"\"\n if tracer:\n zipkin_attrs = tracer.get_zipkin_attrs()\n elif context_stack:\n zipkin_attrs = context_stack.get()\n else:\n zipkin_attrs = get_default_tracer().get_zipkin_attrs()\n\n if not zipkin_attrs:\n return {}\n\n return {\n 'X-B3-TraceId': zipkin_attrs.trace_id,\n 'X-B3-SpanId': generate_random_64bit_string(),\n 'X-B3-ParentSpanId': zipkin_attrs.span_id,\n 'X-B3-Flags': '0',\n 'X-B3-Sampled': '1' if zipkin_attrs.is_sampled else '0',\n }"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_current_context(self):\n # This check is technically not necessary since only root spans will have\n # sample_rate, zipkin_attrs or a transport set. But it helps make the\n # code clearer by separating the logic for a root span from the one for a\n # child span.\n if self._is_local_root_span:\n\n # If sample_rate is set, we need to (re)generate a trace context.\n # If zipkin_attrs (trace context) were passed in as argument there are\n # 2 possibilities:\n # is_sampled = False --> we keep the same trace_id but re-roll the dice\n # for is_sampled.\n # is_sampled = True --> we don't want to stop sampling halfway through\n # a sampled trace, so we do nothing.\n # If no zipkin_attrs were passed in, we generate new ones and start a\n # new trace.\n if self.sample_rate is not None:\n\n # If this trace is not sampled, we re-roll the dice.\n if self.zipkin_attrs_override and \\\n not self.zipkin_attrs_override.is_sampled:\n # This will be the root span of the trace, so we should\n # set timestamp and duration.\n return True, create_attrs_for_span(\n sample_rate=self.sample_rate,\n trace_id=self.zipkin_attrs_override.trace_id,\n )\n\n # If zipkin_attrs_override 
was not passed in, we simply generate\n # new zipkin_attrs to start a new trace.\n elif not self.zipkin_attrs_override:\n return True, create_attrs_for_span(\n sample_rate=self.sample_rate,\n use_128bit_trace_id=self.use_128bit_trace_id,\n )\n\n if self.firehose_handler and not self.zipkin_attrs_override:\n # If it has gotten here, the only thing that is\n # causing a trace is the firehose. So we force a trace\n # with sample rate of 0\n return True, create_attrs_for_span(\n sample_rate=0.0,\n use_128bit_trace_id=self.use_128bit_trace_id,\n )\n\n # If we arrive here it means the sample_rate was not set while\n # zipkin_attrs_override was, so let's simply return that.\n return False, self.zipkin_attrs_override\n\n else:\n # Check if there's already a trace context in _context_stack.\n existing_zipkin_attrs = self.get_tracer().get_zipkin_attrs()\n # If there's an existing context, let's create new zipkin_attrs\n # with that context as parent.\n if existing_zipkin_attrs:\n return False, ZipkinAttrs(\n trace_id=existing_zipkin_attrs.trace_id,\n span_id=generate_random_64bit_string(),\n parent_span_id=existing_zipkin_attrs.span_id,\n flags=existing_zipkin_attrs.flags,\n is_sampled=existing_zipkin_attrs.is_sampled,\n )\n\n return False, None", "response": "Returns the current ZipkinAttrs and generates new ones if needed."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef start(self):\n self.do_pop_attrs = False\n\n report_root_timestamp, self.zipkin_attrs = self._get_current_context()\n\n # If zipkin_attrs are not set up by now, that means this span is not\n # configured to perform logging itself, and it's not in an existing\n # Zipkin trace. 
That means there's nothing else to do and it can exit\n # early.\n if not self.zipkin_attrs:\n return self\n\n self.get_tracer().push_zipkin_attrs(self.zipkin_attrs)\n self.do_pop_attrs = True\n\n self.start_timestamp = time.time()\n\n if self._is_local_root_span:\n # Don't set up any logging if we're not sampling\n if not self.zipkin_attrs.is_sampled and not self.firehose_handler:\n return self\n # If transport is already configured don't override it. Doing so would\n # cause all previously recorded spans to never be emitted as exiting\n # the inner logging context will reset transport_configured to False.\n if self.get_tracer().is_transport_configured():\n log.info('Transport was already configured, ignoring override '\n 'from span {}'.format(self.span_name))\n return self\n endpoint = create_endpoint(self.port, self.service_name, self.host)\n self.logging_context = ZipkinLoggingContext(\n self.zipkin_attrs,\n endpoint,\n self.span_name,\n self.transport_handler,\n report_root_timestamp or self.report_root_timestamp_override,\n self.get_tracer,\n self.service_name,\n binary_annotations=self.binary_annotations,\n add_logging_annotation=self.add_logging_annotation,\n client_context=self.kind == Kind.CLIENT,\n max_span_batch_size=self.max_span_batch_size,\n firehose_handler=self.firehose_handler,\n encoding=self.encoding,\n )\n self.logging_context.start()\n self.get_tracer().set_transport_configured(configured=True)\n\n return self", "response": "Enter the new span context."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef stop(self, _exc_type=None, _exc_value=None, _exc_traceback=None):\n\n if self.do_pop_attrs:\n self.get_tracer().pop_zipkin_attrs()\n\n # If no transport is configured, there's no reason to create a new Span.\n # This also helps avoid memory leaks since without a transport nothing\n # would pull spans out of get_tracer().\n if not self.get_tracer().is_transport_configured():\n 
return\n\n # Add the error annotation if an exception occurred\n if any((_exc_type, _exc_value, _exc_traceback)):\n error_msg = u'{0}: {1}'.format(_exc_type.__name__, _exc_value)\n self.update_binary_annotations({\n ERROR_KEY: error_msg,\n })\n\n # Logging context is only initialized for \"root\" spans of the local\n # process (i.e. this zipkin_span not inside of any other local\n # zipkin_spans)\n if self.logging_context:\n try:\n self.logging_context.stop()\n except Exception as ex:\n err_msg = 'Error emitting zipkin trace. {}'.format(\n repr(ex),\n )\n log.error(err_msg)\n finally:\n self.logging_context = None\n self.get_tracer().clear()\n self.get_tracer().set_transport_configured(configured=False)\n return\n\n # If we've gotten here, that means that this span is a child span of\n # this context's root span (i.e. it's a zipkin_span inside another\n # zipkin_span).\n end_timestamp = time.time()\n # If self.duration is set, it means the user wants to override it\n if self.duration:\n duration = self.duration\n else:\n duration = end_timestamp - self.start_timestamp\n\n endpoint = create_endpoint(self.port, self.service_name, self.host)\n self.get_tracer().add_span(Span(\n trace_id=self.zipkin_attrs.trace_id,\n name=self.span_name,\n parent_id=self.zipkin_attrs.parent_span_id,\n span_id=self.zipkin_attrs.span_id,\n kind=self.kind,\n timestamp=self.timestamp if self.timestamp else self.start_timestamp,\n duration=duration,\n annotations=self.annotations,\n local_endpoint=endpoint,\n remote_endpoint=self.remote_endpoint,\n tags=self.binary_annotations,\n ))", "response": "Exit the span context."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nupdates the binary annotations for the current span.", "response": "def update_binary_annotations(self, extra_annotations):\n \"\"\"Updates the binary annotations for the current span.\"\"\"\n if not self.logging_context:\n # This is not the root span, so binary annotations will be 
added\n # to the log handler when this span context exits.\n self.binary_annotations.update(extra_annotations)\n else:\n # Otherwise, we're in the context of the root span, so just update\n # the binary annotations for the logging context directly.\n self.logging_context.tags.update(extra_annotations)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_sa_binary_annotation(\n self,\n port=0,\n service_name='unknown',\n host='127.0.0.1',\n ):\n \"\"\"Adds a 'sa' binary annotation to the current span.\n\n 'sa' binary annotations are useful for situations where you need to log\n where a request is going but the destination doesn't support zipkin.\n\n Note that the span must have 'cs'/'cr' annotations.\n\n :param port: The port number of the destination\n :type port: int\n :param service_name: The name of the destination service\n :type service_name: str\n :param host: Host address of the destination\n :type host: str\n \"\"\"\n if self.kind != Kind.CLIENT:\n # TODO: trying to set a sa binary annotation for a non-client span\n # should result in a logged error\n return\n\n remote_endpoint = create_endpoint(\n port=port,\n service_name=service_name,\n host=host,\n )\n if not self.logging_context:\n if self.remote_endpoint is not None:\n raise ValueError('SA annotation already set.')\n self.remote_endpoint = remote_endpoint\n else:\n if self.logging_context.remote_endpoint is not None:\n raise ValueError('SA annotation already set.')\n self.logging_context.remote_endpoint = remote_endpoint", "response": "Adds a sa binary annotation to the current span."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef override_span_name(self, name):\n self.span_name = name\n if self.logging_context:\n self.logging_context.span_name = name", "response": "Overrides the current span name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief 
explanation for the following Python 3 code\ndef create_endpoint(port=None, service_name=None, host=None, use_defaults=True):\n if use_defaults:\n if port is None:\n port = 0\n if service_name is None:\n service_name = 'unknown'\n if host is None:\n try:\n host = socket.gethostbyname(socket.gethostname())\n except socket.gaierror:\n host = '127.0.0.1'\n\n ipv4 = None\n ipv6 = None\n\n if host:\n # Check ipv4 or ipv6.\n try:\n socket.inet_pton(socket.AF_INET, host)\n ipv4 = host\n except socket.error:\n # If it's not an ipv4 address, maybe it's ipv6.\n try:\n socket.inet_pton(socket.AF_INET6, host)\n ipv6 = host\n except socket.error:\n # If it's neither ipv4 or ipv6, leave both ip addresses unset.\n pass\n\n return Endpoint(\n ipv4=ipv4,\n ipv6=ipv6,\n port=port,\n service_name=service_name,\n )", "response": "Creates a new Endpoint object."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a copy of an existing Endpoint object with a new service name.", "response": "def copy_endpoint_with_new_service_name(endpoint, new_service_name):\n \"\"\"Creates a copy of a given endpoint with a new service name.\n\n :param endpoint: existing Endpoint object\n :type endpoint: Endpoint\n :param new_service_name: new service name\n :type new_service_name: str\n :returns: zipkin new Endpoint object\n \"\"\"\n return Endpoint(\n service_name=new_service_name,\n ipv4=endpoint.ipv4,\n ipv6=endpoint.ipv6,\n port=endpoint.port,\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbuild and returns a V1Span.", "response": "def build_v1_span(self):\n \"\"\"Builds and returns a V1 Span.\n\n :return: newly generated _V1Span\n :rtype: _V1Span\n \"\"\"\n # We are simulating a full two-part span locally, so set cs=sr and ss=cr\n full_annotations = OrderedDict([\n ('cs', self.timestamp),\n ('sr', self.timestamp),\n ('ss', self.timestamp + self.duration),\n ('cr', self.timestamp + self.duration),\n ])\n\n if self.kind != 
Kind.LOCAL:\n # If kind is not LOCAL, then we only want client or\n # server side annotations.\n for ann in _DROP_ANNOTATIONS_BY_KIND[self.kind]:\n del full_annotations[ann]\n\n # Add user-defined annotations. We write them in full_annotations\n # instead of the opposite so that user annotations will override\n # any automatically generated annotation.\n full_annotations.update(self.annotations)\n\n return _V1Span(\n trace_id=self.trace_id,\n name=self.name,\n parent_id=self.parent_id,\n id=self.span_id,\n timestamp=self.timestamp if self.shared is False else None,\n duration=self.duration if self.shared is False else None,\n endpoint=self.local_endpoint,\n annotations=full_annotations,\n binary_annotations=self.tags,\n remote_endpoint=self.remote_endpoint,\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nencode list of protobuf Spans to binary.", "response": "def encode_pb_list(pb_spans):\n \"\"\"Encode list of protobuf Spans to binary.\n\n :param pb_spans: list of protobuf Spans.\n :type pb_spans: list of zipkin_pb2.Span\n :return: encoded list.\n :rtype: bytes\n \"\"\"\n pb_list = zipkin_pb2.ListOfSpans()\n pb_list.spans.extend(pb_spans)\n return pb_list.SerializeToString()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert a py_zipkin Span into a protobuf Span.", "response": "def create_protobuf_span(span):\n \"\"\"Converts a py_zipkin Span into a protobuf Span.\n\n :param span: py_zipkin Span to convert.\n :type span: py_zipkin.encoding.Span\n :return: protobuf's Span\n :rtype: zipkin_pb2.Span\n \"\"\"\n\n # Protobuf's composite types (i.e. 
Span's local_endpoint) are immutable.\n # So we can't create a zipkin_pb2.Span here and then set the appropriate\n # fields since `pb_span.local_endpoint = zipkin_pb2.Endpoint` fails.\n # Instead we just create the kwargs and pass them in to the Span constructor.\n pb_kwargs = {}\n\n pb_kwargs['trace_id'] = _hex_to_bytes(span.trace_id)\n\n if span.parent_id:\n pb_kwargs['parent_id'] = _hex_to_bytes(span.parent_id)\n\n pb_kwargs['id'] = _hex_to_bytes(span.span_id)\n\n pb_kind = _get_protobuf_kind(span.kind)\n if pb_kind:\n pb_kwargs['kind'] = pb_kind\n\n if span.name:\n pb_kwargs['name'] = span.name\n if span.timestamp:\n pb_kwargs['timestamp'] = int(span.timestamp * 1000 * 1000)\n if span.duration:\n pb_kwargs['duration'] = int(span.duration * 1000 * 1000)\n\n if span.local_endpoint:\n pb_kwargs['local_endpoint'] = _convert_endpoint(span.local_endpoint)\n\n if span.remote_endpoint:\n pb_kwargs['remote_endpoint'] = _convert_endpoint(span.remote_endpoint)\n\n if len(span.annotations) > 0:\n pb_kwargs['annotations'] = _convert_annotations(span.annotations)\n\n if len(span.tags) > 0:\n pb_kwargs['tags'] = span.tags\n\n if span.debug:\n pb_kwargs['debug'] = span.debug\n\n if span.shared:\n pb_kwargs['shared'] = span.shared\n\n return zipkin_pb2.Span(**pb_kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _hex_to_bytes(hex_id):\n if len(hex_id) <= 16:\n int_id = unsigned_hex_to_signed_int(hex_id)\n return struct.pack('>q', int_id)\n else:\n # There's no 16-bytes encoding in Python's struct. 
So we convert the\n # id as 2 64 bit ids and then concatenate the result.\n\n # NOTE: we count 16 chars from the right (:-16) rather than the left so\n # that ids with less than 32 chars will be correctly pre-padded with 0s.\n high_id = unsigned_hex_to_signed_int(hex_id[:-16])\n high_bin = struct.pack('>q', high_id)\n\n low_id = unsigned_hex_to_signed_int(hex_id[-16:])\n low_bin = struct.pack('>q', low_id)\n\n return high_bin + low_bin", "response": "Encodes hexadecimal ids to big-endian binary."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_protobuf_kind(kind):\n if kind == Kind.CLIENT:\n return zipkin_pb2.Span.CLIENT\n elif kind == Kind.SERVER:\n return zipkin_pb2.Span.SERVER\n elif kind == Kind.PRODUCER:\n return zipkin_pb2.Span.PRODUCER\n elif kind == Kind.CONSUMER:\n return zipkin_pb2.Span.CONSUMER\n return None", "response": "Converts py_zipkin's Kind to protobuf's Kind."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _convert_endpoint(endpoint):\n pb_endpoint = zipkin_pb2.Endpoint()\n\n if endpoint.service_name:\n pb_endpoint.service_name = endpoint.service_name\n if endpoint.port and endpoint.port != 0:\n pb_endpoint.port = endpoint.port\n if endpoint.ipv4:\n pb_endpoint.ipv4 = socket.inet_pton(socket.AF_INET, endpoint.ipv4)\n if endpoint.ipv6:\n pb_endpoint.ipv6 = socket.inet_pton(socket.AF_INET6, endpoint.ipv6)\n\n return pb_endpoint", "response": "Converts py_zipkin's Endpoint to protobuf's Endpoint."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert py_zipkin's annotations dict to protobuf's list of annotations.", "response": "def _convert_annotations(annotations):\n \"\"\"Converts py_zipkin's annotations dict to protobuf.\n\n :param annotations: annotations dict.\n :type annotations: dict\n :return: corresponding protobuf's list of annotations.\n :rtype: list\n 
\"\"\"\n pb_annotations = []\n for value, ts in annotations.items():\n pb_annotations.append(zipkin_pb2.Annotation(\n timestamp=int(ts * 1000 * 1000),\n value=value,\n ))\n return pb_annotations"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_annotation(timestamp, value, host):\n return zipkin_core.Annotation(timestamp=timestamp, value=value, host=host)", "response": "Create a zipkin annotation object"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a zipkin binary annotation object.", "response": "def create_binary_annotation(key, value, annotation_type, host):\n \"\"\"\n Create a zipkin binary annotation object\n\n :param key: name of the annotation, such as 'http.uri'\n :param value: value of the annotation, such as a URI\n :param annotation_type: type of annotation, such as AnnotationType.I32\n :param host: zipkin endpoint object\n\n :returns: zipkin binary annotation object\n \"\"\"\n return zipkin_core.BinaryAnnotation(\n key=key,\n value=value,\n annotation_type=annotation_type,\n host=host,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_endpoint(port=0, service_name='unknown', ipv4=None, ipv6=None):\n ipv4_int = 0\n ipv6_binary = None\n\n # Convert ip address to network byte order\n if ipv4:\n ipv4_int = struct.unpack('!i', socket.inet_pton(socket.AF_INET, ipv4))[0]\n\n if ipv6:\n ipv6_binary = socket.inet_pton(socket.AF_INET6, ipv6)\n\n # Zipkin passes unsigned values in signed types because Thrift has no\n # unsigned types, so we have to convert the value.\n port = struct.unpack('h', struct.pack('H', port))[0]\n return zipkin_core.Endpoint(\n ipv4=ipv4_int,\n ipv6=ipv6_binary,\n port=port,\n service_name=service_name,\n )", "response": "Create a Zipkin Endpoint object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncopying a given endpoint with 
a new service name.", "response": "def copy_endpoint_with_new_service_name(endpoint, service_name):\n \"\"\"Creates a copy of a given endpoint with a new service name.\n This should be very fast, on the order of several microseconds.\n\n :param endpoint: existing zipkin_core.Endpoint object\n :param service_name: str of new service name\n :returns: zipkin Endpoint object\n \"\"\"\n return zipkin_core.Endpoint(\n ipv4=endpoint.ipv4,\n port=endpoint.port,\n service_name=service_name,\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreformatting annotations dict to return list of corresponding zipkin_core objects.", "response": "def annotation_list_builder(annotations, host):\n \"\"\"\n Reformat annotations dict to return list of corresponding zipkin_core objects.\n\n :param annotations: dict containing key as annotation name,\n value being timestamp in seconds(float).\n :type host: :class:`zipkin_core.Endpoint`\n :returns: a list of annotation zipkin_core objects\n :rtype: list\n \"\"\"\n return [\n create_annotation(int(timestamp * 1000000), key, host)\n for key, timestamp in annotations.items()\n ]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef binary_annotation_list_builder(binary_annotations, host):\n # TODO: Remove the type hard-coding of STRING to take it as a param option.\n ann_type = zipkin_core.AnnotationType.STRING\n return [\n create_binary_annotation(key, str(value), ann_type, host)\n for key, value in binary_annotations.items()\n ]", "response": "Reformat binary annotations dict to return list of zipkin_core objects."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create_span(\n span_id,\n parent_span_id,\n trace_id,\n span_name,\n annotations,\n binary_annotations,\n timestamp_s,\n duration_s,\n):\n \"\"\"Takes a bunch of span attributes and returns a thriftpy2 representation\n of the span. 
Timestamps passed in are in seconds, they're converted to\n microseconds before thrift encoding.\n \"\"\"\n # Check if trace_id is 128-bit. If so, record trace_id_high separately.\n trace_id_length = len(trace_id)\n trace_id_high = None\n if trace_id_length > 16:\n assert trace_id_length == 32\n trace_id, trace_id_high = trace_id[16:], trace_id[:16]\n\n if trace_id_high:\n trace_id_high = unsigned_hex_to_signed_int(trace_id_high)\n\n span_dict = {\n 'trace_id': unsigned_hex_to_signed_int(trace_id),\n 'name': span_name,\n 'id': unsigned_hex_to_signed_int(span_id),\n 'annotations': annotations,\n 'binary_annotations': binary_annotations,\n 'timestamp': int(timestamp_s * 1000000) if timestamp_s else None,\n 'duration': int(duration_s * 1000000) if duration_s else None,\n 'trace_id_high': trace_id_high,\n }\n if parent_span_id:\n span_dict['parent_id'] = unsigned_hex_to_signed_int(parent_span_id)\n return zipkin_core.Span(**span_dict)", "response": "Takes a bunch of attributes and returns a thriftpy2 representation of the span."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nencodes a Thrift span into a TBinaryProtocol format bytes.", "response": "def span_to_bytes(thrift_span):\n \"\"\"\n Returns a TBinaryProtocol encoded Thrift span.\n\n :param thrift_span: thrift object to encode.\n :returns: thrift object in TBinaryProtocol format bytes.\n \"\"\"\n transport = TMemoryBuffer()\n protocol = TBinaryProtocol(transport)\n thrift_span.write(protocol)\n\n return bytes(transport.getvalue())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef encode_bytes_list(binary_thrift_obj_list): # pragma: no cover\n transport = TMemoryBuffer()\n write_list_begin(transport, TType.STRUCT, len(binary_thrift_obj_list))\n for thrift_bin in binary_thrift_obj_list:\n transport.write(thrift_bin)\n\n return bytes(transport.getvalue())", "response": "Encodes a list of Thrift objects 
into a byte string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndetect the span type and encoding for the provided message.", "response": "def detect_span_version_and_encoding(message):\n \"\"\"Returns the span type and encoding for the message provided.\n\n The logic in this function is a Python port of\n https://github.com/openzipkin/zipkin/blob/master/zipkin/src/main/java/zipkin/internal/DetectingSpanDecoder.java\n\n :param message: span to perform operations on.\n :type message: byte array\n :returns: span encoding.\n :rtype: Encoding\n \"\"\"\n # In case message is sent in as non-bytearray format,\n # safeguard convert to bytearray before handling\n if isinstance(message, six.string_types):\n # Even six.b is not enough to handle the py2/3 difference since\n # it uses latin-1 as default encoding and not utf-8.\n if six.PY2:\n message = six.b(message) # pragma: no cover\n else:\n message = message.encode('utf-8') # pragma: no cover\n\n if len(message) < 2:\n raise ZipkinError(\"Invalid span format. 
Message too short.\")\n\n # Check for binary format\n if six.byte2int(message) <= 16:\n if six.byte2int(message) == 10 and six.byte2int(message[1:2]) != 0:\n return Encoding.V2_PROTO3\n return Encoding.V1_THRIFT\n\n str_msg = message.decode('utf-8')\n\n # JSON case for list of spans\n if str_msg[0] == '[':\n span_list = json.loads(str_msg)\n if len(span_list) > 0:\n # Assumption: All spans in a list are the same version\n # Logic: Search for identifying fields in all spans, if any span can\n # be strictly identified to a version, return that version.\n # Otherwise, if no spans could be strictly identified, default to V2.\n for span in span_list:\n if any(word in span for word in _V2_ATTRIBUTES):\n return Encoding.V2_JSON\n elif (\n 'binaryAnnotations' in span or\n (\n 'annotations' in span and\n 'endpoint' in span['annotations']\n )\n ):\n return Encoding.V1_JSON\n return Encoding.V2_JSON\n\n raise ZipkinError(\"Unknown or unsupported span encoding\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef convert_spans(spans, output_encoding, input_encoding=None):\n if not isinstance(input_encoding, Encoding):\n input_encoding = detect_span_version_and_encoding(message=spans)\n\n if input_encoding == output_encoding:\n return spans\n\n decoder = get_decoder(input_encoding)\n encoder = get_encoder(output_encoding)\n decoded_spans = decoder.decode_spans(spans)\n output_spans = []\n\n # Encode each individual span\n for span in decoded_spans:\n output_spans.append(encoder.encode_span(span))\n\n # Outputs from encoder.encode_span() can be easily concatenated in a list\n return encoder.encode_queue(output_spans)", "response": "Converts a list of spans into a different encoding."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nstores the zipkin attributes to thread local.", "response": "def push_zipkin_attrs(zipkin_attr):\n \"\"\"Stores the zipkin attributes to thread 
local.\n\n .. deprecated::\n Use the Tracer interface which offers better multi-threading support.\n push_zipkin_attrs will be removed in version 1.0.\n\n :param zipkin_attr: tuple containing zipkin related attrs\n :type zipkin_attr: :class:`zipkin.ZipkinAttrs`\n \"\"\"\n from py_zipkin.storage import ThreadLocalStack\n log.warning('push_zipkin_attrs is deprecated. See DEPRECATIONS.rst for '\n 'details on how to migrate to using Tracer.')\n return ThreadLocalStack().push(zipkin_attr)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a 128 bit UTF-8 encoded string.", "response": "def generate_random_128bit_string():\n \"\"\"Returns a 128 bit UTF-8 encoded string. Follows the same conventions\n as generate_random_64bit_string().\n\n The upper 32 bits are the current time in epoch seconds, and the\n lower 96 bits are random. This allows for AWS X-Ray `interop\n `_\n\n :returns: 32-character hex string\n \"\"\"\n t = int(time.time())\n lower_96 = random.getrandbits(96)\n return '{:032x}'.format((t << 96) | lower_96)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_encoder(encoding):\n if encoding == Encoding.V1_THRIFT:\n return _V1ThriftEncoder()\n if encoding == Encoding.V1_JSON:\n return _V1JSONEncoder()\n if encoding == Encoding.V2_JSON:\n return _V2JSONEncoder()\n if encoding == Encoding.V2_PROTO3:\n return _V2ProtobufEncoder()\n raise ZipkinError('Unknown encoding: {}'.format(encoding))", "response": "Returns an encoder object for the given encoding."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks if the new span fits in the max payload size.", "response": "def fits(self, current_count, current_size, max_size, new_span):\n \"\"\"Checks if the new span fits in the max payload size.\n\n Thrift lists have a fixed-size header and no delimiters between elements\n so it's easy to compute the list size.\n 
\"\"\"\n return thrift.LIST_HEADER_SIZE + current_size + len(new_span) <= max_size"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nencoding the current span to thrift.", "response": "def encode_span(self, v2_span):\n \"\"\"Encodes the current span to thrift.\"\"\"\n span = v2_span.build_v1_span()\n\n thrift_endpoint = thrift.create_endpoint(\n span.endpoint.port,\n span.endpoint.service_name,\n span.endpoint.ipv4,\n span.endpoint.ipv6,\n )\n\n thrift_annotations = thrift.annotation_list_builder(\n span.annotations,\n thrift_endpoint,\n )\n\n thrift_binary_annotations = thrift.binary_annotation_list_builder(\n span.binary_annotations,\n thrift_endpoint,\n )\n\n # Add sa/ca binary annotations\n if v2_span.remote_endpoint:\n self.encode_remote_endpoint(\n v2_span.remote_endpoint,\n v2_span.kind,\n thrift_binary_annotations,\n )\n\n thrift_span = thrift.create_span(\n span.id,\n span.parent_id,\n span.trace_id,\n span.name,\n thrift_annotations,\n thrift_binary_annotations,\n span.timestamp,\n span.duration,\n )\n\n encoded_span = thrift.span_to_bytes(thrift_span)\n return encoded_span"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting an Endpoint object to a JSON endpoint dict.", "response": "def _create_json_endpoint(self, endpoint, is_v1):\n \"\"\"Converts an Endpoint to a JSON endpoint dict.\n\n :param endpoint: endpoint object to convert.\n :type endpoint: Endpoint\n :param is_v1: whether we're serializing a v1 span. 
This is needed since\n in v1 some fields default to an empty string rather than being\n dropped if they're not set.\n :type is_v1: bool\n :return: dict representing a JSON endpoint.\n :rtype: dict\n \"\"\"\n json_endpoint = {}\n\n if endpoint.service_name:\n json_endpoint['serviceName'] = endpoint.service_name\n elif is_v1:\n # serviceName is mandatory in v1\n json_endpoint['serviceName'] = \"\"\n if endpoint.port and endpoint.port != 0:\n json_endpoint['port'] = endpoint.port\n if endpoint.ipv4 is not None:\n json_endpoint['ipv4'] = endpoint.ipv4\n if endpoint.ipv6 is not None:\n json_endpoint['ipv6'] = endpoint.ipv6\n\n return json_endpoint"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef encode_span(self, v2_span):\n span = v2_span.build_v1_span()\n\n json_span = {\n 'traceId': span.trace_id,\n 'name': span.name,\n 'id': span.id,\n 'annotations': [],\n 'binaryAnnotations': [],\n }\n\n if span.parent_id:\n json_span['parentId'] = span.parent_id\n if span.timestamp:\n json_span['timestamp'] = int(span.timestamp * 1000000)\n if span.duration:\n json_span['duration'] = int(span.duration * 1000000)\n\n v1_endpoint = self._create_json_endpoint(span.endpoint, True)\n\n for key, timestamp in span.annotations.items():\n json_span['annotations'].append({\n 'endpoint': v1_endpoint,\n 'timestamp': int(timestamp * 1000000),\n 'value': key,\n })\n\n for key, value in span.binary_annotations.items():\n json_span['binaryAnnotations'].append({\n 'key': key,\n 'value': value,\n 'endpoint': v1_endpoint,\n })\n\n # Add sa/ca binary annotations\n if v2_span.remote_endpoint:\n self.encode_remote_endpoint(\n v2_span.remote_endpoint,\n v2_span.kind,\n json_span['binaryAnnotations'],\n )\n\n encoded_span = json.dumps(json_span)\n\n return encoded_span", "response": "Encodes a single span to JSON."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nencodes a single span to JSON.", "response": 
"def encode_span(self, span):\n \"\"\"Encodes a single span to JSON.\"\"\"\n\n json_span = {\n 'traceId': span.trace_id,\n 'id': span.span_id,\n }\n\n if span.name:\n json_span['name'] = span.name\n if span.parent_id:\n json_span['parentId'] = span.parent_id\n if span.timestamp:\n json_span['timestamp'] = int(span.timestamp * 1000000)\n if span.duration:\n json_span['duration'] = int(span.duration * 1000000)\n if span.shared is True:\n json_span['shared'] = True\n if span.kind and span.kind.value is not None:\n json_span['kind'] = span.kind.value\n if span.local_endpoint:\n json_span['localEndpoint'] = self._create_json_endpoint(\n span.local_endpoint,\n False,\n )\n if span.remote_endpoint:\n json_span['remoteEndpoint'] = self._create_json_endpoint(\n span.remote_endpoint,\n False,\n )\n if span.tags and len(span.tags) > 0:\n json_span['tags'] = span.tags\n\n if span.annotations:\n json_span['annotations'] = [\n {\n 'timestamp': int(timestamp * 1000000),\n 'value': key,\n }\n for key, timestamp in span.annotations.items()\n ]\n\n encoded_span = json.dumps(json_span)\n\n return encoded_span"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fits(self, current_count, current_size, max_size, new_span):\n return current_size + len(new_span) <= max_size", "response": "Checks if the new span fits in the max payload size."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef encode_span(self, span):\n if not protobuf.installed():\n raise ZipkinError(\n 'protobuf encoding requires installing the protobuf\\'s extra '\n 'requirements. 
Use py-zipkin[protobuf] in your requirements.txt.'\n )\n\n pb_span = protobuf.create_protobuf_span(span)\n return protobuf.encode_pb_list([pb_span])", "response": "Encodes a single span to protobuf."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_decoder(encoding):\n if encoding == Encoding.V1_THRIFT:\n return _V1ThriftDecoder()\n if encoding == Encoding.V1_JSON:\n raise NotImplementedError(\n '{} decoding not yet implemented'.format(encoding))\n if encoding == Encoding.V2_JSON:\n raise NotImplementedError(\n '{} decoding not yet implemented'.format(encoding))\n raise ZipkinError('Unknown encoding: {}'.format(encoding))", "response": "Creates a decoder object for the given encoding."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndecode an encoded list of spans.", "response": "def decode_spans(self, spans):\n \"\"\"Decodes an encoded list of spans.\n\n :param spans: encoded list of spans\n :type spans: bytes\n :return: list of spans\n :rtype: list of Span\n \"\"\"\n decoded_spans = []\n transport = TMemoryBuffer(spans)\n\n if six.byte2int(spans) == TType.STRUCT:\n _, size = read_list_begin(transport)\n else:\n size = 1\n\n for _ in range(size):\n span = zipkin_core.Span()\n span.read(TBinaryProtocol(transport))\n decoded_spans.append(self._decode_thrift_span(span))\n return decoded_spans"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _convert_from_thrift_endpoint(self, thrift_endpoint):\n ipv4 = None\n ipv6 = None\n port = struct.unpack('H', struct.pack('h', thrift_endpoint.port))[0]\n\n if thrift_endpoint.ipv4 != 0:\n ipv4 = socket.inet_ntop(\n socket.AF_INET,\n struct.pack('!i', thrift_endpoint.ipv4),\n )\n\n if thrift_endpoint.ipv6:\n ipv6 = socket.inet_ntop(socket.AF_INET6, thrift_endpoint.ipv6)\n\n return Endpoint(\n service_name=thrift_endpoint.service_name,\n ipv4=ipv4,\n ipv6=ipv6,\n port=port,\n )", 
"response": "Accepts a thrift encoded endpoint and converts it to an Endpoint."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _decode_thrift_annotations(self, thrift_annotations):\n local_endpoint = None\n kind = Kind.LOCAL\n all_annotations = {}\n timestamp = None\n duration = None\n\n for thrift_annotation in thrift_annotations:\n all_annotations[thrift_annotation.value] = thrift_annotation.timestamp\n if thrift_annotation.host:\n local_endpoint = self._convert_from_thrift_endpoint(\n thrift_annotation.host,\n )\n\n if 'cs' in all_annotations and 'sr' not in all_annotations:\n kind = Kind.CLIENT\n timestamp = all_annotations['cs']\n duration = all_annotations['cr'] - all_annotations['cs']\n elif 'cs' not in all_annotations and 'sr' in all_annotations:\n kind = Kind.SERVER\n timestamp = all_annotations['sr']\n duration = all_annotations['ss'] - all_annotations['sr']\n\n annotations = {\n name: self.seconds(ts) for name, ts in all_annotations.items()\n if name not in _DROP_ANNOTATIONS\n }\n\n return annotations, local_endpoint, kind, timestamp, duration", "response": "Takes a list of thrift annotations and converts it to a v1 annotation."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\naccept a thrift decoded binary annotation and converts it to a v1 binary annotation.", "response": "def _convert_from_thrift_binary_annotations(self, thrift_binary_annotations):\n \"\"\"Accepts a thrift decoded binary annotation and converts it\n to a v1 binary annotation.\n \"\"\"\n tags = {}\n local_endpoint = None\n remote_endpoint = None\n\n for binary_annotation in thrift_binary_annotations:\n if binary_annotation.key == 'sa':\n remote_endpoint = self._convert_from_thrift_endpoint(\n thrift_endpoint=binary_annotation.host,\n )\n else:\n key = binary_annotation.key\n\n annotation_type = binary_annotation.annotation_type\n value = binary_annotation.value\n\n if 
annotation_type == zipkin_core.AnnotationType.BOOL:\n tags[key] = \"true\" if value == 1 else \"false\"\n elif annotation_type == zipkin_core.AnnotationType.STRING:\n tags[key] = str(value)\n else:\n log.warning('Only STRING and BOOL binary annotations are '\n 'supported right now and can be properly decoded.')\n\n if binary_annotation.host:\n local_endpoint = self._convert_from_thrift_endpoint(\n thrift_endpoint=binary_annotation.host,\n )\n\n return tags, local_endpoint, remote_endpoint"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _decode_thrift_span(self, thrift_span):\n parent_id = None\n local_endpoint = None\n annotations = {}\n tags = {}\n kind = Kind.LOCAL\n remote_endpoint = None\n timestamp = None\n duration = None\n\n if thrift_span.parent_id:\n parent_id = self._convert_unsigned_long_to_lower_hex(\n thrift_span.parent_id,\n )\n\n if thrift_span.annotations:\n annotations, local_endpoint, kind, timestamp, duration = \\\n self._decode_thrift_annotations(thrift_span.annotations)\n\n if thrift_span.binary_annotations:\n tags, local_endpoint, remote_endpoint = \\\n self._convert_from_thrift_binary_annotations(\n thrift_span.binary_annotations,\n )\n\n trace_id = self._convert_trace_id_to_string(\n thrift_span.trace_id,\n thrift_span.trace_id_high,\n )\n\n return Span(\n trace_id=trace_id,\n name=thrift_span.name,\n parent_id=parent_id,\n span_id=self._convert_unsigned_long_to_lower_hex(thrift_span.id),\n kind=kind,\n timestamp=self.seconds(timestamp or thrift_span.timestamp),\n duration=self.seconds(duration or thrift_span.duration),\n local_endpoint=local_endpoint,\n remote_endpoint=remote_endpoint,\n shared=(kind == Kind.SERVER and thrift_span.timestamp is None),\n annotations=annotations,\n tags=tags,\n )", "response": "Decodes a thrift span."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef 
_convert_trace_id_to_string(self, trace_id, trace_id_high=None):\n if trace_id_high is not None:\n result = bytearray(32)\n self._write_hex_long(result, 0, trace_id_high)\n self._write_hex_long(result, 16, trace_id)\n return result.decode(\"utf8\")\n\n result = bytearray(16)\n self._write_hex_long(result, 0, trace_id)\n return result.decode(\"utf8\")", "response": "Converts the provided traceId hex value with optional high bits\n to a string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert the provided unsigned long value to a hex string.", "response": "def _convert_unsigned_long_to_lower_hex(self, value):\n \"\"\"\n Converts the provided unsigned long value to a hex string.\n\n :param value: the value to convert\n :type value: unsigned long\n :returns: value as a hex string\n \"\"\"\n result = bytearray(16)\n self._write_hex_long(result, 0, value)\n return result.decode(\"utf8\")"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nwrite an unsigned long value across a byte array at the specified position.", "response": "def _write_hex_long(self, data, pos, value):\n \"\"\"\n Writes an unsigned long value across a byte array.\n\n :param data: the buffer to write the value to\n :type data: bytearray\n :param pos: the starting position\n :type pos: int\n :param value: the value to write\n :type value: unsigned long\n \"\"\"\n self._write_hex_byte(data, pos + 0, (value >> 56) & 0xff)\n self._write_hex_byte(data, pos + 2, (value >> 48) & 0xff)\n self._write_hex_byte(data, pos + 4, (value >> 40) & 0xff)\n self._write_hex_byte(data, pos + 6, (value >> 32) & 0xff)\n self._write_hex_byte(data, pos + 8, (value >> 24) & 0xff)\n self._write_hex_byte(data, pos + 10, (value >> 16) & 0xff)\n self._write_hex_byte(data, pos + 12, (value >> 8) & 0xff)\n self._write_hex_byte(data, pos + 14, (value & 0xff))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfixes up illegal February 29 and 
30 dates with the last day of February.", "response": "def date_fixup_pre_processor(transactions, tag, tag_dict, *args):\n \"\"\"\n Replace illegal February 29, 30 dates with the last day of February.\n\n German banks use a variant of the 30/360 interest rate calculation,\n where each month has always 30 days even February. Python's datetime\n module won't accept such dates.\n \"\"\"\n if tag_dict['month'] == '02':\n year = int(tag_dict['year'], 10)\n _, max_month_day = calendar.monthrange(year, 2)\n if int(tag_dict['day'], 10) > max_month_day:\n tag_dict['day'] = str(max_month_day)\n\n return tag_dict"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef mBank_set_transaction_code(transactions, tag, tag_dict, *args):\n tag_dict['transaction_code'] = int(\n tag_dict[tag.slug].split(';')[0].split(' ', 1)[0])\n\n return tag_dict", "response": "This function is used to set the transaction code for the given tag."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nparse the MT940 data and return a list of Transaction objects.", "response": "def parse(self, data):\n '''Parses mt940 data, expects a string with data\n\n Args:\n data (str): The MT940 data\n\n Returns: :py:class:`list` of :py:class:`Transaction`\n '''\n # Remove extraneous whitespace and such\n data = '\\n'.join(self.strip(data.split('\\n')))\n\n # The pattern is a bit annoying to match by regex, even with a greedy\n # match it's difficult to get both the beginning and the end so we're\n # working around it in a safer way to get everything.\n tag_re = re.compile(\n r'^:\\n?(?P<full_tag>(?P<tag>[0-9]{2}|NS)(?P<sub_tag>[A-Z])?):',\n re.MULTILINE)\n matches = list(tag_re.finditer(data))\n\n # identify valid matches\n valid_matches = list(self.sanatize_tag_id_matches(matches))\n\n for i, match in enumerate(valid_matches):\n tag_id = self.normalize_tag_id(match.group('tag'))\n\n # get tag instance corresponding to tag id\n tag =
self.tags.get(match.group('full_tag')) \\\n or self.tags[tag_id]\n\n # Nice trick to get all the text that is part of this tag, python\n # regex matches have a `end()` and `start()` to indicate the start\n # and end index of the match.\n\n if valid_matches[i + 1:]:\n tag_data = \\\n data[match.end():valid_matches[i + 1].start()].strip()\n else:\n tag_data = data[match.end():].strip()\n\n tag_dict = tag.parse(self, tag_data)\n\n # Preprocess data before creating the object\n\n for processor in self.processors.get('pre_%s' % tag.slug, []):\n tag_dict = processor(self, tag, tag_dict)\n\n result = tag(self, tag_dict)\n\n # Postprocess the object\n\n for processor in self.processors.get('post_%s' % tag.slug, []):\n result = processor(self, tag, tag_dict, result)\n\n # Creating a new transaction for :20: and :61: tags allows the\n # tags from :20: to :61: to be captured as part of the transaction.\n\n if isinstance(tag, mt940.tags.Statement):\n # Transactions only get a Transaction Reference Code ID from a\n # :61: tag which is why a new transaction is created if the\n # 'id' has a value.\n\n if not self.transactions:\n transaction = Transaction(self)\n self.transactions.append(transaction)\n\n if transaction.data.get('id'):\n transaction = Transaction(self, result)\n self.transactions.append(transaction)\n else:\n transaction.data.update(result)\n elif issubclass(tag.scope, Transaction) and self.transactions:\n # Combine multiple results together as one string, Rabobank has\n # multiple :86: tags for a single transaction\n\n for k, v in _compat.iteritems(result):\n if k in transaction.data and hasattr(v, 'strip'):\n transaction.data[k] += '\\n%s' % v.strip()\n else:\n transaction.data[k] = v\n\n elif issubclass(tag.scope, Transactions): # pragma: no branch\n self.data.update(result)\n\n return self.transactions"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse(src, encoding=None):\n '''\n Parses mt940 data and returns 
transactions object\n\n :param src: file handler to read, filename to read or raw data as string\n :return: Collection of transactions\n :rtype: Transactions\n '''\n\n def safe_is_file(filename):\n try:\n return os.path.isfile(src)\n except ValueError: # pragma: no cover\n return False\n\n if hasattr(src, 'read'): # pragma: no branch\n data = src.read()\n elif safe_is_file(src):\n with open(src, 'rb') as fh:\n data = fh.read()\n else: # pragma: no cover\n data = src\n\n if hasattr(data, 'decode'): # pragma: no branch\n exception = None\n encodings = [encoding, 'utf-8', 'cp852', 'iso8859-15', 'latin1']\n\n for encoding in encodings: # pragma: no cover\n if not encoding:\n continue\n\n try:\n data = data.decode(encoding)\n break\n except UnicodeDecodeError as e:\n exception = e\n except UnicodeEncodeError:\n break\n else:\n raise exception # pragma: no cover\n\n transactions = mt940.models.Transactions()\n transactions.parse(data)\n\n return transactions", "response": "Parses mt940 data and returns transactions object"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef join_lines(string, strip=Strip.BOTH):\n '''\n Join strings together and strip whitespace in between if needed\n '''\n lines = []\n\n for line in string.splitlines():\n if strip & Strip.RIGHT:\n line = line.rstrip()\n\n if strip & Strip.LEFT:\n line = line.lstrip()\n\n lines.append(line)\n\n return ''.join(lines)", "response": "Join strings together and strip whitespace in between if needed\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nturn a response into a properly formatted json or text object", "response": "async def json_or_text(response):\n \"\"\"Turns response into a properly formatted json or text object\"\"\"\n text = await response.text()\n if response.headers['Content-Type'] == 'application/json; charset=utf-8':\n return json.loads(text)\n return text"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 script for\nhandling the message shown when we are ratelimited", "response": "async def limited(until):\n \"\"\"Handles the message shown when we are ratelimited\"\"\"\n duration = int(round(until - time.time()))\n mins = duration / 60\n fmt = 'We have exhausted a ratelimit quota. Retrying in %.2f seconds (%.3f minutes).'\n log.warn(fmt, duration, mins)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nhandle requests to the DBL API.", "response": "async def request(self, method, url, **kwargs):\n \"\"\"Handles requests to the API\"\"\"\n rate_limiter = RateLimiter(max_calls=59, period=60, callback=limited)\n # handles ratelimits. max_calls is set to 59 because current implementation will retry in 60s after 60 calls is reached. DBL has a 1h block so obviously this doesn't work well, as it will get a 429 when 60 is reached.\n\n async with rate_limiter: # this works but doesn't 'save' over restart. need a better implementation.\n\n if not self.token:\n raise UnauthorizedDetected('UnauthorizedDetected (status code: 401): No TOKEN provided')\n\n headers = {\n 'User-Agent': self.user_agent,\n 'Content-Type': 'application/json'\n }\n\n if 'json' in kwargs:\n kwargs['data'] = to_json(kwargs.pop('json'))\n\n kwargs['headers'] = headers\n headers['Authorization'] = self.token\n\n\n for tries in range(5):\n async with self.session.request(method, url, **kwargs) as resp:\n log.debug('%s %s with %s has returned %s', method,\n url, kwargs.get('data'), resp.status)\n\n data = await json_or_text(resp)\n\n\n if 300 > resp.status >= 200:\n return data\n\n\n if resp.status == 429: # we are being ratelimited\n fmt = 'We are being rate limited. 
Retrying in %.2f seconds (%.3f minutes).'\n\n # sleep a bit\n retry_after = json.loads(resp.headers.get('Retry-After'))\n mins = retry_after / 60\n log.warning(fmt, retry_after, mins)\n\n # check if it's a global rate limit (True as only 1 ratelimit atm - /api/bots)\n is_global = True # is_global = data.get('global', False)\n if is_global:\n self._global_over.clear()\n\n await asyncio.sleep(retry_after, loop=self.loop)\n log.debug('Done sleeping for the rate limit. Retrying...')\n\n # release the global lock now that the\n # global rate limit has passed\n if is_global:\n self._global_over.set()\n log.debug('Global rate limit is now over.')\n\n continue\n\n\n if resp.status == 400:\n raise HTTPException(resp, data)\n elif resp.status == 401:\n raise Unauthorized(resp, data)\n elif resp.status == 403:\n raise Forbidden(resp, data)\n elif resp.status == 404:\n raise NotFound(resp, data)\n else:\n raise HTTPException(resp, data)\n # We've run out of retries, raise.\n raise HTTPException(resp, data)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\nasync def get_bot_info(self, bot_id):\n '''Gets the information of the given Bot ID'''\n resp = await self.request('GET', '{}/bots/{}'.format(self.BASE, bot_id))\n resp['date'] = datetime.strptime(resp['date'], '%Y-%m-%dT%H:%M:%S.%fZ')\n for k in resp:\n if resp[k] == '':\n resp[k] = None\n return resp", "response": "Gets the information of the given Bot ID"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\nasync def get_bots(self, limit, offset):\n '''Gets an object of bots on DBL'''\n if limit > 500:\n limit = 50\n return await self.request('GET', '{}/bots?limit={}&offset={}'.format(self.BASE, limit, offset))", "response": "Gets an object of bots on DBL"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef guild_count(self):\n try:\n return 
len(self.bot.guilds)\n except AttributeError:\n return len(self.bot.servers)", "response": "Gets the guild count from the Client or Bot object"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\npost the guild count to discordbots.org.", "response": "async def post_guild_count(\n self,\n shard_count: int = None,\n shard_no: int = None\n ):\n \"\"\"This function is a coroutine.\n\n Posts the guild count to discordbots.org\n\n .. _0 based indexing : https://en.wikipedia.org/wiki/Zero-based_numbering\n\n Parameters\n ==========\n\n shard_count: int[Optional]\n The total number of shards.\n shard_no: int[Optional]\n The index of the current shard. DBL uses `0 based indexing`_ for shards.\n \"\"\"\n await self.http.post_guild_count(self.bot_id, self.guild_count(), shard_count, shard_no)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\nasync def get_bots(self, limit: int = 50, offset: int = 0):\n return await self.http.get_bots(limit, offset)", "response": "This function is a coroutine. It returns a dictionary of bots and their associated data."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\nasync def get_widget_large(self, bot_id: int = None):\n if bot_id is None:\n bot_id = self.bot_id\n url = 'https://discordbots.org/api/widget/{0}.png'.format(bot_id)\n return url", "response": "This function is a coroutine. It generates the default large widget."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\nasync def generate_widget_small(\n self,\n bot_id: int = None,\n avabg: str = '2C2F33',\n lcol: str = '23272A',\n rcol: str = '2C2F33',\n ltxt: str = 'FFFFFF',\n rtxt: str = 'FFFFFF'\n ):\n \"\"\"This function is a coroutine.\n\n Generates a custom small widget. Do not add `#` to the color codes (e.g. 
#FF00FF become FF00FF).\n\n Parameters\n ==========\n\n bot_id: int\n The bot_id of the bot you wish to make a widget for.\n avabg: str\n The hex color code of the background of the avatar (if the avatar has transparency).\n lcol: str\n The hex color code of the left background color.\n rcol: str\n The hex color code of the right background color.\n ltxt: str\n The hex color code of the left text.\n rtxt: str\n The hex color code of the right text.\n\n Returns\n =======\n\n URL of the widget: str\n \"\"\"\n url = 'https://discordbots.org/api/widget/lib/{0}.png?avatarbg={1}&lefttextcolor={2}&righttextcolor={3}&leftcolor={4}&rightcolor={5}'\n if bot_id is None:\n bot_id = self.bot_id\n url = url.format(bot_id, avabg, ltxt, rtxt, lcol, rcol)\n return url", "response": "This function generates a small widget for the given user."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\nasync def close(self):\n if self._is_closed:\n return\n else:\n await self.http.close()\n self._is_closed = True", "response": "This function is a coroutine. 
Closes all connections."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nclose the connection to the system.", "response": "def close(self):\n \"\"\"Close port.\"\"\"\n os.close(self.in_d)\n os.close(self.out_d)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef decode(string):\n if not string:\n raise IncompleteData(string)\n if string[0] != 131:\n raise ValueError(\"unknown protocol version: %r\" % string[0])\n if string[1:2] == b'P':\n # compressed term\n if len(string) < 16:\n raise IncompleteData(string)\n d = decompressobj()\n term_string = d.decompress(string[6:]) + d.flush()\n uncompressed_size, = _int4_unpack(string[2:6])\n if len(term_string) != uncompressed_size:\n raise ValueError(\n \"invalid compressed tag, \"\n \"%d bytes but got %d\" % (uncompressed_size, len(term_string)))\n # tail data returned by decode_term() can be simple ignored\n term, _tail = decode_term(term_string)\n return term, d.unused_data\n return decode_term(string[1:])", "response": "Decode Erlang external term."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nencode Erlang external term.", "response": "def encode(term, compressed=False):\n \"\"\"Encode Erlang external term.\"\"\"\n encoded_term = encode_term(term)\n # False and 0 do not attempt compression.\n if compressed:\n if compressed is True:\n # default compression level of 6\n compressed = 6\n elif compressed < 0 or compressed > 9:\n raise ValueError(\"invalid compression level: %r\" % (compressed,))\n zlib_term = compress(encoded_term, compressed)\n ln = len(encoded_term)\n if len(zlib_term) + 5 <= ln:\n # Compressed term should be smaller\n return b\"\\x83P\" + _int4_pack(ln) + zlib_term\n return b\"\\x83\" + encoded_term"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_policy_events_duration(self, 
duration_sec, sampling=None, aggregations=None, scope_filter=None, event_filter=None):\n '''**Description**\n Fetch all policy events that occurred in the last duration_sec seconds. This method is used in conjunction with\n :func:`~sdcclient.SdSecureClient.get_more_policy_events` to provide paginated access to policy events.\n\n **Arguments**\n - duration_sec: Fetch all policy events that have occurred in the last *duration_sec* seconds.\n - sampling: Sample all policy events using *sampling* interval.\n - aggregations: When present it specifies how to aggregate events (sampling does not need to be specified, because when it's present it automatically means events will be aggregated). This field can either be a list of scope metrics or a list of policyEvents fields but (currently) not a mix of the two. When policy events fields are specified, only these can be used= severity, agentId, containerId, policyId, ruleType.\n - scope_filter: this is a SysdigMonitor-like filter (e.g 'container.image=ubuntu'). When provided, events are filtered by their scope, so only a subset will be returned (e.g. 'container.image=ubuntu' will provide only events that have happened on an ubuntu container).\n - event_filter: this is a SysdigMonitor-like filter (e.g. policyEvent.policyId=3). When provided, events are filtered by some of their properties. Currently the supported set of filters is policyEvent.all(which can be used just with matches, policyEvent.policyId, policyEvent.id, policyEvent.severity, policyEvent.ruleTye, policyEvent.ruleSubtype.\n\n **Success Return Value**\n An array containing:\n - A context object that should be passed to later calls to get_more_policy_events.\n - An array of policy events, in JSON format. 
See :func:`~sdcclient.SdSecureClient.get_more_policy_events`\n for details on the contents of policy events.\n\n **Example**\n `examples/get_secure_policy_events.py `_\n\n '''\n epoch = datetime.datetime.utcfromtimestamp(0)\n to_ts = (datetime.datetime.utcnow() - epoch).total_seconds() * 1000 * 1000\n from_ts = to_ts - (int(duration_sec) * 1000 * 1000)\n\n options = {\"to\": to_ts,\n \"from\": from_ts,\n \"offset\": 0,\n \"limit\": 1000,\n \"sampling\": sampling,\n \"aggregations\": aggregations,\n \"scopeFilter\": scope_filter,\n \"eventFilter\": event_filter}\n ctx = {k: v for k, v in options.items() if v is not None}\n return self._get_policy_events_int(ctx)", "response": "This method returns all policy events that occurred in the last duration_sec seconds."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_policy_events_id_range(self, id, from_sec, to_sec, sampling=None, aggregations=None, scope_filter=None, event_filter=None):\n '''**Description**\n Fetch all policy events with id that occurred in the time range [from_sec:to_sec]. This method is used in conjunction\n with :func:`~sdcclient.SdSecureClient.get_more_policy_events` to provide paginated access to policy events.\n\n **Arguments**\n - id: the id of the policy events to fetch.\n - from_sec: the start of the timerange for which to get events\n - end_sec: the end of the timerange for which to get events\n - sampling: sample all policy events using *sampling* interval.\n - scope_filter: this is a SysdigMonitor-like filter (e.g 'container.image=ubuntu'). When provided, events are filtered by their scope, so only a subset will be returned (e.g. 'container.image=ubuntu' will provide only events that have happened on an ubuntu container).\n - event_filter: this is a SysdigMonitor-like filter (e.g. policyEvent.policyId=3). When provided, events are filtered by some of their properties. 
Currently the supported set of filters is policyEvent.all(which can be used just with matches, policyEvent.policyId, policyEvent.id, policyEvent.severity, policyEvent.ruleTye, policyEvent.ruleSubtype.\n - aggregations: When present it specifies how to aggregate events (sampling does not need to be specified, because when it's present it automatically means events will be aggregated). This field can either be a list of scope metrics or a list of policyEvents fields but (currently) not a mix of the two. When policy events fields are specified, only these can be used= severity, agentId, containerId, policyId, ruleType.\n\n **Success Return Value**\n An array containing:\n - A context object that should be passed to later calls to get_more_policy_events.\n - An array of policy events, in JSON format. See :func:`~sdcclient.SdSecureClient.get_more_policy_events`\n for details on the contents of policy events.\n\n **Example**\n `examples/get_secure_policy_events.py `_\n\n '''\n options = {\"id\": id,\n \"from\": int(from_sec) * 1000000,\n \"to\": int(to_sec) * 1000000,\n \"offset\": 0,\n \"limit\": 1000,\n \"sampling\": sampling,\n \"aggregations\": aggregations,\n \"scopeFilter\": scope_filter,\n \"eventFilter\": event_filter}\n ctx = {k: v for k, v in options.items() if v is not None}\n return self._get_policy_events_int(ctx)", "response": "This method returns all policy events that occurred in the time range from_sec to_sec."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a set of default policies using the current system falco rules file.", "response": "def create_default_policies(self):\n '''**Description**\n Create a set of default policies using the current system falco rules file as a reference. For every falco rule in the system\n falco rules file, one policy will be created. The policy will take the name and description from the name and description of\n the corresponding falco rule. 
If a policy already exists with the same name, no policy is added or modified. Existing\n policies will be unchanged.\n\n **Arguments**\n - None\n\n **Success Return Value**\n JSON containing details on any new policies that were added.\n\n **Example**\n `examples/create_default_policies.py `_\n\n '''\n res = requests.post(self.url + '/api/policies/createDefault', headers=self.hdrs, verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndeleting all existing policies.", "response": "def delete_all_policies(self):\n '''**Description**\n Delete all existing policies. The falco rules file is unchanged.\n\n **Arguments**\n - None\n\n **Success Return Value**\n The string \"Policies Deleted\"\n\n **Example**\n `examples/delete_all_policies.py `_\n\n '''\n res = requests.post(self.url + '/api/policies/deleteAll', headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n return [True, \"Policies Deleted\"]"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchanges the policy evaluation order of the specified items.", "response": "def set_policy_priorities(self, priorities_json):\n '''**Description**\n Change the policy evaluation order\n\n **Arguments**\n - priorities_json: a description of the new policy order.\n\n **Success Return Value**\n A JSON object representing the updated list of policy ids.\n\n **Example**\n `examples/set_policy_order.py `_\n\n '''\n\n try:\n json.loads(priorities_json)\n except Exception as e:\n return [False, \"priorities json is not valid json: {}\".format(str(e))]\n\n res = requests.put(self.url + '/api/policies/priorities', headers=self.hdrs, data=priorities_json, verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a json description of the given policy with the given name.", 
"response": "def get_policy(self, name):\n '''**Description**\n Find the policy with name and return its json description.\n\n **Arguments**\n - name: the name of the policy to fetch\n\n **Success Return Value**\n A JSON object containing the description of the policy. If there is no policy with\n the given name, returns False.\n\n **Example**\n `examples/get_policy.py `_\n\n '''\n res = requests.get(self.url + '/api/policies', headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n policies = res.json()[\"policies\"]\n\n # Find the policy with the given name and return it.\n for policy in policies:\n if policy[\"name\"] == name:\n return [True, policy]\n\n return [False, \"No policy with name {}\".format(name)]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_policy(self, policy_json):\n '''**Description**\n Add a new policy using the provided json.\n\n **Arguments**\n - policy_json: a description of the new policy\n\n **Success Return Value**\n The string \"OK\"\n\n **Example**\n `examples/add_policy.py `_\n\n '''\n\n try:\n policy_obj = json.loads(policy_json)\n except Exception as e:\n return [False, \"policy json is not valid json: {}\".format(str(e))]\n\n body = {\"policy\": policy_obj}\n res = requests.post(self.url + '/api/policies', headers=self.hdrs, data=json.dumps(body), verify=self.ssl_verify)\n return self._request_result(res)", "response": "Description ** add_policy Add a new policy using the provided json."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndelete the policy with the given name.", "response": "def delete_policy_name(self, name):\n '''**Description**\n Delete the policy with the given name.\n\n **Arguments**\n - name: the name of the policy to delete\n\n **Success Return Value**\n The JSON object representing the now-deleted policy.\n\n **Example**\n 
`examples/delete_policy.py `_\n\n '''\n res = requests.get(self.url + '/api/policies', headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n # Find the policy with the given name and delete it\n for policy in res.json()[\"policies\"]:\n if policy[\"name\"] == name:\n return self.delete_policy_id(policy[\"id\"])\n\n return [False, \"No policy with name {}\".format(name)]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete the policy with the given id", "response": "def delete_policy_id(self, id):\n '''**Description**\n Delete the policy with the given id\n\n **Arguments**\n - id: the id of the policy to delete\n\n **Success Return Value**\n The JSON object representing the now-deleted policy.\n\n **Example**\n `examples/delete_policy.py `_\n\n '''\n res = requests.delete(self.url + '/api/policies/{}'.format(id), headers=self.hdrs, verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a new compliance task to the agent.", "response": "def add_compliance_task(self, name, module_name='docker-bench-security', schedule='06:00:00Z/PT12H', scope=None, enabled=True):\n '''**Description**\n Add a new compliance task.\n\n **Arguments**\n - name: The name of the task e.g. 'Check Docker Compliance'.\n - module_name: The name of the module that implements this task. Separate from task name in case you want to use the same module to run separate tasks with different scopes or schedules. [ 'docker-bench-security', 'kube-bench' ]\n - schedule: The frequency at which this task should run. 
Expressed as an `ISO 8601 Duration `_\n - scope: The agent will only run the task on hosts matching this scope or on hosts where containers match this scope.\n - enabled: Whether this task should actually run as defined by its schedule.\n\n **Success Return Value**\n A JSON representation of the compliance task.\n '''\n task = {\n \"id\": None,\n \"name\": name,\n \"moduleName\": module_name,\n \"enabled\": enabled,\n \"scope\": scope,\n \"schedule\": schedule\n }\n res = requests.post(self.url + '/api/complianceTasks', data=json.dumps(task), headers=self.hdrs, verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting a compliance task.", "response": "def get_compliance_task(self, id):\n '''**Description**\n Get a compliance task.\n\n **Arguments**\n - id: the id of the compliance task to get.\n\n **Success Return Value**\n A JSON representation of the compliance task.\n '''\n res = requests.get(self.url + '/api/complianceTasks/{}'.format(id), headers=self.hdrs, verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating a compliance task.", "response": "def update_compliance_task(self, id, name=None, module_name=None, schedule=None, scope=None, enabled=None):\n '''**Description**\n Update an existing compliance task.\n\n **Arguments**\n - id: the id of the compliance task to be updated.\n - name: The name of the task e.g. 'Check Docker Compliance'.\n - module_name: The name of the module that implements this task. Separate from task name in case you want to use the same module to run separate tasks with different scopes or schedules. [ 'docker-bench-security', 'kube-bench' ]\n - schedule: The frequency at which this task should run. 
Expressed as an `ISO 8601 Duration `_\n - scope: The agent will only run the task on hosts matching this scope or on hosts where containers match this scope.\n - enabled: Whether this task should actually run as defined by its schedule.\n\n **Success Return Value**\n A JSON representation of the compliance task.\n '''\n ok, res = self.get_compliance_task(id)\n if not ok:\n return ok, res\n\n task = res\n options = {\n 'name': name,\n 'moduleName': module_name,\n 'schedule': schedule,\n 'scope': scope,\n 'enabled': enabled\n }\n task.update({k: v for k, v in options.items() if v is not None})\n res = requests.put(self.url + '/api/complianceTasks/{}'.format(id), data=json.dumps(task), headers=self.hdrs, verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_commands_audit(self, from_sec=None, to_sec=None, scope_filter=None, command_filter=None, limit=100, offset=0, metrics=[]):\n '''**Description**\n List the commands audit.\n\n **Arguments**\n - from_sec: the start of the time range for which to get commands audit.\n - to_sec: the end of the time range for which to get commands audit.\n - scope_filter: this is a SysdigMonitor-like filter (e.g. 'container.image=ubuntu'). When provided, commands are filtered by their scope, so only a subset will be returned (e.g. 'container.image=ubuntu' will provide only commands that have happened on an ubuntu container).\n - command_filter: this is a SysdigMonitor-like filter (e.g. command.comm=\"touch\"). When provided, commands are filtered by some of their properties. 
Currently the supported set of filters is command.comm, command.cwd, command.pid, command.ppid, command.uid, command.loginshell.id, command.loginshell.distance\n - limit: Maximum number of commands in the response.\n - metrics: A list of metric values to include in the return.\n\n **Success Return Value**\n A JSON representation of the commands audit.\n '''\n if to_sec is None:\n to_sec = time.time()\n if from_sec is None:\n from_sec = to_sec - (24 * 60 * 60) # 1 day\n\n url = \"{url}/api/commands?from={frm}&to={to}&offset={offset}&limit={limit}{scope}{commandFilter}{metrics}\".format(\n url=self.url,\n offset=offset,\n limit=limit,\n frm=int(from_sec * 10**6),\n to=int(to_sec * 10**6),\n scope=\"&scopeFilter=\" + scope_filter if scope_filter else \"\",\n commandFilter=\"&commandFilter=\" + command_filter if command_filter else \"\",\n metrics=\"&metrics=\" + json.dumps(metrics) if metrics else \"\")\n res = requests.get(url, headers=self.hdrs, verify=self.ssl_verify)\n return self._request_result(res)", "response": "This method returns a list of commands audited by the SysdigMonitor API."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget a command audit.", "response": "def get_command_audit(self, id, metrics=[]):\n '''**Description**\n Get a command audit.\n\n **Arguments**\n - id: the id of the command audit to get.\n\n **Success Return Value**\n A JSON representation of the command audit.\n '''\n url = \"{url}/api/commands/{id}?from=0&to={to}{metrics}\".format(\n url=self.url,\n id=id,\n to=int(time.time() * 10**6),\n metrics=\"&metrics=\" + json.dumps(metrics) if metrics else \"\")\n res = requests.get(url, headers=self.hdrs, verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate resolution status of an alert notification.", "response": "def update_notification_resolution(self, notification, resolved):\n '''**Description**\n 
Updates the resolution status of an alert notification.\n\n **Arguments**\n - **notification**: notification object as returned by :func:`~SdcClient.get_notifications`.\n - **resolved**: new resolution status. Supported values are ``True`` and ``False``.\n\n **Success Return Value**\n The updated notification.\n\n **Example**\n `examples/resolve_alert_notifications.py `_\n '''\n if 'id' not in notification:\n return [False, 'Invalid notification format']\n\n notification['resolved'] = resolved\n data = {'notification': notification}\n\n res = requests.put(self.url + '/api/notifications/' + str(notification['id']), headers=self.hdrs, data=json.dumps(data), verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_alert(self, name=None, description=None, severity=None, for_atleast_s=None, condition=None,\n segmentby=[], segment_condition='ANY', user_filter='', notify=None, enabled=True,\n annotations={}, alert_obj=None):\n '''**Description**\n Create a threshold-based alert.\n\n **Arguments**\n - **name**: the alert name. This will appear in the Sysdig Monitor UI and in notification emails.\n - **description**: the alert description. This will appear in the Sysdig Monitor UI and in notification emails.\n - **severity**: syslog-encoded alert severity. This is a number from 0 to 7 where 0 means 'emergency' and 7 is 'debug'.\n - **for_atleast_s**: the number of consecutive seconds the condition must be satisfied for the alert to fire.\n - **condition**: the alert condition, as described here https://app.sysdigcloud.com/apidocs/#!/Alerts/post_api_alerts\n - **segmentby**: a list of Sysdig Monitor segmentation criteria that can be used to apply the alert to multiple entities. 
For example, segmenting a CPU alert by ['host.mac', 'proc.name'] allows you to apply it to any process in any machine.\n - **segment_condition**: When *segmentby* is specified (and therefore the alert will cover multiple entities) this field is used to determine when it will fire. In particular, you have two options for *segment_condition*: **ANY** (the alert will fire when at least one of the monitored entities satisfies the condition) and **ALL** (the alert will fire when all of the monitored entities satisfy the condition).\n - **user_filter**: a boolean expression combining Sysdig Monitor segmentation criteria that makes it possible to reduce the scope of the alert. For example: *kubernetes.namespace.name='production' and container.image='nginx'*.\n - **notify**: the type of notification you want this alert to generate. Options are *EMAIL*, *SNS*, *PAGER_DUTY*, *SYSDIG_DUMP*.\n - **enabled**: if True, the alert will be enabled when created.\n - **annotations**: an optional dictionary of custom properties that you can associate to this alert for automation or management reasons.\n - **alert_obj**: an optional fully-formed Alert object of the format returned in an \"alerts\" list by :func:`~SdcClient.get_alerts`. This is an alternative to creating the Alert using the individual parameters listed above.\n\n **Success Return Value**\n A dictionary describing the newly created alert, with the format described at `this link `__\n\n **Example**\n `examples/create_alert.py `_\n '''\n #\n # Get the list of alerts from the server\n #\n res = requests.get(self.url + '/api/alerts', headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n res.json()\n\n if alert_obj is None:\n if None in (name, description, severity, for_atleast_s, condition):\n return [False, 'Must specify a full Alert object or all parameters: name, description, severity, for_atleast_s, condition']\n else:\n #\n # Populate the alert information\n #\n 
alert_json = {\n 'alert': {\n 'type': 'MANUAL',\n 'name': name,\n 'description': description,\n 'enabled': enabled,\n 'severity': severity,\n 'timespan': for_atleast_s * 1000000,\n 'condition': condition,\n 'filter': user_filter\n }\n }\n\n if segmentby:\n alert_json['alert']['segmentBy'] = segmentby\n alert_json['alert']['segmentCondition'] = {'type': segment_condition}\n\n if annotations:\n alert_json['alert']['annotations'] = annotations\n\n if notify is not None:\n alert_json['alert']['notificationChannelIds'] = notify\n else:\n # The REST API enforces \"Alert ID and version must be null\", so remove them if present,\n # since these would have been there in a dump from the list_alerts.py example.\n alert_obj.pop('id', None)\n alert_obj.pop('version', None)\n alert_json = {\n 'alert': alert_obj\n }\n\n #\n # Create the new alert\n #\n res = requests.post(self.url + '/api/alerts', headers=self.hdrs, data=json.dumps(alert_json), verify=self.ssl_verify)\n return self._request_result(res)", "response": "Create an alert in Sysdig Cloud."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_alert(self, alert):\n '''**Description**\n Update a modified threshold-based alert.\n\n **Arguments**\n - **alert**: one modified alert object of the same format as those in the list returned by :func:`~SdcClient.get_alerts`.\n\n **Success Return Value**\n The updated alert.\n\n **Example**\n `examples/update_alert.py `_\n '''\n if 'id' not in alert:\n return [False, \"Invalid alert format\"]\n\n res = requests.put(self.url + '/api/alerts/' + str(alert['id']), headers=self.hdrs, data=json.dumps({\"alert\": alert}), verify=self.ssl_verify)\n return self._request_result(res)", "response": "**Description**\n Update a modified threshold-based alert.\n\n **Arguments**\n - **alert**: one modified alert object of the same format as those in the list returned by 
:func:`~SdcClient.get_alerts`.\n\n **Success Return Value**\n The updated alert.\n\n **Example**\n `examples/update_alert.py `_"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef delete_alert(self, alert):\n '''**Description**\n Deletes an alert.\n\n **Arguments**\n - **alert**: the alert dictionary as returned by :func:`~SdcClient.get_alerts`.\n\n **Success Return Value**\n ``None``.\n\n **Example**\n `examples/delete_alert.py `_\n '''\n if 'id' not in alert:\n return [False, 'Invalid alert format']\n\n res = requests.delete(self.url + '/api/alerts/' + str(alert['id']), headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n return [True, None]", "response": "**Description**\n Deletes an alert.\n\n **Arguments**\n - **alert**: the alert dictionary as returned by :func:`~SdcClient.get_alerts`.\n\n **Success Return Value**\n ``None``.\n\n **Example**\n `examples/delete_alert.py `_"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the current group hierarchy of the user s explore grouping criteria.", "response": "def get_explore_grouping_hierarchy(self):\n '''**Description**\n Return the user's current grouping hierarchy as visible in the Explore tab of Sysdig Monitor.\n\n **Success Return Value**\n A list containing the list of the user's Explore grouping criteria.\n\n **Example**\n `examples/print_explore_grouping.py `_\n '''\n res = requests.get(self.url + '/api/groupConfigurations', headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n data = res.json()\n\n if 'groupConfigurations' not in data:\n return [False, 'corrupted groupConfigurations API response']\n\n gconfs = data['groupConfigurations']\n\n for gconf in gconfs:\n if gconf['id'] == 'explore':\n res = []\n items = gconf['groups'][0]['groupBy']\n\n for item in items:\n 
res.append(item['metric'])\n\n return [True, res]\n\n return [False, 'corrupted groupConfigurations API response, missing \"explore\" entry']"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set_explore_grouping_hierarchy(self, new_hierarchy):\n '''**Description**\n Changes the grouping hierarchy in the Explore panel of the current user.\n\n **Arguments**\n - **new_hierarchy**: a list of sysdig segmentation metrics indicating the new grouping hierarchy.\n '''\n body = {\n 'id': 'explore',\n 'groups': [{'groupBy': []}]\n }\n\n for item in new_hierarchy:\n body['groups'][0]['groupBy'].append({'metric': item})\n\n res = requests.put(self.url + '/api/groupConfigurations/explore', headers=self.hdrs,\n data=json.dumps(body), verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n else:\n return [True, None]", "response": "This method is used to change the grouping hierarchy of the current user."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the list of dashboards available under the user account.", "response": "def get_dashboards(self):\n '''**Description**\n Return the list of dashboards available under the given user account. This includes the dashboards created by the user and the ones shared with her by other users.\n\n **Success Return Value**\n A dictionary containing the list of available sampling intervals.\n\n **Example**\n `examples/list_dashboards.py `_\n '''\n res = requests.get(self.url + self._dashboards_api_endpoint, headers=self.hdrs, verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef find_dashboard_by(self, name=None):\n '''**Description**\n Finds dashboards with the specified name. 
You can then delete the dashboard (with :func:`~SdcClient.delete_dashboard`) or edit panels (with :func:`~SdcClient.add_dashboard_panel` and :func:`~SdcClient.remove_dashboard_panel`).\n\n **Arguments**\n - **name**: the name of the dashboards to find.\n\n **Success Return Value**\n A list of dictionaries of dashboards matching the specified name.\n\n **Example**\n `examples/dashboard.py `_\n '''\n res = self.get_dashboards()\n if res[0] is False:\n return res\n else:\n def filter_fn(configuration):\n return configuration['name'] == name\n\n def create_item(configuration):\n return {'dashboard': configuration}\n\n dashboards = list(map(create_item, list(filter(filter_fn, res[1]['dashboards']))))\n return [True, dashboards]", "response": "Find dashboards with the specified name."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a new dashboard from a template.", "response": "def create_dashboard_from_view(self, newdashname, viewname, filter, shared=False, public=False):\n '''**Description**\n Create a new dashboard using one of the Sysdig Monitor views as a template. You will be able to define the scope of the new dashboard.\n\n **Arguments**\n - **newdashname**: the name of the dashboard that will be created.\n - **viewname**: the name of the view to use as the template for the new dashboard. This corresponds to the name that the view has in the Explore page.\n - **filter**: a boolean expression combining Sysdig Monitor segmentation criteria that defines what the new dashboard will be applied to. 
For example: *kubernetes.namespace.name='production' and container.image='nginx'*.\n - **shared**: if set to True, the new dashboard will be a shared one.\n - **public**: if set to True, the new dashboard will be shared with a public token.\n\n **Success Return Value**\n A dictionary showing the details of the new dashboard.\n\n **Example**\n `examples/create_dashboard.py `_\n '''\n #\n # Find our template view\n #\n gvres = self.get_view(viewname)\n if gvres[0] is False:\n return gvres\n\n view = gvres[1]['defaultDashboard']\n\n view['timeMode'] = {'mode': 1}\n view['time'] = {'last': 2 * 60 * 60 * 1000000, 'sampling': 2 * 60 * 60 * 1000000}\n\n #\n # Create the new dashboard\n #\n return self.create_dashboard_from_template(newdashname, view, filter, shared, public)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_dashboard_from_dashboard(self, newdashname, templatename, filter, shared=False, public=False):\n '''**Description**\n Create a new dashboard using one of the existing dashboards as a template. You will be able to define the scope of the new dashboard.\n\n **Arguments**\n - **newdashname**: the name of the dashboard that will be created.\n - **templatename**: the name of the dashboard to use as the template, as it appears in the Sysdig Monitor dashboard page.\n - **filter**: a boolean expression combining Sysdig Monitor segmentation criteria that defines what the new dashboard will be applied to. 
For example: *kubernetes.namespace.name='production' and container.image='nginx'*.\n - **shared**: if set to True, the new dashboard will be a shared one.\n - **public**: if set to True, the new dashboard will be shared with a public token.\n\n **Success Return Value**\n A dictionary showing the details of the new dashboard.\n\n **Example**\n `examples/create_dashboard.py `_\n '''\n #\n # Get the list of dashboards from the server\n #\n res = requests.get(self.url + self._dashboards_api_endpoint, headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n j = res.json()\n\n #\n # Find our template dashboard\n #\n dboard = None\n\n for db in j['dashboards']:\n if db['name'] == templatename:\n dboard = db\n break\n\n if dboard is None:\n self.lasterr = 'can\\'t find dashboard ' + templatename + ' to use as a template'\n return [False, self.lasterr]\n\n #\n # Create the dashboard\n #\n return self.create_dashboard_from_template(newdashname, dboard, filter, shared, public)", "response": "This method creates a new dashboard using a template."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_dashboard_from_file(self, dashboard_name, filename, filter, shared=False, public=False):\n '''\n **Description**\n Create a new dashboard using a dashboard template saved to disk. See :func:`~SdcClient.save_dashboard_to_file` to produce such a file (useful to create and restore backups).\n\n The file can contain the following JSON formats:\n 1. dashboard object in the format of an array element returned by :func:`~SdcClient.get_dashboards`\n 2. JSON object with the following properties:\n * version: dashboards API version (e.g. 
'v2')\n * dashboard: dashboard object in the format of an array element returned by :func:`~SdcClient.get_dashboards`\n\n **Arguments**\n - **dashboard_name**: the name of the dashboard that will be created.\n - **filename**: name of a file containing a JSON object\n - **filter**: a boolean expression combining Sysdig Monitor segmentation criteria that defines what the new dashboard will be applied to. For example: *kubernetes.namespace.name='production' and container.image='nginx'*.\n - **shared**: if set to True, the new dashboard will be a shared one.\n - **public**: if set to True, the new dashboard will be shared with a public token.\n\n **Success Return Value**\n A dictionary showing the details of the new dashboard.\n\n **Example**\n `examples/dashboard_save_load.py `_\n '''\n #\n # Load the Dashboard\n #\n with open(filename) as data_file:\n loaded_object = json.load(data_file)\n\n #\n # Handle old files\n #\n if 'dashboard' not in loaded_object:\n loaded_object = {\n 'version': 'v1',\n 'dashboard': loaded_object\n }\n\n dashboard = loaded_object['dashboard']\n\n if loaded_object['version'] != self._dashboards_api_version:\n #\n # Convert the dashboard (if possible)\n #\n conversion_result, dashboard = self._convert_dashboard_to_current_version(dashboard, loaded_object['version'])\n\n if conversion_result is False:\n return conversion_result, dashboard\n\n #\n # Create the new dashboard\n #\n return self.create_dashboard_from_template(dashboard_name, dashboard, filter, shared, public)", "response": "Create a new dashboard from a file."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsaving a dashboard to a file.", "response": "def save_dashboard_to_file(self, dashboard, filename):\n '''\n **Description**\n Save a dashboard to disk. 
See :func:`~SdcClient.create_dashboard_from_file` to use the file to create a dashboard (useful to create and restore backups).\n\n The file will contain a JSON object with the following properties:\n * version: dashboards API version (e.g. 'v2')\n * dashboard: dashboard object in the format of an array element returned by :func:`~SdcClient.get_dashboards`\n\n **Arguments**\n - **dashboard**: dashboard object in the format of an array element returned by :func:`~SdcClient.get_dashboards`\n - **filename**: name of a file that will contain a JSON object\n\n **Example**\n `examples/dashboard_save_load.py `_\n '''\n with open(filename, 'w') as outf:\n json.dump({\n 'version': self._dashboards_api_version,\n 'dashboard': dashboard\n }, outf)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete_dashboard(self, dashboard):\n '''**Description**\n Deletes a dashboard.\n\n **Arguments**\n - **dashboard**: the dashboard object as returned by :func:`~SdcClient.get_dashboards`.\n\n **Success Return Value**\n `None`.\n\n **Example**\n `examples/delete_dashboard.py `_\n '''\n if 'id' not in dashboard:\n return [False, \"Invalid dashboard format\"]\n\n res = requests.delete(self.url + self._dashboards_api_endpoint + '/' + str(dashboard['id']), headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n return [True, None]", "response": "**Description**\n Deletes a dashboard.\n\n **Arguments**\n - **dashboard**: the dashboard object as returned by :func:`~SdcClient.get_dashboards`.\n\n **Success Return Value**\n `None`.\n\n **Example**\n `examples/delete_dashboard.py `_"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_notification_ids(self, channels=None):\n '''**Description**\n Get an array of all configured Notification Channel IDs, or a filtered subset of them.\n\n **Arguments**\n - **channels**: 
an optional array of dictionaries to limit the set of Notification Channel IDs returned. If not specified, IDs for all configured Notification Channels are returned. Each dictionary contains a ``type`` field that can be one of the available types of Notification Channel (``EMAIL``, ``SNS``, ``PAGER_DUTY``, ``SLACK``, ``OPSGENIE``, ``VICTOROPS``, ``WEBHOOK``) as well as additional elements specific to each channel type.\n\n **Success Return Value**\n An array of Notification Channel IDs (integers).\n\n **Examples**\n - `examples/create_alert.py `_\n - `examples/restore_alerts.py `_\n '''\n\n res = requests.get(self.url + '/api/notificationChannels', headers=self.hdrs, verify=self.ssl_verify)\n\n if not self._checkResponse(res):\n return False, self.lasterr\n\n ids = []\n\n # If no array of channel types/names was provided to filter by,\n # just return them all.\n if channels is None:\n for ch in res.json()[\"notificationChannels\"]:\n ids.append(ch['id'])\n return [True, ids]\n\n # Return the filtered set of channels based on the provided types/names array.\n # Should try and improve this M * N lookup\n for c in channels:\n found = False\n for ch in res.json()[\"notificationChannels\"]:\n if c['type'] == ch['type']:\n if c['type'] == 'SNS':\n opt = ch['options']\n if set(opt['snsTopicARNs']) == set(c['snsTopicARNs']):\n found = True\n ids.append(ch['id'])\n elif c['type'] == 'EMAIL':\n opt = ch['options']\n if 'emailRecipients' in c:\n if set(c['emailRecipients']) == set(opt['emailRecipients']):\n found = True\n ids.append(ch['id'])\n elif 'name' in c:\n if c['name'] == ch.get('name'):\n found = True\n ids.append(ch['id'])\n elif c['type'] == 'PAGER_DUTY':\n opt = ch['options']\n if opt['account'] == c['account'] and opt['serviceName'] == c['serviceName']:\n found = True\n ids.append(ch['id'])\n elif c['type'] == 'SLACK':\n opt = ch['options']\n if 'channel' in opt and opt['channel'] == c['channel']:\n found = True\n ids.append(ch['id'])\n elif c['type'] == 
'OPSGENIE':\n if 'name' in c:\n if c['name'] == ch.get('name'):\n found = True\n ids.append(ch['id'])\n elif c['type'] == 'VICTOROPS':\n if 'name' in c:\n if c['name'] == ch.get('name'):\n found = True\n ids.append(ch['id'])\n elif c['type'] == 'WEBHOOK':\n if 'name' in c:\n if c['name'] == ch.get('name'):\n found = True\n ids.append(ch['id'])\n if not found:\n return False, \"Channel not found: \" + str(c)\n\n return True, ids", "response": "Get an array of all configured Notification Channel IDs or a filtered subset of them."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef post_event(self, name, description=None, severity=None, event_filter=None, tags=None):\n '''**Description**\n Send an event to Sysdig Monitor. The events you post are available in the Events tab in the Sysdig Monitor UI and can be overlaid on charts.\n\n **Arguments**\n - **name**: the name of the new event.\n - **description**: a longer description offering detailed information about the event.\n - **severity**: syslog-style severity from 0 (high) to 7 (low).\n - **event_filter**: metadata, in Sysdig Monitor format, of nodes to associate with the event, e.g. ``host.hostName = 'ip-10-1-1-1' and container.name = 'foo'``.\n - **tags**: a list of key-value dictionaries that can be used to tag the event. 
Can be used for filtering/segmenting purposes in Sysdig Monitor.\n\n **Success Return Value**\n A dictionary describing the new event.\n\n **Examples**\n - `examples/post_event_simple.py `_\n - `examples/post_event.py `_\n '''\n options = {\n 'name': name,\n 'description': description,\n 'severity': severity,\n 'filter': event_filter,\n 'tags': tags\n }\n edata = {\n 'event': {k: v for k, v in options.items() if v is not None}\n }\n res = requests.post(self.url + '/api/events/', headers=self.hdrs, data=json.dumps(edata), verify=self.ssl_verify)\n return self._request_result(res)", "response": "Send an event to Sysdig Monitor."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_event(self, event):\n '''**Description**\n Deletes an event.\n\n **Arguments**\n - **event**: the event object as returned by :func:`~SdcClient.get_events`.\n\n **Success Return Value**\n `None`.\n\n **Example**\n `examples/delete_event.py `_\n '''\n if 'id' not in event:\n return [False, \"Invalid event format\"]\n\n res = requests.delete(self.url + '/api/events/' + str(event['id']), headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n return [True, None]", "response": "**Description**\n Deletes an event.\n\n **Arguments**\n - **event**: the event object as returned by :func:`~SdcClient.get_events`.\n\n **Success Return Value**\n `None`.\n\n **Example**\n `examples/delete_event.py `_"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_data(self, metrics, start_ts, end_ts=0, sampling_s=0,\n filter='', datasource_type='host', paging=None):\n '''**Description**\n Export metric data (both time-series and table-based).\n\n **Arguments**\n - **metrics**: a list of dictionaries, specifying the metrics and grouping keys that the query will return. 
A metric is any of the entries that can be found in the *Metrics* section of the Explore page in Sysdig Monitor. Metric entries require an *aggregations* section specifying how to aggregate the metric across time and containers/hosts. A grouping key is any of the entries that can be found in the *Show* or *Segment By* sections of the Explore page in Sysdig Monitor. These entries are used to apply single or hierarchical segmentation to the returned data and don't require the aggregations section. Refer to the Example link below for ready-to-use code snippets.\n - **start_ts**: the UTC time (in seconds) of the beginning of the data window. A negative value can be optionally used to indicate a relative time in the past from now. For example, -3600 means \"one hour ago\".\n - **end_ts**: the UTC time (in seconds) of the end of the data window, or 0 to indicate \"now\". A negative value can also be optionally used to indicate a relative time in the past from now. For example, -3600 means \"one hour ago\".\n - **sampling_s**: the duration of the samples that will be returned. 0 means that the whole data will be returned as a single sample.\n - **filter**: a boolean expression combining Sysdig Monitor segmentation criteria that defines what the query will be applied to. For example: *kubernetes.namespace.name='production' and container.image='nginx'*.\n - **datasource_type**: specify the metric source for the request, can be ``container`` or ``host``. Most metrics, for example ``cpu.used.percent`` or ``memory.bytes.used``, are reported by both hosts and containers. By default, host metrics are used, but if the request contains a container-specific grouping key in the metric list/filter (e.g. ``container.name``), then the container source is used. In cases where grouping keys are missing or apply to both hosts and containers (e.g. 
``tag.Name``), *datasource_type* can be explicitly set to avoid any ambiguity and allow the user to select precisely what kind of data should be used for the request. `examples/get_data_datasource.py `_ contains a few examples that should clarify the use of this argument.\n - **paging**: if segmentation of the query generates values for several different entities (e.g. containers/hosts), this parameter specifies which to include in the returned result. It's specified as a dictionary of inclusive values for ``from`` and ``to`` with the default being ``{ \"from\": 0, \"to\": 9 }``, which will return values for the \"top 10\" entities. The meaning of \"top\" is query-dependent, based on points having been sorted via the specified group aggregation, with the results sorted in ascending order if the group aggregation is ``min`` or ``none``, and descending order otherwise.\n\n **Success Return Value**\n A dictionary with the requested data. Data is organized in a list of time samples, each of which includes a UTC timestamp and a list of values, whose content and order reflect what was specified in the *metrics* argument.\n\n **Examples**\n - `examples/get_data_simple.py `_\n - `examples/get_data_advanced.py `_\n - `examples/list_hosts.py `_\n - `examples/get_data_datasource.py `_\n '''\n reqbody = {\n 'metrics': metrics,\n 'dataSourceType': datasource_type,\n }\n\n if start_ts < 0:\n reqbody['last'] = -start_ts\n elif start_ts == 0:\n return [False, \"start_ts cannot be 0\"]\n else:\n reqbody['start'] = start_ts\n reqbody['end'] = end_ts\n\n if filter != '':\n reqbody['filter'] = filter\n\n if paging is not None:\n reqbody['paging'] = paging\n\n if sampling_s != 0:\n reqbody['sampling'] = sampling_s\n\n res = requests.post(self.url + '/api/data/', headers=self.hdrs, data=json.dumps(reqbody), verify=self.ssl_verify)\n return self._request_result(res)", "response": "Returns the exported data for the given metric list and grouping keys."} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 function for\ngetting the updated state of a sysdig capture.", "response": "def poll_sysdig_capture(self, capture):\n '''**Description**\n Fetch the updated state of a sysdig capture. Can be used to poll the status of a capture that has been previously created and started with :func:`~SdcClient.create_sysdig_capture`.\n\n **Arguments**\n - **capture**: the capture object as returned by :func:`~SdcClient.get_sysdig_captures` or :func:`~SdcClient.create_sysdig_capture`.\n\n **Success Return Value**\n A dictionary showing the updated details of the capture. Use the ``status`` field to check the progress of a capture.\n\n **Example**\n `examples/create_sysdig_capture.py `_\n '''\n if 'id' not in capture:\n return [False, 'Invalid capture format']\n\n url = '{url}/api/sysdig/{id}?source={source}'.format(\n url=self.url, id=capture['id'], source=self.product)\n res = requests.get(url, headers=self.hdrs, verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef download_sysdig_capture(self, capture_id):\n '''**Description**\n Download a sysdig capture by id.\n\n **Arguments**\n - **capture_id**: the capture id to download.\n\n **Success Return Value**\n The bytes of the scap\n '''\n url = '{url}/api/sysdig/{id}/download?_product={product}'.format(\n url=self.url, id=capture_id, product=self.product)\n res = requests.get(url, headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return False, self.lasterr\n\n return True, res.content", "response": "This method downloads a sysdig capture by id."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_user_invite(self, user_email, first_name=None, last_name=None, system_role=None):\n '''**Description**\n Invites a new user to use Sysdig Monitor. 
This should result in an email notification to the specified address.\n\n **Arguments**\n - **user_email**: the email address of the user that will be invited to use Sysdig Monitor\n - **first_name**: the first name of the user being invited\n - **last_name**: the last name of the user being invited\n - **system_role**: system-wide privilege level for this user regardless of team. specify 'ROLE_CUSTOMER' to create an Admin. if not specified, default is a non-Admin ('ROLE_USER').\n\n **Success Return Value**\n The newly created user.\n\n **Examples**\n - `examples/user_team_mgmt.py `_\n - `examples/user_team_mgmt_extended.py `_\n\n '''\n # Look up the list of users to see if this exists, do not create if one exists\n res = requests.get(self.url + '/api/users', headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n data = res.json()\n for user in data['users']:\n if user['username'] == user_email:\n return [False, 'user ' + user_email + ' already exists']\n\n # Create the user\n options = {'username': user_email,\n 'firstName': first_name,\n 'lastName': last_name,\n 'systemRole': system_role}\n user_json = {k: v for k, v in options.items() if v is not None}\n\n res = requests.post(self.url + '/api/users', headers=self.hdrs, data=json.dumps(user_json), verify=self.ssl_verify)\n return self._request_result(res)", "response": "This method creates a new user in Sysdig Monitor."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the team with the specified name.", "response": "def get_team(self, name):\n '''**Description**\n Return the team with the specified team name, if it is present.\n\n **Arguments**\n - **name**: the name of the team to return\n\n **Success Return Value**\n The requested team.\n\n **Example**\n `examples/user_team_mgmt.py `_\n '''\n res = self.get_teams(name)\n if res[0] == False:\n return res\n for t in res[1]:\n if t['name'] == name:\n 
return [True, t]\n return [False, 'Could not find team']"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a new team with the specified name and memberships.", "response": "def create_team(self, name, memberships=None, filter='', description='', show='host', theme='#7BB0B2',\n perm_capture=False, perm_custom_events=False, perm_aws_data=False):\n '''\n **Description**\n Creates a new team\n\n **Arguments**\n - **name**: the name of the team to create.\n - **memberships**: dictionary of (user-name, team-role) pairs that should describe new memberships of the team.\n - **filter**: the scope that this team is able to access within Sysdig Monitor.\n - **description**: describes the team that will be created.\n - **show**: possible values are *host*, *container*.\n - **theme**: the color theme that Sysdig Monitor will use when displaying the team.\n - **perm_capture**: if True, this team will be allowed to take sysdig captures.\n - **perm_custom_events**: if True, this team will be allowed to view all custom events from every user and agent.\n - **perm_aws_data**: if True, this team will have access to all AWS metrics and tags, regardless of the team's scope.\n\n **Success Return Value**\n The newly created team.\n\n **Example**\n `examples/user_team_mgmt.py `_\n '''\n reqbody = {\n 'name': name,\n 'description': description,\n 'theme': theme,\n 'show': show,\n 'canUseSysdigCapture': perm_capture,\n 'canUseCustomEvents': perm_custom_events,\n 'canUseAwsMetrics': perm_aws_data,\n }\n\n # Map user-names to IDs\n if memberships != None and len(memberships) != 0:\n res = self._get_user_id_dict(list(memberships.keys()))\n if res[0] == False:\n return [False, 'Could not fetch IDs for user names']\n reqbody['userRoles'] = [\n {\n 'userId': user_id,\n 'role': memberships[user_name]\n }\n for (user_name, user_id) in res[1].items()\n ]\n else:\n reqbody['users'] = []\n\n if filter != '':\n reqbody['filter'] = filter\n\n res = 
requests.post(self.url + '/api/teams', headers=self.hdrs, data=json.dumps(reqbody), verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nedits an existing team.", "response": "def edit_team(self, name, memberships=None, filter=None, description=None, show=None, theme=None,\n perm_capture=None, perm_custom_events=None, perm_aws_data=None):\n '''\n **Description**\n Edits an existing team. All arguments are optional. Team settings for any arguments unspecified will remain at their current settings.\n\n **Arguments**\n - **name**: the name of the team to edit.\n - **memberships**: dictionary of (user-name, team-role) pairs that should describe new memberships of the team.\n - **filter**: the scope that this team is able to access within Sysdig Monitor.\n - **description**: describes the team that will be created.\n - **show**: possible values are *host*, *container*.\n - **theme**: the color theme that Sysdig Monitor will use when displaying the team.\n - **perm_capture**: if True, this team will be allowed to take sysdig captures.\n - **perm_custom_events**: if True, this team will be allowed to view all custom events from every user and agent.\n - **perm_aws_data**: if True, this team will have access to all AWS metrics and tags, regardless of the team's scope.\n\n **Success Return Value**\n The edited team.\n\n **Example**\n `examples/user_team_mgmt.py `_\n '''\n res = self.get_team(name)\n if res[0] == False:\n return res\n\n t = res[1]\n reqbody = {\n 'name': name,\n 'theme': theme if theme else t['theme'],\n 'show': show if show else t['show'],\n 'canUseSysdigCapture': perm_capture if perm_capture else t['canUseSysdigCapture'],\n 'canUseCustomEvents': perm_custom_events if perm_custom_events else t['canUseCustomEvents'],\n 'canUseAwsMetrics': perm_aws_data if perm_aws_data else t['canUseAwsMetrics'],\n 'id': t['id'],\n 'version': t['version']\n }\n\n # Handling 
team description\n if description is not None:\n reqbody['description'] = description\n elif 'description' in list(t.keys()):\n reqbody['description'] = t['description']\n\n # Handling for users to map (user-name, team-role) pairs to memberships\n if memberships != None:\n res = self._get_user_id_dict(list(memberships.keys()))\n if res[0] == False:\n return [False, 'Could not convert user names to IDs']\n reqbody['userRoles'] = [\n {\n 'userId': user_id,\n 'role': memberships[user_name]\n }\n for (user_name, user_id) in res[1].items()\n ]\n elif 'userRoles' in list(t.keys()):\n reqbody['userRoles'] = t['userRoles']\n else:\n reqbody['userRoles'] = []\n\n # Special handling for filters since we don't support blank filters\n if filter != None:\n reqbody['filter'] = filter\n elif 'filter' in list(t.keys()):\n reqbody['filter'] = t['filter']\n\n res = requests.put(self.url + '/api/teams/' + str(t['id']), headers=self.hdrs, data=json.dumps(reqbody), verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete_team(self, name):\n '''**Description**\n Deletes a team from Sysdig Monitor.\n\n **Arguments**\n - **name**: the name of the team that will be deleted from Sysdig Monitor\n\n **Example**\n `examples/user_team_mgmt.py `_\n '''\n res = self.get_team(name)\n if res[0] == False:\n return res\n\n t = res[1]\n res = requests.delete(self.url + '/api/teams/' + str(t['id']), headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n return [True, None]", "response": "Delete a team from Sysdig Monitor."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef list_memberships(self, team):\n '''\n **Description**\n List all memberships for specified team.\n\n **Arguments**\n - **team**: the name of the team for which we want to see memberships\n\n **Result**\n Dictionary of 
(user-name, team-role) pairs that should describe memberships of the team.\n\n **Example**\n `examples/user_team_mgmt_extended.py `_\n '''\n res = self.get_team(team)\n if res[0] == False:\n return res\n\n raw_memberships = res[1]['userRoles']\n user_ids = [m['userId'] for m in raw_memberships]\n\n res = self._get_id_user_dict(user_ids)\n if res[0] == False:\n return [False, 'Could not fetch IDs for user names']\n else:\n id_user_dict = res[1]\n\n return [True, dict([(id_user_dict[m['userId']], m['role']) for m in raw_memberships])]", "response": "Returns a list of all memberships for the specified team."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsaves user team memberships", "response": "def save_memberships(self, team, memberships):\n '''\n **Description**\n Create new user team memberships or update existing ones.\n\n **Arguments**\n - **team**: the name of the team for which we are creating new memberships\n - **memberships**: dictionary of (user-name, team-role) pairs that should describe new memberships\n\n **Example**\n `examples/user_team_mgmt_extended.py `_\n '''\n\n res = self.list_memberships(team)\n\n if res[0] is False:\n return res\n\n full_memberships = res[1]\n full_memberships.update(memberships)\n\n res = self.edit_team(team, full_memberships)\n\n if res[0] is False:\n return res\n else:\n return [True, None]"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves user memberships from specified team.", "response": "def remove_memberships(self, team, users):\n '''\n **Description**\n Remove user memberships from specified team.\n\n **Arguments**\n - **team**: the name of the team from which user memberships are removed\n - **users**: list of usernames which should be removed from team\n\n **Example**\n `examples/user_team_mgmt_extended.py `_\n '''\n\n res = self.list_memberships(team)\n\n if res[0] is False:\n return res\n\n old_memberships = res[1]\n 
new_memberships = {k: v for k, v in old_memberships.items() if k not in users}\n\n res = self.edit_team(team, new_memberships)\n\n if res[0] is False:\n return res\n else:\n return [True, None]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new dashboard.", "response": "def create_dashboard(self, name):\n '''\n **Description**\n Creates an empty dashboard. You can then add panels by using ``add_dashboard_panel``.\n\n **Arguments**\n - **name**: the name of the dashboard that will be created.\n\n **Success Return Value**\n A dictionary showing the details of the new dashboard.\n\n **Example**\n `examples/dashboard.py `_\n '''\n dashboard_configuration = {\n 'name': name,\n 'schema': 2,\n 'items': []\n }\n\n #\n # Create the new dashboard\n #\n res = requests.post(self.url + self._dashboards_api_endpoint, headers=self.hdrs, data=json.dumps({'dashboard': dashboard_configuration}),\n verify=self.ssl_verify)\n return self._request_result(res)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_image(self, image, force=False, dockerfile=None, annotations={}, autosubscribe=True):\n '''**Description**\n Add an image to the scanner\n\n **Arguments**\n - image: Input image can be in the following formats: registry/repo:tag\n - dockerfile: The contents of the dockerfile as a str.\n - annotations: A dictionary of annotations {str: str}.\n - autosubscribe: Whether to activate the subscription to this image.\n\n **Success Return Value**\n A JSON object representing the image that was added.\n '''\n itype = self._discover_inputimage_format(image)\n if itype != 'tag':\n return [False, \"can only add a tag\"]\n\n payload = {}\n if dockerfile:\n payload['dockerfile'] = base64.b64encode(dockerfile.encode()).decode(\"utf-8\")\n payload['tag'] = image\n if annotations:\n payload['annotations'] = annotations\n\n url = 
\"{base_url}/api/scanning/v1/anchore/images?autosubscribe={autosubscribe}{force}\".format(\n base_url=self.url,\n autosubscribe=str(autosubscribe),\n force=\"&force=true\" if force else \"\")\n\n res = requests.post(url, data=json.dumps(payload), headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n return [True, res.json()]", "response": "This method adds an image to the scanner."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef import_image(self, image_data):\n '''**Description**\n Import an image from the scanner export\n\n **Arguments**\n - image_data: A JSON with the image information.\n\n **Success Return Value**\n A JSON object representing the image that was imported.\n '''\n url = self.url + \"/api/scanning/v1/anchore/imageimport\"\n res = requests.post(url, data=json.dumps(image_data), headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n return [True, res.json()]", "response": "This method imports an image from the scanner and returns the image_data as a JSON object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef query_image_content(self, image, content_type=\"\"):\n '''**Description**\n Find the image with the tag and return its content.\n\n **Arguments**\n - image: Input image can be in the following formats: registry/repo:tag\n - content_type: The content type can be one of the following types:\n - os: Operating System Packages\n - npm: Node.JS NPM Module\n - gem: Ruby GEM\n - files: Files\n\n **Success Return Value**\n A JSON object representing the image content.\n '''\n return self._query_image(image, query_group='content', query_type=content_type)", "response": "Query the content of an image."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nquery the metadata 
of an image.", "response": "def query_image_metadata(self, image, metadata_type=\"\"):\n '''**Description**\n Find the image with the tag and return its metadata.\n\n **Arguments**\n - image: Input image can be in the following formats: registry/repo:tag\n - metadata_type: The metadata type can be one of the types returned by running without a type specified\n\n **Success Return Value**\n A JSON object representing the image metadata.\n '''\n return self._query_image(image, query_group='metadata', query_type=metadata_type)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef query_image_vuln(self, image, vuln_type=\"\", vendor_only=True):\n '''**Description**\n Find the image with the tag and return its vulnerabilities.\n\n **Arguments**\n - image: Input image can be in the following formats: registry/repo:tag\n - vuln_type: Vulnerability type can be one of the following types:\n - os: CVE/distro vulnerabilities against operating system packages\n\n **Success Return Value**\n A JSON object representing the image vulnerabilities.\n '''\n return self._query_image(image, query_group='vuln', query_type=vuln_type, vendor_only=vendor_only)", "response": "Query the vulnerability of an image."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete_image(self, image, force=False):\n '''**Description**\n Delete image from the scanner.\n\n **Arguments**\n - image: Input image can be in the following formats: registry/repo:tag\n - force: If True, force the image deletion\n '''\n _, _, image_digest = self._discover_inputimage(image)\n if not image_digest:\n return [False, \"cannot use input image string: no discovered imageDigest\"]\n\n url = self.url + \"/api/scanning/v1/anchore/images/\" + image_digest\n res = requests.delete(url, headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n return [True, res.json()]", "response": "Delete an image from the scanner."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a 
brief explanation for the following Python 3 code\ndef add_registry(self, registry, registry_user, registry_pass, insecure=False, registry_type=\"docker_v2\", validate=True):\n '''**Description**\n Add image registry\n\n **Arguments**\n - registry: Full hostname/port of registry. Eg. myrepo.example.com:5000\n - registry_user: Username\n - registry_pass: Password\n - insecure: Allow connection to registry without SSL cert checks (ex: if registry uses a self-signed SSL certificate)\n - registry_type: Specify the registry type. 'docker_v2' and 'awsecr' are supported (default='docker_v2')\n - validate: If set to 'False' will not attempt to validate registry/creds on registry add\n\n **Success Return Value**\n A JSON object representing the registry.\n '''\n registry_types = ['docker_v2', 'awsecr']\n if registry_type and registry_type not in registry_types:\n return [False, \"input registry type not supported (supported registry_types: \" + str(registry_types) + \")\"]\n if self._registry_string_is_valid(registry):\n return [False, \"input registry name cannot contain '/' characters - valid registry names are of the form <host>:<port> where :<port> is optional\"]\n\n if not registry_type:\n registry_type = self._get_registry_type(registry)\n\n payload = {\n 'registry': registry,\n 'registry_user': registry_user,\n 'registry_pass': registry_pass,\n 'registry_type': registry_type,\n 'registry_verify': not insecure}\n url = \"{base_url}/api/scanning/v1/anchore/registries?validate={validate}\".format(\n base_url=self.url,\n validate=validate)\n\n res = requests.post(url, data=json.dumps(payload), headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n return [True, res.json()]", "response": "This method adds a new image registry to the scanner."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update_registry(self, registry, registry_user, registry_pass, insecure=False, 
registry_type=\"docker_v2\", validate=True):\n '''**Description**\n Update an existing image registry.\n\n **Arguments**\n - registry: Full hostname/port of registry. Eg. myrepo.example.com:5000\n - registry_user: Username\n - registry_pass: Password\n - insecure: Allow connection to registry without SSL cert checks (ex: if registry uses a self-signed SSL certificate)\n - registry_type: Specify the registry type. 'docker_v2' and 'awsecr' are supported (default='docker_v2')\n - validate: If set to 'False' will not attempt to validate registry/creds on registry add\n\n **Success Return Value**\n A JSON object representing the registry.\n '''\n if self._registry_string_is_valid(registry):\n return [False, \"input registry name cannot contain '/' characters - valid registry names are of the form <host>:<port> where :<port> is optional\"]\n\n payload = {\n 'registry': registry,\n 'registry_user': registry_user,\n 'registry_pass': registry_pass,\n 'registry_type': registry_type,\n 'registry_verify': not insecure}\n url = \"{base_url}/api/scanning/v1/anchore/registries/{registry}?validate={validate}\".format(\n base_url=self.url,\n registry=registry,\n validate=validate)\n\n res = requests.put(url, data=json.dumps(payload), headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n return [True, res.json()]", "response": "This method updates an existing image registry."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef delete_registry(self, registry):\n '''**Description**\n Delete an existing image registry\n\n **Arguments**\n - registry: Full hostname/port of registry. Eg. 
myrepo.example.com:5000\n '''\n # do some input string checking\n if re.match(\".*\\\\/.*\", registry):\n return [False, \"input registry name cannot contain '/' characters - valid registry names are of the form <host>:<port> where :<port> is optional\"]\n\n url = self.url + \"/api/scanning/v1/anchore/registries/\" + registry\n res = requests.delete(url, headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n return [True, res.json()]", "response": "This method deletes an existing image registry."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_policy(self, name, rules, comment=\"\", bundleid=None):\n '''**Description**\n Create a new policy\n\n **Arguments**\n - name: The name of the policy.\n - rules: A list of Anchore PolicyRule elements (while creating/updating a policy, new rule IDs will be created backend side)\n - comment: A human-readable description.\n - bundleid: Target bundle. 
If not specified, the currently active bundle will be used.\n\n **Success Return Value**\n A JSON object containing the policy description.\n '''\n policy = {\n 'name': name,\n 'comment': comment,\n 'rules': rules,\n 'version': '1_0'\n }\n if bundleid:\n policy['policyBundleId'] = bundleid\n\n url = self.url + '/api/scanning/v1/policies'\n data = json.dumps(policy)\n res = requests.post(url, headers=self.hdrs, data=data, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n return [True, res.json()]", "response": "This method creates a new policy in Anchore."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_alert(self, name, description=None, scope=\"\", triggers={'failed': True, 'unscanned': True},\n enabled=False, notification_channels=[]):\n '''**Description**\n Create a new alert\n\n **Arguments**\n - name: The name of the alert.\n - description: The description of the alert.\n - scope: An AND-composed string of predicates that selects the scope in which the alert will be applied. (like: 'host.domain = \"example.com\" and container.image != \"alpine:latest\"')\n - triggers: A dict {str: bool} indicating which triggers should be enabled/disabled. 
(default: {'failed': True, 'unscanned': True})\n - enabled: Whether this alert should actually be applied.\n - notification_channels: A list of notification channel ids.\n\n **Success Return Value**\n A JSON object containing the alert description.\n '''\n alert = {\n 'name': name,\n 'description': description,\n 'triggers': triggers,\n 'scope': scope,\n 'enabled': enabled,\n 'autoscan': True,\n 'notificationChannelIds': notification_channels,\n }\n\n url = self.url + '/api/scanning/v1/alerts'\n data = json.dumps(alert)\n res = requests.post(url, headers=self.hdrs, data=data, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n return [True, res.json()]", "response": "This method creates a new scanning alert."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete_alert(self, policyid):\n '''**Description**\n Delete the alert with the given id\n\n **Arguments**\n - policyid: Unique identifier associated with this alert.\n '''\n url = self.url + '/api/scanning/v1/alerts/' + policyid\n res = requests.delete(url, headers=self.hdrs, verify=self.ssl_verify)\n if not self._checkResponse(res):\n return [False, self.lasterr]\n\n return [True, res.text]", "response": "This method deletes the alert with the given id."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef addSourceAddr(self, addr):\n try:\n self._multiInSocket.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, self._makeMreq(addr))\n except socket.error: # if 1 interface has more than 1 address, exception is raised for the second\n pass\n\n sock = self._createMulticastOutSocket(addr, self._observer.ttl)\n self._multiOutUniInSockets[addr] = sock\n self._poll.register(sock, select.POLLIN)", "response": "Add a source address to the multicast interface."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script 
to\nset callback which will be called when new service appeared online and sent Hi message.", "response": "def setRemoteServiceHelloCallback(self, cb, types=None, scopes=None):\n \"\"\"Set callback, which will be called when new service appeared online\n and sent Hi message\n\n typesFilter and scopesFilter might be list of types and scopes.\n If filter is set, callback is called only for Hello messages,\n which match filter\n\n Set None to disable callback\n \"\"\"\n self._remoteServiceHelloCallback = cb\n self._remoteServiceHelloCallbackTypesFilter = types\n self._remoteServiceHelloCallbackScopesFilter = scopes"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncleaning up and stops the discovery server", "response": "def stop(self):\n 'cleans up and stops the discovery server'\n\n self.clearRemoteServices()\n self.clearLocalServices()\n\n self._stopThreads()\n self._serverStarted = False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsend Bye messages for the services and remove them", "response": "def clearLocalServices(self):\n 'send Bye messages for the services and remove them'\n\n for service in list(self._localServices.values()):\n self._sendBye(service)\n\n self._localServices.clear()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef searchServices(self, types=None, scopes=None, timeout=3):\n 'search for services given the TYPES and SCOPES in a given TIMEOUT'\n\n if not self._serverStarted:\n raise Exception(\"Server not started\")\n\n self._sendProbe(types, scopes)\n\n time.sleep(timeout)\n\n return self._filterServices(list(self._remoteServices.values()), types, scopes)", "response": "search for services given the TYPES and SCOPES in a given TIMEOUT"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef publishService(self, types, scopes, xAddrs):\n\n if not self._serverStarted:\n 
raise Exception(\"Server not started\")\n\n instanceId = _generateInstanceId()\n\n service = Service(types, scopes, xAddrs, self.uuid, instanceId)\n self._localServices[self.uuid] = service\n self._sendHello(service)\n\n time.sleep(0.001)", "response": "Publish a service with the given TYPES, SCOPES and XAddrs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef createSOAPMessage(env):\n \"construct a raw SOAP XML string, given a prepared SoapEnvelope object\"\n if env.getAction() == ACTION_PROBE:\n return createProbeMessage(env)\n if env.getAction() == ACTION_PROBE_MATCH:\n return createProbeMatchMessage(env)\n if env.getAction() == ACTION_RESOLVE:\n return createResolveMessage(env)\n if env.getAction() == ACTION_RESOLVE_MATCH:\n return createResolveMatchMessage(env)\n if env.getAction() == ACTION_HELLO:\n return createHelloMessage(env)\n if env.getAction() == ACTION_BYE:\n return createByeMessage(env)", "response": "construct a raw SOAP XML string given a prepared SoapEnvelope object"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parseSOAPMessage(data, ipAddr):\n \"parse raw XML data string, return a (minidom) xml document\"\n\n try:\n dom = minidom.parseString(data)\n except Exception:\n #print('Failed to parse message from %s\\n\"%s\": %s' % (ipAddr, data, ex), file=sys.stderr)\n return None\n\n if dom.getElementsByTagNameNS(NS_S, \"Fault\"):\n #print('Fault received from %s:' % (ipAddr, data), file=sys.stderr)\n return None\n\n soapAction = dom.getElementsByTagNameNS(NS_A, \"Action\")[0].firstChild.data.strip()\n if soapAction == ACTION_PROBE:\n return parseProbeMessage(dom)\n elif soapAction == ACTION_PROBE_MATCH:\n return parseProbeMatchMessage(dom)\n elif soapAction == ACTION_RESOLVE:\n return parseResolveMessage(dom)\n elif soapAction == ACTION_RESOLVE_MATCH:\n return parseResolveMatchMessage(dom)\n elif soapAction == ACTION_BYE:\n 
return parseByeMessage(dom)\n elif soapAction == ACTION_HELLO:\n return parseHelloMessage(dom)", "response": "parse raw XML data string, return a (minidom) xml document"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef discover(scope, loglevel, capture):\n \"Discover systems using WS-Discovery\"\n\n if loglevel:\n level = getattr(logging, loglevel, None)\n if not level:\n print(\"Invalid log level '%s'\" % loglevel)\n return\n logger.setLevel(level)\n\n run(scope=scope, capture=capture)", "response": "Discover systems using WS-Discovery"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_tagged_item_manager(self):\n rel_name = self.through._meta.get_field('content_object').remote_field.get_accessor_name()\n return getattr(self.instance, rel_name)", "response": "Return the manager that handles the relation from this instance to the tagged_item class."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_serializable_data_for_fields(model):\n pk_field = model._meta.pk\n # If model is a child via multitable inheritance, use parent's pk\n while pk_field.remote_field and pk_field.remote_field.parent_link:\n pk_field = pk_field.remote_field.model._meta.pk\n\n obj = {'pk': get_field_value(pk_field, model)}\n\n for field in model._meta.fields:\n if field.serialize:\n obj[field.name] = get_field_value(field, model)\n\n return obj", "response": "Returns a serialised version of the model's fields which exist as local database\n columns (i.e. 
excluding m2m and incoming foreign key relations."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a list of RelatedObject records for all child relations of the given model including ones attached to ancestors of the model.", "response": "def get_all_child_relations(model):\n \"\"\"\n Return a list of RelatedObject records for child relations of the given model,\n including ones attached to ancestors of the model\n \"\"\"\n return [\n field for field in model._meta.get_fields()\n if isinstance(field.remote_field, ParentalKey)\n ]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_all_child_m2m_relations(model):\n return [\n field for field in model._meta.get_fields()\n if isinstance(field, ParentalManyToManyField)\n ]", "response": "Returns a list of all child M2M relations on the given model"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef save(self, **kwargs):\n child_relation_names = [rel.get_accessor_name() for rel in get_all_child_relations(self)]\n child_m2m_field_names = [field.name for field in get_all_child_m2m_relations(self)]\n\n update_fields = kwargs.pop('update_fields', None)\n if update_fields is None:\n real_update_fields = None\n relations_to_commit = child_relation_names\n m2m_fields_to_commit = child_m2m_field_names\n else:\n real_update_fields = []\n relations_to_commit = []\n m2m_fields_to_commit = []\n for field in update_fields:\n if field in child_relation_names:\n relations_to_commit.append(field)\n elif field in child_m2m_field_names:\n m2m_fields_to_commit.append(field)\n else:\n real_update_fields.append(field)\n\n super(ClusterableModel, self).save(update_fields=real_update_fields, **kwargs)\n\n for relation in relations_to_commit:\n getattr(self, relation).commit()\n\n for field in m2m_fields_to_commit:\n getattr(self, field).commit()", "response": "Save the model and commit 
all child relations."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef from_serializable_data(cls, data, check_fks=True, strict_fks=False):\n obj = model_from_serializable_data(cls, data, check_fks=check_fks, strict_fks=strict_fks)\n if obj is None:\n return None\n\n child_relations = get_all_child_relations(cls)\n\n for rel in child_relations:\n rel_name = rel.get_accessor_name()\n try:\n child_data_list = data[rel_name]\n except KeyError:\n continue\n\n related_model = rel.related_model\n if hasattr(related_model, 'from_serializable_data'):\n children = [\n related_model.from_serializable_data(child_data, check_fks=check_fks, strict_fks=True)\n for child_data in child_data_list\n ]\n else:\n children = [\n model_from_serializable_data(related_model, child_data, check_fks=check_fks, strict_fks=True)\n for child_data in child_data_list\n ]\n\n children = filter(lambda child: child is not None, children)\n\n setattr(obj, rel_name, children)\n\n return obj", "response": "Build an instance of the class from the JSON-like structure passed in."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate_unique(self):\n '''This clean method will check for unique_together condition'''\n # Collect unique_checks and date_checks to run from all the forms.\n all_unique_checks = set()\n all_date_checks = set()\n forms_to_delete = self.deleted_forms\n valid_forms = [form for form in self.forms if form.is_valid() and form not in forms_to_delete]\n for form in valid_forms:\n unique_checks, date_checks = form.instance._get_unique_checks()\n all_unique_checks.update(unique_checks)\n all_date_checks.update(date_checks)\n\n errors = []\n # Do each of the unique checks (unique and unique_together)\n for uclass, unique_check in all_unique_checks:\n seen_data = set()\n for form in valid_forms:\n # Get the data for the set of fields that must be unique among the forms.\n row_data = (\n field if field in 
self.unique_fields else form.cleaned_data[field]\n for field in unique_check if field in form.cleaned_data\n )\n # Reduce Model instances to their primary key values\n row_data = tuple(d._get_pk_val() if hasattr(d, '_get_pk_val') else d\n for d in row_data)\n if row_data and None not in row_data:\n # if we've already seen it then we have a uniqueness failure\n if row_data in seen_data:\n # poke error messages into the right places and mark\n # the form as invalid\n errors.append(self.get_unique_error_message(unique_check))\n form._errors[NON_FIELD_ERRORS] = self.error_class([self.get_form_error()])\n # remove the data from the cleaned_data dict since it was invalid\n for field in unique_check:\n if field in form.cleaned_data:\n del form.cleaned_data[field]\n # mark the data as seen\n seen_data.add(row_data)\n\n if errors:\n raise ValidationError(errors)", "response": "This clean method will check for unique_together condition."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning True if data differs from initial.", "response": "def has_changed(self):\n \"\"\"Return True if data differs from initial.\"\"\"\n\n # Need to recurse over nested formsets so that the form is saved if there are changes\n # to child forms but not the parent\n if self.formsets:\n for formset in self.formsets.values():\n for form in formset.forms:\n if form.has_changed():\n return True\n return bool(self.changed_data)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new related manager that wraps an ordinary RelatedManager with deferring behaviour.", "response": "def create_deferring_foreign_related_manager(related, original_manager_cls):\n \"\"\"\n Create a DeferringRelatedManager class that wraps an ordinary RelatedManager\n with 'deferring' behaviour: any updates to the object set (via e.g. 
add() or clear())\n are written to a holding area rather than committed to the database immediately.\n Writing to the database is deferred until the model is saved.\n \"\"\"\n\n relation_name = related.get_accessor_name()\n rel_field = related.field\n rel_model = related.related_model\n superclass = rel_model._default_manager.__class__\n\n class DeferringRelatedManager(superclass):\n def __init__(self, instance):\n super(DeferringRelatedManager, self).__init__()\n self.model = rel_model\n self.instance = instance\n\n def _get_cluster_related_objects(self):\n # Helper to retrieve the instance's _cluster_related_objects dict,\n # creating it if it does not already exist\n try:\n return self.instance._cluster_related_objects\n except AttributeError:\n cluster_related_objects = {}\n self.instance._cluster_related_objects = cluster_related_objects\n return cluster_related_objects\n\n def get_live_query_set(self):\n # deprecated; renamed to get_live_queryset to match the move from\n # get_query_set to get_queryset in Django 1.6\n return self.get_live_queryset()\n\n def get_live_queryset(self):\n \"\"\"\n return the original manager's queryset, which reflects the live database\n \"\"\"\n return original_manager_cls(self.instance).get_queryset()\n\n def get_queryset(self):\n \"\"\"\n return the current object set with any updates applied,\n wrapped up in a FakeQuerySet if it doesn't match the database state\n \"\"\"\n try:\n results = self.instance._cluster_related_objects[relation_name]\n except (AttributeError, KeyError):\n return self.get_live_queryset()\n\n return FakeQuerySet(related.related_model, results)\n\n def _apply_rel_filters(self, queryset):\n # Implemented as empty for compatibility sake\n # But there is probably a better implementation of this function\n return queryset._next_is_sticky()\n\n def get_prefetch_queryset(self, instances, queryset=None):\n if queryset is None:\n db = self._db or router.db_for_read(self.model, instance=instances[0])\n queryset = 
super(DeferringRelatedManager, self).get_queryset().using(db)\n\n rel_obj_attr = rel_field.get_local_related_value\n instance_attr = rel_field.get_foreign_related_value\n instances_dict = dict((instance_attr(inst), inst) for inst in instances)\n\n query = {'%s__in' % rel_field.name: instances}\n qs = queryset.filter(**query)\n # Since we just bypassed this class' get_queryset(), we must manage\n # the reverse relation manually.\n for rel_obj in qs:\n instance = instances_dict[rel_obj_attr(rel_obj)]\n setattr(rel_obj, rel_field.name, instance)\n cache_name = rel_field.related_query_name()\n return qs, rel_obj_attr, instance_attr, False, cache_name, False\n\n def get_object_list(self):\n \"\"\"\n return the mutable list that forms the current in-memory state of\n this relation. If there is no such list (i.e. the manager is returning\n querysets from the live database instead), one is created, populating it\n with the live database state\n \"\"\"\n cluster_related_objects = self._get_cluster_related_objects()\n\n try:\n object_list = cluster_related_objects[relation_name]\n except KeyError:\n object_list = list(self.get_live_queryset())\n cluster_related_objects[relation_name] = object_list\n\n return object_list\n\n def add(self, *new_items):\n \"\"\"\n Add the passed items to the stored object set, but do not commit them\n to the database\n \"\"\"\n items = self.get_object_list()\n\n for target in new_items:\n item_matched = False\n for i, item in enumerate(items):\n if item == target:\n # Replace the matched item with the new one. This ensures that any\n # modifications to that item's fields take effect within the recordset -\n # i.e. we can perform a virtual UPDATE to an object in the list\n # by calling add(updated_object). 
Which is semantically a bit dubious,\n # but it does the job...\n items[i] = target\n item_matched = True\n break\n if not item_matched:\n items.append(target)\n\n # update the foreign key on the added item to point back to the parent instance\n setattr(target, related.field.name, self.instance)\n\n # Sort list\n if rel_model._meta.ordering and len(items) > 1:\n sort_by_fields(items, rel_model._meta.ordering)\n\n def remove(self, *items_to_remove):\n \"\"\"\n Remove the passed items from the stored object set, but do not commit the change\n to the database\n \"\"\"\n items = self.get_object_list()\n\n # filter items list in place: see http://stackoverflow.com/a/1208792/1853523\n items[:] = [item for item in items if item not in items_to_remove]\n\n def create(self, **kwargs):\n items = self.get_object_list()\n new_item = related.related_model(**kwargs)\n items.append(new_item)\n return new_item\n\n def clear(self):\n \"\"\"\n Clear the stored object set, without affecting the database\n \"\"\"\n self.set([])\n\n def set(self, objs, bulk=True, clear=False):\n # cast objs to a list so that:\n # 1) we can call len() on it (which we can't do on, say, a queryset)\n # 2) if we need to sort it, we can do so without mutating the original\n objs = list(objs)\n\n cluster_related_objects = self._get_cluster_related_objects()\n\n for obj in objs:\n # update the foreign key on the added item to point back to the parent instance\n setattr(obj, related.field.name, self.instance)\n\n # Clone and sort the 'objs' list, if necessary\n if rel_model._meta.ordering and len(objs) > 1:\n sort_by_fields(objs, rel_model._meta.ordering)\n\n cluster_related_objects[relation_name] = objs\n\n def commit(self):\n \"\"\"\n Apply any changes made to the stored object set to the database.\n Any objects removed from the initial set will be deleted entirely\n from the database.\n \"\"\"\n if self.instance.pk is None:\n raise IntegrityError(\"Cannot commit relation %r on an unsaved model\" % 
relation_name)\n\n try:\n final_items = self.instance._cluster_related_objects[relation_name]\n except (AttributeError, KeyError):\n # _cluster_related_objects entry never created => no changes to make\n return\n\n original_manager = original_manager_cls(self.instance)\n\n live_items = list(original_manager.get_queryset())\n for item in live_items:\n if item not in final_items:\n item.delete()\n\n for item in final_items:\n # Django 1.9+ bulk updates items by default which assumes\n # that they have already been saved to the database.\n # Disable this behaviour.\n # https://code.djangoproject.com/ticket/18556\n # https://github.com/django/django/commit/adc0c4fbac98f9cb975e8fa8220323b2de638b46\n original_manager.add(item, bulk=False)\n\n # purge the _cluster_related_objects entry, so we switch back to live SQL\n del self.instance._cluster_related_objects[relation_name]\n\n return DeferringRelatedManager"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef sort_by_fields(items, fields):\n # To get the desired behaviour, we need to order by keys in reverse order\n # See: https://docs.python.org/2/howto/sorting.html#sort-stability-and-complex-sorts\n for key in reversed(fields):\n # Check if this key has been reversed\n reverse = False\n if key[0] == '-':\n reverse = True\n key = key[1:]\n\n # Sort\n # Use a tuple of (v is not None, v) as the key, to ensure that None sorts before other values,\n # as comparing directly with None breaks on python3\n items.sort(key=lambda x: (getattr(x, key) is not None, getattr(x, key)), reverse=reverse)", "response": "Sort a list of objects on the given fields."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef with_valid_checksum(self):\n # type: () -> Address\n \"\"\"\n Returns the address with a valid checksum attached.\n \"\"\"\n return Address(\n trytes=self.address + self._generate_checksum(),\n\n # Make 
sure to copy all of the ancillary attributes, too!\n balance=self.balance,\n key_index=self.key_index,\n security_level=self.security_level,\n )", "response": "Returns the address with a valid checksum attached."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ngenerate the correct checksum for this address.", "response": "def _generate_checksum(self):\n # type: () -> AddressChecksum\n \"\"\"\n Generates the correct checksum for this address.\n \"\"\"\n checksum_trits = [] # type: MutableSequence[int]\n\n sponge = Kerl()\n sponge.absorb(self.address.as_trits())\n sponge.squeeze(checksum_trits)\n\n checksum_length = AddressChecksum.LEN * TRITS_PER_TRYTE\n\n return AddressChecksum.from_trits(checksum_trits[-checksum_length:])"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef execute(self, api, **arguments):\n # type: (Iota, **Any) -> Optional[int]\n \"\"\"\n Executes the command and (optionally) returns an exit code (used by\n the shell to determine if the application exited cleanly).\n\n :param api:\n The API object used to communicate with the node.\n\n :param arguments:\n Command-line arguments parsed by the argument parser.\n \"\"\"\n raise NotImplementedError(\n 'Not implemented in {cls}.'.format(cls=type(self).__name__),\n )", "response": "Executes the command and returns an exit code."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nexecutes the command from a collection of arguments and returns the exit code.", "response": "def run_from_argv(self, argv=None):\n # type: (Optional[tuple]) -> int\n \"\"\"\n Executes the command from a collection of arguments (e.g.,\n :py:data`sys.argv`) and returns the exit code.\n\n :param argv:\n Arguments to pass to the argument parser.\n If ``None``, defaults to ``sys.argv[1:]``.\n \"\"\"\n exit_code = self.execute(**self.parse_argv(argv))\n\n if exit_code is None:\n exit_code 
= 0\n\n return exit_code"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse_argv(self, argv=None):\n # type: (Optional[tuple]) -> dict\n \"\"\"\n Parses arguments for the command.\n\n :param argv:\n Arguments to pass to the argument parser.\n If ``None``, defaults to ``sys.argv[1:]``.\n \"\"\"\n arguments = vars(self.create_argument_parser().parse_args(argv))\n\n seed = None\n if self.requires_seed:\n seed_filepath = arguments.pop('seed_file')\n\n seed = (\n self.seed_from_filepath(seed_filepath)\n if seed_filepath\n else self.prompt_for_seed()\n )\n\n arguments['api'] = Iota(\n adapter=arguments.pop('uri'),\n seed=seed,\n testnet=arguments.pop('testnet'),\n )\n\n return arguments", "response": "Parses the command line arguments and returns a dictionary of arguments."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create_argument_parser(self):\n # type: () -> ArgumentParser\n \"\"\"\n Returns the argument parser that will be used to interpret\n arguments and options from argv.\n \"\"\"\n parser = ArgumentParser(\n description=self.__doc__,\n epilog='PyOTA v{version}'.format(version=__version__),\n )\n\n parser.add_argument(\n '--uri',\n type=text_type,\n default='http://localhost:14265/',\n\n help=(\n 'URI of the node to connect to '\n '(defaults to http://localhost:14265/).'\n ),\n )\n\n if self.requires_seed:\n parser.add_argument(\n '--seed-file',\n type=text_type,\n dest='seed_file',\n\n help=(\n 'Path to a file containing your seed in cleartext. 
'\n 'If not provided, you will be prompted to enter '\n 'your seed via stdin.'\n ),\n )\n\n parser.add_argument(\n '--testnet',\n action='store_true',\n default=False,\n help='If set, use testnet settings (e.g., for PoW).',\n )\n\n return parser", "response": "Creates an argument parser that will be used to interpret the arguments and options from argv."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprompting the user to enter a seed and press return.", "response": "def prompt_for_seed():\n # type: () -> Seed\n \"\"\"\n Prompts the user to enter their seed via stdin.\n \"\"\"\n seed = secure_input(\n 'Enter seed and press return (typing will not be shown).\\n'\n 'If no seed is specified, a random one will be used instead.\\n'\n )\n\n if isinstance(seed, text_type):\n seed = seed.encode('ascii')\n\n return Seed(seed) if seed else Seed.random()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nnormalize a hash into a sequence of integers", "response": "def normalize(hash_):\n # type: (Hash) -> List[List[int]]\n \"\"\"\n \"Normalizes\" a hash, converting it into a sequence of integers\n (not trits!) 
suitable for use in signature generation/validation.\n\n The hash is divided up into 3 parts, each of which is \"balanced\"\n (sum of all the values is equal to zero).\n \"\"\"\n normalized = []\n source = hash_.as_integers()\n\n chunk_size = 27\n\n for i in range(Hash.LEN // chunk_size):\n start = i * chunk_size\n stop = start + chunk_size\n\n chunk = source[start:stop]\n chunk_sum = sum(chunk)\n\n while chunk_sum > 0:\n chunk_sum -= 1\n for j in range(chunk_size):\n if chunk[j] > -13:\n chunk[j] -= 1\n break\n\n while chunk_sum < 0:\n chunk_sum += 1\n for j in range(chunk_size):\n if chunk[j] < 13:\n chunk[j] += 1\n break\n\n normalized.append(chunk)\n\n return normalized"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate_signature_fragments(\n fragments,\n hash_,\n public_key,\n sponge_type=Kerl,\n):\n # type: (Sequence[TryteString], Hash, TryteString, type) -> bool\n \"\"\"\n Returns whether a sequence of signature fragments is valid.\n\n :param fragments:\n Sequence of signature fragments (usually\n :py:class:`iota.transaction.Fragment` instances).\n\n :param hash_:\n Hash used to generate the signature fragments (usually a\n :py:class:`iota.transaction.BundleHash` instance).\n\n :param public_key:\n The public key value used to verify the signature digest (usually a\n :py:class:`iota.types.Address` instance).\n\n :param sponge_type:\n The class used to create the cryptographic sponge (i.e., Curl or Kerl).\n \"\"\"\n checksum = [0] * (HASH_LENGTH * len(fragments))\n normalized_hash = normalize(hash_)\n\n for i, fragment in enumerate(fragments):\n outer_sponge = sponge_type()\n\n # If there are more than 3 iterations, loop back around to the\n # start.\n normalized_chunk = normalized_hash[i % len(normalized_hash)]\n\n buffer = []\n for j, hash_trytes in enumerate(fragment.iter_chunks(Hash.LEN)):\n buffer = hash_trytes.as_trits() # type: List[int]\n inner_sponge = sponge_type()\n\n # Note 
the sign flip compared to\n # :py:class:`SignatureFragmentGenerator`.\n for _ in range(13 + normalized_chunk[j]):\n inner_sponge.reset()\n inner_sponge.absorb(buffer)\n inner_sponge.squeeze(buffer)\n\n outer_sponge.absorb(buffer)\n\n outer_sponge.squeeze(buffer)\n checksum[i * HASH_LENGTH:(i + 1) * HASH_LENGTH] = buffer\n\n actual_public_key = [0] * HASH_LENGTH\n addy_sponge = sponge_type()\n addy_sponge.absorb(checksum)\n addy_sponge.squeeze(actual_public_key)\n\n return actual_public_key == public_key.as_trits()", "response": "Validate the signature fragments."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_key(self, index, iterations):\n # type: (int, int) -> PrivateKey\n \"\"\"\n Generates a single key.\n\n :param index:\n The key index.\n\n :param iterations:\n Number of transform iterations to apply to the key, also\n known as security level.\n Must be >= 1.\n\n Increasing this value makes key generation slower, but more\n resistant to brute-forcing.\n \"\"\"\n return (\n self.get_keys(\n start=index,\n count=1,\n step=1,\n iterations=iterations,\n )[0]\n )", "response": "Returns a private key for the specified index."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_key_for(self, address):\n return self.get_key(\n index=address.key_index,\n iterations=address.security_level,\n )", "response": "Returns the key associated with the specified address."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_keys(self, start, count=1, step=1, iterations=1):\n # type: (int, int, int, int) -> List[PrivateKey]\n \"\"\"\n Generates and returns one or more keys at the specified\n index(es).\n\n This is a one-time operation; if you want to create lots of keys\n across multiple contexts, consider invoking\n :py:meth:`create_iterator` and sharing the resulting generator\n object instead.\n\n Warning: 
This method may take awhile to run if the starting\n index and/or the number of requested keys is a large number!\n\n :param start:\n Starting index.\n Must be >= 0.\n\n :param count:\n Number of keys to generate.\n Must be > 0.\n\n :param step:\n Number of indexes to advance after each key.\n This may be any non-zero (positive or negative) integer.\n\n :param iterations:\n Number of transform iterations to apply to each key, also\n known as security level.\n Must be >= 1.\n\n Increasing this value makes key generation slower, but more\n resistant to brute-forcing.\n\n :return:\n Always returns a list, even if only one key is generated.\n\n The returned list will contain ``count`` keys, except when\n ``step * count < start`` (only applies when ``step`` is\n negative).\n \"\"\"\n if count < 1:\n raise with_context(\n exc=ValueError('``count`` must be positive.'),\n\n context={\n 'start': start,\n 'count': count,\n 'step': step,\n 'iterations': iterations,\n },\n )\n\n if not step:\n raise with_context(\n exc=ValueError('``step`` must not be zero.'),\n\n context={\n 'start': start,\n 'count': count,\n 'step': step,\n 'iterations': iterations,\n },\n )\n\n iterator = self.create_iterator(start, step, iterations)\n\n keys = []\n for _ in range(count):\n try:\n next_key = next(iterator)\n except StopIteration:\n break\n else:\n keys.append(next_key)\n\n return keys", "response": "Returns a list of keys at the specified index."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a generator that can be used to progressively generate a new key.", "response": "def create_iterator(self, start=0, step=1, security_level=1):\n # type: (int, int, int) -> KeyIterator\n \"\"\"\n Creates a generator that can be used to progressively generate\n new keys.\n\n :param start:\n Starting index.\n\n Warning: This method may take awhile to reset if ``start``\n is a large number!\n\n :param step:\n Number of indexes to advance after each 
key.\n\n This value can be negative; the generator will exit if it\n reaches an index < 0.\n\n Warning: The generator may take awhile to advance between\n iterations if ``step`` is a large number!\n\n :param security_level:\n Number of _transform iterations to apply to each key.\n Must be >= 1.\n\n Increasing this value makes key generation slower, but more\n resistant to brute-forcing.\n \"\"\"\n return KeyIterator(self.seed, start, step, security_level)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _create_sponge(self, index):\n # type: (int) -> Kerl\n \"\"\"\n Prepares the hash sponge for the generator.\n \"\"\"\n seed = self.seed_as_trits[:]\n\n sponge = Kerl()\n sponge.absorb(add_trits(seed, trits_from_int(index)))\n\n # Squeeze all of the trits out of the sponge and re-absorb them.\n # Note that the sponge transforms several times per operation,\n # so this sequence is not as redundant as it looks at first\n # glance.\n sponge.squeeze(seed)\n sponge.reset()\n sponge.absorb(seed)\n\n return sponge", "response": "Creates a new sponge for the given index."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef absorb(self, trits, offset=0, length=None):\n # type: (Sequence[int], Optional[int], Optional[int]) -> None\n \"\"\"\n Absorb trits into the sponge.\n\n :param trits:\n Sequence of trits to absorb.\n\n :param offset:\n Starting offset in ``trits``.\n\n :param length:\n Number of trits to absorb. 
Defaults to ``len(trits)``.\n \"\"\"\n pad = ((len(trits) % HASH_LENGTH) or HASH_LENGTH)\n trits += [0] * (HASH_LENGTH - pad)\n\n if length is None:\n length = len(trits)\n\n if length < 1:\n raise with_context(\n exc=ValueError('Invalid length passed to ``absorb``.'),\n\n context={\n 'trits': trits,\n 'offset': offset,\n 'length': length,\n },\n )\n\n # Copy trits from ``trits`` into internal state, one hash at a\n # time, transforming internal state in between hashes.\n while offset < length:\n start = offset\n stop = min(start + HASH_LENGTH, length)\n\n # Copy the next hash worth of trits to internal state.\n #\n # Note that we always copy the trits to the start of the\n # state. ``self._state`` is 3 hashes long, but only the\n # first hash is \"public\"; the other 2 are only accessible to\n # :py:meth:`_transform`.\n self._state[0:stop - start] = trits[start:stop]\n\n # Transform.\n self._transform()\n\n # Move on to the next hash.\n offset += HASH_LENGTH", "response": "Absorbs trits into the sponge."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef squeeze(self, trits, offset=0, length=HASH_LENGTH):\n # type: (MutableSequence[int], Optional[int], Optional[int]) -> None\n \"\"\"\n Squeeze trits from the sponge.\n\n :param trits:\n Sequence that the squeezed trits will be copied to.\n Note: this object will be modified!\n\n :param offset:\n Starting offset in ``trits``.\n\n :param length:\n Number of trits to squeeze, default to ``HASH_LENGTH``\n \"\"\"\n # Squeeze is kind of like the opposite of absorb; it copies\n # trits from internal state to the ``trits`` parameter, one hash\n # at a time, and transforming internal state in between hashes.\n #\n # However, only the first hash of the state is \"public\", so we\n # can simplify the implementation somewhat.\n\n # Ensure length can be mod by HASH_LENGTH\n if length % HASH_LENGTH != 0:\n raise with_context(\n exc=ValueError('Invalid length 
passed to ``squeeze``.'),\n\n context={\n 'trits': trits,\n 'offset': offset,\n 'length': length\n })\n\n # Ensure that ``trits`` can hold at least one hash worth of\n # trits.\n trits.extend([0] * max(0, length - len(trits)))\n\n # Check trits with offset can handle hash length\n if len(trits) - offset < HASH_LENGTH:\n raise with_context(\n exc=ValueError('Invalid offset passed to ``squeeze``.'),\n\n context={\n 'trits': trits,\n 'offset': offset,\n 'length': length\n },\n )\n\n while length >= HASH_LENGTH:\n # Copy exactly one hash.\n trits[offset:offset + HASH_LENGTH] = self._state[0:HASH_LENGTH]\n\n # One hash worth of trits copied; now transform.\n self._transform()\n\n offset += HASH_LENGTH\n length -= HASH_LENGTH", "response": "Squeeze trits from the sponge into the given sequence."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ntransforming the internal state of the sponge.", "response": "def _transform(self):\n # type: () -> None\n \"\"\"\n Transforms internal state.\n \"\"\"\n # Copy some values locally so we can avoid global lookups in the\n # inner loop.\n #\n # References:\n #\n # - https://wiki.python.org/moin/PythonSpeed/PerformanceTips#Local_Variables\n state_length = STATE_LENGTH\n truth_table = TRUTH_TABLE\n\n # Operate on a copy of ``self._state`` to eliminate dot lookups\n # in the inner loop.\n #\n # References:\n #\n # - https://wiki.python.org/moin/PythonSpeed/PerformanceTips#Avoiding_dots...\n # - http://stackoverflow.com/a/2612990/\n prev_state = self._state[:]\n new_state = prev_state[:]\n\n # Note: This code looks significantly different from the C\n # implementation because it has been optimized to limit the\n # number of list item lookups (these are relatively slow in\n # Python).\n index = 0\n for _ in range(NUMBER_OF_ROUNDS):\n prev_trit = prev_state[index]\n\n for pos in range(state_length):\n index += (364 if index < 365 else -365)\n\n new_trit = prev_state[index]\n\n 
new_state[pos] = truth_table[prev_trit + (3 * new_trit) + 4]\n\n prev_trit = new_trit\n\n prev_state = new_state\n new_state = new_state[:]\n\n self._state = new_state"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a dictionary containing one or more key digests for the given key.", "response": "def get_digests(\n self,\n index=0,\n count=1,\n security_level=AddressGenerator.DEFAULT_SECURITY_LEVEL,\n ):\n # type: (int, int, int) -> dict\n \"\"\"\n Generates one or more key digests from the seed.\n\n Digests are safe to share; use them to generate multisig\n addresses.\n\n :param index:\n The starting key index.\n\n :param count:\n Number of digests to generate.\n\n :param security_level:\n Number of iterations to use when generating new addresses.\n\n Larger values take longer, but the resulting signatures are\n more secure.\n\n This value must be between 1 and 3, inclusive.\n\n :return:\n Dict with the following items::\n\n {\n 'digests': List[Digest],\n Always contains a list, even if only one digest\n was generated.\n }\n \"\"\"\n return commands.GetDigestsCommand(self.adapter)(\n seed=self.seed,\n index=index,\n count=count,\n securityLevel=security_level,\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a dictionary containing one or more private keys from the seed.", "response": "def get_private_keys(\n self,\n index=0,\n count=1,\n security_level=AddressGenerator.DEFAULT_SECURITY_LEVEL,\n ):\n # type: (int, int, int) -> dict\n \"\"\"\n Generates one or more private keys from the seed.\n\n As the name implies, private keys should not be shared.\n However, in a few cases it may be necessary (e.g., for M-of-N\n transactions).\n\n :param index:\n The starting key index.\n\n :param count:\n Number of keys to generate.\n\n :param security_level:\n Number of iterations to use when generating new keys.\n\n Larger values take longer, but the resulting signatures are\n more 
secure.\n\n This value must be between 1 and 3, inclusive.\n\n :return:\n Dict with the following items::\n\n {\n 'keys': List[PrivateKey],\n Always contains a list, even if only one key was\n generated.\n }\n\n References:\n\n - :py:class:`iota.crypto.signing.KeyGenerator`\n - https://github.com/iotaledger/wiki/blob/master/multisigs.md#how-m-of-n-works\n \"\"\"\n return commands.GetPrivateKeysCommand(self.adapter)(\n seed=self.seed,\n index=index,\n count=count,\n securityLevel=security_level,\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef prepare_multisig_transfer(\n self,\n transfers, # type: Iterable[ProposedTransaction]\n multisig_input, # type: MultisigAddress\n change_address=None, # type: Optional[Address]\n ):\n # type: (...) -> dict\n \"\"\"\n Prepares a bundle that authorizes the spending of IOTAs from a\n multisig address.\n\n .. note::\n This method is used exclusively to spend IOTAs from a\n multisig address.\n\n If you want to spend IOTAs from non-multisig addresses, or\n if you want to create 0-value transfers (i.e., that don't\n require inputs), use\n :py:meth:`iota.api.Iota.prepare_transfer` instead.\n\n :param transfers:\n Transaction objects to prepare.\n\n .. important::\n Must include at least one transaction that spends IOTAs\n (i.e., has a nonzero ``value``). If you want to prepare\n a bundle that does not spend any IOTAs, use\n :py:meth:`iota.api.prepare_transfer` instead.\n\n :param multisig_input:\n The multisig address to use as the input for the transfers.\n\n .. 
note::\n This method only supports creating a bundle with a\n single multisig input.\n\n If you would like to spend from multiple multisig\n addresses in the same bundle, create the\n :py:class:`iota.multisig.transaction.ProposedMultisigBundle`\n object manually.\n\n :param change_address:\n If inputs are provided, any unspent amount will be sent to\n this address.\n\n If the bundle has no unspent inputs, ``change_address`` is\n ignored.\n\n .. important::\n Unlike :py:meth:`iota.api.Iota.prepare_transfer`, this\n method will NOT generate a change address automatically.\n If there are unspent inputs and ``change_address`` is\n empty, an exception will be raised.\n\n This is because multisig transactions typically involve\n multiple individuals, and it would be unfair to the\n participants if we generated a change address\n automatically using the seed of whoever happened to run\n the ``prepare_multisig_transfer`` method!\n\n .. danger::\n Note that this protective measure is not a\n substitute for due diligence!\n\n Always verify the details of every transaction in a\n bundle (including the change transaction) before\n signing the input(s)!\n\n :return:\n Dict containing the following values::\n\n {\n 'trytes': List[TransactionTrytes],\n Finalized bundle, as trytes.\n The input transactions are not signed.\n }\n\n In order to authorize the spending of IOTAs from the multisig\n input, you must generate the correct private keys and invoke\n the :py:meth:`iota.crypto.types.PrivateKey.sign_input_at`\n method for each key, in the correct order.\n\n Once the correct signatures are applied, you can then perform\n proof of work (``attachToTangle``) and broadcast the bundle\n using :py:meth:`iota.api.Iota.send_trytes`.\n \"\"\"\n return commands.PrepareMultisigTransferCommand(self.adapter)(\n changeAddress=change_address,\n multisigInput=multisig_input,\n transfers=transfers,\n )", "response": "Prepares a bundle that authorizes the spending of IOTAs from a multisig 
address."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding two sequences of trits together.", "response": "def add_trits(left, right):\n # type: (Sequence[int], Sequence[int]) -> List[int]\n \"\"\"\n Adds two sequences of trits together.\n\n The result is a list of trits equal in length to the longer of the\n two sequences.\n\n .. note::\n Overflow is possible.\n\n For example, ``add_trits([1], [1])`` returns ``[-1]``.\n \"\"\"\n target_len = max(len(left), len(right))\n\n res = [0] * target_len\n left += [0] * (target_len - len(left))\n right += [0] * (target_len - len(right))\n\n carry = 0\n for i in range(len(res)):\n res[i], carry = _full_add_trits(left[i], right[i], carry)\n\n return res"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef trits_from_int(n, pad=1):\n # type: (int, Optional[int]) -> List[int]\n \"\"\"\n Returns a trit representation of an integer value.\n\n :param n:\n Integer value to convert.\n\n :param pad:\n Ensure the result has at least this many trits.\n\n References:\n\n - https://dev.to/buntine/the-balanced-ternary-machines-of-soviet-russia\n - https://en.wikipedia.org/wiki/Balanced_ternary\n - https://rosettacode.org/wiki/Balanced_ternary#Python\n \"\"\"\n if n == 0:\n trits = []\n else:\n quotient, remainder = divmod(n, 3)\n\n if remainder == 2:\n # Lend 1 to the next place so we can make this trit\n # negative.\n quotient += 1\n remainder = -1\n\n trits = [remainder] + trits_from_int(quotient, pad=0)\n\n if pad:\n trits += [0] * max(0, pad - len(trits))\n\n return trits", "response": "Converts an integer value to a list of trits."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd two individual trits together.", "response": "def _add_trits(left, right):\n # type: (int, int) -> int\n \"\"\"\n Adds two individual trits together.\n\n The result is always a single trit.\n \"\"\"\n res = left + right\n return res if 
-2 < res < 2 else (res < 0) - (res > 0)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding two trits together with support for a carry trit.", "response": "def _full_add_trits(left, right, carry):\n # type: (int, int, int) -> Tuple[int, int]\n \"\"\"\n Adds two trits together, with support for a carry trit.\n \"\"\"\n sum_both = _add_trits(left, right)\n cons_left = _cons_trits(left, right)\n cons_right = _cons_trits(sum_both, carry)\n\n return _add_trits(sum_both, carry), _any_trits(cons_left, cons_right)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\noutputs the user s seed to stdout along with lots of warnings about security.", "response": "def output_seed(seed):\n # type: (Seed) -> None\n \"\"\"\n Outputs the user's seed to stdout, along with lots of warnings\n about security.\n \"\"\"\n print(\n 'WARNING: Anyone who has your seed can spend your IOTAs! '\n 'Clear the screen after recording your seed!'\n )\n compat.input('')\n print('Your seed is:')\n print('')\n print(binary_type(seed).decode('ascii'))\n print('')\n\n print(\n 'Clear the screen to prevent shoulder surfing, '\n 'and press return to continue.'\n )\n print('https://en.wikipedia.org/wiki/Shoulder_surfing_(computer_security)')\n compat.input('')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef attach_to_tangle(\n self,\n trunk_transaction, # type: TransactionHash\n branch_transaction, # type: TransactionHash\n trytes, # type: Iterable[TryteString]\n min_weight_magnitude=None, # type: Optional[int]\n ):\n # type: (...) -> dict\n \"\"\"\n Attaches the specified transactions (trytes) to the Tangle by\n doing Proof of Work. 
You need to supply branchTransaction as\n well as trunkTransaction (basically the tips which you're going\n to validate and reference with this transaction) - both of which\n you'll get through the getTransactionsToApprove API call.\n\n The returned value is a different set of tryte values which you\n can input into :py:meth:`broadcast_transactions` and\n :py:meth:`store_transactions`.\n\n References:\n\n - https://iota.readme.io/docs/attachtotangle\n \"\"\"\n if min_weight_magnitude is None:\n min_weight_magnitude = self.default_min_weight_magnitude\n\n return core.AttachToTangleCommand(self.adapter)(\n trunkTransaction=trunk_transaction,\n branchTransaction=branch_transaction,\n minWeightMagnitude=min_weight_magnitude,\n trytes=trytes,\n )", "response": "This method will attach the specified transactions to the Tangle by doing Proof of Work."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find_transactions(\n self,\n bundles=None, # type: Optional[Iterable[BundleHash]]\n addresses=None, # type: Optional[Iterable[Address]]\n tags=None, # type: Optional[Iterable[Tag]]\n approvees=None, # type: Optional[Iterable[TransactionHash]]\n ):\n # type: (...) 
-> dict\n \"\"\"\n Find the transactions which match the specified input and\n return.\n\n All input values are lists, for which a list of return values\n (transaction hashes), in the same order, is returned for all\n individual elements.\n\n Using multiple of these input fields returns the intersection of\n the values.\n\n :param bundles:\n List of bundle IDs.\n\n :param addresses:\n List of addresses.\n\n :param tags:\n List of tags.\n\n :param approvees:\n List of approvee transaction IDs.\n\n References:\n\n - https://iota.readme.io/docs/findtransactions\n \"\"\"\n return core.FindTransactionsCommand(self.adapter)(\n bundles=bundles,\n addresses=addresses,\n tags=tags,\n approvees=approvees,\n )", "response": "Find the transactions which match the specified input and return."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_balances(self, addresses, threshold=100):\n # type: (Iterable[Address], int) -> dict\n \"\"\"\n Similar to :py:meth:`get_inclusion_states`. Returns the\n confirmed balance which a list of addresses have at the latest\n confirmed milestone.\n\n In addition to the balances, it also returns the milestone as\n well as the index with which the confirmed balance was\n determined. 
The balances are returned as a list in the same\n order as the addresses were provided as input.\n\n :param addresses:\n List of addresses to get the confirmed balance for.\n\n :param threshold:\n Confirmation threshold.\n\n References:\n\n - https://iota.readme.io/docs/getbalances\n \"\"\"\n return core.GetBalancesCommand(self.adapter)(\n addresses=addresses,\n threshold=threshold,\n )", "response": "Returns the confirmed balance for the given list of addresses."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_inclusion_states(self, transactions, tips):\n # type: (Iterable[TransactionHash], Iterable[TransactionHash]) -> dict\n \"\"\"\n Get the inclusion states of a set of transactions. This is for\n determining if a transaction was accepted and confirmed by the\n network or not. You can search for multiple tips (and thus,\n milestones) to get past inclusion states of transactions.\n\n :param transactions:\n List of transactions you want to get the inclusion state\n for.\n\n :param tips:\n List of tips (including milestones) you want to search for\n the inclusion state.\n\n References:\n\n - https://iota.readme.io/docs/getinclusionstates\n \"\"\"\n return core.GetInclusionStatesCommand(self.adapter)(\n transactions=transactions,\n tips=tips,\n )", "response": "This method returns the inclusion states of a set of transactions."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting all possible inputs of a seed and returns them, along with the total balance. This is either done deterministically (by generating all addresses until :py:meth:`find_transactions` returns an empty result), or by providing a key range to search. :param start: Starting key index. Defaults to 0. :param stop: Stop before this index. Note that this parameter behaves like the ``stop`` attribute in a :py:class:`slice` object; the stop index is *not* included in the result. 
If ``None`` (default), then this method will not stop until it finds an unused address. :param threshold: If set, determines the minimum threshold for a successful result: - As soon as this threshold is reached, iteration will stop. - If the command runs out of addresses before the threshold is reached, an exception is raised. .. note:: This method does not attempt to \"optimize\" the result (e.g., smallest number of inputs, get as close to ``threshold`` as possible, etc.); it simply accumulates inputs in order until the threshold is met. If ``threshold`` is 0, the first address in the key range with a non-zero balance will be returned (if it exists). If ``threshold`` is ``None`` (default), this method will return **all** inputs in the specified key range. :param security_level: Number of iterations to use when generating new addresses (see :py:meth:`get_new_addresses`). This value must be between 1 and 3, inclusive. If not set, defaults to :py:attr:`AddressGenerator.DEFAULT_SECURITY_LEVEL`. :return: Dict with the following structure:: { 'inputs': List[Address], Addresses with nonzero balances that can be used as inputs. 'totalBalance': int, Aggregate balance from all matching addresses. } Note that each Address in the result has its ``balance`` attribute set. Example: .. code-block:: python response = iota.get_inputs(...) input0 = response['inputs'][0] # type: Address input0.balance # 42 :raise: - :py:class:`iota.adapter.BadApiResponse` if ``threshold`` is not met. Not applicable if ``threshold`` is ``None``. 
References: - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#getinputs", "response": "def get_inputs(\n self,\n start=0,\n stop=None,\n threshold=None,\n security_level=None,\n ):\n # type: (int, Optional[int], Optional[int], Optional[int]) -> dict\n \"\"\"\n Gets all possible inputs of a seed and returns them, along with\n the total balance.\n\n This is either done deterministically (by generating all\n addresses until :py:meth:`find_transactions` returns an empty\n result), or by providing a key range to search.\n\n :param start:\n Starting key index.\n Defaults to 0.\n\n :param stop:\n Stop before this index.\n\n Note that this parameter behaves like the ``stop`` attribute\n in a :py:class:`slice` object; the stop index is *not*\n included in the result.\n\n If ``None`` (default), then this method will not stop until\n it finds an unused address.\n\n :param threshold:\n If set, determines the minimum threshold for a successful\n result:\n\n - As soon as this threshold is reached, iteration will stop.\n - If the command runs out of addresses before the threshold\n is reached, an exception is raised.\n\n .. 
note::\n This method does not attempt to \"optimize\" the result\n (e.g., smallest number of inputs, get as close to\n ``threshold`` as possible, etc.); it simply accumulates\n inputs in order until the threshold is met.\n\n If ``threshold`` is 0, the first address in the key range\n with a non-zero balance will be returned (if it exists).\n\n If ``threshold`` is ``None`` (default), this method will\n return **all** inputs in the specified key range.\n\n :param security_level:\n Number of iterations to use when generating new addresses\n (see :py:meth:`get_new_addresses`).\n\n This value must be between 1 and 3, inclusive.\n\n If not set, defaults to\n :py:attr:`AddressGenerator.DEFAULT_SECURITY_LEVEL`.\n\n :return:\n Dict with the following structure::\n\n {\n 'inputs': List[Address],\n Addresses with nonzero balances that can be used\n as inputs.\n\n 'totalBalance': int,\n Aggregate balance from all matching addresses.\n }\n\n Note that each Address in the result has its ``balance``\n attribute set.\n\n Example:\n\n .. code-block:: python\n\n response = iota.get_inputs(...)\n\n input0 = response['inputs'][0] # type: Address\n input0.balance # 42\n\n :raise:\n - :py:class:`iota.adapter.BadApiResponse` if ``threshold``\n is not met. 
Not applicable if ``threshold`` is ``None``.\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#getinputs\n \"\"\"\n return extended.GetInputsCommand(self.adapter)(\n seed=self.seed,\n start=start,\n stop=stop,\n threshold=threshold,\n securityLevel=security_level\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_new_addresses(\n self,\n index=0,\n count=1,\n security_level=AddressGenerator.DEFAULT_SECURITY_LEVEL,\n checksum=False,\n ):\n # type: (int, Optional[int], int, bool) -> dict\n \"\"\"\n Generates one or more new addresses from the seed.\n\n :param index:\n The key index of the first new address to generate (must be\n >= 1).\n\n :param count:\n Number of addresses to generate (must be >= 1).\n\n .. tip::\n This is more efficient than calling ``get_new_address``\n inside a loop.\n\n If ``None``, this method will progressively generate\n addresses and scan the Tangle until it finds one that has no\n transactions referencing it.\n\n :param security_level:\n Number of iterations to use when generating new addresses.\n\n Larger values take longer, but the resulting signatures are\n more secure.\n\n This value must be between 1 and 3, inclusive.\n\n :param checksum:\n Specify whether to return the address with the checksum.\n Defaults to ``False``.\n\n :return:\n Dict with the following structure::\n\n {\n 'addresses': List[Address],\n Always a list, even if only one address was\n generated.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#getnewaddress\n \"\"\"\n return extended.GetNewAddressesCommand(self.adapter)(\n count=count,\n index=index,\n securityLevel=security_level,\n checksum=checksum,\n seed=self.seed,\n )", "response": "This method generates one or more new addresses from the seed."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef 
get_transfers(self, start=0, stop=None, inclusion_states=False):\n # type: (int, Optional[int], bool) -> dict\n \"\"\"\n Returns all transfers associated with the seed.\n\n :param start:\n Starting key index.\n\n :param stop:\n Stop before this index.\n\n Note that this parameter behaves like the ``stop`` attribute\n in a :py:class:`slice` object; the stop index is *not*\n included in the result.\n\n If ``None`` (default), then this method will check every\n address until it finds one without any transfers.\n\n :param inclusion_states:\n Whether to also fetch the inclusion states of the transfers.\n\n This requires an additional API call to the node, so it is\n disabled by default.\n\n :return:\n Dict with the following structure::\n\n {\n 'bundles': List[Bundle],\n Matching bundles, sorted by tail transaction\n timestamp.\n\n This value is always a list, even if only one\n bundle was found.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#gettransfers\n \"\"\"\n return extended.GetTransfersCommand(self.adapter)(\n seed=self.seed,\n start=start,\n stop=stop,\n inclusionStates=inclusion_states,\n )", "response": "Returns all the transfers associated with the seed."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npreparing the given list of transfers for sending to the Tangle.", "response": "def prepare_transfer(\n self,\n transfers, # type: Iterable[ProposedTransaction]\n inputs=None, # type: Optional[Iterable[Address]]\n change_address=None, # type: Optional[Address]\n security_level=None, # type: Optional[int]\n ):\n # type: (...) 
-> dict\n \"\"\"\n Prepares transactions to be broadcast to the Tangle, by\n generating the correct bundle, as well as choosing and signing\n the inputs (for value transfers).\n\n :param transfers:\n Transaction objects to prepare.\n\n :param inputs:\n List of addresses used to fund the transfer.\n Ignored for zero-value transfers.\n\n If not provided, addresses will be selected automatically by\n scanning the Tangle for unspent inputs. Depending on how\n many transfers you've already sent with your seed, this\n process could take awhile.\n\n :param change_address:\n If inputs are provided, any unspent amount will be sent to\n this address.\n\n If not specified, a change address will be generated\n automatically.\n\n :param security_level:\n Number of iterations to use when generating new addresses\n (see :py:meth:`get_new_addresses`).\n\n This value must be between 1 and 3, inclusive.\n\n If not set, defaults to\n :py:attr:`AddressGenerator.DEFAULT_SECURITY_LEVEL`.\n\n :return:\n Dict with the following structure::\n\n {\n 'trytes': List[TransactionTrytes],\n Raw trytes for the transactions in the bundle,\n ready to be provided to :py:meth:`send_trytes`.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#preparetransfers\n \"\"\"\n return extended.PrepareTransferCommand(self.adapter)(\n seed=self.seed,\n transfers=transfers,\n inputs=inputs,\n changeAddress=change_address,\n securityLevel=security_level,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef promote_transaction(\n self,\n transaction,\n depth=3,\n min_weight_magnitude=None,\n ):\n # type: (TransactionHash, int, Optional[int]) -> dict\n \"\"\"\n Promotes a transaction by adding spam on top of it.\n\n :return:\n Dict with the following structure::\n\n {\n 'bundle': Bundle,\n The newly-published bundle.\n }\n \"\"\"\n if min_weight_magnitude is None:\n min_weight_magnitude = 
self.default_min_weight_magnitude\n\n return extended.PromoteTransactionCommand(self.adapter)(\n transaction=transaction,\n depth=depth,\n minWeightMagnitude=min_weight_magnitude,\n )", "response": "Promotes a transaction by adding spam on top of it."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntake a tail transaction hash as input, gets the bundle associated with the transaction and then replays the bundle by attaching it to the Tangle. :param transaction: Transaction hash. Must be a tail. :param depth: Depth at which to attach the bundle. Defaults to 3. :param min_weight_magnitude: Min weight magnitude, used by the node to calibrate Proof of Work. If not provided, a default value will be used. :return: Dict with the following structure:: { 'trytes': List[TransactionTrytes], Raw trytes that were published to the Tangle. } References: - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#replaytransfer", "response": "def replay_bundle(\n self,\n transaction,\n depth=3,\n min_weight_magnitude=None,\n ):\n # type: (TransactionHash, int, Optional[int]) -> dict\n \"\"\"\n Takes a tail transaction hash as input, gets the bundle\n associated with the transaction and then replays the bundle by\n attaching it to the Tangle.\n\n :param transaction:\n Transaction hash. 
Must be a tail.\n\n :param depth:\n Depth at which to attach the bundle.\n Defaults to 3.\n\n :param min_weight_magnitude:\n Min weight magnitude, used by the node to calibrate Proof of\n Work.\n\n If not provided, a default value will be used.\n\n :return:\n Dict with the following structure::\n\n {\n 'trytes': List[TransactionTrytes],\n Raw trytes that were published to the Tangle.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#replaytransfer\n \"\"\"\n if min_weight_magnitude is None:\n min_weight_magnitude = self.default_min_weight_magnitude\n\n return extended.ReplayBundleCommand(self.adapter)(\n transaction=transaction,\n depth=depth,\n minWeightMagnitude=min_weight_magnitude,\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsend a set of transfers to the Tangle and returns a dict with the result of the send transfer command.", "response": "def send_transfer(\n self,\n transfers, # type: Iterable[ProposedTransaction]\n depth=3, # type: int\n inputs=None, # type: Optional[Iterable[Address]]\n change_address=None, # type: Optional[Address]\n min_weight_magnitude=None, # type: Optional[int]\n security_level=None, # type: Optional[int]\n ):\n # type: (...) 
-> dict\n \"\"\"\n Prepares a set of transfers and creates the bundle, then\n attaches the bundle to the Tangle, and broadcasts and stores the\n transactions.\n\n :param transfers:\n Transfers to include in the bundle.\n\n :param depth:\n Depth at which to attach the bundle.\n Defaults to 3.\n\n :param inputs:\n List of inputs used to fund the transfer.\n Not needed for zero-value transfers.\n\n :param change_address:\n If inputs are provided, any unspent amount will be sent to\n this address.\n\n If not specified, a change address will be generated\n automatically.\n\n :param min_weight_magnitude:\n Min weight magnitude, used by the node to calibrate Proof of\n Work.\n\n If not provided, a default value will be used.\n\n :param security_level:\n Number of iterations to use when generating new addresses\n (see :py:meth:`get_new_addresses`).\n\n This value must be between 1 and 3, inclusive.\n\n If not set, defaults to\n :py:attr:`AddressGenerator.DEFAULT_SECURITY_LEVEL`.\n\n :return:\n Dict with the following structure::\n\n {\n 'bundle': Bundle,\n The newly-published bundle.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#sendtransfer\n \"\"\"\n if min_weight_magnitude is None:\n min_weight_magnitude = self.default_min_weight_magnitude\n\n return extended.SendTransferCommand(self.adapter)(\n seed=self.seed,\n depth=depth,\n transfers=transfers,\n inputs=inputs,\n changeAddress=change_address,\n minWeightMagnitude=min_weight_magnitude,\n securityLevel=security_level,\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsend a list of trytes to the Tangle.", "response": "def send_trytes(self, trytes, depth=3, min_weight_magnitude=None):\n # type: (Iterable[TransactionTrytes], int, Optional[int]) -> dict\n \"\"\"\n Attaches transaction trytes to the Tangle, then broadcasts and\n stores them.\n\n :param trytes:\n Transaction encoded as a tryte sequence.\n\n :param depth:\n Depth at which to 
attach the bundle.\n Defaults to 3.\n\n :param min_weight_magnitude:\n Min weight magnitude, used by the node to calibrate Proof of\n Work.\n\n If not provided, a default value will be used.\n\n :return:\n Dict with the following structure::\n\n {\n 'trytes': List[TransactionTrytes],\n Raw trytes that were published to the Tangle.\n }\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/api-proposal.md#sendtrytes\n \"\"\"\n if min_weight_magnitude is None:\n min_weight_magnitude = self.default_min_weight_magnitude\n\n return extended.SendTrytesCommand(self.adapter)(\n trytes=trytes,\n depth=depth,\n minWeightMagnitude=min_weight_magnitude,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef resolve_adapter(uri):\n # type: (AdapterSpec) -> BaseAdapter\n \"\"\"\n Given a URI, returns a properly-configured adapter instance.\n \"\"\"\n if isinstance(uri, BaseAdapter):\n return uri\n\n parsed = compat.urllib_parse.urlsplit(uri) # type: SplitResult\n\n if not parsed.scheme:\n raise with_context(\n exc=InvalidUri(\n 'URI must begin with \"://\" (e.g., \"udp://\").',\n ),\n\n context={\n 'parsed': parsed,\n 'uri': uri,\n },\n )\n\n try:\n adapter_type = adapter_registry[parsed.scheme]\n except KeyError:\n raise with_context(\n exc=InvalidUri('Unrecognized protocol {protocol!r}.'.format(\n protocol=parsed.scheme,\n )),\n\n context={\n 'parsed': parsed,\n 'uri': uri,\n },\n )\n\n return adapter_type.configure(parsed)", "response": "Resolves a URI and returns a properly - configured adapter instance."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef send_request(self, payload, **kwargs):\n # type: (dict, dict) -> dict\n \"\"\"\n Sends an API request to the node.\n\n :param payload:\n JSON payload.\n\n :param kwargs:\n Additional keyword arguments for the adapter.\n\n :return:\n Decoded response from the node.\n\n :raise:\n - 
:py:class:`BadApiResponse` if a non-success response was\n received.\n \"\"\"\n raise NotImplementedError(\n 'Not implemented in {cls}.'.format(cls=type(self).__name__),\n )", "response": "Sends an API request to the node."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlog a message to the instance s logger.", "response": "def _log(self, level, message, context=None):\n # type: (int, Text, Optional[dict]) -> None\n \"\"\"\n Sends a message to the instance's logger, if configured.\n \"\"\"\n if self._logger:\n self._logger.log(level, message, extra={'context': context or {}})"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsend the HTTP request to the specified url and return the response.", "response": "def _send_http_request(self, url, payload, method='post', **kwargs):\n # type: (Text, Optional[Text], Text, dict) -> Response\n \"\"\"\n Sends the actual HTTP request.\n\n Split into its own method so that it can be mocked during unit\n tests.\n \"\"\"\n kwargs.setdefault(\n 'timeout',\n self.timeout if self.timeout else get_default_timeout(),\n )\n\n if self.authentication:\n kwargs.setdefault('auth', auth.HTTPBasicAuth(*self.authentication))\n\n self._log(\n level=DEBUG,\n\n message='Sending {method} to {url}: {payload!r}'.format(\n method=method,\n payload=payload,\n url=url,\n ),\n\n context={\n 'request_method': method,\n 'request_kwargs': kwargs,\n 'request_payload': payload,\n 'request_url': url,\n },\n )\n\n response = request(method=method, url=url, data=payload, **kwargs)\n\n self._log(\n level=DEBUG,\n\n message='Receiving {method} from {url}: {response!r}'.format(\n method=method,\n response=response.content,\n url=url,\n ),\n\n context={\n 'request_method': method,\n 'request_kwargs': kwargs,\n 'request_payload': payload,\n 'request_url': url,\n\n 'response_headers': response.headers,\n 'response_content': response.content,\n },\n )\n\n return response"} {"SOURCE": 
"codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _interpret_response(self, response, payload, expected_status):\n # type: (Response, dict, Container[int]) -> dict\n \"\"\"\n Interprets the HTTP response from the node.\n\n :param response:\n The response object received from\n :py:meth:`_send_http_request`.\n\n :param payload:\n The request payload that was sent (used for debugging).\n\n :param expected_status:\n The response should match one of these status codes to be\n considered valid.\n \"\"\"\n raw_content = response.text\n if not raw_content:\n raise with_context(\n exc=BadApiResponse(\n 'Empty {status} response from node.'.format(\n status=response.status_code,\n ),\n ),\n\n context={\n 'request': payload,\n },\n )\n\n try:\n decoded = json.loads(raw_content) # type: dict\n # :bc: py2k doesn't have JSONDecodeError\n except ValueError:\n raise with_context(\n exc=BadApiResponse(\n 'Non-JSON {status} response from node: '\n '{raw_content}'.format(\n status=response.status_code,\n raw_content=raw_content,\n )\n ),\n\n context={\n 'request': payload,\n 'raw_response': raw_content,\n },\n )\n\n if not isinstance(decoded, dict):\n raise with_context(\n exc=BadApiResponse(\n 'Malformed {status} response from node: {decoded!r}'.format(\n status=response.status_code,\n decoded=decoded,\n ),\n ),\n\n context={\n 'request': payload,\n 'response': decoded,\n },\n )\n\n if response.status_code in expected_status:\n return decoded\n\n error = None\n try:\n if response.status_code == codes['bad_request']:\n error = decoded['error']\n elif response.status_code == codes['internal_server_error']:\n error = decoded['exception']\n except KeyError:\n pass\n\n raise with_context(\n exc=BadApiResponse(\n '{status} response from node: {error}'.format(\n error=error or decoded,\n status=response.status_code,\n ),\n ),\n\n context={\n 'request': payload,\n 'response': decoded,\n },\n )", "response": "Interprets the HTTP response 
from the node and returns the response payload."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef seed_response(self, command, response):\n # type: (Text, dict) -> MockAdapter\n \"\"\"\n Sets the response that the adapter will return for the specified\n command.\n\n You can seed multiple responses per command; the adapter will\n put them into a FIFO queue. When a request comes in, the\n adapter will pop the corresponding response off of the queue.\n\n Example:\n\n .. code-block:: python\n\n adapter.seed_response('sayHello', {'message': 'Hi!'})\n adapter.seed_response('sayHello', {'message': 'Hello!'})\n\n adapter.send_request({'command': 'sayHello'})\n # {'message': 'Hi!'}\n\n adapter.send_request({'command': 'sayHello'})\n # {'message': 'Hello!'}\n \"\"\"\n if command not in self.responses:\n self.responses[command] = deque()\n\n self.responses[command].append(response)\n return self", "response": "Seeds the response that the adapter will return for the specified command."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd a digest into the sponge.", "response": "def add_digest(self, digest):\n # type: (Digest) -> None\n \"\"\"\n Absorbs a digest into the sponge.\n\n .. 
important::\n Keep track of the order that digests are added!\n\n To spend inputs from a multisig address, you must provide\n the private keys in the same order!\n\n References:\n\n - https://github.com/iotaledger/wiki/blob/master/multisigs.md#spending-inputs\n \"\"\"\n if self._address:\n raise ValueError('Cannot add digests once an address is extracted.')\n\n self._sponge.absorb(digest.as_trits())\n self._digests.append(digest)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the new multisig address.", "response": "def get_address(self):\n # type: () -> MultisigAddress\n \"\"\"\n Returns the new multisig address.\n\n Note that you can continue to add digests after extracting an\n address; the next address will use *all* of the digests that\n have been added so far.\n \"\"\"\n if not self._digests:\n raise ValueError(\n 'Must call ``add_digest`` at least once '\n 'before calling ``get_address``.',\n )\n\n if not self._address:\n address_trits = [0] * HASH_LENGTH\n self._sponge.squeeze(address_trits)\n\n self._address = MultisigAddress.from_trits(\n address_trits,\n digests=self._digests[:],\n )\n\n return self._address"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_addresses(self, start, count=1, step=1):\n # type: (int, int, int) -> List[Address]\n \"\"\"\n Generates and returns one or more addresses at the specified\n index(es).\n\n This is a one-time operation; if you want to create lots of\n addresses across multiple contexts, consider invoking\n :py:meth:`create_iterator` and sharing the resulting generator\n object instead.\n\n Warning: This method may take awhile to run if the starting\n index and/or the number of requested addresses is a large\n number!\n\n :param start:\n Starting index.\n Must be >= 0.\n\n :param count:\n Number of addresses to generate.\n Must be > 0.\n\n :param step:\n Number of indexes to 
advance after each address.\n This may be any non-zero (positive or negative) integer.\n\n :return:\n Always returns a list, even if only one address is generated.\n\n The returned list will contain ``count`` addresses, except\n when ``step * count < start`` (only applies when ``step`` is\n negative).\n \"\"\"\n if count < 1:\n raise with_context(\n exc=ValueError('``count`` must be positive.'),\n\n context={\n 'start': start,\n 'count': count,\n 'step': step,\n },\n )\n\n if not step:\n raise with_context(\n exc=ValueError('``step`` must not be zero.'),\n\n context={\n 'start': start,\n 'count': count,\n 'step': step,\n },\n )\n\n generator = self.create_iterator(start, step)\n\n addresses = []\n for _ in range(count):\n try:\n next_addy = next(generator)\n except StopIteration:\n break\n else:\n addresses.append(next_addy)\n\n return addresses", "response": "Returns a list of addresses at the specified index."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating an iterator that can be used to progressively generate new addresses.", "response": "def create_iterator(self, start=0, step=1):\n # type: (int, int) -> Generator[Address, None, None]\n \"\"\"\n Creates an iterator that can be used to progressively generate new\n addresses.\n\n :param start:\n Starting index.\n\n Warning: This method may take awhile to reset if ``start``\n is a large number!\n\n :param step:\n Number of indexes to advance after each address.\n\n Warning: The generator may take awhile to advance between\n iterations if ``step`` is a large number!\n \"\"\"\n key_iterator = (\n KeyGenerator(self.seed).create_iterator(\n start,\n step,\n self.security_level,\n )\n )\n\n while True:\n yield self._generate_address(key_iterator)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef address_from_digest(digest):\n # type: (Digest) -> Address\n \"\"\"\n Generates an address from a private key digest.\n 
\"\"\"\n address_trits = [0] * (Address.LEN * TRITS_PER_TRYTE) # type: List[int]\n\n sponge = Kerl()\n sponge.absorb(digest.as_trits())\n sponge.squeeze(address_trits)\n\n return Address.from_trits(\n trits=address_trits,\n\n key_index=digest.key_index,\n security_level=digest.security_level,\n )", "response": "Generates an address from a private key digest."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngenerating a new address.", "response": "def _generate_address(self, key_iterator):\n # type: (KeyIterator) -> Address\n \"\"\"\n Generates a new address.\n\n Used in the event of a cache miss.\n \"\"\"\n if self.checksum:\n return (\n self.address_from_digest(\n digest=self._get_digest(key_iterator),\n ).with_valid_checksum()\n )\n else:\n return self.address_from_digest(self._get_digest(key_iterator))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_transaction_objects(adapter, **kwargs):\n # type: (BaseAdapter, **Iterable) -> List[Transaction]\n \"\"\"\n Finds transactions matching the specified criteria, fetches the\n corresponding trytes and converts them into Transaction objects.\n \"\"\"\n ft_response = FindTransactionsCommand(adapter)(**kwargs)\n\n hashes = ft_response['hashes']\n\n if hashes:\n gt_response = GetTrytesCommand(adapter)(hashes=hashes)\n\n return list(map(\n Transaction.from_tryte_string,\n gt_response.get('trytes') or [],\n )) # type: List[Transaction]\n\n return []", "response": "Returns a list of Transaction objects corresponding to the specified criteria."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nyielding the addresses that are used by the given seed.", "response": "def iter_used_addresses(\n adapter, # type: BaseAdapter\n seed, # type: Seed\n start, # type: int\n security_level=None, # type: Optional[int]\n):\n # type: (...) 
-> Generator[Tuple[Address, List[TransactionHash]], None, None]\n \"\"\"\n Scans the Tangle for used addresses.\n\n This is basically the opposite of invoking ``getNewAddresses`` with\n ``stop=None``.\n \"\"\"\n if security_level is None:\n security_level = AddressGenerator.DEFAULT_SECURITY_LEVEL\n\n ft_command = FindTransactionsCommand(adapter)\n\n for addy in AddressGenerator(seed, security_level).create_iterator(start):\n ft_response = ft_command(addresses=[addy])\n\n if ft_response['hashes']:\n yield addy, ft_response['hashes']\n else:\n break\n\n # Reset the command so that we can call it again.\n ft_command.reset()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_bundles_from_transaction_hashes(\n adapter,\n transaction_hashes,\n inclusion_states,\n):\n # type: (BaseAdapter, Iterable[TransactionHash], bool) -> List[Bundle]\n \"\"\"\n Given a set of transaction hashes, returns the corresponding bundles,\n sorted by tail transaction timestamp.\n \"\"\"\n transaction_hashes = list(transaction_hashes)\n if not transaction_hashes:\n return []\n\n my_bundles = [] # type: List[Bundle]\n\n # Sort transactions into tail and non-tail.\n tail_transaction_hashes = set()\n non_tail_bundle_hashes = set()\n\n gt_response = GetTrytesCommand(adapter)(hashes=transaction_hashes)\n all_transactions = list(map(\n Transaction.from_tryte_string,\n gt_response['trytes'],\n )) # type: List[Transaction]\n\n for txn in all_transactions:\n if txn.is_tail:\n tail_transaction_hashes.add(txn.hash)\n else:\n # Capture the bundle ID instead of the transaction hash so\n # that we can query the node to find the tail transaction\n # for that bundle.\n non_tail_bundle_hashes.add(txn.bundle_hash)\n\n if non_tail_bundle_hashes:\n for txn in find_transaction_objects(\n adapter=adapter,\n bundles=list(non_tail_bundle_hashes),\n ):\n if txn.is_tail:\n if txn.hash not in tail_transaction_hashes:\n all_transactions.append(txn)\n 
tail_transaction_hashes.add(txn.hash)\n\n # Filter out all non-tail transactions.\n tail_transactions = [\n txn\n for txn in all_transactions\n if txn.hash in tail_transaction_hashes\n ]\n\n # Attach inclusion states, if requested.\n if inclusion_states:\n gli_response = GetLatestInclusionCommand(adapter)(\n hashes=list(tail_transaction_hashes),\n )\n\n for txn in tail_transactions:\n txn.is_confirmed = gli_response['states'].get(txn.hash)\n\n # Find the bundles for each transaction.\n for txn in tail_transactions:\n gb_response = GetBundlesCommand(adapter)(transaction=txn.hash)\n txn_bundles = gb_response['bundles'] # type: List[Bundle]\n\n if inclusion_states:\n for bundle in txn_bundles:\n bundle.is_confirmed = txn.is_confirmed\n\n my_bundles.extend(txn_bundles)\n\n return list(sorted(\n my_bundles,\n key=lambda bundle_: bundle_.tail_transaction.timestamp,\n ))", "response": "Given a set of transaction hashes returns the corresponding bundles sorted by tail transaction timestamp."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd inputs to the bundle.", "response": "def add_inputs(self, inputs):\n # type: (Iterable[MultisigAddress]) -> None\n \"\"\"\n Adds inputs to spend in the bundle.\n\n Note that each input may require multiple transactions, in order to\n hold the entire signature.\n\n :param inputs:\n MultisigAddresses to use as the inputs for this bundle.\n\n Note: at this time, only a single multisig input is supported.\n \"\"\"\n if self.hash:\n raise RuntimeError('Bundle is already finalized.')\n\n count = 0\n for addy in inputs:\n if count > 0:\n raise ValueError(\n '{cls} only supports 1 input.'.format(cls=type(self).__name__),\n )\n\n if not isinstance(addy, MultisigAddress):\n raise with_context(\n exc =\n TypeError(\n 'Incorrect input type for {cls} '\n '(expected {expected}, actual {actual}).'.format(\n actual = type(addy).__name__,\n cls = type(self).__name__,\n expected = MultisigAddress.__name__,\n 
),\n ),\n\n context = {\n 'actual_input': addy,\n },\n )\n\n security_level = addy.security_level\n if security_level < 1:\n raise with_context(\n exc =\n ValueError(\n 'Unable to determine security level for {type} '\n '(is ``digests`` populated correctly?).'.format(\n type = type(addy).__name__,\n ),\n ),\n\n context = {\n 'actual_input': addy,\n 'security_level': security_level,\n },\n )\n\n if not addy.balance:\n raise with_context(\n exc =\n ValueError(\n 'Cannot add input with empty/unknown balance to {type} '\n '(use ``Iota.get_balances`` to get balance first).'.format(\n type = type(self).__name__,\n ),\n ),\n\n context = {\n 'actual_input': addy,\n },\n )\n\n self._create_input_transactions(addy)\n\n count += 1"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check_trytes_codec(encoding):\n if encoding == AsciiTrytesCodec.name:\n return AsciiTrytesCodec.get_codec_info()\n\n elif encoding == AsciiTrytesCodec.compat_name:\n warn(\n '\"{old_codec}\" codec will be removed in PyOTA v2.1. 
'\n 'Use \"{new_codec}\" instead.'.format(\n new_codec=AsciiTrytesCodec.name,\n old_codec=AsciiTrytesCodec.compat_name,\n ),\n\n DeprecationWarning,\n )\n return AsciiTrytesCodec.get_codec_info()\n\n return None", "response": "Checks if the given encoding is valid and returns the codec info."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_codec_info(cls):\n codec = cls()\n\n codec_info = {\n 'encode': codec.encode,\n 'decode': codec.decode,\n }\n\n # In Python 2, all codecs are made equal.\n # In Python 3, some codecs are more equal than others.\n if PY3:\n codec_info['_is_text_encoding'] = False\n\n return CodecInfo(**codec_info)", "response": "Returns information used by the codecs library to configure the\n codec for use."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef encode(self, input, errors='strict'):\n if isinstance(input, memoryview):\n input = input.tobytes()\n\n if not isinstance(input, (binary_type, bytearray)):\n raise with_context(\n exc=TypeError(\n \"Can't encode {type}; byte string expected.\".format(\n type=type(input).__name__,\n )),\n\n context={\n 'input': input,\n },\n )\n\n # :bc: In Python 2, iterating over a byte string yields\n # characters instead of integers.\n if not isinstance(input, bytearray):\n input = bytearray(input)\n\n trytes = bytearray()\n\n for c in input:\n second, first = divmod(c, len(self.alphabet))\n\n trytes.append(self.alphabet[first])\n trytes.append(self.alphabet[second])\n\n return binary_type(trytes), len(input)", "response": "Encodes a byte string into a trytes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndecoding a tryte string into bytes.", "response": "def decode(self, input, errors='strict'):\n \"\"\"\n Decodes a tryte string into bytes.\n \"\"\"\n if isinstance(input, memoryview):\n input = input.tobytes()\n\n if not isinstance(input, 
(binary_type, bytearray)):\n raise with_context(\n exc=TypeError(\n \"Can't decode {type}; byte string expected.\".format(\n type=type(input).__name__,\n )),\n\n context={\n 'input': input,\n },\n )\n\n # :bc: In Python 2, iterating over a byte string yields\n # characters instead of integers.\n if not isinstance(input, bytearray):\n input = bytearray(input)\n\n bytes_ = bytearray()\n\n for i in range(0, len(input), 2):\n try:\n first, second = input[i:i + 2]\n except ValueError:\n if errors == 'strict':\n raise with_context(\n exc=TrytesDecodeError(\n \"'{name}' codec can't decode value; \"\n \"tryte sequence has odd length.\".format(\n name=self.name,\n ),\n ),\n\n context={\n 'input': input,\n },\n )\n elif errors == 'replace':\n bytes_ += b'?'\n\n continue\n\n try:\n bytes_.append(\n self.index[first]\n + (self.index[second] * len(self.index))\n )\n except ValueError:\n # This combination of trytes yields a value > 255 when\n # decoded.\n # Naturally, we can't represent this using ASCII.\n if errors == 'strict':\n raise with_context(\n exc=TrytesDecodeError(\n \"'{name}' codec can't decode trytes {pair} \"\n \"at position {i}-{j}: \"\n \"ordinal not in range(255)\".format(\n name=self.name,\n pair=chr(first) + chr(second),\n i=i,\n j=i + 1,\n ),\n ),\n\n context={\n 'input': input,\n }\n )\n elif errors == 'replace':\n bytes_ += b'?'\n\n return binary_type(bytes_), len(input)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _find_addresses(self, seed, index, count, security_level, checksum):\n # type: (Seed, int, Optional[int], int, bool) -> List[Address]\n \"\"\"\n Find addresses matching the command parameters.\n \"\"\"\n generator = AddressGenerator(seed, security_level, checksum)\n\n if count is None:\n # Connect to Tangle and find the first address without any\n # transactions.\n for addy in generator.create_iterator(start=index):\n # We use addy.address here because FindTransactions does\n # not 
work on an address with a checksum\n response = FindTransactionsCommand(self.adapter)(\n addresses=[addy.address],\n )\n\n if not response.get('hashes'):\n return [addy]\n\n return generator.get_addresses(start=index, count=count)", "response": "Returns a list of addresses matching the command parameters."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_route(self, command, adapter):\n # type: (Text, AdapterSpec) -> RoutingWrapper\n \"\"\"\n Adds a route to the wrapper.\n\n :param command:\n The name of the command to route (e.g., \"attachToTangle\").\n\n :param adapter:\n The adapter object or URI to route requests to.\n \"\"\"\n if not isinstance(adapter, BaseAdapter):\n try:\n adapter = self.adapter_aliases[adapter]\n except KeyError:\n self.adapter_aliases[adapter] = adapter = resolve_adapter(\n adapter\n )\n\n self.routes[command] = adapter\n\n return self", "response": "Adds a route to the wrapper."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_tryte_string(cls, trytes, hash_=None):\n # type: (TrytesCompatible, Optional[TransactionHash]) -> Transaction\n \"\"\"\n Creates a Transaction object from a sequence of trytes.\n\n :param trytes:\n Raw trytes. 
Should be exactly 2673 trytes long.\n\n :param hash_:\n The transaction hash, if available.\n\n If not provided, it will be computed from the transaction\n trytes.\n \"\"\"\n tryte_string = TransactionTrytes(trytes)\n\n if not hash_:\n hash_trits = [0] * HASH_LENGTH # type: MutableSequence[int]\n\n sponge = Curl()\n sponge.absorb(tryte_string.as_trits())\n sponge.squeeze(hash_trits)\n\n hash_ = TransactionHash.from_trits(hash_trits)\n\n return cls(\n hash_=hash_,\n signature_message_fragment=Fragment(tryte_string[0:2187]),\n address=Address(tryte_string[2187:2268]),\n value=int_from_trits(tryte_string[2268:2295].as_trits()),\n legacy_tag=Tag(tryte_string[2295:2322]),\n timestamp=int_from_trits(tryte_string[2322:2331].as_trits()),\n current_index=int_from_trits(tryte_string[2331:2340].as_trits()),\n last_index=int_from_trits(tryte_string[2340:2349].as_trits()),\n bundle_hash=BundleHash(tryte_string[2349:2430]),\n trunk_transaction_hash=TransactionHash(tryte_string[2430:2511]),\n branch_transaction_hash=TransactionHash(tryte_string[2511:2592]),\n tag=Tag(tryte_string[2592:2619]),\n\n attachment_timestamp=int_from_trits(\n tryte_string[2619:2628].as_trits()),\n\n attachment_timestamp_lower_bound=int_from_trits(\n tryte_string[2628:2637].as_trits()),\n\n attachment_timestamp_upper_bound=int_from_trits(\n tryte_string[2637:2646].as_trits()),\n\n nonce=Nonce(tryte_string[2646:2673]),\n )", "response": "Creates a Transaction object from a sequence of trytes."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a JSON - compatible representation of the object.", "response": "def as_json_compatible(self):\n # type: () -> dict\n \"\"\"\n Returns a JSON-compatible representation of the object.\n\n References:\n\n - :py:class:`iota.json.JsonEncoder`.\n \"\"\"\n return {\n 'hash_': self.hash,\n 'signature_message_fragment': self.signature_message_fragment,\n 'address': self.address,\n 'value': self.value,\n 'legacy_tag': self.legacy_tag,\n 
'timestamp': self.timestamp,\n 'current_index': self.current_index,\n 'last_index': self.last_index,\n 'bundle_hash': self.bundle_hash,\n 'trunk_transaction_hash': self.trunk_transaction_hash,\n 'branch_transaction_hash': self.branch_transaction_hash,\n 'tag': self.tag,\n 'attachment_timestamp': self.attachment_timestamp,\n\n 'attachment_timestamp_lower_bound':\n self.attachment_timestamp_lower_bound,\n\n 'attachment_timestamp_upper_bound':\n self.attachment_timestamp_upper_bound,\n\n 'nonce': self.nonce,\n }"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef as_tryte_string(self):\n # type: () -> TransactionTrytes\n \"\"\"\n Returns a TryteString representation of the transaction.\n \"\"\"\n return TransactionTrytes(\n self.signature_message_fragment\n + self.address.address\n + self.value_as_trytes\n + self.legacy_tag\n + self.timestamp_as_trytes\n + self.current_index_as_trytes\n + self.last_index_as_trytes\n + self.bundle_hash\n + self.trunk_transaction_hash\n + self.branch_transaction_hash\n + self.tag\n + self.attachment_timestamp_as_trytes\n + self.attachment_timestamp_lower_bound_as_trytes\n + self.attachment_timestamp_upper_bound_as_trytes\n + self.nonce\n )", "response": "Returns a string representation of the transaction."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the values needed to validate the transaction s signature_message_fragment value.", "response": "def get_signature_validation_trytes(self):\n # type: () -> TryteString\n \"\"\"\n Returns the values needed to validate the transaction's\n ``signature_message_fragment`` value.\n \"\"\"\n return (\n self.address.address\n + self.value_as_trytes\n + self.legacy_tag\n + self.timestamp_as_trytes\n + self.current_index_as_trytes\n + self.last_index_as_trytes\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the is_confirmed attribute of all the related 
objects.", "response": "def is_confirmed(self, new_is_confirmed):\n # type: (bool) -> None\n \"\"\"\n Sets the ``is_confirmed`` for the bundle.\n \"\"\"\n self._is_confirmed = new_is_confirmed\n\n for txn in self:\n txn.is_confirmed = new_is_confirmed"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a list of all the messages that can be deciphered from the transactions in the bundle.", "response": "def get_messages(self, errors='drop'):\n # type: (Text) -> List[Text]\n \"\"\"\n Attempts to decipher encoded messages from the transactions in\n the bundle.\n\n :param errors:\n How to handle trytes that can't be converted, or bytes that\n can't be decoded using UTF-8:\n\n 'drop'\n Drop the trytes from the result.\n\n 'strict'\n Raise an exception.\n\n 'replace'\n Replace with a placeholder character.\n\n 'ignore'\n Omit the invalid tryte/byte sequence.\n \"\"\"\n decode_errors = 'strict' if errors == 'drop' else errors\n\n messages = []\n\n for group in self.group_transactions():\n # Ignore inputs.\n if group[0].value < 0:\n continue\n\n message_trytes = TryteString(b'')\n for txn in group:\n message_trytes += txn.signature_message_fragment\n\n if message_trytes:\n try:\n messages.append(message_trytes.decode(decode_errors))\n except (TrytesDecodeError, UnicodeDecodeError):\n if errors != 'drop':\n raise\n\n return messages"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef as_tryte_strings(self, head_to_tail=False):\n # type: (bool) -> List[TransactionTrytes]\n \"\"\"\n Returns TryteString representations of the transactions in this\n bundle.\n\n :param head_to_tail:\n Determines the order of the transactions:\n\n - ``True``: head txn first, tail txn last.\n - ``False`` (default): tail txn first, head txn last.\n\n Note that the order is reversed by default, as this is the\n way bundles are typically broadcast to the Tangle.\n \"\"\"\n transactions = self if head_to_tail 
else reversed(self)\n return [t.as_tryte_string() for t in transactions]", "response": "Returns a list of TryteStrings representation of the transactions in this bundle."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngroup the transactions in the bundle by address.", "response": "def group_transactions(self):\n # type: () -> List[List[Transaction]]\n \"\"\"\n Groups transactions in the bundle by address.\n \"\"\"\n groups = []\n\n if self:\n last_txn = self.tail_transaction\n current_group = [last_txn]\n for current_txn in self.transactions[1:]:\n # Transactions are grouped by address, so as long as the\n # address stays consistent from one transaction to\n # another, we are still in the same group.\n if current_txn.address == last_txn.address:\n current_group.append(current_txn)\n else:\n groups.append(current_group)\n current_group = [current_txn]\n\n last_txn = current_txn\n\n if current_group:\n groups.append(current_group)\n\n return groups"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef discover_commands(package, recursively=True):\n # type: (Union[ModuleType, Text], bool) -> Dict[Text, 'CommandMeta']\n \"\"\"\n Automatically discover commands in the specified package.\n\n :param package:\n Package path or reference.\n\n :param recursively:\n If True, will descend recursively into sub-packages.\n\n :return:\n All commands discovered in the specified package, indexed by\n command name (note: not class name).\n \"\"\"\n # http://stackoverflow.com/a/25562415/\n if isinstance(package, string_types):\n package = import_module(package) # type: ModuleType\n\n commands = {}\n\n for _, name, is_package in walk_packages(package.__path__, package.__name__ + '.'):\n # Loading the module is good enough; the CommandMeta metaclass will\n # ensure that any commands in the module get registered.\n\n # Prefix in name module move to function \"walk_packages\" for fix\n # conflict with names importing 
packages\n # Bug https://github.com/iotaledger/iota.lib.py/issues/63\n sub_package = import_module(name)\n\n # Index any command classes that we find.\n for (_, obj) in get_members(sub_package):\n if is_class(obj) and isinstance(obj, CommandMeta):\n command_name = getattr(obj, 'command')\n if command_name:\n commands[command_name] = obj\n\n if recursively and is_package:\n commands.update(discover_commands(sub_package))\n\n return commands", "response": "Automatically discover commands in the specified package."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nexecuting the request object to the adapter and returns the response.", "response": "def _execute(self, request):\n # type: (dict) -> dict\n \"\"\"\n Sends the request object to the adapter and returns the response.\n\n The command name will be automatically injected into the request\n before it is sent (note: this will modify the request object).\n \"\"\"\n request['command'] = self.command\n return self.adapter.send_request(request)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\napply a filter to a value.", "response": "def _apply_filter(value, filter_, failure_message):\n # type: (dict, Optional[f.BaseFilter], Text) -> dict\n \"\"\"\n Applies a filter to a value. 
If the value does not pass the\n filter, an exception will be raised with lots of contextual info\n attached to it.\n \"\"\"\n if filter_:\n runner = f.FilterRunner(filter_, value)\n\n if runner.is_valid():\n return runner.cleaned_data\n else:\n raise with_context(\n exc = ValueError(\n '{message} ({error_codes}) '\n '(`exc.context[\"filter_errors\"]` '\n 'contains more information).'.format(\n message = failure_message,\n error_codes = runner.error_codes,\n ),\n ),\n\n context = {\n 'filter_errors': runner.get_errors(with_context=True),\n },\n )\n\n return value"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_jobs_url(self, job_id):\n # type: (Text) -> Text\n \"\"\"\n Returns the URL to check job status.\n\n :param job_id:\n The ID of the job to check.\n \"\"\"\n return compat.urllib_parse.urlunsplit((\n self.uri.scheme,\n self.uri.netloc,\n self.uri.path.rstrip('/') + '/jobs/' + job_id,\n self.uri.query,\n self.uri.fragment,\n ))", "response": "Returns the URL to check job status."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef errors(self):\n # type: () -> List[Text]\n \"\"\"\n Returns all errors found with the bundle.\n \"\"\"\n try:\n self._errors.extend(self._validator) # type: List[Text]\n except StopIteration:\n pass\n\n return self._errors", "response": "Returns all errors found with the bundle."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef is_valid(self):\n # type: () -> bool\n \"\"\"\n Returns whether the bundle is valid.\n \"\"\"\n if not self._errors:\n try:\n # We only have to check for a single error to determine\n # if the bundle is valid or not.\n self._errors.append(next(self._validator))\n except StopIteration:\n pass\n\n return not self._errors", "response": "Returns whether the bundle is valid."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 
script for\ncreating a generator that does all the work.", "response": "def _create_validator(self):\n # type: () -> Generator[Text, None, None]\n \"\"\"\n Creates a generator that does all the work.\n \"\"\"\n # Group transactions by address to make it easier to iterate\n # over inputs.\n grouped_transactions = self.bundle.group_transactions()\n\n # Define a few expected values.\n bundle_hash = self.bundle.hash\n last_index = len(self.bundle) - 1\n\n # Track a few others as we go along.\n balance = 0\n\n # Check indices and balance first.\n # Note that we use a counter to keep track of the current index,\n # since at this point we can't trust that the transactions have\n # correct ``current_index`` values.\n counter = 0\n for group in grouped_transactions:\n for txn in group:\n balance += txn.value\n\n if txn.bundle_hash != bundle_hash:\n yield 'Transaction {i} has invalid bundle hash.'.format(\n i=counter,\n )\n\n if txn.current_index != counter:\n yield (\n 'Transaction {i} has invalid current index value '\n '(expected {i}, actual {actual}).'.format(\n actual=txn.current_index,\n i=counter,\n )\n )\n\n if txn.last_index != last_index:\n yield (\n 'Transaction {i} has invalid last index value '\n '(expected {expected}, actual {actual}).'.format(\n actual=txn.last_index,\n expected=last_index,\n i=counter,\n )\n )\n\n counter += 1\n\n # Bundle must be balanced (spends must match inputs).\n if balance != 0:\n yield (\n 'Bundle has invalid balance '\n '(expected 0, actual {actual}).'.format(\n actual=balance,\n )\n )\n\n # Signature validation is only meaningful if the transactions\n # are otherwise valid.\n if not self._errors:\n signature_validation_queue = [] # type: List[List[Transaction]]\n\n for group in grouped_transactions:\n # Signature validation only applies to inputs.\n if group[0].value >= 0:\n continue\n\n validate_group_signature = True\n for j, txn in enumerate(group):\n if (j > 0) and (txn.value != 0):\n # Input is malformed; signature fragments 
after\n # the first should have zero value.\n yield (\n 'Transaction {i} has invalid value '\n '(expected 0, actual {actual}).'.format(\n actual=txn.value,\n\n # If we get to this point, we know that\n # the ``current_index`` value for each\n # transaction can be trusted.\n i=txn.current_index,\n )\n )\n\n # We won't be able to validate the signature,\n # but continue anyway, so that we can check that\n # the other transactions in the group have the\n # correct ``value``.\n validate_group_signature = False\n continue\n\n # After collecting the signature fragment from each\n # transaction in the group, queue them up to run through\n # the validator.\n #\n # We have to perform signature validation separately so\n # that we can try different algorithms (for\n # backwards-compatibility).\n #\n # References:\n #\n # - https://github.com/iotaledger/kerl#kerl-integration-in-iota\n if validate_group_signature:\n signature_validation_queue.append(group)\n\n # Once we've finished checking the attributes from each\n # transaction in the bundle, go back and validate\n # signatures.\n if signature_validation_queue:\n # ``yield from`` is an option here, but for\n # compatibility with Python 2 clients, we will do it the\n # old-fashioned way.\n for error in self._get_bundle_signature_errors(\n signature_validation_queue\n ):\n yield error"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_bundle_signature_errors(self, groups):\n # type: (List[List[Transaction]]) -> List[Text]\n \"\"\"\n Validates the signature fragments in the bundle.\n\n :return:\n List of error messages.\n If empty, signature fragments are valid.\n \"\"\"\n # Start with the currently-supported hash algo.\n current_pos = None\n current_errors = []\n for current_pos, group in enumerate(groups):\n error = self._get_group_signature_error(group, SUPPORTED_SPONGE)\n if error:\n current_errors.append(error)\n\n # Pause and retry with the 
legacy algo.\n break\n\n # If validation failed, then go back and try with the legacy\n # algo (only applies if we are currently transitioning to a new\n # algo).\n if current_errors and LEGACY_SPONGE:\n for group in groups:\n # noinspection PyTypeChecker\n if self._get_group_signature_error(group, LEGACY_SPONGE):\n # Legacy algo doesn't work, either; no point in\n # continuing.\n break\n else:\n # If we get here, then we were able to validate the\n # signature fragments successfully using the legacy\n # algorithm.\n return []\n\n # If we get here, then validation also failed when using the\n # legacy algorithm.\n\n # At this point, we know that the bundle is invalid, but we will\n # continue validating with the supported algorithm anyway, so\n # that we can return an error message for every invalid input.\n current_errors.extend(filter(None, (\n self._get_group_signature_error(group, SUPPORTED_SPONGE)\n for group in groups[current_pos + 1:]\n )))\n\n return current_errors", "response": "Validates the signature fragments in the bundle."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_group_signature_error(group, sponge_type):\n # type: (List[Transaction], type) -> Optional[Text]\n \"\"\"\n Validates the signature fragments for a group of transactions\n using the specified sponge type.\n\n Note: this method assumes that the transactions in the group\n have already passed basic validation (see\n :py:meth:`_create_validator`).\n\n :return:\n - ``None``: Indicates that the signature fragments are valid.\n - ``Text``: Error message indicating the fragments are invalid.\n \"\"\"\n validate_group_signature = validate_signature_fragments(\n fragments=[txn.signature_message_fragment for txn in group],\n hash_=group[0].bundle_hash,\n public_key=group[0].address,\n sponge_type=sponge_type,\n )\n\n if validate_group_signature:\n return None\n\n return (\n 'Transaction {i} has invalid signature '\n '(using 
{fragments} fragments).'.format(\n fragments=len(group),\n i=group[0].current_index,\n )\n )", "response": "Validate the signature fragments for a group of transactions."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _traverse_bundle(self, txn_hash, target_bundle_hash=None):\n # type: (TransactionHash, Optional[BundleHash]) -> List[Transaction]\n \"\"\"\n Recursively traverse the Tangle, collecting transactions until\n we hit a new bundle.\n\n This method is (usually) faster than ``findTransactions``, and\n it ensures we don't collect transactions from replayed bundles.\n \"\"\"\n trytes = (\n GetTrytesCommand(self.adapter)(hashes=[txn_hash])['trytes']\n ) # type: List[TryteString]\n\n if not trytes:\n raise with_context(\n exc=BadApiResponse(\n 'Bundle transactions not visible '\n '(``exc.context`` has more info).',\n ),\n\n context={\n 'transaction_hash': txn_hash,\n 'target_bundle_hash': target_bundle_hash,\n },\n )\n\n transaction = Transaction.from_tryte_string(trytes[0])\n\n if (not target_bundle_hash) and transaction.current_index:\n raise with_context(\n exc=BadApiResponse(\n '``_traverse_bundle`` started with a non-tail transaction '\n '(``exc.context`` has more info).',\n ),\n\n context={\n 'transaction_object': transaction,\n 'target_bundle_hash': target_bundle_hash,\n },\n )\n\n if target_bundle_hash:\n if target_bundle_hash != transaction.bundle_hash:\n # We've hit a different bundle; we can stop now.\n return []\n else:\n target_bundle_hash = transaction.bundle_hash\n\n if transaction.current_index == transaction.last_index == 0:\n # Bundle only has one transaction.\n return [transaction]\n\n # Recursively follow the trunk transaction, to fetch the next\n # transaction in the bundle.\n return [transaction] + self._traverse_bundle(\n txn_hash=transaction.trunk_transaction_hash,\n target_bundle_hash=target_bundle_hash\n )", "response": "Traverse the bundle and return a list of transactions."} {"SOURCE": 
"codesearchnet", "instruction": "Can you create a Python 3 function that\ngenerates a random seed using a CSPRNG.", "response": "def random(cls, length=Hash.LEN):\n \"\"\"\n Generates a random seed using a CSPRNG.\n\n :param length:\n Length of seed, in trytes.\n\n For maximum security, this should always be set to 81, but\n you can change it if you're 110% sure you know what you're\n doing.\n\n See https://iota.stackexchange.com/q/249 for more info.\n \"\"\"\n return super(Seed, cls).random(length)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_digest(self):\n # type: () -> Digest\n \"\"\"\n Generates the digest used to do the actual signing.\n\n Signing keys can have variable length and tend to be quite long,\n which makes them not-well-suited for use in crypto algorithms.\n\n The digest is essentially the result of running the signing key\n through a PBKDF, yielding a constant-length hash that can be\n used for crypto.\n \"\"\"\n hashes_per_fragment = FRAGMENT_LENGTH // Hash.LEN\n\n key_fragments = self.iter_chunks(FRAGMENT_LENGTH)\n\n # The digest will contain one hash per key fragment.\n digest = [0] * HASH_LENGTH * len(key_fragments)\n\n # Iterate over each fragment in the key.\n for i, fragment in enumerate(key_fragments):\n fragment_trits = fragment.as_trits()\n\n key_fragment = [0] * FRAGMENT_LENGTH\n hash_trits = []\n\n # Within each fragment, iterate over one hash at a time.\n for j in range(hashes_per_fragment):\n hash_start = j * HASH_LENGTH\n hash_end = hash_start + HASH_LENGTH\n hash_trits = fragment_trits[hash_start:hash_end]\n\n for k in range(26):\n sponge = Kerl()\n sponge.absorb(hash_trits)\n sponge.squeeze(hash_trits)\n\n key_fragment[hash_start:hash_end] = hash_trits\n\n # After processing all of the hashes in the fragment,\n # generate a final hash and append it to the digest.\n #\n # Note that we will do this once per fragment in the key, so\n # the longer the key is, the 
longer the digest will be.\n sponge = Kerl()\n sponge.absorb(key_fragment)\n sponge.squeeze(hash_trits)\n\n fragment_hash_start = i * HASH_LENGTH\n fragment_hash_end = fragment_hash_start + HASH_LENGTH\n\n digest[fragment_hash_start:fragment_hash_end] = hash_trits\n\n return Digest(TryteString.from_trits(digest), self.key_index)", "response": "Generates the digest for the given key."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsigns the inputs starting at the specified index.", "response": "def sign_input_transactions(self, bundle, start_index):\n # type: (Bundle, int) -> None\n \"\"\"\n Signs the inputs starting at the specified index.\n\n :param bundle:\n The bundle that contains the input transactions to sign.\n\n :param start_index:\n The index of the first input transaction.\n\n If necessary, the resulting signature will be split across\n subsequent transactions automatically.\n \"\"\"\n\n if not bundle.hash:\n raise with_context(\n exc=ValueError('Cannot sign inputs without a bundle hash!'),\n\n context={\n 'bundle': bundle,\n 'key_index': self.key_index,\n 'start_index': start_index,\n },\n )\n\n from iota.crypto.signing import SignatureFragmentGenerator\n signature_fragment_generator = (\n SignatureFragmentGenerator(self, bundle.hash)\n )\n\n # We can only fit one signature fragment into each transaction,\n # so we have to split the entire signature.\n for j in range(self.security_level):\n # Do lots of validation before we attempt to sign the\n # transaction, and attach lots of context info to any\n # exception.\n #\n # This method is likely to be invoked at a very low level in\n # the application, so if anything goes wrong, we want to\n # make sure it's as easy to troubleshoot as possible!\n try:\n txn = bundle[start_index + j]\n except IndexError as e:\n raise with_context(\n exc=e,\n\n context={\n 'bundle': bundle,\n 'key_index': self.key_index,\n 'current_index': start_index + j,\n },\n )\n\n # Only 
inputs can be signed.\n if txn.value > 0:\n raise with_context(\n exc=ValueError(\n 'Attempting to sign non-input transaction #{i} '\n '(value={value}).'.format(\n i=txn.current_index,\n value=txn.value,\n ),\n ),\n\n context={\n 'bundle': bundle,\n 'key_index': self.key_index,\n 'start_index': start_index,\n },\n )\n\n if txn.signature_message_fragment:\n raise with_context(\n exc=ValueError(\n 'Attempting to sign input transaction #{i}, '\n 'but it has a non-empty fragment '\n '(is it already signed?).'.format(\n i=txn.current_index,\n ),\n ),\n\n context={\n 'bundle': bundle,\n 'key_index': self.key_index,\n 'start_index': start_index,\n },\n )\n\n txn.signature_message_fragment = next(signature_fragment_generator)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nprints out the JSON representation of the object.", "response": "def _repr_pretty_(self, p, cycle):\n \"\"\"\n Makes JSON-serializable objects play nice with IPython's default\n pretty-printer.\n\n Sadly, :py:func:`pprint.pprint` does not have a similar\n mechanism.\n\n References:\n\n - http://ipython.readthedocs.io/en/stable/api/generated/IPython.lib.pretty.html\n - :py:meth:`IPython.lib.pretty.RepresentationPrinter.pretty`\n - :py:func:`pprint._safe_repr`\n \"\"\"\n class_name = type(self).__name__\n\n if cycle:\n p.text('{cls}(...)'.format(\n cls=class_name,\n ))\n else:\n with p.group(\n len(class_name) + 1,\n '{cls}('.format(cls=class_name),\n ')',\n ):\n prepared = self.as_json_compatible()\n\n if isinstance(prepared, Mapping):\n p.text('**')\n elif isinstance(prepared, Iterable):\n p.text('*')\n\n p.pretty(prepared)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef absorb(self, trits, offset=0, length=None):\n # type: (MutableSequence[int], int, Optional[int]) -> None\n \"\"\"\n Absorb trits into the sponge from a buffer.\n\n :param trits:\n Buffer that contains the trits to absorb.\n\n 
:param offset:\n Starting offset in ``trits``.\n\n :param length:\n Number of trits to absorb. Defaults to ``len(trits)``.\n \"\"\"\n # Pad input if necessary, so that it can be divided evenly into\n # hashes.\n # Note that this operation creates a COPY of ``trits``; the\n # incoming buffer is not modified!\n pad = ((len(trits) % TRIT_HASH_LENGTH) or TRIT_HASH_LENGTH)\n trits += [0] * (TRIT_HASH_LENGTH - pad)\n\n if length is None:\n length = len(trits)\n\n if length < 1:\n raise with_context(\n exc=ValueError('Invalid length passed to ``absorb``.'),\n\n context={\n 'trits': trits,\n 'offset': offset,\n 'length': length,\n },\n )\n\n while offset < length:\n stop = min(offset + TRIT_HASH_LENGTH, length)\n\n # If we're copying over a full chunk, zero last trit.\n if stop - offset == TRIT_HASH_LENGTH:\n trits[stop - 1] = 0\n\n signed_nums = conv.convertToBytes(trits[offset:stop])\n\n # Convert signed bytes into their equivalent unsigned\n # representation, in order to use Python's built-in bytes\n # type.\n unsigned_bytes = bytearray(\n conv.convert_sign(b) for b in signed_nums\n )\n\n self.k.update(unsigned_bytes)\n\n offset += TRIT_HASH_LENGTH", "response": "This method absorbs the trits into the sponge."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsqueeze the trits from the sponge into a buffer.", "response": "def squeeze(self, trits, offset=0, length=None):\n # type: (MutableSequence[int], int, Optional[int]) -> None\n \"\"\"\n Squeeze trits from the sponge into a buffer.\n\n :param trits:\n Buffer that will hold the squeezed trits.\n\n IMPORTANT: If ``trits`` is too small, it will be extended!\n\n :param offset:\n Starting offset in ``trits``.\n\n :param length:\n Number of trits to squeeze from the sponge.\n\n If not specified, defaults to :py:data:`TRIT_HASH_LENGTH`\n (i.e., by default, we will try to squeeze exactly 1 hash).\n \"\"\"\n # Pad input if necessary, so that it can be divided evenly into\n # hashes.\n pad = ((len(trits) 
% TRIT_HASH_LENGTH) or TRIT_HASH_LENGTH)\n trits += [0] * (TRIT_HASH_LENGTH - pad)\n\n if length is None:\n # By default, we will try to squeeze one hash.\n # Note that this is different than ``absorb``.\n length = len(trits) or TRIT_HASH_LENGTH\n\n if length < 1:\n raise with_context(\n exc=ValueError('Invalid length passed to ``squeeze``.'),\n\n context={\n 'trits': trits,\n 'offset': offset,\n 'length': length,\n },\n )\n\n while offset < length:\n unsigned_hash = self.k.digest()\n\n if PY2:\n unsigned_hash = map(ord, unsigned_hash) # type: ignore\n\n signed_hash = [conv.convert_sign(b) for b in unsigned_hash]\n\n trits_from_hash = conv.convertToTrits(signed_hash)\n trits_from_hash[TRIT_HASH_LENGTH - 1] = 0\n\n stop = min(TRIT_HASH_LENGTH, length - offset)\n trits[offset:offset + stop] = trits_from_hash[0:stop]\n\n flipped_bytes = bytearray(\n conv.convert_sign(~b) for b in unsigned_hash\n )\n\n # Reset internal state before feeding back in.\n self.reset()\n self.k.update(flipped_bytes)\n\n offset += TRIT_HASH_LENGTH"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a context value to an exception.", "response": "def with_context(exc, context):\n # type: (Exception, dict) -> Exception\n \"\"\"\n Attaches a ``context`` value to an Exception.\n\n Before:\n\n .. code-block:: python\n\n exc = Exception('Frog blast the vent core!')\n exc.context = { ... }\n raise exc\n\n After:\n\n .. code-block:: python\n\n raise with_context(\n exc=Exception('Frog blast the vent core!'),\n context={ ... 
},\n )\n \"\"\"\n if not hasattr(exc, 'context'):\n exc.context = {}\n\n exc.context.update(context)\n return exc"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef SecurityLevel():\n return (\n f.Type(int) |\n f.Min(1) |\n f.Max(3) |\n f.Optional(default=AddressGenerator.DEFAULT_SECURITY_LEVEL)\n )", "response": "Generates a filter chain for validating a security level."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a TryteString representation of the transaction.", "response": "def as_tryte_string(self):\n # type: () -> TryteString\n \"\"\"\n Returns a TryteString representation of the transaction.\n \"\"\"\n if not self.bundle_hash:\n raise with_context(\n exc=RuntimeError(\n 'Cannot get TryteString representation of {cls} instance '\n 'without a bundle hash; call ``bundle.finalize()`` first '\n '(``exc.context`` has more info).'.format(\n cls=type(self).__name__,\n ),\n ),\n\n context={\n 'transaction': self,\n },\n )\n\n return super(ProposedTransaction, self).as_tryte_string()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nincrementing the legacy tag of the current transaction.", "response": "def increment_legacy_tag(self):\n \"\"\"\n Increments the transaction's legacy tag, used to fix insecure\n bundle hashes when finalizing a bundle.\n\n References:\n\n - https://github.com/iotaledger/iota.lib.py/issues/84\n \"\"\"\n self._legacy_tag = (\n Tag.from_trits(add_trits(self.legacy_tag.as_trits(), [1]))\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndetermines the most relevant tag for the bundle.", "response": "def tag(self):\n # type: () -> Tag\n \"\"\"\n Determines the most relevant tag for the bundle.\n \"\"\"\n for txn in reversed(self): # type: ProposedTransaction\n if txn.tag:\n return txn.tag\n\n return Tag(b'')"} {"SOURCE": "codesearchnet", 
"instruction": "How would you explain what the following Python 3 function does\ndef add_transaction(self, transaction):\n # type: (ProposedTransaction) -> None\n \"\"\"\n Adds a transaction to the bundle.\n\n If the transaction message is too long, it will be split\n automatically into multiple transactions.\n \"\"\"\n if self.hash:\n raise RuntimeError('Bundle is already finalized.')\n\n if transaction.value < 0:\n raise ValueError('Use ``add_inputs`` to add inputs to the bundle.')\n\n self._transactions.append(ProposedTransaction(\n address=transaction.address,\n value=transaction.value,\n tag=transaction.tag,\n message=transaction.message[:Fragment.LEN],\n timestamp=transaction.timestamp,\n ))\n\n # If the message is too long to fit in a single transactions,\n # it must be split up into multiple transactions so that it will\n # fit.\n fragment = transaction.message[Fragment.LEN:]\n while fragment:\n self._transactions.append(ProposedTransaction(\n address=transaction.address,\n value=0,\n tag=transaction.tag,\n message=fragment[:Fragment.LEN],\n timestamp=transaction.timestamp,\n ))\n\n fragment = fragment[Fragment.LEN:]", "response": "Adds a transaction to the bundle."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_inputs(self, inputs):\n # type: (Iterable[Address]) -> None\n \"\"\"\n Adds inputs to spend in the bundle.\n\n Note that each input may require multiple transactions, in order\n to hold the entire signature.\n\n :param inputs:\n Addresses to use as the inputs for this bundle.\n\n .. 
important::\n Must have ``balance`` and ``key_index`` attributes!\n Use :py:meth:`iota.api.get_inputs` to prepare inputs.\n \"\"\"\n if self.hash:\n raise RuntimeError('Bundle is already finalized.')\n\n for addy in inputs:\n if addy.balance is None:\n raise with_context(\n exc=ValueError(\n 'Address {address} has null ``balance`` '\n '(``exc.context`` has more info).'.format(\n address=addy,\n ),\n ),\n\n context={\n 'address': addy,\n },\n )\n\n if addy.key_index is None:\n raise with_context(\n exc=ValueError(\n 'Address {address} has null ``key_index`` '\n '(``exc.context`` has more info).'.format(\n address=addy,\n ),\n ),\n\n context={\n 'address': addy,\n },\n )\n\n self._create_input_transactions(addy)", "response": "Adds inputs to spend in the bundle."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef finalize(self):\n # type: () -> None\n \"\"\"\n Finalizes the bundle, preparing it to be attached to the Tangle.\n \"\"\"\n if self.hash:\n raise RuntimeError('Bundle is already finalized.')\n\n if not self:\n raise ValueError('Bundle has no transactions.')\n\n # Quick validation.\n balance = self.balance\n\n if balance < 0:\n if self.change_address:\n self.add_transaction(ProposedTransaction(\n address=self.change_address,\n value=-balance,\n tag=self.tag,\n ))\n else:\n raise ValueError(\n 'Bundle has unspent inputs (balance: {balance}); '\n 'use ``send_unspent_inputs_to`` to create '\n 'change transaction.'.format(\n balance=balance,\n ),\n )\n elif balance > 0:\n raise ValueError(\n 'Inputs are insufficient to cover bundle spend '\n '(balance: {balance}).'.format(\n balance=balance,\n ),\n )\n\n # Generate bundle hash.\n while True:\n sponge = Kerl()\n last_index = len(self) - 1\n\n for i, txn in enumerate(self):\n txn.current_index = i\n txn.last_index = last_index\n\n sponge.absorb(txn.get_signature_validation_trytes().as_trits())\n\n bundle_hash_trits = [0] * HASH_LENGTH\n 
sponge.squeeze(bundle_hash_trits)\n\n bundle_hash = BundleHash.from_trits(bundle_hash_trits)\n\n # Check that we generated a secure bundle hash.\n # https://github.com/iotaledger/iota.lib.py/issues/84\n if any(13 in part for part in normalize(bundle_hash)):\n # Increment the legacy tag and try again.\n tail_transaction = (\n self.tail_transaction\n ) # type: ProposedTransaction\n tail_transaction.increment_legacy_tag()\n else:\n break\n\n # Copy bundle hash to individual transactions.\n for txn in self:\n txn.bundle_hash = bundle_hash\n\n # Initialize signature/message fragment.\n txn.signature_message_fragment = Fragment(txn.message or b'')", "response": "Finalizes the bundle, preparing it to be attached to the Tangle."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsigning the inputs in a finalized bundle.", "response": "def sign_inputs(self, key_generator):\n # type: (KeyGenerator) -> None\n \"\"\"\n Sign inputs in a finalized bundle.\n \"\"\"\n if not self.hash:\n raise RuntimeError('Cannot sign inputs until bundle is finalized.')\n\n # Use a counter for the loop so that we can skip ahead as we go.\n i = 0\n while i < len(self):\n txn = self[i]\n\n if txn.value < 0:\n # In order to sign the input, we need to know the index\n # of the private key used to generate it.\n if txn.address.key_index is None:\n raise with_context(\n exc=ValueError(\n 'Unable to sign input {input}; '\n '``key_index`` is None '\n '(``exc.context`` has more info).'.format(\n input=txn.address,\n ),\n ),\n\n context={\n 'transaction': txn,\n },\n )\n\n if txn.address.security_level is None:\n raise with_context(\n exc=ValueError(\n 'Unable to sign input {input}; '\n '``security_level`` is None '\n '(``exc.context`` has more info).'.format(\n input=txn.address,\n ),\n ),\n\n context={\n 'transaction': txn,\n },\n )\n\n self.sign_input_at(i, key_generator.get_key_for(txn.address))\n\n i += txn.address.security_level\n else:\n # No signature needed (nor even 
possible, in some\n # cases); skip this transaction.\n i += 1"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsigning the input at the specified index.", "response": "def sign_input_at(self, start_index, private_key):\n # type: (int, PrivateKey) -> None\n \"\"\"\n Signs the input at the specified index.\n\n :param start_index:\n The index of the first input transaction.\n\n If necessary, the resulting signature will be split across\n multiple transactions automatically (i.e., if an input has\n ``security_level=2``, you still only need to call\n :py:meth:`sign_input_at` once).\n\n :param private_key:\n The private key that will be used to generate the signature.\n\n .. important::\n Be sure that the private key was generated using the\n correct seed, or the resulting signature will be\n invalid!\n \"\"\"\n if not self.hash:\n raise RuntimeError('Cannot sign inputs until bundle is finalized.')\n\n private_key.sign_input_transactions(self, start_index)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate transactions for the input address.", "response": "def _create_input_transactions(self, addy):\n # type: (Address) -> None\n \"\"\"\n Creates transactions for the specified input address.\n \"\"\"\n self._transactions.append(ProposedTransaction(\n address=addy,\n tag=self.tag,\n\n # Spend the entire address balance; if necessary, we will\n # add a change transaction to the bundle.\n value=-addy.balance,\n ))\n\n # Signatures require additional transactions to store, due to\n # transaction length limit.\n # Subtract 1 to account for the transaction we just added.\n for _ in range(addy.security_level - 1):\n self._transactions.append(ProposedTransaction(\n address=addy,\n tag=self.tag,\n\n # Note zero value; this is a meta transaction.\n value=0,\n ))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef 
convert_value_to_standard_unit(value, symbol='i'):\n # type: (Text, Text) -> float\n \"\"\"\n Converts between any two standard units of iota.\n\n :param value:\n Value (affixed) to convert. For example: '1.618 Mi'.\n\n :param symbol:\n Unit symbol of iota to convert to. For example: 'Gi'.\n\n :return:\n Float as units of given symbol to convert to.\n \"\"\"\n try:\n # Get input value\n value_tuple = value.split()\n amount = float(value_tuple[0])\n except (ValueError, IndexError, AttributeError):\n raise with_context(\n ValueError('Value to convert is not valid.'),\n\n context={\n 'value': value,\n },\n )\n\n try:\n # Set unit symbols and find factor/multiplier.\n unit_symbol_from = value_tuple[1]\n unit_factor_from = float(STANDARD_UNITS[unit_symbol_from])\n unit_factor_to = float(STANDARD_UNITS[symbol])\n except (KeyError, IndexError):\n # Invalid symbol or no factor\n raise with_context(\n ValueError('Invalid IOTA unit.'),\n\n context={\n 'value': value,\n 'symbol': symbol,\n },\n )\n\n return amount * (unit_factor_from / unit_factor_to)", "response": "Converts a value to a standard unit of the given symbol."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef modular_squareroot_in_FQ2(value: FQ2) -> FQ2:\n candidate_squareroot = value ** ((FQ2_order + 8) // 16)\n check = candidate_squareroot ** 2 / value\n if check in eighth_roots_of_unity[::2]:\n x1 = candidate_squareroot / eighth_roots_of_unity[eighth_roots_of_unity.index(check) // 2]\n x2 = -x1\n x1_re, x1_im = x1.coeffs\n x2_re, x2_im = x2.coeffs\n return x1 if (x1_im > x2_im or (x1_im == x2_im and x1_re > x2_re)) else x2\n return None", "response": "Return the modular squareroot in FQ2."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef compress_G1(pt: G1Uncompressed) -> G1Compressed:\n if is_inf(pt):\n # Set c_flag = 1 and b_flag = 1. 
leave a_flag = x = 0\n return G1Compressed(POW_2_383 + POW_2_382)\n else:\n x, y = normalize(pt)\n # Record y's leftmost bit to the a_flag\n a_flag = (y.n * 2) // q\n # Set c_flag = 1 and b_flag = 0\n return G1Compressed(x.n + a_flag * POW_2_381 + POW_2_383)", "response": "Compress a G1 compressed point."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef decompress_G1(z: G1Compressed) -> G1Uncompressed:\n # b_flag == 1 indicates the infinity point\n b_flag = (z % POW_2_383) // POW_2_382\n if b_flag == 1:\n return Z1\n x = z % POW_2_381\n\n # Try solving y coordinate from the equation Y^2 = X^3 + b\n # using quadratic residue\n y = pow((x**3 + b.n) % q, (q + 1) // 4, q)\n\n if pow(y, 2, q) != (x**3 + b.n) % q:\n raise ValueError(\n \"The given point is not on G1: y**2 = x**3 + b\"\n )\n # Choose the y whose leftmost bit is equal to the a_flag\n a_flag = (z % POW_2_382) // POW_2_381\n if (y * 2) // q != a_flag:\n y = q - y\n return (FQ(x), FQ(y), FQ(1))", "response": "Decompress a G1 compressed point into a G1Uncompressed object."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncompresses a G2 point.", "response": "def compress_G2(pt: G2Uncompressed) -> G2Compressed:\n \"\"\"\n The compressed point (z1, z2) has the bit order:\n z1: (c_flag1, b_flag1, a_flag1, x1)\n z2: (c_flag2, b_flag2, a_flag2, x2)\n where\n - c_flag1 is always set to 1\n - b_flag1 indicates infinity when set to 1\n - a_flag1 helps determine the y-coordinate when decompressing,\n - a_flag2, b_flag2, and c_flag2 are always set to 0\n \"\"\"\n if not is_on_curve(pt, b2):\n raise ValueError(\n \"The given point is not on the twisted curve over FQ**2\"\n )\n if is_inf(pt):\n return G2Compressed((POW_2_383 + POW_2_382, 0))\n x, y = normalize(pt)\n x_re, x_im = x.coeffs\n y_re, y_im = y.coeffs\n # Record the leftmost bit of y_im to the a_flag1\n # If y_im happens to be zero, then use the bit of y_re\n a_flag1 = 
(y_im * 2) // q if y_im > 0 else (y_re * 2) // q\n\n # Imaginary part of x goes to z1, real part goes to z2\n # c_flag1 = 1, b_flag1 = 0\n z1 = x_im + a_flag1 * POW_2_381 + POW_2_383\n # a_flag2 = b_flag2 = c_flag2 = 0\n z2 = x_re\n return G2Compressed((z1, z2))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndecompresses a G2Compressed object.", "response": "def decompress_G2(p: G2Compressed) -> G2Uncompressed:\n \"\"\"\n Recovers x and y coordinates from the compressed point (z1, z2).\n \"\"\"\n z1, z2 = p\n\n # b_flag == 1 indicates the infinity point\n b_flag1 = (z1 % POW_2_383) // POW_2_382\n if b_flag1 == 1:\n return Z2\n\n x1 = z1 % POW_2_381\n x2 = z2\n # x1 is the imaginary part, x2 is the real part\n x = FQ2([x2, x1])\n y = modular_squareroot_in_FQ2(x**3 + b2)\n if y is None:\n raise ValueError(\"Failed to find a modular squareroot\")\n\n # Choose the y whose leftmost bit of the imaginary part is equal to the a_flag1\n # If y_im happens to be zero, then use the bit of y_re\n a_flag1 = (z1 % POW_2_382) // POW_2_381\n y_re, y_im = y.coeffs\n if (y_im > 0 and (y_im * 2) // q != a_flag1) or (y_im == 0 and (y_re * 2) // q != a_flag1):\n y = FQ2((y * -1).coeffs)\n\n if not is_on_curve((x, y, FQ2([1, 0])), b2):\n raise ValueError(\n \"The given point is not on the twisted curve over FQ**2\"\n )\n return (x, y, FQ2([1, 0]))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef prime_field_inv(a: int, n: int) -> int:\n if a == 0:\n return 0\n lm, hm = 1, 0\n low, high = a % n, n\n while low > 1:\n r = high // low\n nm, new = hm - lm * r, high - low * r\n lm, low, hm, high = nm, new, lm, low\n return lm % n", "response": "Calculates the prime field inverses for a modular field."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_json_file(cls, filename):\n with open(filename, 'r') as fp:\n 
return cls(json.load(fp))", "response": "Load a lexicon from a JSON file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef find_word_groups(self, text, category, proximity=2):\n f = re.IGNORECASE\n words = getattr(self, category)\n regex = re.compile(r'(\\b' + r'\\b|\\b'.join(words) + r'\\b)', flags=f)\n candidates = regex.finditer(text)\n\n starts, ends = [], []\n groups = []\n\n for item in candidates:\n starts.append(item.span()[0])\n ends.append(item.span()[1])\n groups.append(item.group().lower())\n\n new_starts = [] # As a check only.\n new_groups = [] # This is what I want.\n\n skip = False\n for i, g in enumerate(groups):\n if skip:\n skip = False\n continue\n if (i < len(groups)-1) and (starts[i+1]-ends[i] <= proximity):\n if g[-1] == '-':\n sep = '' # Don't insert spaces after hyphens.\n else:\n sep = ' '\n new_groups.append(g + sep + groups[i+1])\n new_starts.append(starts[i])\n skip = True\n else:\n if g not in new_groups:\n new_groups.append(g)\n new_starts.append(starts[i])\n skip = False\n\n return new_groups", "response": "Given a string and a category find and combine words into groups based on their proximity."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngiving a string and a dict of synonyms returns the preferred word. Case insensitive.", "response": "def find_synonym(self, word):\n \"\"\"\n Given a string and a dict of synonyms, returns the 'preferred'\n word. 
Case insensitive.\n\n Args:\n word (str): A word.\n\n Returns:\n str: The preferred word, or the input word if not found.\n\n Example:\n >>> syn = {'snake': ['python', 'adder']}\n >>> find_synonym('adder', syn)\n 'snake'\n >>> find_synonym('rattler', syn)\n 'rattler'\n\n TODO:\n Make it handle case, returning the same case it received.\n \"\"\"\n if word and self.synonyms:\n # Make the reverse look-up table.\n reverse_lookup = {}\n for k, v in self.synonyms.items():\n for i in v:\n reverse_lookup[i.lower()] = k.lower()\n\n # Now check words against this table.\n if word.lower() in reverse_lookup:\n return reverse_lookup[word.lower()]\n\n return word"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse a piece of text and replace any abbreviations with their full word equivalents.", "response": "def expand_abbreviations(self, text):\n \"\"\"\n Parse a piece of text and replace any abbreviations with their full\n word equivalents. Uses the lexicon.abbreviations dictionary to find\n abbreviations.\n\n Args:\n text (str): The text to parse.\n\n Returns:\n str: The text with abbreviations replaced.\n \"\"\"\n if not self.abbreviations:\n raise LexiconError(\"No abbreviations in lexicon.\")\n\n def chunks(data, SIZE=25):\n \"\"\"\n Regex only supports 100 groups for munging callbacks. 
So we have to\n chunk the abbreviation dictionary.\n \"\"\"\n it = iter(data)\n for i in range(0, len(data), SIZE):\n yield {k: data[k] for k in islice(it, SIZE)}\n\n def cb(g):\n \"\"\"Regex callback\"\"\"\n return self.abbreviations.get(g.group(0)) or g.group(0)\n\n # Special cases.\n\n # TODO: We should handle these with a special set of\n # replacements that are made before the others.\n text = re.sub(r'w/', r'wi', text)\n\n # Main loop.\n for subdict in chunks(self.abbreviations):\n regex = r'(\\b' + r'\\b)|(\\b'.join(subdict.keys()) + r'\\b)'\n text = re.sub(regex, cb, text)\n\n return text"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_component(self, text, required=False, first_only=True):\n component = {}\n\n for i, (category, words) in enumerate(self.__dict__.items()):\n\n # There is probably a more elegant way to do this.\n if category in SPECIAL:\n # There are special entries in the lexicon.\n continue\n\n groups = self.find_word_groups(text, category)\n\n if groups and first_only:\n groups = groups[:1]\n elif groups:\n # groups = groups\n pass\n else:\n groups = [None]\n if required:\n with warnings.catch_warnings():\n warnings.simplefilter(\"always\")\n w = \"No lithology in lexicon matching '{0}'\"\n warnings.warn(w.format(text))\n\n filtered = [self.find_synonym(i) for i in groups]\n if first_only:\n component[category] = filtered[0]\n else:\n component[category] = filtered\n\n return component", "response": "Takes a piece of text representing a lithologic description for a one-term component and turns it into a dictionary containing the attributes of the component."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsplit a description into parts each of which can be turned into a single component.", "response": "def split_description(self, text):\n \"\"\"\n Split a description into parts, each of which can be turned into\n a single
component.\n \"\"\"\n # Protect some special sequences.\n t = re.sub(r'(\\d) ?in\\. ', r'\\1 inch ', text) # Protect.\n t = re.sub(r'(\\d) ?ft\\. ', r'\\1 feet ', t) # Protect.\n\n # Transform all part delimiters to first splitter.\n words = getattr(self, 'splitters')\n try:\n splitter = words[0].strip()\n except:\n splitter = 'with'\n t = re.sub(r'\\,?\\;?\\.? ?((under)?(less than)? \\d+%) (?=\\w)', r' '+splitter+' \\1 ', t)\n\n # Split.\n f = re.IGNORECASE\n pattern = re.compile(r'(?:' + r'|'.join(words) + r')', flags=f)\n parts = filter(None, pattern.split(t))\n\n return [i.strip() for i in parts]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nlisting the categories in the lexicon except the optional categories.", "response": "def categories(self):\n \"\"\"\n Lists the categories in the lexicon, except the\n optional categories.\n\n Returns:\n list: A list of strings of category names.\n \"\"\"\n keys = [k for k in self.__dict__.keys() if k not in SPECIAL]\n return keys"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _repr_html_row_(self, keys):\n tr, th, c = '', '', ''\n r = '{v}'\n h = '{k}'\n for k in keys:\n v = self.__dict__.get(k)\n\n if k == '_colour':\n k = 'colour'\n c = utils.text_colour_for_hex(v)\n style = 'color:{}; background-color:{}'.format(c, v)\n else:\n style = 'color:black; background-color:white'\n\n if k == 'component':\n try:\n v = v._repr_html_()\n except AttributeError:\n v = v.__repr__()\n\n tr += r.format(v=v, stl=style)\n th += h.format(k=k)\n\n return th, tr", "response": "Jupyter Notebook magic repr function as a row \u2013\u00a0used by Legend. 
_repr_html_"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef random(cls, component):\n colour = random.sample([i for i in range(256)], 3)\n return cls({'colour': colour, 'component': component, 'width': 1.0})", "response": "Returns a minimal Decor with a random colour."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmakes a simple plot of the component summaries of the current object.", "response": "def plot(self, fmt=None, fig=None, ax=None):\n \"\"\"\n Make a simple plot of the Decor.\n\n Args:\n fmt (str): A Python format string for the component summaries.\n fig (Pyplot figure): A figure, optional. Use either fig or ax, not\n both.\n ax (Pyplot axis): An axis, optional. Use either fig or ax, not\n both.\n\n Returns:\n fig or ax or None. If you pass in an ax, you get it back. If you pass\n in a fig, you get it. If you pass nothing, the function creates a\n plot object as a side-effect.\n \"\"\"\n\n u = 4 # aspect ratio of decor plot\n v = 0.25 # ratio of decor tile width\n\n r = None\n\n if (fig is None) and (ax is None):\n fig = plt.figure(figsize=(u, 1))\n else:\n r = fig\n\n if ax is None:\n ax = fig.add_axes([0.1*v, 0.1, 0.8*v, 0.8])\n else:\n r = ax\n\n rect1 = patches.Rectangle((0, 0),\n u*v, u*v,\n color=self.colour,\n lw=1,\n hatch=self.hatch,\n ec='k')\n ax.add_patch(rect1)\n ax.text(1.0+0.1*v*u, u*v*0.5,\n self.component.summary(fmt=fmt),\n fontsize=max(u, 15),\n verticalalignment='center',\n horizontalalignment='left')\n ax.set_xlim([0, u*v])\n ax.set_ylim([0, u*v])\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n ax.invert_yaxis()\n\n return r"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef builtin(cls, name):\n names = {\n 'nsdoe': LEGEND__NSDOE,\n 'canstrat': LEGEND__Canstrat,\n 'nagmdm__6_2': LEGEND__NAGMDM__6_2,\n 'nagmdm__6_1': LEGEND__NAGMDM__6_1,\n 'nagmdm__4_3': 
LEGEND__NAGMDM__4_3,\n 'sgmc': LEGEND__SGMC,\n }\n return cls.from_csv(text=names[name.lower()])", "response": "Generate a builtin legend."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef builtin_timescale(cls, name):\n names = {\n 'isc': TIMESCALE__ISC,\n 'usgs_isc': TIMESCALE__USGS_ISC,\n 'dnag': TIMESCALE__DNAG,\n }\n return cls.from_csv(text=names[name.lower()])", "response": "Generate a default timescale legend from its name."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate a random legend for a given list of components.", "response": "def random(cls, components, width=False, colour=None):\n \"\"\"\n Generate a random legend for a given list of components.\n\n Args:\n components (list or Striplog): A list of components. If you pass\n a Striplog, it will use the primary components. If you pass a\n component on its own, you will get a random Decor.\n width (bool): Also generate widths for the components, based on the\n order in which they are encountered.\n colour (str): If you want to give the Decors all the same colour,\n provide a hex string.\n Returns:\n Legend or Decor: A legend (or Decor) with random colours.\n TODO:\n It might be convenient to have a partial method to generate an\n 'empty' legend. 
Might be an easy way for someone to start with a\n template, since it'll have the components in it already.\n \"\"\"\n try: # Treating as a Striplog.\n list_of_Decors = [Decor.random(c)\n for c\n in [i[0] for i in components.unique if i[0]]\n ]\n except:\n try:\n list_of_Decors = [Decor.random(c) for c in components.copy()]\n except:\n # It's a single component.\n list_of_Decors = [Decor.random(components)]\n\n if colour is not None:\n for d in list_of_Decors:\n d.colour = colour\n\n if width:\n for i, d in enumerate(list_of_Decors):\n d.width = i + 1\n\n return cls(list_of_Decors)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new object from a list of components and a file.", "response": "def from_image(cls, filename, components,\n ignore=None,\n col_offset=0.1,\n row_offset=2):\n \"\"\"\n A slightly easier way to make legends from images.\n\n Args:\n filename (str)\n components (list)\n ignore (list): Colours to ignore, e.g. \"#FFFFFF\" to ignore white.\n col_offset (Number): If < 1, interpreted as proportion of way\n across the image. If > 1, interpreted as pixels from left.\n row_offset (int): Number of pixels to skip at the top of each\n interval.\n \"\"\"\n if ignore is None:\n ignore = []\n\n rgb = utils.loglike_from_image(filename, offset=col_offset)\n loglike = np.array([utils.rgb_to_hex(t) for t in rgb])\n\n # Get the pixels and colour values at 'tops' (i.e. 
changes).\n _, hexes = utils.tops_from_loglike(loglike, offset=row_offset)\n\n # Reduce to unique colours.\n hexes_reduced = []\n for h in hexes:\n if h not in hexes_reduced:\n if h not in ignore:\n hexes_reduced.append(h)\n\n list_of_Decors = []\n for i, c in enumerate(components):\n d = Decor({'colour': hexes_reduced[i], 'component': c})\n list_of_Decors.append(d)\n\n return cls(list_of_Decors)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreading a CSV file and generate a new Legend object.", "response": "def from_csv(cls, filename=None, text=None):\n \"\"\"\n Read CSV text and generate a Legend.\n\n Args:\n string (str): The CSV string.\n\n In the first row, list the properties. Precede the properties of the\n component with 'comp ' or 'component '. For example:\n\n colour, width, comp lithology, comp colour\n #FFFFFF, 0, ,\n #F7E9A6, 3, Sandstone, Grey\n #FF99CC, 2, Anhydrite,\n ... etc\n\n Note:\n To edit a legend, the easiest thing to do is probably this:\n\n - `legend.to_csv()`\n - Edit the legend, call it `new_legend`.\n - `legend = Legend.from_csv(text=new_legend)`\n \"\"\"\n if (filename is None) and (text is None):\n raise LegendError(\"You must provide a filename or CSV text.\")\n\n if (filename is not None):\n with open(filename, 'r') as f:\n text = f.read()\n\n try:\n f = StringIO(text) # Python 3\n except TypeError:\n f = StringIO(unicode(text)) # Python 2\n\n r = csv.DictReader(f, skipinitialspace=True)\n list_of_Decors, components = [], []\n kind = 'component'\n for row in r:\n d, component = {}, {}\n for (k, v) in row.items():\n if (k in [None, '']):\n continue\n if (v in [None, '']):\n if k.lower() not in ['color', 'colour']:\n continue\n if k[:4].lower() == 'comp':\n prop = ' '.join(k.split()[1:])\n if v.lower() == 'true':\n component[prop] = True\n elif v.lower() == 'false':\n component[prop] = False\n else:\n try:\n component[prop] = float(v)\n except ValueError:\n component[prop] = v.lower()\n\n elif 
k[:5].lower() == 'curve':\n prop = ' '.join(k.split()[1:])\n component[prop] = v.lower()\n kind = 'curve'\n else:\n try:\n d[k] = float(v)\n except ValueError:\n d[k] = v.lower()\n\n this_component = Component(component)\n d[kind] = this_component\n\n # Check for duplicates and warn.\n if this_component in components:\n with warnings.catch_warnings():\n warnings.simplefilter(\"always\")\n w = \"This legend contains duplicate components.\"\n warnings.warn(w)\n components.append(this_component)\n\n # Append to the master list and continue.\n list_of_Decors.append(Decor(d))\n\n return cls(list_of_Decors)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_csv(self):\n # We can't delegate this to Decor because we need to know the superset\n # of all Decor properties. There may be lots of blanks.\n header = []\n component_header = []\n for row in self:\n for j in row.__dict__.keys():\n if j == '_colour':\n j = 'colour'\n header.append(j)\n for k in row.component.__dict__.keys():\n component_header.append(k)\n header = set(header)\n component_header = set(component_header)\n header.remove('component')\n header_row = ''\n if 'colour' in header:\n header_row += 'colour,'\n header.remove('colour')\n has_colour = True\n for item in header:\n header_row += item + ','\n for item in component_header:\n header_row += 'component ' + item + ','\n\n # Now we have a header row! 
Phew.\n # Next we'll go back over the legend and collect everything.\n result = header_row.strip(',') + '\\n'\n for row in self:\n if has_colour:\n result += row.__dict__.get('_colour', '') + ','\n for item in header:\n result += str(row.__dict__.get(item, '')) + ','\n for item in component_header:\n result += str(row.component.__dict__.get(item, '')) + ','\n result += '\\n'\n\n return result", "response": "Renders a legend as a CSV string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef max_width(self):\n try:\n maximum = max([row.width for row in self.__list if row.width is not None])\n return maximum\n except:\n return 0", "response": "Returns the maximum width of all the Decors in the Legend."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_decor(self, c, match_only=None):\n if isinstance(c, Component):\n if c:\n if match_only:\n # Filter the component only those attributes\n c = Component({k: getattr(c, k, None) for k in match_only})\n for decor in self.__list:\n try:\n if c == decor.component:\n return decor\n except AttributeError:\n continue\n else:\n for decor in self.__list:\n try:\n if getattr(c, 'mnemonic').lower() == decor.curve.mnemonic:\n return decor\n except AttributeError:\n continue\n return Decor({'colour': '#eeeeee', 'component': Component()})", "response": "Get the decor for a component."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the attribute of a component.", "response": "def getattr(self, c, attr, default=None, match_only=None):\n \"\"\"\n Get the attribute of a component.\n\n Args:\n c (component): The component to look up.\n attr (str): The attribute to get.\n default (str): What to return in the event of no match.\n match_only (list of str): The component attributes to include in the\n comparison. Default: All of them.\n\n Returns:\n obj. 
The specified attribute of the matching Decor in the Legend.\n \"\"\"\n matching_decor = self.get_decor(c, match_only=match_only)\n\n try:\n return getattr(matching_decor, attr)\n except AttributeError:\n return default"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the display colour of a component.", "response": "def get_colour(self, c, default='#eeeeee', match_only=None):\n \"\"\"\n Get the display colour of a component. Wraps `getattr()`.\n\n Development note:\n Cannot define this as a `partial()` because I want\n to maintain the order of arguments in `getattr()`.\n\n Args:\n c (component): The component to look up.\n default (str): The colour to return in the event of no match.\n match_only (list of str): The component attributes to include in the\n comparison. Default: All of them.\n\n Returns:\n str. The hex string of the matching Decor in the Legend.\n \"\"\"\n return self.getattr(c=c,\n attr='colour',\n default=default,\n match_only=match_only)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the display width of a component.", "response": "def get_width(self, c, default=0, match_only=None):\n \"\"\"\n Get the display width of a component. Wraps `getattr()`.\n\n Development note: Cannot define this as a `partial()` because I want\n to maintain the order of arguments in `getattr()`.\n\n Args:\n c (component): The component to look up.\n default (float): The width to return in the event of no match.\n match_only (list of str): The component attributes to include in the\n comparison. Default: All of them.\n\n Returns:\n float. 
The width of the matching Decor in the Legend.\n \"\"\"\n return self.getattr(c=c,\n attr='width',\n default=default,\n match_only=match_only)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_component(self, colour, tolerance=0, default=None):\n if not (0 <= tolerance <= np.sqrt(195075)):\n raise LegendError('Tolerance must be between 0 and 441.67')\n\n for decor in self.__list:\n if colour.lower() == decor.colour:\n return decor.component\n\n # If we're here, we didn't find one yet.\n r1, g1, b1 = utils.hex_to_rgb(colour)\n\n # Start with a best match of black.\n best_match = '#000000'\n best_match_dist = np.sqrt(r1**2. + g1**2. + b1**2.)\n\n # Now compare to each colour in the legend.\n for decor in self.__list:\n r2, g2, b2 = decor.rgb\n distance = np.sqrt((r2-r1)**2. + (g2-g1)**2. + (b2-b1)**2.)\n if distance < best_match_dist:\n best_match = decor.component\n best_match_dist = distance\n best_match_colour = decor.colour\n\n if best_match_dist <= tolerance:\n return best_match\n else:\n with warnings.catch_warnings():\n warnings.simplefilter(\"always\")\n w = \"No match found for {0} \".format(colour.lower())\n w += \"with tolerance of {0}. Best match is \".format(tolerance)\n w += \"{0}, {1}\".format(best_match.summary(), best_match_colour)\n w += \", d={0}\".format(best_match_dist)\n warnings.warn(w)\n\n return default", "response": "Returns the component corresponding to a display colour."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef plot(self, fmt=None):\n for d in self.__list:\n d.plot(fmt=fmt)\n\n return None", "response": "Make a simple plot of the current set of items."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _repr_html_(self):\n rows = ''\n s = '<tr><td><strong>{k}</strong></td><td>{v}</td></tr>'\n for k, v in self.__dict__.items():\n rows += s.format(k=k, v=v)\n html = '<table>{}</table>'.format(rows)\n return html", "response": "Jupyter Notebook magic repr function."}
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef from_text(cls, text, lexicon, required=None, first_only=True):\n component = lexicon.get_component(text, first_only=first_only)\n if required and (required not in component):\n return None\n else:\n return cls(component)", "response": "Generates a Component object from a text string using a Lexicon."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef summary(self, fmt=None, initial=True, default=''):\n if default and not self.__dict__:\n return default\n\n if fmt == '':\n return default\n\n keys = [k for k, v in self.__dict__.items() if v is not '']\n\n f = fmt or '{' + '}, {'.join(keys) + '}'\n\n try:\n summary = CustomFormatter().format(f, **self.__dict__)\n except KeyError as e:\n raise ComponentError(\"Error building summary, \"+str(e))\n\n if summary and initial and not fmt:\n summary = summary[0].upper() + summary[1:]\n\n return summary", "response": "Returns a summary string of a component."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef Rock(*args, **kwargs):\n\n with warnings.catch_warnings():\n warnings.simplefilter(\"always\")\n w = \"The 'Rock' class was renamed 'Component'. \"\n w += \"Please update your code.\"\n warnings.warn(w, DeprecationWarning, stacklevel=2)\n\n return Component(*args, **kwargs)", "response": "Deprecated. 
Use Component instead."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nprocess a single row from the file.", "response": "def _process_row(text, columns):\n \"\"\"\n Processes a single row from the file.\n \"\"\"\n if not text:\n return\n\n # Construct the column dictionary that maps each field to\n # its start, its length, and its read and write functions.\n coldict = {k: {'start': s,\n 'len': l,\n 'read': r,\n 'write': w} for k, (s, l, r, w) in columns.items()}\n\n # Now collect the item\n item = {}\n for field in coldict:\n value = _get_field(text, coldict, field)\n if value is not None:\n item[field] = value\n\n return item"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_canstrat(text):\n result = {}\n for row in text.split('\\n'):\n if not row:\n continue\n\n if len(row) < 8: # Not a real record.\n continue\n\n # Read the metadata for this row/\n row_header = _process_row(row, columns_) or {'card': None}\n card = row_header['card']\n\n # Now we know the card type for this row, we can process it.\n if card is not None:\n item = _process_row(row, columns[card])\n\n this_list = result.get(card, [])\n this_list.append(item)\n result[card] = this_list\n\n # Flatten if possible.\n for c, d in result.items():\n if len(d) == 1:\n result[c] = d[0]\n\n return result", "response": "Read all the rows and return a dict of the results."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the template from the given name.", "response": "def get_template(name):\n \"\"\"\n Still unsure about best way to do this, hence cruft.\n \"\"\"\n text = re.sub(r'\\r\\n', r'\\n', name)\n text = re.sub(r'\\{([FISDE\u00b0].*?)\\}', r'{{\\1}}', text)\n return text"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef unique(self):\n all_rx = set([iv.primary for iv in self])\n table = {r: 0 for r 
in all_rx}\n for iv in self:\n table[iv.primary] += iv.thickness\n\n return sorted(table.items(), key=operator.itemgetter(1), reverse=True)", "response": "Property. Summarize a Striplog with some statistics."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef top(self):\n # For backwards compatibility.\n with warnings.catch_warnings():\n warnings.simplefilter(\"always\")\n w = \"Striplog.top is deprecated; please use Striplog.unique\"\n warnings.warn(w, DeprecationWarning, stacklevel=2)\n return self.unique", "response": "Return the top entry of the log."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef __intervals_from_tops(self,\n tops,\n values,\n basis,\n components,\n field=None,\n ignore_nan=True):\n \"\"\"\n Private method. Take a sequence of tops in an arbitrary dimension,\n and provide a list of intervals from which a striplog can be made.\n\n This is only intended to be used by ``from_image()``.\n\n Args:\n tops (iterable). A list of floats.\n values (iterable). A list of values to look up.\n basis (iterable). A list of components.\n components (iterable). A list of Components.\n\n Returns:\n List. 
A list of Intervals.\n \"\"\"\n # Scale tops to actual depths.\n length = float(basis.size)\n start, stop = basis[0], basis[-1]\n tops = [start + (p/(length-1)) * (stop-start) for p in tops]\n bases = tops[1:] + [stop]\n\n list_of_Intervals = []\n for i, t in enumerate(tops):\n\n v, c, d = values[i], [], {}\n\n if ignore_nan and np.isnan(v):\n continue\n\n if (field is not None):\n d = {field: v}\n\n if components is not None:\n try:\n c = [deepcopy(components[int(v)])]\n except IndexError:\n c = []\n\n if c and (c[0] is None):\n c = []\n\n interval = Interval(t, bases[i], data=d, components=c)\n list_of_Intervals.append(interval)\n\n return list_of_Intervals", "response": "Private method that returns a list of Intervals."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _clean_longitudinal_data(cls, data, null=None):\n\n # Rename 'depth' or 'MD'\n if ('top' not in data.keys()):\n data['top'] = data.pop('depth', data.pop('MD', None))\n\n # Sort everything\n idx = list(data.keys()).index('top')\n values = sorted(zip(*data.values()), key=lambda x: x[idx])\n data = {k: list(v) for k, v in zip(data.keys(), zip(*values))}\n\n if data['top'] is None:\n raise StriplogError('Could not get tops.')\n\n # Get rid of null-like values if specified.\n if null is not None:\n for k, v in data.items():\n data[k] = [i if i != null else None for i in v]\n\n return data", "response": "Private function to clean up the longitudinal data."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef from_petrel(cls, filename,\n stop=None,\n points=False,\n null=None,\n function=None,\n include=None,\n exclude=None,\n remap=None,\n ignore=None):\n\n \"\"\"\n Makes a striplog from a Petrel text file.\n\n Returns:\n striplog.\n \"\"\"\n result = utils.read_petrel(filename,\n function=function,\n remap=remap,\n )\n\n data = cls._clean_longitudinal_data(result,\n 
null=null\n )\n\n list_of_Intervals = cls._build_list_of_Intervals(data,\n stop=stop,\n points=points,\n include=include,\n exclude=exclude,\n ignore=ignore\n )\n if list_of_Intervals:\n return cls(list_of_Intervals)\n return None", "response": "Makes a striplog from a Petrel text file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _build_list_of_Intervals(cls,\n data_dict,\n stop=None,\n points=False,\n include=None,\n exclude=None,\n ignore=None,\n lexicon=None):\n \"\"\"\n Private function. Takes a data dictionary and reconstructs a list\n of Intervals from it.\n\n Args:\n data_dict (dict)\n stop (float): Where to end the last interval.\n points (bool)\n include (dict)\n exclude (dict)\n ignore (list)\n lexicon (Lexicon)\n\n Returns:\n list.\n \"\"\"\n\n include = include or {}\n exclude = exclude or {}\n ignore = ignore or []\n\n # Reassemble as list of dicts\n all_data = []\n for data in zip(*data_dict.values()):\n all_data.append({k: v for k, v in zip(data_dict.keys(), data)})\n\n # Sort\n all_data = sorted(all_data, key=lambda x: x['top'])\n\n # Filter down:\n wanted_data = []\n for dictionary in all_data:\n keep = True\n delete = []\n for k, v in dictionary.items():\n incl = include.get(k, utils.null_default(True))\n excl = exclude.get(k, utils.null_default(False))\n if k in ignore:\n delete.append(k)\n if not incl(v):\n keep = False\n if excl(v):\n keep = False\n if delete:\n for key in delete:\n _ = dictionary.pop(key, None)\n if keep:\n wanted_data.append(dictionary)\n\n # Fill in\n if not points:\n for i, iv in enumerate(wanted_data):\n if iv.get('base', None) is None:\n try: # To set from next interval\n iv['base'] = wanted_data[i+1]['top']\n except (IndexError, KeyError):\n # It's the last interval\n if stop is not None:\n thick = stop - iv['top']\n else:\n thick = 1\n iv['base'] = iv['top'] + thick\n\n # Build the list of intervals to pass to __init__()\n list_of_Intervals = []\n for iv in 
wanted_data:\n top = iv.pop('top')\n base = iv.pop('base', None)\n descr = iv.pop('description', '')\n if iv:\n c, d = {}, {}\n for k, v in iv.items():\n if (k[:5].lower() == 'comp ') or (k[:9].lower() == 'component'):\n k = re.sub(r'comp(?:onent)? ', '', k, flags=re.I)\n c[k] = v # It's a component\n else:\n if v is not None:\n d[k] = v # It's data\n comp = [Component(c)] if c else None\n this = Interval(**{'top': top,\n 'base': base,\n 'description': descr,\n 'data': d,\n 'components': comp})\n else:\n this = Interval(**{'top': top,\n 'base': base,\n 'description': descr,\n 'lexicon': lexicon})\n list_of_Intervals.append(this)\n\n return list_of_Intervals", "response": "Private function to reconstruct a list of Intervals from a dictionary."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef from_csv(cls, filename=None,\n text=None,\n dlm=',',\n lexicon=None,\n points=False,\n include=None,\n exclude=None,\n remap=None,\n function=None,\n null=None,\n ignore=None,\n source=None,\n stop=None,\n fieldnames=None):\n \"\"\"\n Load from a CSV file or text.\n \"\"\"\n if (filename is None) and (text is None):\n raise StriplogError(\"You must provide a filename or CSV text.\")\n\n if (filename is not None):\n if source is None:\n source = filename\n with open(filename, 'r') as f:\n text = f.read()\n\n source = source or 'CSV'\n\n # Deal with multiple spaces in space delimited file.\n if dlm == ' ':\n text = re.sub(r'[ \\t]+', ' ', text)\n\n if fieldnames is not None:\n text = dlm.join(fieldnames) + '\\n' + text\n\n try:\n f = StringIO(text) # Python 3\n except TypeError:\n f = StringIO(unicode(text)) # Python 2\n\n reader = csv.DictReader(f, delimiter=dlm)\n\n # Reorganize the data to make fixing it easier.\n reorg = {k.strip().lower(): [] for k in reader.fieldnames if k is not None}\n t = f.tell()\n for key in reorg:\n f.seek(t)\n for r in reader:\n s = {k.strip().lower(): v.strip() for k, v in 
r.items()}\n try:\n reorg[key].append(float(s[key]))\n except ValueError:\n reorg[key].append(s[key])\n\n f.close()\n\n remap = remap or {}\n for k, v in remap.items():\n reorg[v] = reorg.pop(k)\n\n data = cls._clean_longitudinal_data(reorg, null=null)\n\n list_of_Intervals = cls._build_list_of_Intervals(data,\n points=points,\n lexicon=lexicon,\n include=include,\n exclude=exclude,\n ignore=ignore,\n stop=stop)\n\n return cls(list_of_Intervals, source=source)", "response": "Load a new entry from a CSV file or text."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_descriptions(cls, text,\n lexicon=None,\n source='CSV',\n dlm=',',\n points=False,\n abbreviations=False,\n complete=False,\n order='depth',\n columns=None,\n ):\n \"\"\"\n Convert a CSV string into a striplog. Expects 2 or 3 fields:\n top, description\n OR\n top, base, description\n\n Args:\n text (str): The input text, given by ``well.other``.\n lexicon (Lexicon): A lexicon, required to extract components.\n source (str): A source. Default: 'CSV'.\n dlm (str): The delimiter, given by ``well.dlm``. Default: ','\n points (bool): Whether to treat as points or as intervals.\n abbreviations (bool): Whether to expand abbreviations in the\n description. Default: False.\n complete (bool): Whether to make 'blank' intervals, or just leave\n gaps. Default: False.\n order (str): The order, 'depth' or 'elevation'. 
Default: 'depth'.\n columns (tuple or list): The names of the columns.\n\n Returns:\n Striplog: A ``striplog`` object.\n\n Example:\n # TOP BOT LITH\n 312.34, 459.61, Sandstone\n 459.71, 589.61, Limestone\n 589.71, 827.50, Green shale\n 827.60, 1010.84, Fine sandstone\n \"\"\"\n\n text = re.sub(r'(\\n+|\\r\\n|\\r)', '\\n', text.strip())\n\n as_strings = []\n try:\n f = StringIO(text) # Python 3\n except TypeError:\n f = StringIO(unicode(text)) # Python 2\n reader = csv.reader(f, delimiter=dlm, skipinitialspace=True)\n for row in reader:\n as_strings.append(row)\n f.close()\n\n if not columns:\n if order[0].lower() == 'e':\n columns = ('base', 'top', 'description')\n else:\n columns = ('top', 'base', 'description')\n\n result = {k: [] for k in columns}\n\n # Set the indices for the fields.\n tix = columns.index('top')\n bix = columns.index('base')\n dix = columns.index('description')\n\n for i, row in enumerate(as_strings):\n\n # THIS ONLY WORKS FOR MISSING TOPS!\n if len(row) == 2:\n row = [row[0], None, row[1]]\n\n # TOP\n this_top = float(row[tix])\n\n # THIS ONLY WORKS FOR MISSING TOPS!\n # BASE\n # Base is null: use next top if this isn't the end.\n if row[1] is None:\n if i < len(as_strings)-1:\n this_base = float(as_strings[i+1][0]) # Next top.\n else:\n this_base = this_top + 1 # Default to 1 m thick at end.\n else:\n this_base = float(row[bix])\n\n # DESCRIPTION\n this_descr = row[dix].strip()\n\n # Deal with making intervals or points...\n if not points:\n # Insert intervals where needed.\n if complete and (i > 0) and (this_top != result['base'][-1]):\n result['top'].append(result['base'][-1])\n result['base'].append(this_top)\n result['description'].append('')\n else:\n this_base = None # Gets set to Top in striplog creation\n\n # ASSIGN\n result['top'].append(this_top)\n result['base'].append(this_base)\n result['description'].append(this_descr)\n\n # Build the list.\n list_of_Intervals = []\n for i, t in enumerate(result['top']):\n b = 
result['base'][i]\n d = result['description'][i]\n interval = Interval(t, b, description=d,\n lexicon=lexicon,\n abbreviations=abbreviations)\n list_of_Intervals.append(interval)\n\n return cls(list_of_Intervals, source=source)", "response": "Convert a CSV string into a striplog."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_image(cls, filename, start, stop, legend,\n source=\"Image\",\n col_offset=0.1,\n row_offset=2,\n tolerance=0):\n \"\"\"\n Read an image and generate Striplog.\n\n Args:\n filename (str): An image file, preferably high-res PNG.\n start (float or int): The depth at the top of the image.\n stop (float or int): The depth at the bottom of the image.\n legend (Legend): A legend to look up the components in.\n source (str): A source for the data. Default: 'Image'.\n col_offset (Number): The proportion of the way across the image\n from which to extract the pixel column. Default: 0.1 (ie 10%).\n row_offset (int): The number of pixels to skip at the top of\n each change in colour. Default: 2.\n tolerance (float): The Euclidean distance between hex colours,\n which has a maximum (black to white) of 441.67 in base 10.\n Default: 0.\n\n Returns:\n Striplog: The ``striplog`` object.\n \"\"\"\n rgb = utils.loglike_from_image(filename, col_offset)\n loglike = np.array([utils.rgb_to_hex(t) for t in rgb])\n\n # Get the pixels and colour values at 'tops' (i.e. changes).\n tops, hexes = utils.tops_from_loglike(loglike, offset=row_offset)\n\n # If there are consecutive tops, we assume it's because there is a\n # single-pixel row that we don't want. 
So take the second one only.\n # We used to do this reduction in ``utils.tops_from_loglike()`` but\n # it was preventing us from making intervals only one sample thick.\n nonconsecutive = np.append(np.diff(tops), 2)\n tops = tops[nonconsecutive > 1]\n hexes = hexes[nonconsecutive > 1]\n\n # Get the set of unique colours.\n hexes_reduced = list(set(hexes))\n\n # Get the components corresponding to the colours.\n components = [legend.get_component(h, tolerance=tolerance)\n for h in hexes_reduced]\n\n # Turn them into integers.\n values = [hexes_reduced.index(i) for i in hexes]\n\n basis = np.linspace(start, stop, loglike.size)\n\n list_of_Intervals = cls.__intervals_from_tops(tops,\n values,\n basis,\n components)\n\n return cls(list_of_Intervals, source=\"Image\")", "response": "Reads an image and generates a Striplog object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _from_array(cls, a,\n lexicon=None,\n source=\"\",\n points=False,\n abbreviations=False):\n \"\"\"\n DEPRECATING.\n\n Turn an array-like into a Striplog. It should have the following\n format (where ``base`` is optional):\n\n [(top, base, description),\n (top, base, description),\n ...\n ]\n\n Args:\n a (array-like): A list of lists or of tuples, or an array.\n lexicon (Lexicon): A language dictionary to extract structured\n objects from the descriptions.\n source (str): The source of the data. Default: ''.\n points (bool): Whether to treat as point data. 
Default: False.\n\n Returns:\n Striplog: The ``striplog`` object.\n \"\"\"\n\n with warnings.catch_warnings():\n warnings.simplefilter(\"always\")\n w = \"from_array() is deprecated.\"\n warnings.warn(w, DeprecationWarning, stacklevel=2)\n\n csv_text = ''\n for interval in a:\n interval = [str(i) for i in interval]\n if (len(interval) < 2) or (len(interval) > 3):\n raise StriplogError('Elements must have 2 or 3 items')\n descr = interval[-1].strip('\" ')\n interval[-1] = '\"' + descr + '\"'\n csv_text += ', '.join(interval) + '\\n'\n\n return cls.from_descriptions(csv_text,\n lexicon,\n source=source,\n points=points,\n abbreviations=abbreviations)", "response": "Convert an array - like object into a Striplog object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nturn a 1D array into a striplog object given a cutoff value.", "response": "def from_log(cls, log,\n cutoff=None,\n components=None,\n legend=None,\n legend_field=None,\n field=None,\n right=False,\n basis=None,\n source='Log'):\n \"\"\"\n Turn a 1D array into a striplog, given a cutoff.\n\n Args:\n log (array-like): A 1D array or a list of integers.\n cutoff (number or array-like): The log value(s) at which to bin\n the log. Optional.\n components (array-like): A list of components. Use this or\n ``legend``.\n legend (``Legend``): A legend object. Use this or ``components``.\n legend_field ('str'): If you're not trying to match against\n components, then you can match the log values to this field in\n the Decors.\n field (str): The field in the Interval's ``data`` to store the log\n values as.\n right (bool): Which side of the cutoff to send things that are\n equal to, i.e. right on, the cutoff.\n basis (array-like): A depth basis for the log, so striplog knows\n where to put the boundaries.\n source (str): The source of the data. 
Default 'Log'.\n\n Returns:\n Striplog: The ``striplog`` object.\n \"\"\"\n if (components is None) and (legend is None) and (field is None):\n m = 'You must provide a list of components, a legend, or a field.'\n raise StriplogError(m)\n\n if (legend is not None) and (legend_field is None):\n try: # To treat it like a legend.\n components = [deepcopy(decor.component) for decor in legend]\n except AttributeError: # It's just a list of components.\n pass\n\n if legend_field is not None:\n field_values = [getattr(d, legend_field, 0) for d in legend]\n components = [Component() for i in range(int(max(field_values)+1))]\n for i, decor in enumerate(legend):\n components[i] = deepcopy(decor.component)\n\n if cutoff is not None:\n\n # First make sure we have enough components.\n try:\n n = len(cutoff)\n except TypeError:\n n = 1\n if len(components) < n+1:\n m = 'For n cutoffs, you need to provide at least '\n m += 'n+1 components.'\n raise StriplogError(m)\n\n # Digitize.\n try: # To use cutoff as a list.\n a = np.digitize(log, cutoff, right)\n except ValueError: # It's just a number.\n a = np.digitize(log, [cutoff], right)\n\n else:\n a = np.copy(log)\n\n tops, values = utils.tops_from_loglike(a)\n\n if basis is None:\n m = 'You must provide a depth or elevation basis.'\n raise StriplogError(m)\n\n list_of_Intervals = cls.__intervals_from_tops(tops,\n values,\n basis,\n components,\n field=field\n )\n\n return cls(list_of_Intervals, source=source)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef from_las3(cls, string, lexicon=None,\n source=\"LAS\",\n dlm=',',\n abbreviations=False):\n \"\"\"\n Turn LAS3 'lithology' section into a Striplog.\n\n Args:\n string (str): A section from an LAS3 file.\n lexicon (Lexicon): The language for conversion to components.\n source (str): A source for the data.\n dlm (str): The delimiter.\n abbreviations (bool): Whether to expand abbreviations.\n\n Returns:\n Striplog: The ``striplog`` 
object.\n\n Note:\n Handles multiple 'Data' sections. It would be smarter for it\n to handle one at a time, and to deal with parsing the multiple\n sections in the Well object.\n\n Does not read an actual LAS file. Use the Well object for that.\n \"\"\"\n f = re.DOTALL | re.IGNORECASE\n regex = r'\\~\\w+?_Data.+?\\n(.+?)(?:\\n\\n+|\\n*\\~|\\n*$)'\n pattern = re.compile(regex, flags=f)\n text = pattern.search(string).group(1)\n\n s = re.search(r'\\.(.+?)\\: ?.+?source', string)\n if s:\n source = s.group(1).strip()\n\n return cls.from_descriptions(text, lexicon,\n source=source,\n dlm=dlm,\n abbreviations=abbreviations)", "response": "Turn a LAS3 file into a Striplog object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef from_canstrat(cls, filename, source='canstrat'):\n with open(filename) as f:\n dat = f.read()\n\n data = parse_canstrat(dat)\n\n list_of_Intervals = []\n for d in data[7]: # 7 is the 'card type' for lithology info.\n if d.pop('skip'):\n continue\n top = d.pop('top')\n base = d.pop('base')\n comps = [Component({'lithology': d['rtc'],\n 'colour': d['colour_name']\n })]\n iv = Interval(top=top, base=base, components=comps, data=d)\n list_of_Intervals.append(iv)\n\n return cls(list_of_Intervals, source=source)", "response": "Create a striplog from a Canstrat DAT file."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a shallow copy of this Striplog.", "response": "def copy(self):\n \"\"\"Returns a shallow copy.\"\"\"\n return Striplog([i.copy() for i in self],\n order=self.order,\n source=self.source)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a CSV string of the Intervals.", "response": "def to_csv(self,\n filename=None,\n as_text=True,\n use_descriptions=False,\n dlm=\",\",\n header=True):\n \"\"\"\n Returns a CSV string built from the summaries of the Intervals.\n\n Args:\n 
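The `from_las3` record extracts the body of a `~..._Data` section with a single DOTALL regex. The same pattern can be exercised on a made-up LAS3 fragment:

```python
import re

# An invented LAS3-style lithology section, terminated by a blank line.
las = ('~Lithology_Data | Lith_Parameter\n'
       '10.0, 20.0, "Sandstone"\n'
       '20.0, 30.0, "Shale"\n'
       '\n'
       '~Another_Section')

# Same regex as the library: capture everything between the section header
# line and the next blank line, '~' header, or end of string.
pattern = re.compile(r'\~\w+?_Data.+?\n(.+?)(?:\n\n+|\n*\~|\n*$)',
                     flags=re.DOTALL | re.IGNORECASE)
body = pattern.search(las).group(1)
print(body)
```

The captured `body` holds only the two data rows, which `from_descriptions` then parses as CSV.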
filename (str): A path to a file to write to, if you want to write\n to disk. Otherwise, set ``as_text`` to ``True``.\n as_text (bool): Whether to return a string. Default True.\n use_descriptions (bool): Whether to use descriptions instead\n of summaries, if available.\n dlm (str): The delimiter.\n header (bool): Whether to form a header row.\n\n Returns:\n str: A string of comma-separated values.\n \"\"\"\n if (filename is None):\n if (not as_text):\n raise StriplogError(\"You must provide a filename or set as_text to True.\")\n else:\n as_text = False\n\n if as_text:\n output = StringIO()\n else:\n output = open(filename, 'w')\n\n fieldnames = ['Top', 'Base', 'Component']\n writer = csv.DictWriter(output,\n delimiter=dlm,\n fieldnames=fieldnames,\n quoting=csv.QUOTE_MINIMAL)\n\n if header:\n writer.writeheader()\n\n for i in self.__list:\n if use_descriptions and i.description:\n text = i.description\n elif i.primary:\n text = i.primary.summary()\n else:\n text = ''\n data = {j: k for j, k in zip(fieldnames, [i.top.z, i.base.z, text])}\n writer.writerow(data)\n\n if as_text:\n return output.getvalue()\n else:\n output.close()\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert the object to a LAS 3. 
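The `to_csv` record writes through `csv.DictWriter`, targeting a `StringIO` buffer when returning text rather than a file. The core of that pattern in isolation (the rows are invented; `lineterminator` is set explicitly so the output is predictable across platforms):

```python
import csv
from io import StringIO

# (top, base, summary) rows standing in for Intervals.
rows = [(10.0, 20.0, 'Sandstone'), (20.0, 30.0, 'Shale')]
fieldnames = ['Top', 'Base', 'Component']

output = StringIO()
writer = csv.DictWriter(output,
                        fieldnames=fieldnames,
                        quoting=csv.QUOTE_MINIMAL,
                        lineterminator='\n')
writer.writeheader()
for top, base, text in rows:
    writer.writerow(dict(zip(fieldnames, [top, base, text])))

print(output.getvalue())
# Top,Base,Component
# 10.0,20.0,Sandstone
# 20.0,30.0,Shale
```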
0 section string.", "response": "def to_las3(self, use_descriptions=False, dlm=\",\", source=\"Striplog\"):\n \"\"\"\n Returns an LAS 3.0 section string.\n\n Args:\n use_descriptions (bool): Whether to use descriptions instead\n of summaries, if available.\n dlm (str): The delimiter.\n source (str): The sourse of the data.\n\n Returns:\n str: A string forming Lithology section of an LAS3 file.\n \"\"\"\n data = self.to_csv(use_descriptions=use_descriptions,\n dlm=dlm,\n header=False)\n\n return templates.section.format(name='Lithology',\n short=\"LITH\",\n source=source,\n data=data)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a fully sampled log from a striplog file.", "response": "def to_log(self,\n step=1.0,\n start=None,\n stop=None,\n basis=None,\n field=None,\n field_function=None,\n dtype=None,\n table=None,\n legend=None,\n legend_field=None,\n match_only=None,\n undefined=0,\n return_meta=False\n ):\n \"\"\"\n Return a fully sampled log from a striplog. Useful for crossplotting\n with log data, for example.\n\n Args:\n step (float): The step size. Default: 1.0.\n start (float): The start depth of the new log. You will want to\n match the logs, so use the start depth from the LAS file.\n Default: The basis if provided, else the start of the striplog.\n stop (float): The stop depth of the new log. Use the stop depth\n of the LAS file. Default: The basis if provided, else the stop\n depth of the striplog.\n field (str): If you want the data to come from one of the\n attributes of the components in the striplog, provide it.\n field_function (function): Provide a function to apply to the field\n you are asking for. It's up to you to make sure the function\n does what you want.\n legend (Legend): If you want the codes to come from a legend,\n provide one. Otherwise the codes come from the log, using\n integers in the order of prevalence. 
If you use a legend,\n they are assigned in the order of the legend.\n legend_field (str): If you want to get a log representing one of\n the fields in the legend, such as 'width' or 'grainsize'.\n match_only (list): If you only want to match some attributes of\n the Components (e.g. lithology), provide a list of those\n you want to match.\n undefined (number): What to fill in where no value can be\n determined, e.g. ``-999.25`` or ``np.nan``. Default 0.\n return_meta (bool): If ``True``, also return the depth basis\n (np.linspace), and the component table.\n\n Returns:\n ndarray: If ``return_meta`` was ``True``, you get:\n\n * The log data as an array of ints.\n * The depth basis as an array of floats.\n * A list of the components in the order matching the ints.\n\n If ``return_meta`` was ``False`` (the default), you only get\n the log data.\n \"\"\"\n # Make the preparations.\n if basis is not None:\n start, stop = basis[0], basis[-1]\n step = basis[1] - start\n else:\n start = start or self.start.z\n stop = stop or self.stop.z\n pts = np.ceil((stop - start)/step) + 1\n basis = np.linspace(start, stop, int(pts))\n\n if (field is not None) or (legend_field is not None):\n result = np.zeros_like(basis, dtype=dtype)\n else:\n result = np.zeros_like(basis, dtype=int)\n\n if np.isnan(undefined):\n try:\n result[:] = np.nan\n except ValueError:\n pass # array type is int\n\n # If needed, make a look-up table for the log values.\n if table is None:\n table = [Component({})]\n if legend:\n table += [j.component for j in legend]\n elif field:\n s = set([iv.data.get(field) for iv in self])\n table = [None] + list(filter(None, s))\n else:\n table += [j[0] for j in self.unique]\n\n # Adjust the table if necessary. Go over all the components in the\n # table list, and remove elements that are not in the match list.\n # Careful!
This results in a new table, with components that may not\n # be in the original list of components.\n if match_only is not None:\n if not isinstance(match_only, (list, tuple, set,)):\n raise StriplogError(\"match_only should be a list, not a string\")\n table_new = []\n for c in table:\n if c == '':\n continue # No idea why sometimes there's a ''\n c_new = Component({k: v for k, v in c.__dict__.items()\n if k in match_only})\n # Only add unique, and preserve order.\n if c_new not in table_new:\n table_new.append(c_new)\n table = table_new\n else:\n match_only = []\n\n start_ix = self.read_at(start, index=True)\n stop_ix = self.read_at(stop, index=True)\n if stop_ix is not None:\n stop_ix += 1\n\n # Assign the values.\n for i in self[start_ix:stop_ix]:\n c = i.primary\n if match_only:\n c = Component({k: getattr(c, k, None)\n for k in match_only})\n\n if legend and legend_field: # Use the legend field.\n try:\n key = legend.getattr(c, legend_field, undefined)\n key = key or undefined\n except ValueError:\n key = undefined\n elif field: # Get data directly from that field in iv.data.\n f = field_function or utils.null\n try:\n v = f(i.data.get(field, undefined)) or undefined\n key = table.index(v)\n except ValueError:\n key = undefined\n else: # Use the lookup table.\n try:\n key = table.index(c) or undefined\n except ValueError:\n key = undefined\n\n top_index = int(np.ceil((max(start, i.top.z)-start)/step))\n base_index = int(np.ceil((min(stop, i.base.z)-start)/step))\n\n try:\n result[top_index:base_index+1] = key\n except: # Have a list or array or something.\n result[top_index:base_index+1] = key[0]\n\n if return_meta:\n return result, basis, table\n else:\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef plot_points(self, ax,\n legend=None,\n field=None,\n field_function=None,\n undefined=0,\n **kwargs):\n \"\"\"\n Plotting, but only for points (as opposed to intervals).\n \"\"\"\n\n 
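The assignment loop at the heart of `to_log` maps each interval's top and base onto indices of a regular depth basis with `np.ceil((z - start)/step)`, then fills that slice with the interval's code. A stand-alone sketch with invented intervals:

```python
import numpy as np

start, stop, step = 0.0, 10.0, 1.0
pts = int(np.ceil((stop - start) / step)) + 1
basis = np.linspace(start, stop, pts)
result = np.zeros_like(basis, dtype=int)

# (top, base, code) triples standing in for Intervals and their table keys.
intervals = [(0.0, 4.0, 1), (4.0, 7.5, 2), (7.5, 10.0, 3)]
for top, base, key in intervals:
    top_ix = int(np.ceil((max(start, top) - start) / step))
    base_ix = int(np.ceil((min(stop, base) - start) / step))
    result[top_ix:base_ix + 1] = key

print(result)  # [1 1 1 1 2 2 2 2 3 3 3]
```

Note that a later interval overwrites the shared boundary sample of the one above it, as in the library's own loop.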
ys = [iv.top.z for iv in self]\n\n if field is not None:\n f = field_function or utils.null\n xs = [f(iv.data.get(field, undefined)) for iv in self]\n else:\n xs = [1 for iv in self]\n\n ax.set_xlim((min(xs), max(xs)))\n for x, y in zip(xs, ys):\n ax.axhline(y, color='lightgray', zorder=0)\n\n ax.scatter(xs, ys, clip_on=False, **kwargs)\n\n return ax", "response": "Plots the points of the current object."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nplots the tops of the related object.", "response": "def plot_tops(self, ax, legend=None, field=None, **kwargs):\n \"\"\"\n Plotting, but only for tops (as opposed to intervals).\n \"\"\"\n if field is None:\n raise StriplogError('You must provide a field to plot.')\n\n ys = [iv.top.z for iv in self]\n\n try:\n try:\n ts = [getattr(iv.primary, field) for iv in self]\n except:\n ts = [iv.data.get(field) for iv in self]\n except:\n raise StriplogError('Could not retrieve field.')\n\n for y, t in zip(ys, ts):\n ax.axhline(y, color='lightblue', lw=3, zorder=0)\n ax.text(0.1, y-max(ys)/200, t, ha='left')\n\n return ax"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nplot the data for the current calendar.", "response": "def plot_field(self, ax, legend=None, field=None, **kwargs):\n \"\"\"\n Plotting, but only for tops (as opposed to intervals).\n \"\"\"\n if field is None:\n raise StriplogError('You must provide a field to plot.')\n\n try:\n try:\n xs = [getattr(iv.primary, field) for iv in self]\n except:\n xs = [iv.data.get(field) for iv in self]\n except:\n raise StriplogError('Could not retrieve field.')\n\n for iv, x in zip(self.__list, xs):\n _, ymin = utils.axis_transform(ax, 0, iv.base.z, ylim=(self.start.z, self.stop.z), inverse=True)\n _, ymax = utils.axis_transform(ax, 0, iv.top.z, ylim=(self.start.z, self.stop.z), inverse=True)\n ax.axvline(x, ymin=ymin, ymax=ymax)\n\n return ax"} {"SOURCE": "codesearchnet", "instruction": 
"Write a Python 3 function that can\nplot the data in the current figure.", "response": "def plot_axis(self,\n ax,\n legend,\n ladder=False,\n default_width=1,\n match_only=None,\n colour=None,\n colour_function=None,\n cmap=None,\n default=None,\n width_field=None,\n **kwargs\n ):\n \"\"\"\n Plotting, but only the Rectangles. You have to set up the figure.\n Returns a matplotlib axis object.\n\n Args:\n ax (axis): The matplotlib axis to plot into.\n legend (Legend): The Legend to use for colours, etc.\n ladder (bool): Whether to use widths or not. Default False.\n default_width (int): A width for the plot if not using widths.\n Default 1.\n match_only (list): A list of strings matching the attributes you\n want to compare when plotting.\n colour (str): Which data field to use for colours.\n cmap (cmap): Matplotlib colourmap. Default ``viridis``.\n default (float): The default (null) value.\n width_field (str): The field to use for the width of the patches.\n **kwargs are passed through to matplotlib's ``patches.Rectangle``.\n\n Returns:\n axis: The matplotlib.pyplot axis.\n \"\"\"\n default_c = None\n patches = []\n for iv in self.__list:\n origin = (0, iv.top.z)\n d = legend.get_decor(iv.primary, match_only=match_only)\n thick = iv.base.z - iv.top.z\n\n if ladder:\n if width_field is not None:\n w = iv.data.get(width_field, 1)\n w = default_width * w/self.max_field(width_field)\n default_c = 'gray'\n elif legend is not None:\n w = d.width or default_width\n try:\n w = default_width * w/legend.max_width\n except:\n w = default_width\n else:\n w = default_width\n\n # Allow override of lw\n this_patch_kwargs = kwargs.copy()\n lw = this_patch_kwargs.pop('lw', 0)\n ec = this_patch_kwargs.pop('ec', 'k')\n fc = this_patch_kwargs.pop('fc', None) or default_c or d.colour\n\n if colour is None:\n rect = mpl.patches.Rectangle(origin,\n w,\n thick,\n fc=fc,\n lw=lw,\n hatch=d.hatch,\n ec=ec, # edgecolour for hatching\n **this_patch_kwargs)\n ax.add_patch(rect)\n else:\n rect 
= mpl.patches.Rectangle(origin,\n w,\n thick,\n lw=lw,\n ec=ec, # edgecolour for hatching\n **this_patch_kwargs)\n patches.append(rect)\n\n if colour is not None:\n cmap = cmap or 'viridis'\n p = mpl.collections.PatchCollection(patches, cmap=cmap, lw=lw)\n p.set_array(self.get_data(colour, colour_function, default=default))\n ax.add_collection(p)\n cb = plt.colorbar(p) # orientation='horizontal' only really works with ticks=[0, 0.1, 0.2] say\n cb.outline.set_linewidth(0)\n\n return ax"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget data from the striplog.", "response": "def get_data(self, field, function=None, default=None):\n \"\"\"\n Get data from the striplog.\n \"\"\"\n f = function or utils.null\n data = []\n for iv in self:\n d = iv.data.get(field)\n if d is None:\n if default is not None:\n d = default\n else:\n d = np.nan\n data.append(f(d))\n\n return np.array(data)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nplots the object with the specified parameters.", "response": "def plot(self,\n legend=None,\n width=1.5,\n ladder=True,\n aspect=10,\n ticks=(1, 10),\n match_only=None,\n ax=None,\n return_fig=False,\n colour=None,\n cmap='viridis',\n default=None,\n style='intervals',\n field=None,\n **kwargs):\n \"\"\"\n Hands-free plotting.\n\n Args:\n legend (Legend): The Legend to use for colours, etc.\n width (int): The width of the plot, in inches. Default 1.\n ladder (bool): Whether to use widths or not. Default False.\n aspect (int): The aspect ratio of the plot. Default 10.\n ticks (int or tuple): The (minor,major) tick interval for depth.\n Only the major interval is labeled. Default (1,10).\n match_only (list): A list of strings matching the attributes you\n want to compare when plotting.\n ax (ax): A maplotlib axis to plot onto. If you pass this, it will\n be returned. Optional.\n return_fig (bool): Whether or not to return the maplotlib ``fig``\n object. 
Default False.\n colour (str): Which data field to use for colours.\n cmap (cmap): Matplotlib colourmap. Default ``viridis``.\n **kwargs are passed through to matplotlib's ``patches.Rectangle``.\n\n Returns:\n None. Unless you specify ``return_fig=True`` or pass in an ``ax``.\n \"\"\"\n if legend is None:\n legend = Legend.random(self.components)\n\n if style.lower() == 'tops':\n # Make sure width is at least 3 for 'tops' style\n width = max([3, width])\n\n if ax is None:\n return_ax = False\n fig = plt.figure(figsize=(width, aspect*width))\n ax = fig.add_axes([0.35, 0.05, 0.6, 0.95])\n else:\n return_ax = True\n\n if (self.order == 'none') or (style.lower() == 'points'):\n # Then this is a set of points.\n ax = self.plot_points(ax=ax, legend=legend, field=field, **kwargs)\n elif style.lower() == 'field':\n if field is None:\n raise StriplogError('You must provide a field to plot.')\n ax = self.plot_field(ax=ax, legend=legend, field=field)\n elif style.lower() == 'tops':\n ax = self.plot_tops(ax=ax, legend=legend, field=field)\n ax.set_xticks([])\n else:\n ax = self.plot_axis(ax=ax,\n legend=legend,\n ladder=ladder,\n default_width=width,\n match_only=kwargs.get('match_only', match_only),\n colour=colour,\n cmap=cmap,\n default=default,\n width_field=field,\n **kwargs\n )\n\n ax.set_xlim([0, width])\n ax.set_xticks([])\n\n # Rely on interval order.\n lower, upper = self[-1].base.z, self[0].top.z\n rng = abs(upper - lower)\n\n ax.set_ylim([lower, upper])\n\n # Make sure ticks is a tuple.\n try:\n ticks = tuple(ticks)\n except TypeError:\n ticks = (1, ticks)\n\n # Avoid MAXTICKS error.\n while rng/ticks[0] > 250:\n mi, ma = 10*ticks[0], ticks[1]\n if ma <= mi:\n ma = 10 * mi\n ticks = (mi, ma)\n\n # Carry on plotting...\n minorLocator = mpl.ticker.MultipleLocator(ticks[0])\n ax.yaxis.set_minor_locator(minorLocator)\n\n majorLocator = mpl.ticker.MultipleLocator(ticks[1])\n majorFormatter = mpl.ticker.FormatStrFormatter('%d')\n 
ax.yaxis.set_major_locator(majorLocator)\n ax.yaxis.set_major_formatter(majorFormatter)\n\n ax.spines['top'].set_visible(False)\n ax.spines['right'].set_visible(False)\n ax.spines['bottom'].set_visible(False)\n ax.yaxis.set_ticks_position('left')\n ax.get_yaxis().set_tick_params(which='both', direction='out')\n\n # Optional title.\n title = getattr(self, 'title', None)\n if title is not None:\n ax.set_title(title)\n\n ax.patch.set_alpha(0)\n\n if return_ax:\n return ax\n elif return_fig:\n return fig\n else:\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the index of the interval at a particular depth.", "response": "def read_at(self, d, index=False):\n \"\"\"\n Get the index of the interval at a particular 'depth' (though this\n might be an elevation or age or anything).\n\n Args:\n d (Number): The 'depth' to query.\n index (bool): Whether to return the index instead of the interval.\n\n Returns:\n Interval: The interval, or if ``index==True`` the index of the\n interval, at the specified 'depth', or ``None`` if the depth is\n outside the striplog's range.\n \"\"\"\n for i, iv in enumerate(self):\n if iv.spans(d):\n return i if index else iv\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the depth of the current object.", "response": "def depth(self, d):\n \"\"\"\n For backwards compatibility.\n \"\"\"\n with warnings.catch_warnings():\n warnings.simplefilter(\"always\")\n w = \"depth() is deprecated; please use read_at()\"\n warnings.warn(w)\n return self.read_at(d)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nextracts a log into the components of a striplog.", "response": "def extract(self, log, basis, name, function=None):\n \"\"\"\n 'Extract' a log into the components of a striplog.\n\n Args:\n log (array_like). A log or other 1D data.\n basis (array_like). 
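`read_at` is a linear scan that asks each interval whether it spans the query depth. The same idea with plain `(top, base)` tuples standing in for `Interval` objects (the containment test here is a crude stand-in for `Interval.spans()`):

```python
def read_at(intervals, d, index=False):
    """Return the first interval (or its index) spanning depth d, else None."""
    for i, (top, base) in enumerate(intervals):
        if top <= d <= base:  # stand-in for Interval.spans(d)
            return i if index else (top, base)
    return None

strip = [(0, 10), (10, 25), (25, 40)]
print(read_at(strip, 12))              # (10, 25)
print(read_at(strip, 12, index=True))  # 1
print(read_at(strip, 99))              # None
```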
The depths or elevations of the log samples.\n name (str). The name of the attribute to store in the components.\n function (function). A function that takes an array as the only\n input, and returns whatever you want to store in the 'name'\n attribute of the primary component.\n Returns:\n None. The function works on the striplog in place.\n \"\"\"\n # Build a dict of {index: [log values]} to keep track.\n intervals = {}\n previous_ix = -1\n for i, z in enumerate(basis):\n ix = self.read_at(z, index=True)\n if ix is None:\n continue\n if ix == previous_ix:\n intervals[ix].append(log[i])\n else:\n intervals[ix] = [log[i]]\n previous_ix = ix\n\n # Set the requested attribute in the primary comp of each interval.\n for ix, data in intervals.items():\n f = function or utils.null\n d = f(np.array(data))\n self[ix].data[name] = d\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsearching for a regex expression in the descriptions of the striplog and returns the first occurrence of the search_term.", "response": "def find(self, search_term, index=False):\n \"\"\"\n Look for a regex expression in the descriptions of the striplog.\n If there's no description, it looks in the summaries.\n\n If you pass a Component, then it will search the components, not the\n descriptions or summaries.\n\n Case insensitive.\n\n Args:\n search_term (string or Component): The thing you want to search\n for. 
Strings are treated as regular expressions.\n index (bool): Whether to return the index instead of the interval.\n Returns:\n Striplog: A striplog that contains only the 'hit' Intervals.\n However, if ``index`` was ``True``, then that's what you get.\n \"\"\"\n hits = []\n for i, iv in enumerate(self):\n try:\n search_text = iv.description or iv.primary.summary()\n pattern = re.compile(search_term, flags=re.IGNORECASE)\n if pattern.search(search_text):\n hits.append(i)\n except TypeError:\n if search_term in iv.components:\n hits.append(i)\n if hits and index:\n return hits\n elif hits:\n return self[hits]\n else:\n return"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef __find_incongruities(self, op, index):\n if len(self) == 1:\n return\n\n hits = []\n intervals = []\n\n if self.order == 'depth':\n one, two = 'base', 'top'\n else:\n one, two = 'top', 'base'\n\n for i, iv in enumerate(self[:-1]):\n next_iv = self[i+1]\n if op(getattr(iv, one), getattr(next_iv, two)):\n hits.append(i)\n\n top = getattr(iv, one)\n base = getattr(next_iv, two)\n iv_gap = Interval(top, base)\n intervals.append(iv_gap)\n\n if index and hits:\n return hits\n elif intervals:\n return Striplog(intervals)\n else:\n return", "response": "Private method to find gaps and overlaps in a striplog."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfind overlaps in a striplog.", "response": "def find_overlaps(self, index=False):\n \"\"\"\n Find overlaps in a striplog.\n\n Args:\n index (bool): If True, returns indices of intervals with\n gaps after them.\n\n Returns:\n Striplog: A striplog of all the overlaps as intervals.\n \"\"\"\n return self.__find_incongruities(op=operator.gt, index=index)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef find_gaps(self, index=False):\n return self.__find_incongruities(op=operator.lt, index=index)", 
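The `find` record compiles the search term with `re.IGNORECASE` and collects the indices of matching descriptions. That matching step on its own, with invented descriptions:

```python
import re

descriptions = ['Grey sandstone', 'Black SHALE', 'Sandy shale', 'Limestone']
pattern = re.compile(r'shale', flags=re.IGNORECASE)

# Indices of the 'hit' intervals, regardless of case.
hits = [i for i, text in enumerate(descriptions) if pattern.search(text)]
print(hits)  # [1, 2]
```

In the library, `hits` is then used either directly (`index=True`) or to slice the striplog.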
"response": "Returns a list of all the gaps in a striplog."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef prune(self, limit=None, n=None, percentile=None, keep_ends=False):\n strip = self.copy()\n\n if not (limit or n or percentile):\n m = \"You must provide a limit or n or percentile for pruning.\"\n raise StriplogError(m)\n if limit:\n prune = [i for i, iv in enumerate(strip) if iv.thickness < limit]\n if n:\n prune = strip.thinnest(n=n, index=True)\n if percentile:\n n = np.floor(len(strip)*percentile/100)\n prune = strip.thinnest(n=n, index=True)\n\n if keep_ends:\n first, last = 0, len(strip) - 1\n if first in prune:\n prune.remove(first)\n if last in prune:\n prune.remove(last)\n\n del strip[prune]\n\n return strip", "response": "Remove intervals below a certain limit thickness and return the result as a new striplog."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef anneal(self):\n strip = self.copy()\n\n gaps = strip.find_gaps(index=True)\n\n if not gaps:\n return\n\n for gap in gaps:\n before = strip[gap]\n after = strip[gap + 1]\n\n if strip.order == 'depth':\n t = (after.top.z-before.base.z)/2\n before.base = before.base.z + t\n after.top = after.top.z - t\n else:\n t = (after.base.z-before.top.z)/2\n before.top = before.top.z + t\n after.base = after.base.z - t\n\n return strip", "response": "Fill in empty intervals by growing from top and base."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfills the gaps with the component provided.", "response": "def fill(self, component=None):\n \"\"\"\n Fill gaps with the component provided.\n\n Example\n t = s.fill(Component({'lithology': 'cheese'}))\n \"\"\"\n c = [component] if component is not None else []\n\n # Make the intervals to go in the gaps.\n gaps = self.find_gaps()\n if not gaps:\n return self\n for iv in gaps:\n iv.components = c\n\n return deepcopy(self) + gaps"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 function that can\nreturn a new striplog with all the striplogs in self and other.", "response": "def union(self, other):\n \"\"\"\n Makes a striplog of all unions.\n\n Args:\n Striplog. The striplog instance to union with.\n\n Returns:\n Striplog. The result of the union.\n \"\"\"\n if not isinstance(other, self.__class__):\n m = \"You can only union striplogs with each other.\"\n raise StriplogError(m)\n\n result = []\n for iv in deepcopy(self):\n for jv in other:\n if iv.any_overlaps(jv):\n iv = iv.union(jv)\n result.append(iv)\n return Striplog(result)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntakes a striplog instance and returns a striplog of all intersections with the other.", "response": "def intersect(self, other):\n \"\"\"\n Makes a striplog of all intersections.\n\n Args:\n Striplog. The striplog instance to intersect with.\n\n Returns:\n Striplog. The result of the intersection.\n \"\"\"\n if not isinstance(other, self.__class__):\n m = \"You can only intersect striplogs with each other.\"\n raise StriplogError(m)\n\n result = []\n for iv in self:\n for jv in other:\n try:\n result.append(iv.intersect(jv))\n except IntervalError:\n # The intervals don't overlap\n pass\n return Striplog(result)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef merge_overlaps(self):\n overlaps = np.array(self.find_overlaps(index=True))\n\n if not overlaps.any():\n return\n\n for overlap in overlaps:\n before = self[overlap].copy()\n after = self[overlap + 1].copy()\n\n # Get rid of the before and after pieces.\n del self[overlap]\n del self[overlap]\n\n # Make the new piece.\n new_segment = before.merge(after)\n\n # Insert it.\n self.__insert(overlap, new_segment)\n\n overlaps += 1\n\n return", "response": "Merges overlaps by merging overlapping Intervals.\n\n The function takes no arguments and returns ``None``. 
It operates on\n the striplog 'in place'\n\n TODO: This function will not work if any interval overlaps more than\n one other intervals at either its base or top."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmerging the two striplogs in which the same neighbour is unioned.", "response": "def merge_neighbours(self, strict=True):\n \"\"\"\n Makes a new striplog in which matching neighbours (for which the\n components are the same) are unioned. That is, they are replaced by\n a new Interval with the same top as the uppermost and the same bottom\n as the lowermost.\n\n Args\n strict (bool): If True, then all of the components must match.\n If False, then only the primary must match.\n\n Returns:\n Striplog. A new striplog.\n\n TODO:\n Might need to be tweaked to deal with 'binary striplogs' if those\n aren't implemented with components.\n \"\"\"\n new_strip = [self[0].copy()]\n\n for lower in self[1:]:\n\n # Determine if touching.\n touching = new_strip[-1].touches(lower)\n\n # Decide if match.\n if strict:\n similar = new_strip[-1].components == lower.components\n else:\n similar = new_strip[-1].primary == lower.primary\n\n # Union if both criteria met.\n if touching and similar:\n new_strip[-1] = new_strip[-1].union(lower)\n else:\n new_strip.append(lower.copy())\n\n return Striplog(new_strip)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the thickest interval in the striplog.", "response": "def thickest(self, n=1, index=False):\n \"\"\"\n Returns the thickest interval(s) as a striplog.\n\n Args:\n n (int): The number of thickest intervals to return. Default: 1.\n index (bool): If True, only the indices of the intervals are\n returned. You can use this to index into the striplog.\n\n Returns:\n Interval. The thickest interval. 
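`merge_neighbours` makes a single pass, unioning each interval into the last result whenever the two touch and their components match. The same pass on `(top, base, label)` tuples standing in for Intervals:

```python
def merge_neighbours(intervals):
    """Union touching intervals that share a label. Tuples are (top, base, label)."""
    merged = [intervals[0]]
    for top, base, label in intervals[1:]:
        ptop, pbase, plabel = merged[-1]
        if pbase == top and plabel == label:  # touching and similar
            merged[-1] = (ptop, base, label)  # union: keep the outer bounds
        else:
            merged.append((top, base, label))
    return merged

strip = [(0, 5, 'sand'), (5, 9, 'sand'), (9, 12, 'shale'), (12, 20, 'shale')]
print(merge_neighbours(strip))  # [(0, 9, 'sand'), (9, 20, 'shale')]
```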
Or, if ``index`` was ``True``,\n the index of the thickest interval.\n \"\"\"\n s = sorted(range(len(self)), key=lambda k: self[k].thickness)\n indices = s[-n:]\n if index:\n return indices\n else:\n if n == 1:\n # Then return an interval.\n i = indices[0]\n return self[i]\n else:\n return self[indices]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nplot a histogram of the components of the current node.", "response": "def hist(self,\n lumping=None,\n summary=False,\n sort=True,\n plot=True,\n legend=None,\n ax=None\n ):\n \"\"\"\n Plots a histogram and returns the data for it.\n\n Args:\n lumping (str): If given, the bins will be lumped based on this\n attribute of the primary components of the intervals\n encountered.\n summary (bool): If True, the summaries of the components are\n returned as the bins. Otherwise, the default behaviour is to\n return the Components themselves.\n sort (bool): If True (default), the histogram is sorted by value,\n starting with the largest.\n plot (bool): If True (default), produce a bar plot.\n legend (Legend): The legend with which to colour the bars.\n ax (axis): An axis object, which will be returned if provided.\n If you don't provide one, it will be created but not returned.\n\n Returns:\n Tuple: A tuple of tuples of entities and counts.\n\n TODO:\n Deal with numeric properties, so I can histogram 'Vp' values, say.\n \"\"\"\n # This seems like overkill, but collecting all this stuff gives\n # the user some choice about what they get back.\n comps = []\n labels = []\n entries = defaultdict(int)\n for i in self:\n if lumping:\n k = i.primary[lumping]\n else:\n if summary:\n k = i.primary.summary()\n else:\n k = i.primary\n comps.append(i.primary)\n labels.append(i.primary.summary())\n entries[k] += i.thickness\n\n if sort:\n allitems = sorted(entries.items(), key=lambda i: i[1], reverse=True)\n ents, counts = zip(*allitems)\n else:\n ents, counts = tuple(entries.keys()), 
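`thickest` uses the sorted-indices idiom — `sorted(range(len(...)), key=...)` — so it can hand back either the intervals themselves or their positions. In isolation, with invented thicknesses:

```python
thicknesses = [3.0, 12.5, 0.5, 7.0]

# Indices sorted by thickness, ascending; the last n are the thickest.
order = sorted(range(len(thicknesses)), key=lambda k: thicknesses[k])
print(order)       # [2, 0, 3, 1]
print(order[-2:])  # [3, 1]  -> indices of the two thickest
```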
tuple(entries.values())\n\n # Make plot.\n if plot:\n if ax is None:\n fig, ax = plt.subplots()\n return_ax = False\n else:\n return_ax = True\n ind = np.arange(len(ents))\n bars = ax.bar(ind, counts, align='center')\n ax.set_xticks(ind)\n ax.set_xticklabels(labels)\n if legend:\n colours = [legend.get_colour(c) for c in comps]\n for b, c in zip(bars, colours):\n b.set_color(c)\n ax.set_ylabel('Thickness [m]')\n else:\n bars = []\n\n if plot and return_ax:\n return counts, ents, ax\n\n return counts, ents, bars"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmakes a bar plot of thickness per interval.", "response": "def bar(self, height='thickness', sort=False, reverse=False,\n legend=None, ax=None, figsize=None, **kwargs):\n \"\"\"\n Make a bar plot of thickness per interval.\n \n Args:\n height (str): The property of the primary component to plot.\n sort (bool or function): Either pass a boolean indicating whether\n to reverse sort by thickness, or pass a function to be used as\n the sort key.\n reverse (bool): Reverses the sort order.\n legend (Legend): The legend to plot with.\n ax (axis): Optional axis to plot to.\n figsize (tuple): A figure size, (width, height), optional.\n **kwargs: passed to the matplotlib bar plot command, ax.bar().\n \n Returns:\n axis: If you sent an axis in, you get it back.\n \"\"\"\n if sort:\n if sort is True:\n def func(x): return x.thickness\n reverse = True\n data = sorted(self, key=func, reverse=reverse)\n else:\n data = self[:]\n\n if ax is None:\n fig, ax = plt.subplots(figsize=figsize)\n\n heights = [getattr(i, height) for i in data]\n \n comps = [i[0] for i in self.unique]\n \n if legend is None:\n legend = Legend.random(comps)\n \n colors = [legend.get_colour(i.primary) for i in data]\n\n bars = ax.bar(range(len(data)), height=heights, color=colors, **kwargs)\n \n # Legend.\n colourables = [i.primary.summary() for i in data]\n unique_bars = dict(zip(colourables, bars))\n 
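The `hist` record tallies thickness per component with a `defaultdict(int)` and then sorts the items by total. The same tally with invented `(lithology, thickness)` pairs:

```python
from collections import defaultdict

# (lithology, thickness) pairs standing in for intervals.
intervals = [('sand', 2.0), ('shale', 5.0), ('sand', 1.5), ('limestone', 0.5)]

entries = defaultdict(int)
for lith, thickness in intervals:
    entries[lith] += thickness

# Sort by accumulated thickness, largest first, as hist(sort=True) does.
ranked = sorted(entries.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # [('shale', 5.0), ('sand', 3.5), ('limestone', 0.5)]
```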
ax.legend(unique_bars.values(), unique_bars.keys())\n\n ax.set_ylabel(height.title())\n\n return ax"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef invert(self, copy=False):\n if copy:\n return Striplog([i.invert(copy=True) for i in self])\n else:\n for i in self:\n i.invert()\n self.__sort()\n o = self.order\n self.order = {'depth': 'elevation', 'elevation': 'depth'}[o]\n return", "response": "Inverts the striplog changing its order and the order of its contents."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef crop(self, extent, copy=False):\n try:\n if extent[0] is None:\n extent = (self.start.z, extent[1])\n if extent[1] is None:\n extent = (extent[0], self.stop.z)\n except:\n m = \"You must provide a 2-tuple for the new extents. Use None for\"\n m += \" the existing start or stop.\"\n raise StriplogError(m)\n\n first_ix = self.read_at(extent[0], index=True)\n last_ix = self.read_at(extent[1], index=True)\n\n first = self[first_ix].split_at(extent[0])[1]\n last = self[last_ix].split_at(extent[1])[0]\n\n new_list = self.__list[first_ix:last_ix+1].copy()\n new_list[0] = first\n new_list[-1] = last\n\n if copy:\n return Striplog(new_list)\n else:\n self.__list = new_list\n return", "response": "Crop the striplog to a new depth range."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef quality(self, tests, alias=None):\n # This is hacky... 
striplog should probably merge with welly...\n\n # Ignore aliases\n alias = alias or {}\n alias = alias.get('striplog', alias.get('Striplog', []))\n\n # Gather the tests.\n # First, anything called 'all', 'All', or 'ALL'.\n # Second, anything with the name of the curve we're in now.\n # Third, anything that the alias list has for this curve.\n # (This requires a reverse look-up so it's a bit messy.)\n this_tests =\\\n tests.get('all', [])+tests.get('All', [])+tests.get('ALL', [])\\\n + tests.get('striplog', tests.get('Striplog', []))\\\n + utils.flatten_list([tests.get(a) for a in alias])\n this_tests = filter(None, this_tests)\n\n # If we explicitly set zero tests for a particular key, then this\n # overrides the 'all' tests.\n if not tests.get('striplog', tests.get('Striplog', 1)):\n this_tests = []\n\n return {test.__name__: test(self) for test in this_tests}", "response": "Run a series of tests and return the corresponding results."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef hex_to_name(hexx):\n for n, h in defaults.COLOURS.items():\n if (len(n) > 1) and (h == hexx.upper()):\n return n.lower()\n return None", "response": "Convert a hex colour to a color name using matplotlib s colour names."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef loglike_from_image(filename, offset):\n im = plt.imread(filename)\n if offset < 1:\n col = int(im.shape[1] * offset)\n else:\n col = offset\n return im[:, col, :3]", "response": "Get a log - like stream of RGB values from an image."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntake a log - like stream of numbers or strings and return two arrays that are the tops and values from the stream.", "response": "def tops_from_loglike(a, offset=0, null=None):\n \"\"\"\n Take a log-like stream of numbers or strings, and return two arrays:\n one of the tops (changes), and one of 
the values from the stream.\n\n Args:\n loglike (array-like): The input stream of loglike data.\n offset (int): Offset (down) from top at which to get lithology,\n to be sure of getting 'clean' pixels.\n\n Returns:\n ndarray: Two arrays, tops and values.\n \"\"\"\n a = np.copy(a)\n\n try:\n contains_nans = np.isnan(a).any()\n except:\n contains_nans = False\n\n if contains_nans:\n # Find a null value that's not in the log, and apply it if possible.\n _null = null or -1\n while _null in a:\n _null -= 1\n\n try:\n a[np.isnan(a)] = _null\n transformed = True\n except:\n transformed = False\n\n edges = a[1:] == a[:-1]\n edges = np.append(True, edges)\n\n tops = np.where(~edges)[0]\n tops = np.append(0, tops)\n\n values = a[tops + offset]\n\n if contains_nans and transformed:\n values[values == _null] = np.nan\n\n return tops, values"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef axis_transform(ax, x, y, xlim=None, ylim=None, inverse=False):\n xlim = xlim or ax.get_xlim()\n ylim = ylim or ax.get_ylim()\n\n xdelta = xlim[1] - xlim[0]\n ydelta = ylim[1] - ylim[0]\n\n if not inverse:\n xout = xlim[0] + x * xdelta\n yout = ylim[0] + y * ydelta\n else:\n xdelta2 = x - xlim[0]\n ydelta2 = y - ylim[0]\n xout = xdelta2 / xdelta\n yout = ydelta2 / ydelta\n\n return xout, yout", "response": "Transform the data in the axis to the corresponding coordinates."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning an underscore if the attribute is absent.", "response": "def get_field(self, field_name, args, kwargs):\n \"\"\"\n Return an underscore if the attribute is absent.\n Not all components have the same attributes.\n \"\"\"\n try:\n s = super(CustomFormatter, self)\n return s.get_field(field_name, args, kwargs)\n except KeyError: # Key is missing\n return (\"_\", field_name)\n except IndexError: # Value is missing\n return (\"_\", field_name)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell 
what is the following Python 3 function doing\ndef convert_field(self, value, conversion):\n try: # If the normal behaviour works, do it.\n s = super(CustomFormatter, self)\n return s.convert_field(value, conversion)\n except ValueError:\n funcs = {'s': str, # Default.\n 'r': repr, # Default.\n 'a': ascii, # Default.\n 'u': str.upper,\n 'l': str.lower,\n 'c': str.capitalize,\n 't': str.title,\n 'm': np.mean,\n '\u00b5': np.mean,\n 'v': np.var,\n 'd': np.std,\n '+': np.sum,\n '\u2211': np.sum,\n 'x': np.product,\n }\n return funcs.get(conversion)(value)", "response": "Convert a value to a specific field."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a list of all the jobs registered with Nomad.", "response": "def get_jobs(self, prefix=None):\n \"\"\" Lists all the jobs registered with Nomad.\n\n https://www.nomadproject.io/docs/http/jobs.html\n arguments:\n - prefix :(str) optional, specifies a string to filter jobs on based on an prefix.\n This is specified as a querystring parameter.\n returns: list\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException\n \"\"\"\n params = {\"prefix\": prefix}\n return self.request(method=\"get\", params=params).json()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing a HCL Job file. Returns a dict with the JSON formatted job.", "response": "def parse(self, hcl, canonicalize=False):\n \"\"\" Parse a HCL Job file. 
Returns a dict with the JSON formatted job.\n This API endpoint is only supported from Nomad version 0.8.3.\n\n https://www.nomadproject.io/api/jobs.html#parse-job\n\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException\n \"\"\"\n return self.request(\"parse\", json={\"JobHCL\": hcl, \"Canonicalize\": canonicalize}, method=\"post\", allow_redirects=True).json()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update_token(self, id, token):\n return self.request(\"token\", id, json=token, method=\"post\").json()", "response": "Update a token in the specified account"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_policy(self, id, policy):\n return self.request(\"policy\", id, json=policy, method=\"post\")", "response": "Create a policy with the specified ID."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nupdates a policy with the specified ID.", "response": "def update_policy(self, id, policy):\n \"\"\" Create policy.\n\n https://www.nomadproject.io/api/acl-policies.html\n\n arguments:\n - name\n - policy\n returns: request.Response\n\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException\n \"\"\"\n return self.request(\"policy\", id, json=policy, method=\"post\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_allocations(self, prefix=None):\n params = {\"prefix\": prefix}\n return self.request(method=\"get\", params=params).json()", "response": "Lists all the allocations."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fail_deployment(self, id):\n fail_json = {\"DeploymentID\": id}\n return self.request(\"fail\", id, json=fail_json, method=\"post\").json()", 
"response": "This endpoint is used to mark a deployment as failed."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\npromotes all canaries for a task group to all canaries for a deployment.", "response": "def promote_deployment_all(self, id, all=True):\n \"\"\" This endpoint is used to promote task groups that have canaries for a deployment. This should be done when\n the placed canaries are healthy and the rolling upgrade of the remaining allocations should begin.\n\n https://www.nomadproject.io/docs/http/deployments.html\n\n arguments:\n - id\n - all, Specifies whether all task groups should be promoted.\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException\n \"\"\"\n promote_all_json = {\"All\": all,\n \"DeploymentID\": id}\n return self.request(\"promote\", id, json=promote_all_json, method=\"post\").json()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef promote_deployment_groups(self, id, groups=list()):\n promote_groups_json = {\"Groups\": groups,\n \"DeploymentID\": id}\n return self.request(\"promote\", id, json=promote_groups_json, method=\"post\").json()", "response": "Promotes the set of canaries that have been placed for a deployment."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntoggle the drain mode of the node. When enabled, no further allocations will be assigned and existing allocations will be migrated. 
https://www.nomadproject.io/docs/http/node.html arguments: - id (str uuid): node id - enable (bool): enable node drain or not to enable node drain returns: dict raises: - nomad.api.exceptions.BaseNomadException - nomad.api.exceptions.URLNotFoundNomadException", "response": "def drain_node(self, id, enable=False):\n \"\"\" Toggle the drain mode of the node.\n When enabled, no further allocations will be\n assigned and existing allocations will be migrated.\n\n https://www.nomadproject.io/docs/http/node.html\n\n arguments:\n - id (str uuid): node id\n - enable (bool): enable node drain or not to enable node drain\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException\n \"\"\"\n\n return self.request(id, \"drain\", params={\"enable\": enable}, method=\"post\").json()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef drain_node_with_spec(self, id, drain_spec, mark_eligible=None):\n payload = {}\n\n if drain_spec and mark_eligible is not None:\n payload = {\n \"NodeID\": id,\n \"DrainSpec\": drain_spec,\n \"MarkEligible\": mark_eligible\n }\n elif drain_spec and mark_eligible is None:\n payload = {\n \"NodeID\": id,\n \"DrainSpec\": drain_spec\n }\n elif not drain_spec and mark_eligible is not None:\n payload = {\n \"NodeID\": id,\n \"DrainSpec\": None,\n \"MarkEligible\": mark_eligible\n }\n elif not drain_spec and mark_eligible is None:\n payload = {\n \"NodeID\": id,\n \"DrainSpec\": None,\n }\n\n return self.request(id, \"drain\", json=payload, method=\"post\").json()", "response": "This endpoint toggles the drain mode of a node."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef eligible_node(self, id, eligible=None, ineligible=None):\n payload = {}\n\n if eligible is not None and ineligible is not None:\n raise nomad.api.exceptions.InvalidParameters\n if 
eligible is None and ineligible is None:\n            raise nomad.api.exceptions.InvalidParameters\n\n        if eligible is not None and eligible:\n            payload = {\"Eligibility\": \"eligible\", \"NodeID\": id}\n        elif eligible is not None and not eligible:\n            payload = {\"Eligibility\": \"ineligible\", \"NodeID\": id}\n        elif ineligible is not None and ineligible:\n            payload = {\"Eligibility\": \"ineligible\", \"NodeID\": id}\n        elif ineligible is not None and not ineligible:\n            payload = {\"Eligibility\": \"eligible\", \"NodeID\": id}\n\n        return self.request(id, \"eligibility\", json=payload, method=\"post\").json()", "response": "Toggle the eligibility of the node."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nlisting files in an allocation directory.", "response": "def list_files(self, id=None, path=\"/\"):\n        \"\"\" List files in an allocation directory.\n\n           https://www.nomadproject.io/docs/http/client-fs-ls.html\n\n            arguments:\n              - id\n              - path \n            returns: list\n            raises:\n              - nomad.api.exceptions.BaseNomadException\n              - nomad.api.exceptions.URLNotFoundNomadException\n        \"\"\"\n        if id:\n            return self.request(id, params={\"path\": path}, method=\"get\").json()\n        else:\n            return self.request(params={\"path\": path}, method=\"get\").json()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread contents of a file in an allocation directory.", "response": "def read_file(self, id=None, path=\"/\"):\n        \"\"\" Read contents of a file in an allocation directory.\n\n           https://www.nomadproject.io/docs/http/client-fs-cat.html\n\n            arguments:\n              - id\n              - path\n            returns: (str) text\n            raises:\n              - nomad.api.exceptions.BaseNomadException\n              - nomad.api.exceptions.URLNotFoundNomadException\n        \"\"\"\n        if id:\n            return self.request(id, params={\"path\": path}, method=\"get\").text\n        else:\n            return self.request(params={\"path\": path}, method=\"get\").text"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads contents of a
file in an allocation directory.", "response": "def read_file_offset(self, id, offset, limit, path=\"/\"):\n \"\"\" Read contents of a file in an allocation directory.\n\n https://www.nomadproject.io/docs/http/client-fs-cat.html\n\n arguments:\n - id: (str) allocation_id required\n - offset: (int) required\n - limit: (int) required\n - path: (str) optional\n returns: (str) text\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.BadRequestNomadException\n \"\"\"\n params = {\n \"path\": path,\n \"offset\": offset,\n \"limit\": limit\n }\n return self.request(id, params=params, method=\"get\").text"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef stream(self, id, offset, origin, path=\"/\"):\n params = {\n \"path\": path,\n \"offset\": offset,\n \"origin\": origin\n }\n return self.request(id, params=params, method=\"get\").text", "response": "This endpoint streams the contents of a file in an allocation directory."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef stream(self, id, task, type, follow=False, offset=0, origin=\"start\", plain=False):\n params = {\n \"task\": task,\n \"type\": type,\n \"follow\": follow,\n \"offset\": offset,\n \"origin\": origin,\n \"plain\": plain\n }\n return self.request(id, params=params, method=\"get\").text", "response": "This endpoint streams a task s stderr and stdout logs from a task s allocation."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ninitiate a join between the agent and target peers.", "response": "def join_agent(self, addresses):\n \"\"\"Initiate a join between the agent and target peers.\n\n https://www.nomadproject.io/docs/http/agent-join.html\n\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException\n \"\"\"\n params = {\"address\": addresses}\n return 
self.request(\"join\", params=params, method=\"post\").json()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef update_servers(self, addresses):\n params = {\"address\": addresses}\n return self.request(\"servers\", params=params, method=\"post\").status_code", "response": "Updates the list of known servers to the provided list."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef force_leave(self, node):\n params = {\"node\": node}\n return self.request(\"force-leave\", params=params, method=\"post\").status_code", "response": "Force a failed gossip member into the left state."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_nodes(self, prefix=None):\n params = {\"prefix\": prefix}\n return self.request(method=\"get\", params=params).json()", "response": "Returns a list of all the client nodes that are in the cluster."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nlist all the evaluations on the specified resource.", "response": "def get_evaluations(self, prefix=None):\n \"\"\" Lists all the evaluations.\n\n https://www.nomadproject.io/docs/http/evals.html\n arguments:\n - prefix :(str) optional, specifies a string to filter evaluations on based on an prefix.\n This is specified as a querystring parameter.\n returns: list\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException\n \"\"\"\n params = {\"prefix\": prefix}\n return self.request(method=\"get\", params=params).json()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_namespaces(self, prefix=None):\n params = {\"prefix\": prefix}\n return self.request(method=\"get\", params=params).json()", "response": "Returns a list of all the namespaces registered with Nomad."} {"SOURCE": "codesearchnet", 
"instruction": "Explain what the following Python 3 code does\ndef register_job(self, id, job):\n return self.request(id, json=job, method=\"post\").json()", "response": "Registers a new job or updates an existing job"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ninvokes a dry - run of the scheduler for the job.", "response": "def plan_job(self, id, job, diff=False, policy_override=False):\n \"\"\" Invoke a dry-run of the scheduler for the job.\n\n https://www.nomadproject.io/docs/http/job.html\n\n arguments:\n - id\n - job, dict\n - diff, boolean\n - policy_override, boolean\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException\n \"\"\"\n json_dict = {}\n json_dict.update(job)\n json_dict.setdefault('Diff', diff)\n json_dict.setdefault('PolicyOverride', policy_override)\n return self.request(id, \"plan\", json=json_dict, method=\"post\").json()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndispatch a new instance of a parameterized job.", "response": "def dispatch_job(self, id, payload=None, meta=None):\n \"\"\" Dispatches a new instance of a parameterized job.\n\n https://www.nomadproject.io/docs/http/job.html\n\n arguments:\n - id\n - payload\n - meta\n returns: dict\n raises:\n - nomad.api.exceptions.BaseNomadException\n - nomad.api.exceptions.URLNotFoundNomadException\n \"\"\"\n dispatch_json = {\"Meta\": meta, \"Payload\": payload}\n return self.request(id, \"dispatch\", json=dispatch_json, method=\"post\").json()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef revert_job(self, id, version, enforce_prior_version=None):\n revert_json = {\"JobID\": id,\n \"JobVersion\": version,\n \"EnforcePriorVersion\": enforce_prior_version}\n return self.request(id, \"revert\", json=revert_json, method=\"post\").json()", "response": "This endpoint 
reverts the job to an older version."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nquery the status of a client node registered with Nomad.", "response": "def get_configuration(self, stale=False):\n        \"\"\" Query the status of a client node registered with Nomad.\n\n            https://www.nomadproject.io/docs/http/operator.html\n\n            returns: dict\n            optional arguments:\n              - stale, (defaults to False), Specifies if the cluster should respond without an active leader.\n                This is specified as a querystring parameter.\n            raises:\n              - nomad.api.exceptions.BaseNomadException\n              - nomad.api.exceptions.URLNotFoundNomadException\n        \"\"\"\n\n        params = {\"stale\": stale}\n        return self.request(\"raft\", \"configuration\", params=params, method=\"get\").json()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef delete_peer(self, peer_address, stale=False):\n\n        params = {\"address\": peer_address, \"stale\": stale}\n        return self.request(\"raft\", \"peer\", params=params, method=\"delete\").ok", "response": "Remove the Nomad server with given address from the Raft configuration."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate a namespace with the specified ID.", "response": "def update_namespace(self, id, namespace):\n        \"\"\" Update namespace\n\n            https://www.nomadproject.io/api/namespaces.html\n\n            arguments:\n              - id\n              - namespace (dict)\n            returns: requests.Response\n            raises:\n              - nomad.api.exceptions.BaseNomadException\n              - nomad.api.exceptions.URLNotFoundNomadException\n        \"\"\"\n        return self.request(id, json=namespace, method=\"post\")"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _get_random(self, obj_type):\n        return self.mutator[obj_type][random.randint(0, self.config.level)]", "response": "Get a random mutator from a list of mutators\n        "} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_mutator(self, obj, obj_type):\n if obj_type == unicode:\n obj_type = str\n obj = str(obj)\n return self._get_random(obj_type)(obj)", "response": "Get a random mutator for the given object type"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a string containing the original object", "response": "def get_string_polyglot_attack(self, obj):\n \"\"\"\n Return a polyglot attack containing the original object\n \"\"\"\n return self.polyglot_attacks[random.choice(self.config.techniques)] % obj"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef fuzz(self, obj):\n buf = list(obj)\n FuzzFactor = random.randrange(1, len(buf))\n numwrites=random.randrange(math.ceil((float(len(buf)) / FuzzFactor)))+1\n for j in range(numwrites):\n self.random_action(buf)\n return self.safe_unicode(buf)", "response": "Perform the fuzzing on the object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef random_action(self, b):\n action = random.choice([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])\n if len(b) >= 3:\n pos = random.randint(0, len(b)-2)\n if action == 1:\n rbyte = random.randrange(256)\n rn = random.randrange(len(b))\n b[rn] = \"%c\" % rbyte\n elif action == 2:\n howmany = random.randint(1, 100)\n curpos = pos\n for _ in range(0, howmany):\n b.insert(curpos, b[pos])\n pos += 1\n elif action == 3:\n n = random.choice([1, 2, 4])\n for _ in range(0, n):\n if len(b) > pos+1:\n tmp = b[pos]\n b[pos] = b[pos+1]\n b[pos+1] = tmp\n pos += 1\n else:\n pos -= 2\n tmp = b[pos]\n b[pos] = b[pos+1]\n b[pos+1] = tmp\n pos += 1\n elif action in [4, 5]:\n op = {\n 4: lambda x, y: ord(x) << y,\n 5: lambda x, y: ord(x) >> y,\n }\n n = random.choice([1, 2, 4])\n if len(b) < pos+n:\n pos = len(b) - (pos+n)\n if n == 1:\n f = \"\")(self.custom_html)\n if self.config.fuzz_web:\n 
self.request_checker.start()\n self.httpd.start()\n self.httpsd.start()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nkill the servers and the HTTPD and HTTPSD processes", "response": "def stop(self):\n \"\"\"\n Kill the servers\n \"\"\"\n os.kill(self.httpd.pid, signal.SIGKILL)\n os.kill(self.httpsd.pid, signal.SIGKILL)\n self.client_queue.put((0,0))\n if self.config.fuzz_web:\n self.request_checker.join()\n self.logger.debug(\"[{0}] - PJFServer successfully completed\".format(time.strftime(\"%H:%M:%S\")))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nserving custom HTML page", "response": "def custom_html(self, filepath):\n \"\"\"\n Serve custom HTML page\n \"\"\"\n try:\n response.headers.append(\"Access-Control-Allow-Origin\", \"*\")\n response.headers.append(\"Accept-Encoding\", \"identity\")\n response.headers.append(\"Content-Type\", \"text/html\")\n return static_file(filepath, root=self.config.html)\n except Exception as e:\n raise PJFBaseException(e.message if hasattr(e, \"message\") else str(e))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nserves fuzzed JSON object.", "response": "def serve(self):\n \"\"\"\n Serve fuzzed JSON object\n \"\"\"\n try:\n fuzzed = self.json.fuzzed\n if self.config.fuzz_web:\n self.client_queue.put((request.environ.get('REMOTE_ADDR'), fuzzed))\n response.headers.append(\"Access-Control-Allow-Origin\", \"*\")\n response.headers.append(\"Accept-Encoding\", \"identity\")\n response.headers.append(\"Content-Type\", self.config.content_type)\n if self.config.notify:\n PJFTestcaseServer.send_testcase(fuzzed, '127.0.0.1', self.config.ports[\"servers\"][\"TCASE_PORT\"])\n yield fuzzed\n except Exception as e:\n raise PJFBaseException(e.message if hasattr(e, \"message\") else str(e))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\napplying patch to the socket module.", "response": "def 
apply_patch(self):\n \"\"\"\n Fix default socket lib to handle client disconnection while receiving data (Broken pipe)\n \"\"\"\n if sys.version_info >= (3, 0):\n # No patch for python >= 3.0\n pass\n else:\n from .patch.socket import socket as patch\n socket.socket = patch"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef fuzz(self, obj):\n decorators = self.decorators\n\n @decorators.mutate_object_decorate\n def mutate():\n return obj\n return mutate()", "response": "Generic fuzz mutator use a decorator for the given type\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nspawning a new process using subprocess. Popen", "response": "def spawn(self, cmd, stdin_content=\"\", stdin=False, shell=False, timeout=2):\n \"\"\"\n Spawn a new process using subprocess\n \"\"\"\n try:\n if type(cmd) != list:\n raise PJFInvalidType(type(cmd), list)\n if type(stdin_content) != str:\n raise PJFInvalidType(type(stdin_content), str)\n if type(stdin) != bool:\n raise PJFInvalidType(type(stdin), bool)\n self._in = stdin_content\n try:\n self.process = subprocess.Popen(cmd, stdout=PIPE, stderr=PIPE, stdin=PIPE, shell=shell)\n self.finish_read(timeout, stdin_content, stdin)\n if self.process.poll() is not None:\n self.close()\n except KeyboardInterrupt:\n return\n except OSError:\n raise PJFProcessExecutionError(\"Binary <%s> does not exist\" % cmd[0])\n except Exception as e:\n raise PJFBaseException(e.message if hasattr(e, \"message\") else str(e))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntries to get output in a separate thread.", "response": "def get_output(self, stdin_content, stdin):\n \"\"\"\n Try to get output in a separate thread\n \"\"\"\n try:\n if stdin:\n if sys.version_info >= (3, 0):\n self.process.stdin.write(bytes(stdin_content, \"utf-8\"))\n else:\n self.process.stdin.write(stdin_content)\n self._out = 
self.process.communicate()[0]\n except (error, IOError):\n self._out = self._in\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef finish_read(self, timeout=2, stdin_content=\"\", stdin=False):\n process = Thread(target=self.get_output, args=(stdin_content, stdin))\n process.start()\n if timeout > 0:\n process.join(timeout)\n else:\n process.join()\n if process.is_alive():\n self.close()\n self.return_code = -signal.SIGHUP\n else:\n self.return_code = self.process.returncode", "response": "Wait until we got output or until timeout is over\n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncloses the process and close the process object.", "response": "def close(self):\n \"\"\"\n Terminate the newly created process\n \"\"\"\n try:\n self.process.terminate()\n self.return_code = self.process.returncode\n except OSError:\n pass\n self.process.stdin.close()\n self.process.stdout.close()\n self.process.stderr.close()\n self.logger.debug(\"[{0}] - PJFExecutor successfully completed\".format(time.strftime(\"%H:%M:%S\")))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstarts the fuzzing process.", "response": "def start(self):\n \"\"\"\n Parse the command line and start PyJFuzz\n \"\"\"\n from .pjf_worker import PJFWorker\n worker = PJFWorker(self)\n if self.update_pjf:\n worker.update_library()\n elif self.browser_auto:\n worker.browser_autopwn()\n elif self.fuzz_web:\n worker.web_fuzzer()\n elif self.json:\n if not self.web_server and not self.ext_fuzz and not self.cmd_fuzz:\n worker.fuzz()\n elif self.ext_fuzz:\n if self.stdin:\n worker.fuzz_stdin()\n else:\n worker.fuzz_command_line()\n elif self.cmd_fuzz:\n if self.stdin:\n worker.fuzz_external(True)\n else:\n worker.fuzz_external()\n else:\n worker.start_http_server()\n elif self.json_file:\n worker.start_file_fuzz()\n elif self.process_to_monitor:\n 
worker.start_process_monitor()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbuild the provided value while properly handling native Python types.", "response": "def val(val, pre=None, shortest=False):\n \"\"\"Build the provided value, while properly handling\n native Python types, :any:`gramfuzz.fields.Field` instances, and :any:`gramfuzz.fields.Field`\n subclasses.\n\n :param list pre: The prerequisites list\n :returns: str\n \"\"\"\n if pre is None:\n pre = []\n\n fields = gramfuzz.fields\n MF = fields.MetaField\n F = fields.Field\n if type(val) is MF:\n val = val()\n\n if isinstance(val, F):\n val = str(val.build(pre, shortest=shortest))\n\n return str(val)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef execute(self, obj):\n try:\n if self.config.stdin:\n self.spawn(self.config.command, stdin_content=obj, stdin=True, timeout=1)\n else:\n if \"@@\" not in self.config.command:\n raise PJFMissingArgument(\"Missing @@ filename indicator while using non-stdin fuzzing method\")\n for x in self.config.command:\n if \"@@\" in x:\n self.config.command[self.config.command.index(x)] = x.replace(\"@@\", obj)\n self.spawn(self.config.command, timeout=2)\n self.logger.debug(\"[{0}] - PJFExternalFuzzer successfully completed\".format(time.strftime(\"%H:%M:%S\")))\n return self._out\n except KeyboardInterrupt:\n return \"\"\n except Exception as e:\n raise PJFBaseException(e.message if hasattr(e, \"message\") else str(e))", "response": "Execute the external fuzzing method."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef json_encode(func):\n def func_wrapper(self, indent, utf8):\n if utf8:\n encoding = \"\\\\x%02x\"\n else:\n encoding = \"\\\\u%04x\"\n hex_regex = re.compile(r\"(\\\\\\\\x[a-fA-F0-9]{2})\")\n unicode_regex = re.compile(r\"(\\\\u[a-fA-F0-9]{4})\")\n\n def encode_decode_all(d, _decode=True):\n 
if type(d) == dict:\n for k in d:\n if type(d[k]) in [dict, list]:\n if _decode:\n d[k] = encode_decode_all(d[k])\n else:\n d[k] = encode_decode_all(d[k], _decode=False)\n elif type(d[k]) == str:\n if _decode:\n d[k] = decode(d[k])\n else:\n d[k] = encode(d[k])\n elif type(d) == list:\n arr = []\n for e in d:\n if type(e) == str:\n if _decode:\n arr.append(decode(e))\n else:\n arr.append(encode(e))\n elif type(e) in [dict, list]:\n if _decode:\n arr.append(encode_decode_all(e))\n else:\n arr.append(encode_decode_all(e, _decode=False))\n else:\n arr.append(e)\n return arr\n else:\n if _decode:\n return decode(d)\n else:\n return encode(d)\n return d\n\n def decode(x):\n tmp = \"\".join(encoding % ord(c) if c not in p else c for c in x)\n if sys.version_info >= (3, 0):\n return str(tmp)\n else:\n for encoded in unicode_regex.findall(tmp):\n tmp = tmp.replace(encoded, encoded.decode(\"unicode_escape\"))\n return unicode(tmp)\n\n def encode(x):\n for encoded in hex_regex.findall(x):\n if sys.version_info >= (3, 0):\n x = x.replace(encoded, bytes(str(encoded).replace(\"\\\\\\\\x\", \"\\\\x\"),\"utf-8\").decode(\"unicode_escape\"))\n else:\n x = x.replace(encoded, str(encoded).replace(\"\\\\\\\\x\", \"\\\\x\").decode(\"string_escape\"))\n return x\n\n if indent:\n return encode_decode_all(\"{0}\".format(json.dumps(encode_decode_all(func(self)), indent=5)),\n _decode=False)\n else:\n return encode_decode_all(\"{0}\".format(json.dumps(encode_decode_all(func(self)))), _decode=False)\n\n return func_wrapper", "response": "Decorator used to change the return value from PJFFactory. 
fuzzed it makes the structure printable\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _odds_val(self):\n if len(self.odds) == 0:\n self.odds = [(1.00, [self.min, self.max])]\n\n rand_val = rand.random()\n total = 0\n for percent,v in self.odds:\n if total <= rand_val < total+percent:\n found_v = v\n break\n total += percent\n\n res = None\n if isinstance(v, (tuple,list)):\n rand_func = rand.randfloat if type(v[0]) is float else rand.randint\n\n if len(v) == 2:\n res = rand_func(v[0], v[1])\n elif len(v) == 1:\n res = v[0]\n else:\n res = v\n\n return res", "response": "Determine a new random value derived from the odds values defined in the field."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef build(self, pre=None, shortest=False):\n if pre is None:\n pre = []\n\n if self.value is not None and rand.maybe():\n return utils.val(self.value, pre, shortest=shortest)\n\n if self.min == self.max:\n return self.min\n\n res = self._odds_val()\n if self.neg and rand.maybe():\n res = -res\n return res", "response": "Builds the integer value of the field."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nbuilding the String instance of the class.", "response": "def build(self, pre=None, shortest=False):\n \"\"\"Build the String instance\n\n :param list pre: The prerequisites list (optional, default=None)\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated.\n \"\"\"\n if pre is None:\n pre = []\n\n if self.value is not None and rand.maybe():\n return utils.val(self.value, pre, shortest=shortest)\n\n length = super(String, self).build(pre, shortest=shortest)\n res = rand.data(length, self.charset)\n return res"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nbuild the join field instance.", 
"response": "def build(self, pre=None, shortest=False):\n \"\"\"Build the ``Join`` field instance.\n \n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated.\n \"\"\"\n\n if pre is None:\n pre = []\n\n if self.max is not None:\n if shortest:\n vals = [self.values[0]]\n else:\n # +1 to make it inclusive\n vals = [self.values[0]] * rand.randint(1, self.max+1)\n else:\n vals = self.values\n\n joins = []\n for val in vals:\n try:\n v = utils.val(val, pre, shortest=shortest)\n joins.append(v)\n except errors.OptGram as e:\n continue\n return self.sep.join(joins)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef build(self, pre=None, shortest=False):\n if pre is None:\n pre = []\n\n res = deque()\n for x in self.values:\n try:\n res.append(utils.val(x, pre, shortest=shortest))\n except errors.OptGram as e:\n continue\n except errors.FlushGrams as e:\n prev = \"\".join(res)\n res.clear()\n # this is assuming a scope was pushed!\n if len(self.fuzzer._scope_stack) == 1:\n pre.append(prev)\n else:\n stmts = self.fuzzer._curr_scope.setdefault(\"prev_append\", deque())\n stmts.extend(pre)\n stmts.append(prev)\n pre.clear()\n continue\n\n return self.sep.join(res)", "response": "Builds the And instance of the class."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbuild the quote instance of the object.", "response": "def build(self, pre=None, shortest=False):\n \"\"\"Build the ``Quote`` instance\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated.\n \"\"\"\n res = super(Q, self).build(pre, shortest=shortest)\n\n if self.escape:\n return repr(res)\n elif self.html_js_escape:\n return (\"'\" + res.encode(\"string_escape\").replace(\"<\", 
\"\\\\x3c\").replace(\">\", \"\\\\x3e\") + \"'\")\n else:\n return \"{q}{r}{q}\".format(q=self.quote, r=res)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef build(self, pre=None, shortest=False):\n if pre is None:\n pre = []\n\n # self.shortest_vals will be set by the GramFuzzer and will\n # contain a list of value options that have a minimal reference\n # chain\n if shortest and self.shortest_vals is not None:\n return utils.val(rand.choice(self.shortest_vals), pre, shortest=shortest)\n else:\n return utils.val(rand.choice(self.values), pre, shortest=shortest)", "response": "Builds the field by randomly choosing one of its possible values."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbuilds the current Opt instance.", "response": "def build(self, pre=None, shortest=False):\n \"\"\"Build the current ``Opt`` instance\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated.\n \"\"\"\n if pre is None:\n pre = []\n\n if shortest or rand.maybe(self.prob):\n raise errors.OptGram\n\n return super(Opt, self).build(pre, shortest=shortest)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbuilds the rule definition for the current rule.", "response": "def build(self, pre=None, shortest=False):\n \"\"\"Build this rule definition\n \n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated.\n \"\"\"\n if pre is None:\n pre = []\n\n res = deque()\n for value in self.values:\n try:\n res.append(utils.val(value, pre, shortest=shortest))\n except errors.FlushGrams as e:\n prev = \"\".join(res)\n res.clear()\n # this is assuming a scope was pushed!\n if 
len(self.fuzzer._scope_stack) == 1:\n pre.append(prev)\n else:\n stmts = self.fuzzer._curr_scope.setdefault(\"prev_append\", deque())\n stmts.extend(pre)\n stmts.append(prev)\n pre.clear()\n continue\n except errors.OptGram as e:\n continue\n except errors.GramFuzzError as e:\n print(\"{} : {}\".format(self.name, str(e)))\n raise\n\n return self.sep.join(res)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nbuild the ref instance by fetching the rule from the GramFuzzer instance and building it.", "response": "def build(self, pre=None, shortest=False):\n \"\"\"Build the ``Ref`` instance by fetching the rule from\n the GramFuzzer instance and building it\n\n :param list pre: The prerequisites list\n :param bool shortest: Whether or not the shortest reference-chain (most minimal) version of the field should be generated.\n \"\"\"\n global REF_LEVEL\n REF_LEVEL += 1\n\n try:\n if pre is None:\n pre = []\n\n #print(\"{:04d} - {} - {}:{}\".format(REF_LEVEL, shortest, self.cat, self.refname))\n\n definition = self.fuzzer.get_ref(self.cat, self.refname)\n res = utils.val(\n definition,\n pre,\n shortest=(shortest or REF_LEVEL >= self.max_recursion)\n )\n\n return res\n\n # this needs to happen no matter what\n finally:\n REF_LEVEL -= 1"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef build(self, pre=None, shortest=False):\n if pre is None:\n pre = []\n\n if shortest:\n raise errors.OptGram\n elif rand.maybe():\n return super(STAR, self).build(pre, shortest=shortest)\n else:\n raise errors.OptGram", "response": "Build the STAR field."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef shutdown(self, *args):\n try:\n self._shutdown()\n if self.process:\n self.process.wait()\n self.process.stdout.close()\n self.process.stdin.close()\n self.process.stderr.close()\n self.finished = True\n self.send_testcase('', 
'127.0.0.1', self.config.ports[\"servers\"][\"TCASE_PORT\"])\n self.logger.debug(\"[{0}] - PJFProcessMonitor successfully completed\".format(time.strftime(\"%H:%M:%S\")))\n except Exception as e:\n raise PJFBaseException(e.message if hasattr(e, \"message\") else str(e))", "response": "Shutdown the running process and the monitor process."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nruns command once and check exit code", "response": "def run_and_monitor(self):\n \"\"\"\n Run command once and check exit code\n \"\"\"\n signal.signal(signal.SIGINT, self.shutdown)\n self.spawn(self.config.process_to_monitor, timeout=0)\n return self._is_sigsegv(self.return_code)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nstarting monitor process and check exit status", "response": "def start_monitor(self, standalone=True):\n \"\"\"\n Run command in a loop and check exit status plus restart process when needed\n \"\"\"\n try:\n self.start()\n cmdline = shlex.split(self.config.process_to_monitor)\n if standalone:\n signal.signal(signal.SIGINT, self.shutdown)\n self.process = subprocess.Popen(cmdline, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n while self.process and not self.finished:\n self.process.wait()\n if self._is_sigsegv(self.process.returncode):\n if self.config.debug:\n print(\"[\\033[92mINFO\\033[0m] Process crashed with \\033[91mSIGSEGV\\033[0m, waiting for testcase...\")\n while not self.got_testcase():\n time.sleep(1)\n self.save_testcase(self.testcase[-10:]) # just take last 10 testcases\n if self.process:\n self.process = subprocess.Popen(cmdline, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n except OSError:\n self.shutdown()\n self.process = False\n self.got_testcase = lambda: True\n raise PJFProcessExecutionError(\"Binary <%s> does not exist\" % cmdline[0])\n except Exception as e:\n raise PJFBaseException(\"Unknown error please send log to author\")"} {"SOURCE": "codesearchnet", "instruction": 
"Implement a function in Python 3 to\nreturn a random float from a and b.", "response": "def randfloat(a, b=None):\n \"\"\"Return a random float\n\n :param float a: Either the minimum value (inclusive) if ``b`` is set, or\n the maximum value if ``b`` is not set (non-inclusive, in which case the minimum\n is implicitly 0.0)\n :param float b: The maximum value to generate (non-inclusive)\n :returns: float\n \"\"\"\n if b is None:\n max_ = a\n min_ = 0.0\n else:\n min_ = a\n max_ = b\n\n diff = max_ - min_\n res = _random()\n res *= diff\n res += min_\n return res"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_grammar(self, path):\n if not os.path.exists(path):\n raise Exception(\"path does not exist: {!r}\".format(path))\n \n # this will let grammars reference eachother with relative\n # imports.\n #\n # E.g.:\n # grams/\n # gram1.py\n # gram2.py\n #\n # gram1.py can do \"import gram2\" to require rules in gram2.py to\n # be loaded\n grammar_path = os.path.dirname(path)\n if grammar_path not in sys.path:\n sys.path.append(grammar_path)\n\n with open(path, \"r\") as f:\n data = f.read()\n code = compile(data, path, \"exec\")\n\n locals_ = {\"GRAMFUZZER\": self, \"__file__\": path}\n exec(code) in locals_\n\n if \"TOP_CAT\" in locals_:\n cat_group = os.path.basename(path).replace(\".py\", \"\")\n self.set_cat_group_top_level_cat(cat_group, locals_[\"TOP_CAT\"])", "response": "Loads a grammar file by path."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_max_recursion(self, level):\n import gramfuzz.fields\n gramfuzz.fields.Ref.max_recursion = level", "response": "Set the maximum recursion level for the nested reference."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef preprocess_rules(self):\n to_prune = self._find_shortest_paths()\n 
self._prune_rules(to_prune)\n\n self._rules_processed = True", "response": "Preprocess the rules by finding the shortest reference paths and pruning unreachable rule definitions."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd a new rule definition to the internal dictionary.", "response": "def add_definition(self, cat, def_name, def_val, no_prune=False, gram_file=\"default\"):\n \"\"\"Add a new rule definition named ``def_name`` having value ``def_value`` to\n the category ``cat``.\n\n :param str cat: The category to add the rule to\n :param str def_name: The name of the rule definition\n :param def_val: The value of the rule definition\n :param bool no_prune: If the rule should explicitly *NOT*\n be pruned even if it has been determined to be unreachable (default=``False``)\n :param str gram_file: The file the rule was defined in (default=``\"default\"``).\n \"\"\"\n self._rules_processed = False\n\n self.add_to_cat_group(cat, gram_file, def_name)\n\n if no_prune:\n self.no_prunes.setdefault(cat, {}).setdefault(def_name, True)\n\n if self._staged_defs is not None:\n # if we're tracking changes during rule generation, add any new rules\n # to _staged_defs so they can be reverted if something goes wrong\n self._staged_defs.append((cat, def_name, def_val))\n else:\n self.defs.setdefault(cat, {}).setdefault(def_name, deque()).append(def_val)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef add_to_cat_group(self, cat, cat_group, def_name):\n self.cat_groups.setdefault(cat, {}).setdefault(cat_group, deque()).append(def_name)", "response": "Associate the provided rule definition name def_name with the\n category group cat_group in the category cat."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_ref(self, cat, refname):\n if cat not in self.defs:\n raise errors.GramFuzzError(\"referenced definition category ({!r}) not defined\".format(cat))\n \n if refname == \"*\":\n
refname = rand.choice(self.defs[cat].keys())\n \n if refname not in self.defs[cat]:\n raise errors.GramFuzzError(\"referenced definition ({!r}) not defined\".format(refname))\n\n return rand.choice(self.defs[cat][refname])", "response": "Return one of the rules in the category cat with the name\n refname."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngenerates a number of rules from the given category and category group.", "response": "def gen(self, num, cat=None, cat_group=None, preferred=None, preferred_ratio=0.5, max_recursion=None, auto_process=True):\n \"\"\"Generate ``num`` rules from category ``cat``, optionally specifying\n preferred category groups ``preferred`` that should be preferred at\n probability ``preferred_ratio`` over other randomly-chosen rule definitions.\n\n :param int num: The number of rules to generate\n :param str cat: The name of the category to generate ``num`` rules from\n :param str cat_group: The category group (ie python file) to generate rules from. This\n was added specifically to make it easier to generate data based on the name\n of the file the grammar was defined in, and is intended to work with the\n ``TOP_CAT`` values that may be defined in a loaded grammar file.\n :param list preferred: A list of preferred category groups to generate rules from\n :param float preferred_ratio: The percent probability that the preferred\n groups will be chosen over randomly choosen rule definitions from category ``cat``.\n :param int max_recursion: The maximum amount to allow references to recurse\n :param bool auto_process: Whether rules should be automatically pruned and\n shortest reference paths determined. 
See :any:`gramfuzz.GramFuzzer.preprocess_rules`\n for what would automatically be done.\n \"\"\"\n import gramfuzz.fields\n gramfuzz.fields.REF_LEVEL = 1\n\n if cat is None and cat_group is None:\n raise gramfuzz.errors.GramFuzzError(\"cat and cat_group are None, one must be set\")\n\n if cat is None and cat_group is not None:\n if cat_group not in self.cat_group_defaults:\n raise gramfuzz.errors.GramFuzzError(\n \"cat_group {!r} did not define a TOP_CAT variable\"\n )\n cat = self.cat_group_defaults[cat_group]\n if not isinstance(cat, basestring):\n raise gramfuzz.errors.GramFuzzError(\n \"cat_group {!r}'s TOP_CAT variable was not a string\"\n )\n\n if auto_process and self._rules_processed == False:\n self.preprocess_rules()\n\n if max_recursion is not None:\n self.set_max_recursion(max_recursion)\n\n if preferred is None:\n preferred = []\n\n res = deque()\n cat_defs = self.defs[cat]\n\n # optimizations\n _res_append = res.append\n _res_extend = res.extend\n _choice = rand.choice\n _maybe = rand.maybe\n _val = utils.val\n\n keys = self.defs[cat].keys()\n\n self._last_pref_keys = self._get_pref_keys(cat, preferred)\n # be sure to set this *after* fetching the pref keys (above^)\n self._last_prefs = preferred\n\n total_errors = deque()\n total_gend = 0\n while total_gend < num:\n # use a rule definition from one of the preferred category\n # groups\n if len(self._last_pref_keys) > 0 and _maybe(preferred_ratio):\n rand_key = _choice(self._last_pref_keys)\n if rand_key not in cat_defs:\n # TODO this means it was removed / pruned b/c it was unreachable??\n # TODO look into this more\n rand_key = _choice(list(keys))\n\n # else just choose a key at random from the category\n else:\n rand_key = _choice(list(keys))\n\n # pruning failed, this rule is not defined/reachable\n if rand_key not in cat_defs:\n continue\n\n v = _choice(cat_defs[rand_key])\n\n # not used directly by GramFuzzer, but could be useful\n # to subclasses of GramFuzzer\n info = {}\n\n pre = deque()\n 
self.pre_revert(info)\n val_res = None\n\n try:\n val_res = _val(v, pre)\n except errors.GramFuzzError as e:\n raise\n #total_errors.append(e)\n #self.revert(info)\n #continue\n except RuntimeError as e:\n print(\"RUNTIME ERROR\")\n self.revert(info)\n continue\n\n if val_res is not None:\n _res_extend(pre)\n _res_append(val_res)\n\n total_gend += 1\n self.post_revert(cat, res, total_gend, num, info)\n\n return res"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncommit any staged rule definition changes.", "response": "def post_revert(self, cat, res, total_num, num, info):\n \"\"\"Commit any staged rule definition changes (rule generation went\n smoothly).\n \"\"\"\n if self._staged_defs is None:\n return\n for cat,def_name,def_value in self._staged_defs:\n self.defs.setdefault(cat, {}).setdefault(def_name, deque()).append(def_value)\n self._staged_defs = None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfuzz all elements inside the object.", "response": "def fuzz_elements(self, element):\n \"\"\"\n Fuzz all elements inside the object\n \"\"\"\n try:\n if type(element) == dict:\n tmp_element = {}\n for key in element:\n if len(self.config.parameters) > 0:\n if self.config.exclude_parameters:\n fuzz = key not in self.config.parameters\n else:\n fuzz = key in self.config.parameters\n else:\n fuzz = True\n if fuzz:\n if type(element[key]) == dict:\n tmp_element.update({key: self.fuzz_elements(element[key])})\n elif type(element[key]) == list:\n tmp_element.update({key: self.fuzz_elements(element[key])})\n else:\n tmp_element.update({key: self.mutator.fuzz(element[key])})\n else:\n tmp_element.update({key: self.fuzz_elements(element[key])})\n element = tmp_element\n del tmp_element\n elif type(element) == list:\n arr = []\n for key in element:\n if type(key) == dict:\n arr.append(self.fuzz_elements(key))\n elif type(key) == list:\n arr.append(self.fuzz_elements(key))\n else:\n if len(self.config.parameters) 
<= 0:\n arr.append(self.mutator.fuzz(key))\n else:\n arr.append(key)\n element = arr\n del arr\n except Exception as e:\n raise PJFBaseException(e.message if hasattr(e, \"message\") else str(e))\n return element"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fuzzed(self):\n try:\n if self.config.strong_fuzz:\n fuzzer = PJFMutators(self.config)\n if self.config.url_encode:\n if sys.version_info >= (3, 0):\n return urllib.parse.quote(fuzzer.fuzz(json.dumps(self.config.json)))\n else:\n return urllib.quote(fuzzer.fuzz(json.dumps(self.config.json)))\n else:\n if type(self.config.json) in [list, dict]:\n return fuzzer.fuzz(json.dumps(self.config.json))\n else:\n return fuzzer.fuzz(self.config.json)\n else:\n if self.config.url_encode:\n if sys.version_info >= (3, 0):\n return urllib.parse.quote(self.get_fuzzed(self.config.indent, self.config.utf8))\n else:\n return urllib.quote(self.get_fuzzed(self.config.indent, self.config.utf8))\n else:\n return self.get_fuzzed(self.config.indent, self.config.utf8)\n except Exception as e:\n raise PJFBaseException(e.message if hasattr(e, \"message\") else str(e))", "response": "Get a printable fuzzed object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the fuzzed object.", "response": "def get_fuzzed(self, indent=False, utf8=False):\n \"\"\"\n Return the fuzzed object\n \"\"\"\n try:\n if \"array\" in self.json:\n return self.fuzz_elements(dict(self.json))[\"array\"]\n else:\n return self.fuzz_elements(dict(self.json))\n except Exception as e:\n raise PJFBaseException(e.message if hasattr(e, \"message\") else str(e))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef mutate_object_decorate(self, func):\n def mutate():\n obj = func()\n return self.Mutators.get_mutator(obj, type(obj))\n return mutate", "response": "Decorator to decorate a generic object based on type\n "} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 function that can\nrewrite redis url to redis server", "response": "def rewrite_redis_url(self):\n \"\"\"\\\n if REDIS_SERVER is just an ip address, then we try to translate it to\n redis_url, redis://REDIS_SERVER so that it doesn't try to connect to\n localhost while you try to connect to another server\n :return:\n \"\"\"\n if self.REDIS_SERVER.startswith('unix://') or \\\n self.REDIS_SERVER.startswith('redis://') or \\\n self.REDIS_SERVER.startswith('rediss://'):\n return self.REDIS_SERVER\n return 'redis://{}/'.format(self.REDIS_SERVER)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_host_identifier(self):\n if self._host_identifier:\n return self._host_identifier\n local_ip_addr = self.get_redis().connection_pool\\\n .get_connection('ping')._sock.getsockname()[0]\n self._host_identifier = '{}:{}'.format(local_ip_addr, os.getpid())\n return self._host_identifier", "response": "\\ Gets the host identifier for the current singlebeat instance."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsignal handler for the signal.", "response": "def sigterm_handler(self, signum, frame):\n \"\"\" When we get term signal\n if we are waiting and got a sigterm, we just exit.\n if we have a child running, we pass the signal first to the child\n then we exit.\n\n :param signum:\n :param frame:\n :return:\n \"\"\"\n assert(self.state in ('WAITING', 'RUNNING', 'PAUSED'))\n logger.debug(\"our state %s\", self.state)\n if self.state == 'WAITING':\n return self.ioloop.stop()\n\n if self.state == 'RUNNING':\n logger.debug('already running sending signal to child - %s',\n self.sprocess.pid)\n os.kill(self.sprocess.pid, signum)\n self.ioloop.stop()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nresuming the children of the current user", "response": "def cli_command_resume(self, msg):\n \"\"\"\\\n sets state to waiting - so we 
resume spawning children\n \"\"\"\n if self.state == State.PAUSED:\n self.state = State.WAITING"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef cli_command_stop(self, msg):\n info = ''\n if self.state == State.RUNNING and self.sprocess and self.sprocess.proc:\n self.state = State.PAUSED\n self.sprocess.set_exit_callback(self.proc_exit_cb_state_set)\n self.sprocess.proc.kill()\n info = 'killed'\n # TODO: check if process is really dead etc.\n return info", "response": "Stop the running child process"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef cli_command_restart(self, msg):\n info = ''\n if self.state == State.RUNNING and self.sprocess and self.sprocess.proc:\n self.state = State.RESTARTING\n self.sprocess.set_exit_callback(self.proc_exit_cb_restart)\n self.sprocess.proc.kill()\n info = 'killed'\n # TODO: check if process is really dead etc.\n return info", "response": "restart the subprocess with the given heartbeat"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nretrieves a list of events since the last poll.", "response": "def getEvents(self):\n \"\"\"\n Retrieve a list of events since the last poll. Multiple calls may be needed to retrieve all events.\n\n If no events occur, the API will block for up to 30 seconds, after which an empty list is returned. 
As soon as\n an event is received in this time, it is returned immediately.\n\n Returns:\n :class:`.SkypeEvent` list: a list of events, possibly empty\n \"\"\"\n events = []\n for json in self.conn.endpoints[\"self\"].getEvents():\n events.append(SkypeEvent.fromRaw(self, json))\n return events"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef setPresence(self, status=SkypeUtils.Status.Online):\n self.conn(\"PUT\", \"{0}/users/ME/presenceDocs/messagingService\".format(self.conn.msgsHost),\n auth=SkypeConnection.Auth.RegToken, json={\"status\": status.label})", "response": "Sets the current user's presence on the network."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate the activity message for the current user.", "response": "def setMood(self, mood):\n \"\"\"\n Update the activity message for the current user.\n\n Args:\n mood (str): new mood message\n \"\"\"\n self.conn(\"POST\", \"{0}/users/{1}/profile/partial\".format(SkypeConnection.API_USER, self.userId),\n auth=SkypeConnection.Auth.SkypeToken, json={\"payload\": {\"mood\": mood or \"\"}})\n self.user.mood = SkypeUser.Mood(plain=mood) if mood else None"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nupdates the profile picture for the current user.", "response": "def setAvatar(self, image):\n \"\"\"\n Update the profile picture for the current user.\n\n Args:\n image (file): a file-like object to read the image from\n \"\"\"\n self.conn(\"PUT\", \"{0}/users/{1}/profile/avatar\".format(SkypeConnection.API_USER, self.userId),\n auth=SkypeConnection.Auth.SkypeToken, data=image.read())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getUrlMeta(self, url):\n return self.conn(\"GET\", SkypeConnection.API_URL, params={\"url\": url},\n auth=SkypeConnection.Auth.Authorize).json()", "response": 
"Retrieve various metadata associated with a URL as seen by Skype."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrequesting one batch of events from Skype, calling onEvent with each event in turn.", "response": "def cycle(self):\n \"\"\"\n Request one batch of events from Skype, calling :meth:`onEvent` with each event in turn.\n\n Subclasses may override this method to alter loop functionality.\n \"\"\"\n try:\n events = self.getEvents()\n except requests.ConnectionError:\n return\n for event in events:\n self.onEvent(event)\n if self.autoAck:\n event.ack()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef syncFlags(self):\n self.flags = set(self.skype.conn(\"GET\", SkypeConnection.API_FLAGS,\n auth=SkypeConnection.Auth.SkypeToken).json())", "response": "Sync the set of profile flags for the connected user."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef contact(self, id):\n try:\n json = self.skype.conn(\"POST\", \"{0}/users/batch/profiles\".format(SkypeConnection.API_USER),\n json={\"usernames\": [id]}, auth=SkypeConnection.Auth.SkypeToken).json()\n contact = SkypeContact.fromRaw(self.skype, json[0])\n if contact.id not in self.contactIds:\n self.contactIds.append(contact.id)\n return self.merge(contact)\n except SkypeApiException as e:\n if len(e.args) >= 2 and getattr(e.args[1], \"status_code\", None) == 403:\n # Not a contact, so no permission to retrieve information.\n return None\n raise", "response": "Retrieve all details for a specific contact."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef user(self, id):\n json = self.skype.conn(\"POST\", \"{0}/batch/profiles\".format(SkypeConnection.API_PROFILE),\n auth=SkypeConnection.Auth.SkypeToken, json={\"usernames\": [id]}).json()\n if json and \"status\" not in json[0]:\n return 
self.merge(SkypeUser.fromRaw(self.skype, json[0]))\n else:\n return None", "response": "Retrieve public information about a user."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef bots(self):\n json = self.skype.conn(\"GET\", \"{0}/agents\".format(SkypeConnection.API_BOT),\n auth=SkypeConnection.Auth.SkypeToken).json().get(\"agentDescriptions\", [])\n return [self.merge(SkypeBotUser.fromRaw(self.skype, raw)) for raw in json]", "response": "Retrieve a list of all known bots."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef bot(self, id):\n json = self.skype.conn(\"GET\", \"{0}/agents\".format(SkypeConnection.API_BOT), params={\"agentId\": id},\n auth=SkypeConnection.Auth.SkypeToken).json().get(\"agentDescriptions\", [])\n return self.merge(SkypeBotUser.fromRaw(self.skype, json[0])) if json else None", "response": "Retrieve a single bot."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsearching the Skype Directory for a user.", "response": "def search(self, query):\n \"\"\"\n Search the Skype Directory for a user.\n\n Args:\n query (str): name to search for\n\n Returns:\n SkypeUser list: collection of possible results\n \"\"\"\n results = self.skype.conn(\"GET\", SkypeConnection.API_DIRECTORY,\n auth=SkypeConnection.Auth.SkypeToken,\n params={\"searchstring\": query, \"requestId\": \"0\"}).json().get(\"results\", [])\n return [SkypeUser.fromRaw(self.skype, json.get(\"nodeProfileData\", {})) for json in results]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef requests(self):\n requests = []\n for json in self.skype.conn(\"GET\", \"{0}/users/{1}/invites\"\n .format(SkypeConnection.API_CONTACTS, self.skype.userId),\n auth=SkypeConnection.Auth.SkypeToken).json().get(\"invite_list\", []):\n for invite in json.get(\"invites\", []):\n # Copy user 
identifier to each invite message.\n invite[\"userId\"] = SkypeUtils.noPrefix(json.get(\"mri\"))\n requests.append(SkypeRequest.fromRaw(self.skype, invite))\n return requests", "response": "Retrieve any pending contact requests."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a new instance based on the raw properties of an API response.", "response": "def fromRaw(cls, skype=None, raw={}):\n \"\"\"\n Create a new instance based on the raw properties of an API response.\n\n This can be overridden to automatically create subclass instances based on the raw content.\n\n Args:\n skype (Skype): parent Skype instance\n raw (dict): raw object, as provided by the API\n\n Returns:\n SkypeObj: the new class instance\n \"\"\"\n return cls(skype, raw, **cls.rawToFields(raw))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmerging the properties from other into self.", "response": "def merge(self, other):\n \"\"\"\n Copy properties from other into self, skipping ``None`` values. Also merges the raw data.\n\n Args:\n other (SkypeObj): second object to copy fields from\n \"\"\"\n for attr in self.attrs:\n if not getattr(other, attr, None) is None:\n setattr(self, attr, getattr(other, attr))\n if other.raw:\n if not self.raw:\n self.raw = {}\n self.raw.update(other.raw)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef merge(self, obj):\n if obj.id in self.cache:\n self.cache[obj.id].merge(obj)\n else:\n self.cache[obj.id] = obj\n return self.cache[obj.id]", "response": "Adds the given object to the cache, or merges it into an existing entry if one already exists. 
Returns the new object."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmaking a public API call with a connected instance.", "response": "def externalCall(cls, method, url, codes=(200, 201, 204, 207), **kwargs):\n \"\"\"\n Make a public API call without a connected :class:`.Skype` instance.\n\n The obvious implications are that no authenticated calls are possible, though this allows accessing some public\n APIs such as join URL lookups.\n\n Args:\n method (str): HTTP request method\n url (str): full URL to connect to\n codes (int list): expected HTTP response codes for success\n kwargs (dict): any extra parameters to pass to :func:`requests.request`\n\n Returns:\n requests.Response: response object provided by :mod:`requests`\n\n Raises:\n .SkypeAuthException: if an authentication rate limit is reached\n .SkypeApiException: if a successful status code is not received\n \"\"\"\n if os.getenv(\"SKPY_DEBUG_HTTP\"):\n print(\"<= [{0}] {1} {2}\".format(datetime.now().strftime(\"%d/%m %H:%M:%S\"), method, url))\n print(pformat(kwargs))\n resp = cls.extSess.request(method, url, **kwargs)\n if os.getenv(\"SKPY_DEBUG_HTTP\"):\n print(\"=> [{0}] {1}\".format(datetime.now().strftime(\"%d/%m %H:%M:%S\"), resp.status_code))\n print(pformat(dict(resp.headers)))\n try:\n print(pformat(resp.json()))\n except ValueError:\n print(resp.text)\n if resp.status_code not in codes:\n raise SkypeApiException(\"{0} response from {1} {2}\".format(resp.status_code, method, url), resp)\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef syncStateCall(self, method, url, params={}, **kwargs):\n try:\n states = self.syncStates[(method, url)]\n except KeyError:\n states = self.syncStates[(method, url)] = []\n if states:\n # We have a state link, use it to replace the URL and query string.\n url = states[-1]\n params = {}\n resp = self(method, url, params=params, **kwargs)\n try:\n json = 
resp.json()\n except ValueError:\n # Don't do anything if not a JSON response.\n pass\n else:\n # If a state link exists in the response, store it for later.\n state = json.get(\"_metadata\", {}).get(\"syncState\")\n if state:\n states.append(state)\n return resp", "response": "Follow and track sync state URLs provided by an API endpoint."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setUserPwd(self, user, pwd):\n def getSkypeToken(self):\n self.liveLogin(user, pwd)\n self.getSkypeToken = MethodType(getSkypeToken, self)", "response": "Replace the stub method getUserPwd with a new method that connects via Microsoft account flow using the passed credentials."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef readToken(self):\n if not self.tokenFile:\n raise SkypeAuthException(\"No token file specified\")\n try:\n with open(self.tokenFile, \"r\") as f:\n lines = f.read().splitlines()\n except OSError:\n raise SkypeAuthException(\"Token file doesn't exist or not readable\")\n try:\n user, skypeToken, skypeExpiry, regToken, regExpiry, msgsHost = lines\n skypeExpiry = datetime.fromtimestamp(int(skypeExpiry))\n regExpiry = datetime.fromtimestamp(int(regExpiry))\n except ValueError:\n raise SkypeAuthException(\"Token file is malformed\")\n if datetime.now() >= skypeExpiry:\n raise SkypeAuthException(\"Token file has expired\")\n self.userId = user\n self.tokens[\"skype\"] = skypeToken\n self.tokenExpiry[\"skype\"] = skypeExpiry\n if datetime.now() < regExpiry:\n self.tokens[\"reg\"] = regToken\n self.tokenExpiry[\"reg\"] = regExpiry\n self.msgsHost = msgsHost\n else:\n self.getRegToken()", "response": "Reads the Skype token file and sets the attributes of the instance variables."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef writeToken(self):\n # Write token file privately.\n with 
os.fdopen(os.open(self.tokenFile, os.O_WRONLY | os.O_CREAT, 0o600), \"w\") as f:\n # When opening files via os, truncation must be done manually.\n f.truncate()\n f.write(self.userId + \"\\n\")\n f.write(self.tokens[\"skype\"] + \"\\n\")\n f.write(str(int(time.mktime(self.tokenExpiry[\"skype\"].timetuple()))) + \"\\n\")\n f.write(self.tokens[\"reg\"] + \"\\n\")\n f.write(str(int(time.mktime(self.tokenExpiry[\"reg\"].timetuple()))) + \"\\n\")\n f.write(self.msgsHost + \"\\n\")", "response": "Write the token file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef verifyToken(self, auth):\n if auth in (self.Auth.SkypeToken, self.Auth.Authorize):\n if \"skype\" not in self.tokenExpiry or datetime.now() >= self.tokenExpiry[\"skype\"]:\n if not hasattr(self, \"getSkypeToken\"):\n raise SkypeAuthException(\"Skype token expired, and no password specified\")\n self.getSkypeToken()\n elif auth == self.Auth.RegToken:\n if \"reg\" not in self.tokenExpiry or datetime.now() >= self.tokenExpiry[\"reg\"]:\n self.getRegToken()", "response": "Ensures that the authentication token for the given auth method is still valid."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef liveLogin(self, user, pwd):\n self.tokens[\"skype\"], self.tokenExpiry[\"skype\"] = SkypeLiveAuthProvider(self).auth(user, pwd)\n self.getUserId()\n self.getRegToken()", "response": "Perform a login to Skype for Web on Microsoft account login page."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconnecting to Skype as a guest joining a given conversation.", "response": "def guestLogin(self, url, name):\n \"\"\"\n Connect to Skype as a guest, joining a given conversation.\n\n In this state, some APIs (such as contacts) will return 401 status codes. 
A guest can only communicate with\n the conversation they originally joined.\n\n Args:\n url (str): public join URL for conversation, or identifier from it\n name (str): display name as shown to other participants\n\n Raises:\n .SkypeAuthException: if the login request is rejected\n .SkypeApiException: if the login form can't be processed\n \"\"\"\n self.tokens[\"skype\"], self.tokenExpiry[\"skype\"] = SkypeGuestAuthProvider(self).auth(url, name)\n self.getUserId()\n self.getRegToken()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef refreshSkypeToken(self):\n self.tokens[\"skype\"], self.tokenExpiry[\"skype\"] = SkypeRefreshAuthProvider(self).auth(self.tokens[\"skype\"])\n self.getRegToken()", "response": "Refresh the Skype token and update the expiry time with other credentials."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nasks Skype for the authenticated user s identifier and store it on the connection object.", "response": "def getUserId(self):\n \"\"\"\n Ask Skype for the authenticated user's identifier, and store it on the connection object.\n \"\"\"\n self.userId = self(\"GET\", \"{0}/users/self/profile\".format(self.API_USER),\n auth=self.Auth.SkypeToken).json().get(\"username\")"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nacquires a new registration token.", "response": "def getRegToken(self):\n \"\"\"\n Acquire a new registration token.\n\n Once successful, all tokens and expiry times are written to the token file (if specified on initialisation).\n \"\"\"\n self.verifyToken(self.Auth.SkypeToken)\n token, expiry, msgsHost, endpoint = SkypeRegistrationTokenProvider(self).auth(self.tokens[\"skype\"])\n self.tokens[\"reg\"] = token\n self.tokenExpiry[\"reg\"] = expiry\n self.msgsHost = msgsHost\n if endpoint:\n endpoint.config()\n self.endpoints[\"main\"] = endpoint\n self.syncEndpoints()\n if 
self.tokenFile:\n self.writeToken()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nretrieves all current endpoints for the connected user.", "response": "def syncEndpoints(self):\n \"\"\"\n Retrieve all current endpoints for the connected user.\n \"\"\"\n self.endpoints[\"all\"] = []\n for json in self(\"GET\", \"{0}/users/ME/presenceDocs/messagingService\".format(self.msgsHost),\n params={\"view\": \"expanded\"}, auth=self.Auth.RegToken).json().get(\"endpointPresenceDocs\", []):\n id = json.get(\"link\", \"\").split(\"/\")[7]\n self.endpoints[\"all\"].append(SkypeEndpoint(self, id))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nperform a login with the given Skype username and its password.", "response": "def auth(self, user, pwd):\n \"\"\"\n Perform a login with the given Skype username and its password. This emulates a login to Skype for Web on\n ``api.skype.com``.\n\n Args:\n user (str): username of the connecting account\n pwd (str): password of the connecting account\n\n Returns:\n (str, datetime.datetime) tuple: Skype token, and associated expiry if known\n\n Raises:\n .SkypeAuthException: if the login request is rejected\n .SkypeApiException: if the login form can't be processed\n \"\"\"\n # Wrap up the credentials ready to send.\n pwdHash = base64.b64encode(hashlib.md5((user + \"\\nskyper\\n\" + pwd).encode(\"utf-8\")).digest()).decode(\"utf-8\")\n json = self.conn(\"POST\", \"{0}/login/skypetoken\".format(SkypeConnection.API_USER),\n json={\"username\": user, \"passwordHash\": pwdHash, \"scopes\": \"client\"}).json()\n if \"skypetoken\" not in json:\n raise SkypeAuthException(\"Couldn't retrieve Skype token from response\")\n expiry = None\n if \"expiresIn\" in json:\n expiry = datetime.fromtimestamp(int(time.time()) + int(json[\"expiresIn\"]))\n return json[\"skypetoken\"], expiry"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write 
the documentation\ndef checkUser(self, user):\n return not self.conn(\"POST\", \"{0}/GetCredentialType.srf\".format(SkypeConnection.API_MSACC),\n json={\"username\": user}).json().get(\"IfExistsResult\")", "response": "Checks if a user exists in Microsoft Account Store"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nperforming a Skype authentication with the given user and password.", "response": "def auth(self, user, pwd):\n \"\"\"\n Obtain connection parameters from the Microsoft account login page, and perform a login with the given email\n address or Skype username, and its password. This emulates a login to Skype for Web on ``login.live.com``.\n\n .. note::\n Microsoft accounts with two-factor authentication enabled are not supported, and will cause a\n :class:`.SkypeAuthException` to be raised. See the exception definitions for other possible causes.\n\n Args:\n user (str): username or email address of the connecting account\n pwd (str): password of the connecting account\n\n Returns:\n (str, datetime.datetime) tuple: Skype token, and associated expiry if known\n\n Raises:\n .SkypeAuthException: if the login request is rejected\n .SkypeApiException: if the login form can't be processed\n \"\"\"\n # Do the authentication dance.\n params = self.getParams()\n t = self.sendCreds(user, pwd, params)\n return self.getToken(t)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef auth(self, url, name):\n urlId = url.split(\"/\")[-1]\n # Pretend to be Chrome on Windows (required to avoid \"unsupported device\" messages).\n agent = \"Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) \" \\\n \"Chrome/33.0.1750.117 Safari/537.36\"\n cookies = self.conn(\"GET\", \"{0}/{1}\".format(SkypeConnection.API_JOIN, urlId),\n headers={\"User-Agent\": agent}).cookies\n ids = self.conn(\"POST\", \"{0}/api/v2/conversation/\".format(SkypeConnection.API_JOIN),\n json={\"shortId\": urlId, 
\"type\": \"wl\"}).json()\n token = self.conn(\"POST\", \"{0}/api/v1/users/guests\".format(SkypeConnection.API_JOIN),\n headers={\"csrf_token\": cookies.get(\"csrf_token\"),\n \"X-Skype-Request-Id\": cookies.get(\"launcher_session_id\")},\n json={\"flowId\": cookies.get(\"launcher_session_id\"),\n \"shortId\": urlId,\n \"longId\": ids.get(\"Long\"),\n \"threadId\": ids.get(\"Resource\"),\n \"name\": name}).json().get(\"skypetoken\")\n # Assume the token lasts 24 hours, as a guest account only lasts that long anyway.\n expiry = datetime.now() + timedelta(days=1)\n return token, expiry", "response": "Connect to Skype as a guest joining a given conversation."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef auth(self, token):\n t = self.sendToken(token)\n return self.getToken(t)", "response": "Takes an existing Skype token and refresh it and returns the token and associated expiry time."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nrequests a new registration token using a current Skype token.", "response": "def auth(self, skypeToken):\n \"\"\"\n Request a new registration token using a current Skype token.\n\n Args:\n skypeToken (str): existing Skype token\n\n Returns:\n (str, datetime.datetime, str, SkypeEndpoint) tuple: registration token, associated expiry if known,\n resulting endpoint hostname, endpoint if provided\n\n Raises:\n .SkypeAuthException: if the login request is rejected\n .SkypeApiException: if the login form can't be processed\n \"\"\"\n token = expiry = endpoint = None\n msgsHost = SkypeConnection.API_MSGSHOST\n while not token:\n secs = int(time.time())\n hash = self.getMac256Hash(str(secs))\n headers = {\"LockAndKey\": \"appId=msmsgs@msnmsgr.com; time={0}; lockAndKeyResponse={1}\".format(secs, hash),\n \"Authentication\": \"skypetoken=\" + skypeToken, \"BehaviorOverride\": \"redirectAs404\"}\n endpointResp = self.conn(\"POST\", 
\"{0}/users/ME/endpoints\".format(msgsHost), codes=(200, 201, 404),\n headers=headers, json={\"endpointFeatures\": \"Agent\"})\n regTokenHead = endpointResp.headers.get(\"Set-RegistrationToken\")\n locHead = endpointResp.headers.get(\"Location\")\n if locHead:\n locParts = re.search(r\"(https://[^/]+/v1)/users/ME/endpoints(/(%7B[a-z0-9\\-]+%7D))?\", locHead).groups()\n if locParts[2]:\n endpoint = SkypeEndpoint(self.conn, locParts[2].replace(\"%7B\", \"{\").replace(\"%7D\", \"}\"))\n if not locParts[0] == msgsHost:\n # Skype is requiring the use of a different hostname.\n msgsHost = locHead.rsplit(\"/\", 4 if locParts[2] else 3)[0]\n # Don't accept the token if present, we need to re-register first.\n continue\n if regTokenHead:\n token = re.search(r\"(registrationToken=[a-z0-9\\+/=]+)\", regTokenHead, re.I).group(1)\n regExpiry = re.search(r\"expires=(\\d+)\", regTokenHead).group(1)\n expiry = datetime.fromtimestamp(int(regExpiry))\n regEndMatch = re.search(r\"endpointId=({[a-z0-9\\-]+})\", regTokenHead)\n if regEndMatch:\n endpoint = SkypeEndpoint(self.conn, regEndMatch.group(1))\n if not endpoint and endpointResp.status_code == 200 and endpointResp.json():\n # Use the most recent endpoint listed in the JSON response.\n endpoint = SkypeEndpoint(self.conn, endpointResp.json()[0][\"id\"])\n return token, expiry, msgsHost, endpoint"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getMac256Hash(challenge, appId=\"msmsgs@msnmsgr.com\", key=\"Q1P7W2E4J9R8U3S5\"):\n clearText = challenge + appId\n clearText += \"0\" * (8 - len(clearText) % 8)\n\n def int32ToHexString(n):\n hexChars = \"0123456789abcdef\"\n hexString = \"\"\n for i in range(4):\n hexString += hexChars[(n >> (i * 8 + 4)) & 15]\n hexString += hexChars[(n >> (i * 8)) & 15]\n return hexString\n\n def int64Xor(a, b):\n sA = \"{0:b}\".format(a)\n sB = \"{0:b}\".format(b)\n sC = \"\"\n sD = \"\"\n diff = abs(len(sA) - len(sB))\n for i in 
range(diff):\n sD += \"0\"\n if len(sA) < len(sB):\n sD += sA\n sA = sD\n elif len(sB) < len(sA):\n sD += sB\n sB = sD\n for i in range(len(sA)):\n sC += \"0\" if sA[i] == sB[i] else \"1\"\n return int(sC, 2)\n\n def cS64(pdwData, pInHash):\n MODULUS = 2147483647\n CS64_a = pInHash[0] & MODULUS\n CS64_b = pInHash[1] & MODULUS\n CS64_c = pInHash[2] & MODULUS\n CS64_d = pInHash[3] & MODULUS\n CS64_e = 242854337\n pos = 0\n qwDatum = 0\n qwMAC = 0\n qwSum = 0\n for i in range(len(pdwData) // 2):\n qwDatum = int(pdwData[pos])\n pos += 1\n qwDatum *= CS64_e\n qwDatum = qwDatum % MODULUS\n qwMAC += qwDatum\n qwMAC *= CS64_a\n qwMAC += CS64_b\n qwMAC = qwMAC % MODULUS\n qwSum += qwMAC\n qwMAC += int(pdwData[pos])\n pos += 1\n qwMAC *= CS64_c\n qwMAC += CS64_d\n qwMAC = qwMAC % MODULUS\n qwSum += qwMAC\n qwMAC += CS64_b\n qwMAC = qwMAC % MODULUS\n qwSum += CS64_d\n qwSum = qwSum % MODULUS\n return [qwMAC, qwSum]\n\n cchClearText = len(clearText) // 4\n pClearText = []\n for i in range(cchClearText):\n pClearText = pClearText[:i] + [0] + pClearText[i:]\n for pos in range(4):\n pClearText[i] += ord(clearText[4 * i + pos]) * (256 ** pos)\n sha256Hash = [0, 0, 0, 0]\n hash = hashlib.sha256((challenge + key).encode(\"utf-8\")).hexdigest().upper()\n for i in range(len(sha256Hash)):\n sha256Hash[i] = 0\n for pos in range(4):\n dpos = 8 * i + pos * 2\n sha256Hash[i] += int(hash[dpos:dpos + 2], 16) * (256 ** pos)\n macHash = cS64(pClearText, sha256Hash)\n macParts = [macHash[0], macHash[1], macHash[0], macHash[1]]\n return \"\".join(map(int32ToHexString, map(int64Xor, sha256Hash, macParts)))", "response": "Generate the lock - and - key response for the given challenge and appId."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef config(self, name=\"skype\"):\n self.conn(\"PUT\", \"{0}/users/ME/endpoints/{1}/presenceDocs/messagingService\"\n .format(self.conn.msgsHost, self.id),\n 
auth=SkypeConnection.Auth.RegToken,\n json={\"id\": \"messagingService\",\n \"type\": \"EndpointPresenceDoc\",\n \"selfLink\": \"uri\",\n \"privateInfo\": {\"epname\": name},\n \"publicInfo\": {\"capabilities\": \"\",\n \"type\": 1,\n \"skypeNameVersion\": \"skype.com\",\n \"nodeInfo\": \"xx\",\n \"version\": \"908/1.30.0.128\"}})", "response": "Configure this endpoint to allow setting presence."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsend a keep - alive request to the endpoint.", "response": "def ping(self, timeout=12):\n \"\"\"\n Send a keep-alive request for the endpoint.\n\n Args:\n timeout (int): maximum amount of time for the endpoint to stay active\n \"\"\"\n self.conn(\"POST\", \"{0}/users/ME/endpoints/{1}/active\".format(self.conn.msgsHost, self.id),\n auth=SkypeConnection.Auth.RegToken, json={\"timeout\": timeout})"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef subscribe(self):\n self.conn(\"POST\", \"{0}/users/ME/endpoints/{1}/subscriptions\".format(self.conn.msgsHost, self.id),\n auth=SkypeConnection.Auth.RegToken,\n json={\"interestedResources\": [\"/v1/threads/ALL\",\n \"/v1/users/ME/contacts/ALL\",\n \"/v1/users/ME/conversations/ALL/messages\",\n \"/v1/users/ME/conversations/ALL/properties\"],\n \"template\": \"raw\",\n \"channelType\": \"httpLongPoll\"})\n self.subscribed = True", "response": "Subscribe to contact and conversation events."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef recent(self):\n url = \"{0}/users/ME/conversations\".format(self.skype.conn.msgsHost)\n params = {\"startTime\": 0,\n \"view\": \"msnp24Equivalent\",\n \"targetType\": \"Passport|Skype|Lync|Thread\"}\n resp = self.skype.conn.syncStateCall(\"GET\", url, params, auth=SkypeConnection.Auth.RegToken).json()\n chats = {}\n for json in resp.get(\"conversations\", []):\n cls = SkypeSingleChat\n if 
\"threadProperties\" in json:\n info = self.skype.conn(\"GET\", \"{0}/threads/{1}\".format(self.skype.conn.msgsHost, json.get(\"id\")),\n auth=SkypeConnection.Auth.RegToken,\n params={\"view\": \"msnp24Equivalent\"}).json()\n json.update(info)\n cls = SkypeGroupChat\n chats[json.get(\"id\")] = self.merge(cls.fromRaw(self.skype, json))\n return chats", "response": "Retrieve a selection of conversations with the most recent activity and store them in the cache."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget a single or group chat by identifier.", "response": "def chat(self, id):\n \"\"\"\n Get a single conversation by identifier.\n\n Args:\n id (str): single or group chat identifier\n \"\"\"\n json = self.skype.conn(\"GET\", \"{0}/users/ME/conversations/{1}\".format(self.skype.conn.msgsHost, id),\n auth=SkypeConnection.Auth.RegToken, params={\"view\": \"msnp24Equivalent\"}).json()\n cls = SkypeSingleChat\n if \"threadProperties\" in json:\n info = self.skype.conn(\"GET\", \"{0}/threads/{1}\".format(self.skype.conn.msgsHost, json.get(\"id\")),\n auth=SkypeConnection.Auth.RegToken, params={\"view\": \"msnp24Equivalent\"}).json()\n json.update(info)\n cls = SkypeGroupChat\n return self.merge(cls.fromRaw(self.skype, json))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new group chat with the given users.", "response": "def create(self, members=(), admins=()):\n \"\"\"\n Create a new group chat with the given users.\n\n The current user is automatically added to the conversation as an admin. 
Any other admin identifiers must also\n be present in the member list.\n\n Args:\n members (str list): user identifiers to initially join the conversation\n admins (str list): user identifiers to gain admin privileges\n \"\"\"\n memberObjs = [{\"id\": \"8:{0}\".format(self.skype.userId), \"role\": \"Admin\"}]\n for id in members:\n if id == self.skype.userId:\n continue\n memberObjs.append({\"id\": \"8:{0}\".format(id), \"role\": \"Admin\" if id in admins else \"User\"})\n resp = self.skype.conn(\"POST\", \"{0}/threads\".format(self.skype.conn.msgsHost),\n auth=SkypeConnection.Auth.RegToken, json={\"members\": memberObjs})\n return self.chat(resp.headers[\"Location\"].rsplit(\"/\", 1)[1])"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nresolves a public join. skype. com URL and returns various identifiers for the group conversation.", "response": "def urlToIds(url):\n \"\"\"\n Resolve a ``join.skype.com`` URL and returns various identifiers for the group conversation.\n\n Args:\n url (str): public join URL, or identifier from it\n\n Returns:\n dict: related conversation's identifiers -- keys: ``id``, ``long``, ``blob``\n \"\"\"\n urlId = url.split(\"/\")[-1]\n convUrl = \"https://join.skype.com/api/v2/conversation/\"\n json = SkypeConnection.externalCall(\"POST\", convUrl, json={\"shortId\": urlId, \"type\": \"wl\"}).json()\n return {\"id\": json.get(\"Resource\"),\n \"long\": json.get(\"Id\"),\n \"blob\": json.get(\"ChatBlob\")}"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nextracting the username from a contact URL.", "response": "def userToId(url):\n \"\"\"\n Extract the username from a contact URL.\n\n Matches addresses containing ``users/`` or ``users/ME/contacts/``.\n\n Args:\n url (str): Skype API URL\n\n Returns:\n str: extracted identifier\n \"\"\"\n match = re.search(r\"users(/ME/contacts)?/[0-9]+:([^/]+)\", url)\n return match.group(2) if match else None"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 script for\nextracting the conversation ID from a Skype API URL.", "response": "def chatToId(url):\n \"\"\"\n Extract the conversation ID from a conversation URL.\n\n Matches addresses containing ``conversations/``.\n\n Args:\n url (str): Skype API URL\n\n Returns:\n str: extracted identifier\n \"\"\"\n match = re.search(r\"conversations/([0-9]+:[^/]+)\", url)\n return match.group(1) if match else None"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nclasses decorator that automatically generates an __init__ method that expects args from cls. attrs and stores them.", "response": "def initAttrs(cls):\n \"\"\"\n Class decorator: automatically generate an ``__init__`` method that expects args from cls.attrs and stores them.\n\n Args:\n cls (class): class to decorate\n\n Returns:\n class: same, but modified, class\n \"\"\"\n def __init__(self, skype=None, raw=None, *args, **kwargs):\n super(cls, self).__init__(skype, raw)\n # Merge args into kwargs based on cls.attrs.\n for i in range(len(args)):\n kwargs[cls.attrs[i]] = args[i]\n # Disallow any unknown kwargs.\n unknown = set(kwargs) - set(cls.attrs)\n if unknown:\n unknownDesc = \"an unexpected keyword argument\" if len(unknown) == 1 else \"unexpected keyword arguments\"\n unknownList = \", \".join(\"'{0}'\".format(k) for k in sorted(unknown))\n raise TypeError(\"__init__() got {0} {1}\".format(unknownDesc, unknownList))\n # Set each attribute from kwargs, or use the default if not specified.\n for k in cls.attrs:\n setattr(self, k, kwargs.get(k, cls.defaults.get(k)))\n\n # Add the init method to the class.\n setattr(cls, \"__init__\", __init__)\n return cls"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nclasses decorator: add helper methods to convert identifier properties into SkypeObjs. 
Args: types (str list): simple field types to add properties for (``user``, ``users`` or ``chat``) user (str list): attribute names to treat as single user identifier fields users (str list): attribute names to treat as user identifier lists chat (str list): attribute names to treat as chat identifier fields Returns: method: decorator function, ready to apply to other methods", "response": "def convertIds(*types, **kwargs):\n \"\"\"\n Class decorator: add helper methods to convert identifier properties into SkypeObjs.\n\n Args:\n types (str list): simple field types to add properties for (``user``, ``users`` or ``chat``)\n user (str list): attribute names to treat as single user identifier fields\n users (str list): attribute names to treat as user identifier lists\n chat (str list): attribute names to treat as chat identifier fields\n\n Returns:\n method: decorator function, ready to apply to other methods\n \"\"\"\n user = kwargs.get(\"user\", ())\n users = kwargs.get(\"users\", ())\n chat = kwargs.get(\"chat\", ())\n\n def userObj(self, field):\n return self.skype.contacts[getattr(self, field)]\n\n def userObjs(self, field):\n return (self.skype.contacts[id] for id in getattr(self, field))\n\n def chatObj(self, field):\n return self.skype.chats[getattr(self, field)]\n\n def attach(cls, fn, field, idField):\n \"\"\"\n Generate the property object and attach it to the class.\n\n Args:\n cls (type): class to attach the property to\n fn (method): function to be attached\n field (str): attribute name for the new property\n idField (str): reference field to retrieve identifier from\n \"\"\"\n setattr(cls, field, property(functools.wraps(fn)(functools.partial(fn, field=idField))))\n\n def wrapper(cls):\n # Shorthand identifiers, e.g. 
@convertIds(\"user\", \"chat\").\n for type in types:\n if type == \"user\":\n attach(cls, userObj, \"user\", \"userId\")\n elif type == \"users\":\n attach(cls, userObjs, \"users\", \"userIds\")\n elif type == \"chat\":\n attach(cls, chatObj, \"chat\", \"chatId\")\n # Custom field names, e.g. @convertIds(user=[\"creator\"]).\n for field in user:\n attach(cls, userObj, field, \"{0}Id\".format(field))\n for field in users:\n attach(cls, userObjs, \"{0}s\".format(field), \"{0}Ids\".format(field))\n for field in chat:\n attach(cls, chatObj, field, \"{0}Id\".format(field))\n return cls\n\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nclasses decorator to set truthiness based on any attr being present.", "response": "def truthyAttrs(cls):\n \"\"\"\n Class decorator: override __bool__ to set truthiness based on any attr being present.\n\n Args:\n cls (class): class to decorate\n\n Returns:\n class: same, but modified, class\n \"\"\"\n def __bool__(self):\n return bool(any(getattr(self, attr) for attr in self.attrs))\n\n cls.__bool__ = cls.__nonzero__ = __bool__\n return cls"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef cacheResult(fn):\n cache = {}\n\n @functools.wraps(fn)\n def wrapper(*args, **kwargs):\n # Imperfect key generation (args may be passed as kwargs, so multiple ways to represent one key).\n key = args + tuple(kwargs.items())\n # Order of operations here tries to minimise use of exceptions.\n try:\n # Don't call the function here, as it may throw a TypeError itself (or from incorrect arguments).\n if key in cache:\n return cache[key]\n except TypeError:\n # Key is not hashable, so we can't cache with these args -- just return the result.\n return fn(*args, **kwargs)\n # Not yet cached, so generate the result and store it.\n cache[key] = fn(*args, **kwargs)\n return cache[key]\n\n # Make cache accessible externally.\n wrapper.cache = 
cache\n return wrapper", "response": "Decorator for caching the result of a function."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning unicode text no matter what", "response": "def u(text, encoding='utf-8'):\n \"Return unicode text, no matter what\"\n\n if isinstance(text, six.binary_type):\n text = text.decode(encoding)\n\n # it's already unicode\n text = text.replace('\\r\\n', '\\n')\n return text"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndetects the format of the given text.", "response": "def detect_format(text, handlers):\n \"\"\"\n Figure out which handler to use, based on metadata.\n Returns a handler instance or None.\n\n ``text`` should be unicode text about to be parsed.\n\n ``handlers`` is a dictionary where keys are opening delimiters \n and values are handler instances.\n \"\"\"\n for pattern, handler in handlers.items():\n if pattern.match(text):\n return handler\n\n # nothing matched, give nothing back\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses text with frontmatter and return metadata and content.", "response": "def parse(text, encoding='utf-8', handler=None, **defaults):\n \"\"\"\n Parse text with frontmatter, return metadata and content.\n Pass in optional metadata defaults as keyword args.\n\n If frontmatter is not found, returns an empty metadata dictionary\n (or defaults) and original text content.\n\n ::\n\n >>> with open('tests/hello-world.markdown') as f:\n ... 
metadata, content = frontmatter.parse(f.read())\n >>> print(metadata['title'])\n Hello, world!\n\n \"\"\"\n # ensure unicode first\n text = u(text, encoding).strip()\n\n # metadata starts with defaults\n metadata = defaults.copy()\n\n # this will only run if a handler hasn't been set higher up\n handler = handler or detect_format(text, handlers)\n if handler is None:\n return metadata, text\n\n # split on the delimiters\n try:\n fm, content = handler.split(text)\n except ValueError:\n # if we can't split, bail\n return metadata, text\n\n # parse, now that we have frontmatter\n fm = handler.load(fm)\n if isinstance(fm, dict):\n metadata.update(fm)\n\n return metadata, content.strip()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nload and parse a file - like object or filename returns a : py : class : Post.", "response": "def load(fd, encoding='utf-8', handler=None, **defaults):\n \"\"\"\n Load and parse a file-like object or filename, \n return a :py:class:`post `.\n\n ::\n\n >>> post = frontmatter.load('tests/hello-world.markdown')\n >>> with open('tests/hello-world.markdown') as f:\n ... post = frontmatter.load(f)\n\n \"\"\"\n if hasattr(fd, 'read'):\n text = fd.read()\n\n else:\n with codecs.open(fd, 'r', encoding) as f:\n text = f.read()\n\n handler = handler or detect_format(text, handlers)\n return loads(text, encoding, handler, **defaults)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse a markdown file and return a Post object.", "response": "def loads(text, encoding='utf-8', handler=None, **defaults):\n \"\"\"\n Parse text (binary or unicode) and return a :py:class:`post `.\n\n ::\n\n >>> with open('tests/hello-world.markdown') as f:\n ... 
post = frontmatter.loads(f.read())\n\n \"\"\"\n text = u(text, encoding)\n handler = handler or detect_format(text, handlers)\n metadata, content = parse(text, encoding, handler, **defaults)\n return Post(content, handler, **metadata)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef dump(post, fd, encoding='utf-8', handler=None, **kwargs):\n content = dumps(post, handler, **kwargs)\n if hasattr(fd, 'write'):\n fd.write(content.encode(encoding))\n\n else:\n with codecs.open(fd, 'w', encoding) as f:\n f.write(content)", "response": "Serialize a Post object to a string and write to a file - like object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef dumps(post, handler=None, **kwargs):\n if handler is None:\n handler = getattr(post, 'handler', None) or YAMLHandler()\n\n start_delimiter = kwargs.pop('start_delimiter', handler.START_DELIMITER)\n end_delimiter = kwargs.pop('end_delimiter', handler.END_DELIMITER)\n\n metadata = handler.export(post.metadata, **kwargs)\n\n return POST_TEMPLATE.format(\n metadata=metadata, content=post.content,\n start_delimiter=start_delimiter,\n end_delimiter=end_delimiter).strip()", "response": "Serializes a Post object to a string and returns text."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef to_dict(self):\n \"Post as a dict, for serializing\"\n d = self.metadata.copy()\n d['content'] = self.content\n return d", "response": "Post as a dict for serializing"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load(self, fm, **kwargs):\n kwargs.setdefault('Loader', SafeLoader)\n return yaml.load(fm, **kwargs)", "response": "Parse YAML front matter and return a dict."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function 
doing\ndef export(self, metadata, **kwargs):\n kwargs.setdefault('Dumper', SafeDumper)\n kwargs.setdefault('default_flow_style', False)\n kwargs.setdefault('allow_unicode', True)\n\n metadata = yaml.dump(metadata, **kwargs).strip()\n return u(metadata)", "response": "Export metadata as YAML."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nturns metadata into JSON", "response": "def export(self, metadata, **kwargs):\n \"Turn metadata into JSON\"\n kwargs.setdefault('indent', 4)\n metadata = json.dumps(metadata, **kwargs)\n return u(metadata)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the match object for the current list.", "response": "def _match(self):\n \"\"\"Return the match object for the current list.\"\"\"\n cache_match, cache_string = self._match_cache\n string = self.string\n if cache_string == string:\n return cache_match\n cache_match = fullmatch(\n LIST_PATTERN_FORMAT.replace(b'{pattern}', self.pattern.encode()),\n self._shadow,\n MULTILINE,\n )\n self._match_cache = cache_match, string\n return cache_match"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn items as a list of strings. 
Don't include sub-items and the start pattern.", "response": "def items(self) -> List[str]:\n \"\"\"Return items as a list of strings.\n\n Don't include sub-items and the start pattern.\n \"\"\"\n items = [] # type: List[str]\n append = items.append\n string = self.string\n match = self._match\n ms = match.start()\n for s, e in match.spans('item'):\n append(string[s - ms:e - ms])\n return items"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the lists that are inside the item with the given index.", "response": "def sublists(\n self, i: int = None, pattern: str = None\n ) -> List['WikiList']:\n \"\"\"Return the Lists inside the item with the given index.\n\n :param i: The index of the item whose sub-lists are desired.\n The performance is likely to be better if `i` is None.\n\n :param pattern: The starting symbol for the desired sub-lists.\n The `pattern` of the current list will be automatically added\n as prefix.\n Although this parameter is optional, specifying it can improve\n the performance.\n \"\"\"\n patterns = (r'\\#', r'\\*', '[:;]') if pattern is None \\\n else (pattern,) # type: Tuple[str, ...]\n self_pattern = self.pattern\n lists = self.lists\n sublists = [] # type: List['WikiList']\n sublists_append = sublists.append\n if i is None:\n # Any sublist is acceptable\n for pattern in patterns:\n for lst in lists(self_pattern + pattern):\n sublists_append(lst)\n return sublists\n # Only return sub-lists that are within the given item\n match = self._match\n fullitem_spans = match.spans('fullitem')\n ss = self._span[0]\n ms = match.start()\n s, e = fullitem_spans[i]\n e -= ms - ss\n s -= ms - ss\n for pattern in patterns:\n for lst in lists(self_pattern + pattern):\n # noinspection PyProtectedMember\n ls, le = lst._span\n if s < ls and le <= e:\n sublists_append(lst)\n return sublists"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert to another list type by 
replacing starting pattern.", "response": "def convert(self, newstart: str) -> None:\n \"\"\"Convert to another list type by replacing starting pattern.\"\"\"\n match = self._match\n ms = match.start()\n for s, e in reversed(match.spans('pattern')):\n self[s - ms:e - ms] = newstart\n self.pattern = escape(newstart)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef arguments(self) -> List[Argument]:\n shadow = self._shadow\n split_spans = self._args_matcher(shadow).spans('arg')\n if not split_spans:\n return []\n arguments = []\n arguments_append = arguments.append\n type_to_spans = self._type_to_spans\n ss, se = span = self._span\n type_ = id(span)\n lststr = self._lststr\n string = lststr[0]\n arg_spans = type_to_spans.setdefault(type_, [])\n span_tuple_to_span_get = {(s[0], s[1]): s for s in arg_spans}.get\n for arg_self_start, arg_self_end in split_spans:\n s, e = arg_span = [ss + arg_self_start, ss + arg_self_end]\n old_span = span_tuple_to_span_get((s, e))\n if old_span is None:\n insort(arg_spans, arg_span)\n else:\n arg_span = old_span\n arg = Argument(lststr, type_to_spans, arg_span, type_)\n arg._shadow_cache = (\n string[s:e], shadow[arg_self_start:arg_self_end])\n arguments_append(arg)\n return arguments", "response": "Parse template content. Create self. name and self. 
arguments."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef lists(self, pattern: str = None) -> List[WikiList]:\n return [\n lst for arg in self.arguments for lst in arg.lists(pattern) if lst]", "response": "Return the lists in all arguments."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn template s name ( includes whitespace ).", "response": "def name(self) -> str:\n \"\"\"Return template's name (includes whitespace).\"\"\"\n h = self._atomic_partition(self._first_arg_sep)[0]\n if len(h) == len(self.string):\n return h[2:-2]\n return h[2:]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a Trie out of a list of words and return an atomic regex pattern.", "response": "def _plant_trie(strings: _List[str]) -> dict:\n \"\"\"Create a Trie out of a list of words and return an atomic regex pattern.\n\n The corresponding Regex should match much faster than a simple Regex union.\n \"\"\"\n # plant the trie\n trie = {}\n for string in strings:\n d = trie\n for char in string:\n d[char] = char in d and d[char] or {}\n d = d[char]\n d[''] = None # EOS\n return trie"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _pattern(trie: dict) -> str:\n if '' in trie:\n if len(trie) == 1:\n return ''\n optional = True\n del trie['']\n else:\n optional = False\n\n subpattern_to_chars = _defaultdict(list)\n\n for char, sub_trie in trie.items():\n subpattern = _pattern(sub_trie)\n subpattern_to_chars[subpattern].append(char)\n\n alts = []\n for subpattern, chars in subpattern_to_chars.items():\n if len(chars) == 1:\n alts.append(chars[0] + subpattern)\n else:\n chars.sort(reverse=True)\n alts.append('[' + ''.join(chars) + ']' + subpattern)\n\n if len(alts) == 1:\n result = alts[0]\n if optional:\n if len(result) == 1:\n result += '?+'\n else: # more than one character in alts[0]\n result = '(?:' + result + 
')?+'\n else:\n alts.sort(reverse=True)\n result = '(?>' + '|'.join(alts) + ')'\n if optional:\n result += '?+'\n return result", "response": "Convert a trie to a regex pattern."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _check_index(self, key: Union[slice, int]) -> (int, int):\n ss, se = self._span\n if isinstance(key, int):\n if key < 0:\n key += se - ss\n if key < 0:\n raise IndexError('index out of range')\n elif key >= se - ss:\n raise IndexError('index out of range')\n start = ss + key\n return start, start + 1\n # isinstance(key, slice)\n if key.step is not None:\n raise NotImplementedError(\n 'step is not implemented for string setter.')\n start, stop = key.start or 0, key.stop\n if start < 0:\n start += se - ss\n if start < 0:\n raise IndexError('start index out of range')\n if stop is None:\n stop = se - ss\n elif stop < 0:\n stop += se - ss\n if start > stop:\n raise IndexError(\n 'stop index out of range or start is after the stop')\n return start + ss, stop + ss", "response": "Return adjusted start and stop index as tuple.\n Used in setitem and __delitem__."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef insert(self, index: int, string: str) -> None:\n ss, se = self._span\n lststr = self._lststr\n lststr0 = lststr[0]\n if index < 0:\n index += se - ss\n if index < 0:\n index = 0\n elif index > se - ss: # Note that it is not >=. 
Index can be new.\n index = se - ss\n index += ss\n # Update lststr\n lststr[0] = lststr0[:index] + string + lststr0[index:]\n string_len = len(string)\n # Update spans\n self._insert_update(\n index=index,\n length=string_len)\n # Remember newly added spans by the string.\n type_to_spans = self._type_to_spans\n for type_, spans in parse_to_spans(\n bytearray(string, 'ascii', 'replace')\n ).items():\n for s, e in spans:\n insort(type_to_spans[type_], [index + s, index + e])", "response": "Insert the given string before the specified index."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the string representation of the object.", "response": "def string(self) -> str:\n \"\"\"Return str(self).\"\"\"\n start, end = self._span\n return self._lststr[0][start:end]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _atomic_partition(self, char: int) -> Tuple[str, str, str]:\n s, e = self._span\n index = self._shadow.find(char)\n if index == -1:\n return self._lststr[0][s:e], '', ''\n lststr0 = self._lststr[0]\n return lststr0[s:s + index], chr(char), lststr0[s + index + 1:e]", "response": "Partition self. string where char s not in atomic sub - spans."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _subspans(self, type_: str) -> List[List[int]]:\n return self._type_to_spans[type_]", "response": "Return all the sub - span including self. 
_span."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _close_subspans(self, start: int, stop: int) -> None:\n ss, se = self._span\n for spans in self._type_to_spans.values():\n b = bisect(spans, [start])\n for i, (s, e) in enumerate(spans[b:bisect(spans, [stop], b)]):\n if e <= stop:\n if ss != s or se != e:\n spans.pop(i + b)[:] = -1, -1\n b -= 1", "response": "Close all sub - spans of start to stop."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nremove the removed span from the internal list of types and update the internal list of types.", "response": "def _shrink_update(self, rmstart: int, rmstop: int) -> None:\n \"\"\"Update self._type_to_spans according to the removed span.\n\n Warning: If an operation involves both _shrink_update and\n _insert_update, you might wanna consider doing the\n _insert_update before the _shrink_update as this function\n can cause data loss in self._type_to_spans.\n \"\"\"\n # Note: The following algorithm won't work correctly if spans\n # are not sorted.\n # Note: No span should be removed from _type_to_spans.\n for spans in self._type_to_spans.values():\n i = len(spans) - 1\n while i >= 0:\n s, e = span = spans[i]\n if rmstop <= s:\n # rmstart <= rmstop <= s <= e\n rmlength = rmstop - rmstart\n span[:] = s - rmlength, e - rmlength\n i -= 1\n continue\n break\n else:\n continue\n while True:\n if rmstart <= s:\n if rmstop < e:\n # rmstart < s <= rmstop < e\n span[:] = rmstart, e + rmstart - rmstop\n i -= 1\n if i < 0:\n break\n s, e = span = spans[i]\n continue\n # rmstart <= s <= e < rmstop\n spans.pop(i)[:] = -1, -1\n i -= 1\n if i < 0:\n break\n s, e = span = spans[i]\n continue\n break\n while i >= 0:\n if e <= rmstart:\n # s <= e <= rmstart <= rmstop\n i -= 1\n if i < 0:\n break\n s, e = span = spans[i]\n continue\n # s <= rmstart <= rmstop <= e\n span[1] -= rmstop - rmstart\n i -= 1\n if i < 0:\n break\n s, e = span = spans[i]\n 
continue"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nupdates self. _type_to_spans according to the added length.", "response": "def _insert_update(self, index: int, length: int) -> None:\n \"\"\"Update self._type_to_spans according to the added length.\"\"\"\n ss, se = self._span\n for spans in self._type_to_spans.values():\n for span in spans:\n if index < span[1] or span[1] == index == se:\n span[1] += length\n # index is before s, or at s but not on self_span\n if index < span[0] or span[0] == index != ss:\n span[0] += length"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the nesting level of this object.", "response": "def nesting_level(self) -> int:\n \"\"\"Return the nesting level of self.\n\n The minimum nesting_level is 0. Being part of any Template or\n ParserFunction increases the level by one.\n \"\"\"\n ss, se = self._span\n level = 0\n type_to_spans = self._type_to_spans\n for type_ in ('Template', 'ParserFunction'):\n spans = type_to_spans[type_]\n for s, e in spans[:bisect(spans, [ss + 1])]:\n if se <= e:\n level += 1\n return level"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _shadow(self) -> bytearray:\n ss, se = self._span\n string = self._lststr[0][ss:se]\n cached_string, shadow = getattr(\n self, '_shadow_cache', (None, None))\n if cached_string == string:\n return shadow\n # In the old method the existing spans were used to create the shadow.\n # But it was slow because there can be thousands of spans and iterating\n # over them to find the relevant sub-spans could take a significant\n # amount of time. 
The new method tries to parse the self.string which\n # is usually much faster because there are usually far fewer\n # sub-spans for individual objects.\n shadow = bytearray(string, 'ascii', 'replace')\n if self._type in SPAN_PARSER_TYPES:\n head = shadow[:2]\n tail = shadow[-2:]\n shadow[:2] = shadow[-2:] = b'__'\n parse_to_spans(shadow)\n shadow[:2] = head\n shadow[-2:] = tail\n else:\n parse_to_spans(shadow)\n self._shadow_cache = string, shadow\n return shadow", "response": "Return a copy of self.string with specific sub-spans replaced."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _ext_link_shadow(self):\n ss, se = self._span\n string = self._lststr[0][ss:se]\n byte_array = bytearray(string, 'ascii', 'replace')\n subspans = self._subspans\n for type_ in 'Template', 'ParserFunction', 'Parameter':\n for s, e in subspans(type_):\n byte_array[s:e] = b' ' + INVALID_EXT_CHARS_SUB(\n b' ', byte_array[s + 2:e - 2]) + b' '\n for s, e in subspans('Comment'):\n byte_array[s:e] = (e - s) * b'_'\n return byte_array", "response": "Replace the invalid characters of SPAN_PARSER_TYPES with _."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating the arguments for the parse function used in pformat method.", "response": "def _pp_type_to_spans(self) -> Dict[str, List[List[int]]]:\n \"\"\"Create the arguments for the parse function used in pformat method.\n\n Only return sub-spans and change them to fit the new scope, i.e.\n self.string.\n \"\"\"\n ss, se = self._span\n if ss == 0 and se == len(self._lststr[0]):\n return deepcopy(self._type_to_spans)\n return {\n type_: [\n [s - ss, e - ss] for s, e in spans[bisect(spans, [ss]):]\n if e <= se\n ] for type_, spans in self._type_to_spans.items()}"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pprint(self, indent: str = ' ', remove_comments=False):\n warn(\n 
'pprint method is deprecated, use pformat instead.',\n DeprecationWarning,\n )\n return self.pformat(indent, remove_comments)", "response": "Deprecated use self. pformat instead."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a pretty - print of the WikiText as string.", "response": "def pformat(self, indent: str = ' ', remove_comments=False) -> str:\n \"\"\"Return a pretty-print of self.string as string.\n\n Try to organize templates and parser functions by indenting, aligning\n at the equal signs, and adding space where appropriate.\n\n Note that this function will not mutate self.\n \"\"\"\n ws = WS\n # Do not try to do inplace pformat. It will overwrite on some spans.\n string = self.string\n parsed = WikiText([string], self._pp_type_to_spans())\n # Since _type_to_spans arg of WikiText has been used, parsed._span\n # is not set yet.\n span = [0, len(string)]\n parsed._span = span\n parsed._type_to_spans['WikiText'] = [span]\n if remove_comments:\n for c in parsed.comments:\n del c[:]\n else:\n # Only remove comments that contain whitespace.\n for c in parsed.comments:\n if not c.contents.strip(ws):\n del c[:]\n # First remove all current spacings.\n for template in reversed(parsed.templates):\n stripped_tl_name = template.name.strip(ws)\n template.name = (\n ' ' + stripped_tl_name + ' '\n if stripped_tl_name[0] == '{' else stripped_tl_name\n )\n args = template.arguments\n if not args:\n continue\n if ':' in stripped_tl_name:\n # Don't use False because we don't know for sure.\n not_a_parser_function = None\n else:\n not_a_parser_function = True\n # Required for alignment\n arg_stripped_names = [a.name.strip(ws) for a in args]\n arg_positionalities = [a.positional for a in args]\n arg_name_lengths = [\n wcswidth(n.replace('\u0644\u0627', '?'))\n if not p else 0\n for n, p in zip(arg_stripped_names, arg_positionalities)\n ]\n max_name_len = max(arg_name_lengths)\n # Format template.name.\n level = 
template.nesting_level\n newline_indent = '\\n' + indent * level\n template.name += newline_indent\n if level == 1:\n last_comment_indent = ''\n else:\n last_comment_indent = ''\n # Special formatting for the last argument.\n last_arg = args.pop()\n last_is_positional = arg_positionalities.pop()\n last_value = last_arg.value\n last_stripped_value = last_value.strip(ws)\n if last_is_positional and last_value != last_stripped_value:\n stop_conversion = True\n if not last_value.endswith('\\n' + indent * (level - 1)):\n last_arg.value = last_value + last_comment_indent\n elif not_a_parser_function:\n stop_conversion = False\n last_arg.name = (\n ' ' + arg_stripped_names.pop() + ' ' +\n ' ' * (max_name_len - arg_name_lengths.pop()))\n last_arg.value = (\n ' ' + last_stripped_value + '\\n' + indent * (level - 1))\n elif last_is_positional:\n # (last_value == last_stripped_value\n # and not_a_parser_function is not True)\n stop_conversion = True\n # Can't strip or adjust the position of the value\n # because this could be a positional argument in a template.\n last_arg.value = last_value + last_comment_indent\n else:\n stop_conversion = True\n # This is either a parser function or a keyword\n # argument in a template. In both cases the name\n # can be lstripped and the value can be rstripped.\n last_arg.name = ' ' + last_arg.name.lstrip(ws)\n if not last_value.endswith('\\n' + indent * (level - 1)):\n last_arg.value = (\n last_value.rstrip(ws) + ' ' + last_comment_indent)\n if not args:\n continue\n comment_indent = ''\n for arg, stripped_name, positional, arg_name_len in zip(\n reversed(args),\n reversed(arg_stripped_names),\n reversed(arg_positionalities),\n reversed(arg_name_lengths),\n ):\n value = arg.value\n stripped_value = value.strip(ws)\n # Positional arguments of templates are sensitive to\n # whitespace. 
See:\n # https://meta.wikimedia.org/wiki/Help:Newlines_and_spaces\n if stop_conversion:\n if not value.endswith(newline_indent):\n arg.value += comment_indent\n elif positional and value != stripped_value:\n stop_conversion = True\n if not value.endswith(newline_indent):\n arg.value += comment_indent\n elif not_a_parser_function:\n arg.name = (\n ' ' + stripped_name + ' ' +\n ' ' * (max_name_len - arg_name_len))\n arg.value = ' ' + stripped_value + newline_indent\n i = 0\n functions = parsed.parser_functions\n while i < len(functions):\n func = functions[i]\n i += 1\n name = func.name\n ls_name = name.lstrip(ws)\n lws = len(name) - len(ls_name)\n if lws:\n del func[2:lws + 2]\n if ls_name.lower() in ('#tag', '#invoke', ''):\n # The 2nd argument of `tag` parser function is an exception\n # and cannot be stripped.\n # So in `{{#tag:tagname|arg1|...}}`, no whitespace should be\n # added/removed to/from arg1.\n # See: [[mw:Help:Extension:ParserFunctions#Miscellaneous]]\n # All args of #invoke are also whitespace-sensitive.\n # Todo: Instead use comments to indent.\n continue\n args = func.arguments\n # Whitespace, including newlines, tabs, and spaces is stripped\n # from the beginning and end of all the parameters of\n # parser functions. 
See:\n # www.mediawiki.org/wiki/Help:Extension:ParserFunctions#\n # Stripping_whitespace\n level = func.nesting_level\n short_indent = '\\n' + indent * (level - 1)\n newline_indent = short_indent + indent\n if len(args) == 1:\n arg = args[0]\n # the first arg is both the first and last argument\n if arg.positional:\n arg.value = (\n newline_indent + arg.value.strip(ws) + short_indent)\n else:\n # Note that we don't add spaces before and after the\n # '=' in parser functions because it could be part of\n # an ordinary string.\n arg.name = newline_indent + arg.name.lstrip(ws)\n arg.value = arg.value.rstrip(ws) + short_indent\n functions = parsed.parser_functions\n continue\n # Special formatting for the first argument\n arg = args[0]\n if arg.positional:\n arg.value = \\\n newline_indent + arg.value.strip(ws) + newline_indent\n else:\n arg.name = newline_indent + arg.name.lstrip(ws)\n arg.value = arg.value.rstrip(ws) + newline_indent\n # Formatting the middle arguments\n for arg in args[1:-1]:\n if arg.positional:\n arg.value = ' ' + arg.value.strip(ws) + newline_indent\n else:\n arg.name = ' ' + arg.name.lstrip(ws)\n arg.value = arg.value.rstrip(ws) + newline_indent\n # Special formatting for the last argument\n arg = args[-1]\n if arg.positional:\n arg.value = ' ' + arg.value.strip(ws) + short_indent\n else:\n arg.name = ' ' + arg.name.lstrip(ws)\n arg.value = arg.value.rstrip(ws) + short_indent\n functions = parsed.parser_functions\n return parsed.string"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a list of parameter objects.", "response": "def parameters(self) -> List['Parameter']:\n \"\"\"Return a list of parameter objects.\"\"\"\n _lststr = self._lststr\n _type_to_spans = self._type_to_spans\n return [\n Parameter(_lststr, _type_to_spans, span, 'Parameter')\n for span in self._subspans('Parameter')]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a list of parser functions.", 
"response": "def parser_functions(self) -> List['ParserFunction']:\n \"\"\"Return a list of parser function objects.\"\"\"\n _lststr = self._lststr\n _type_to_spans = self._type_to_spans\n return [\n ParserFunction(_lststr, _type_to_spans, span, 'ParserFunction')\n for span in self._subspans('ParserFunction')]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef templates(self) -> List['Template']:\n _lststr = self._lststr\n _type_to_spans = self._type_to_spans\n return [\n Template(_lststr, _type_to_spans, span, 'Template')\n for span in self._subspans('Template')]", "response": "Return a list of templates as template objects."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a list of wikilink objects.", "response": "def wikilinks(self) -> List['WikiLink']:\n \"\"\"Return a list of wikilink objects.\"\"\"\n _lststr = self._lststr\n _type_to_spans = self._type_to_spans\n return [\n WikiLink(_lststr, _type_to_spans, span, 'WikiLink')\n for span in self._subspans('WikiLink')]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a list of Comment objects.", "response": "def comments(self) -> List['Comment']:\n \"\"\"Return a list of comment objects.\"\"\"\n _lststr = self._lststr\n _type_to_spans = self._type_to_spans\n return [\n Comment(_lststr, _type_to_spans, span, 'Comment')\n for span in self._subspans('Comment')]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a list of all external links in the WikiText.", "response": "def external_links(self) -> List['ExternalLink']:\n \"\"\"Return a list of found external link objects.\n\n Note:\n Templates adjacent to external links are considered part of the\n link. In reality, this depends on the contents of the template:\n\n >>> WikiText(\n ... 
'http://example.com{{dead link}}'\n ...).external_links[0].url\n 'http://example.com{{dead link}}'\n\n >>> WikiText(\n ... '[http://example.com{{space template}} text]'\n ...).external_links[0].url\n 'http://example.com{{space template}}'\n \"\"\"\n external_links = [] # type: List['ExternalLink']\n external_links_append = external_links.append\n type_to_spans = self._type_to_spans\n lststr = self._lststr\n ss, se = self._span\n spans = type_to_spans.setdefault('ExternalLink', [])\n if not spans:\n # All the added spans will be new.\n spans_append = spans.append\n for m in EXTERNAL_LINK_FINDITER(self._ext_link_shadow):\n s, e = m.span()\n span = [ss + s, ss + e]\n spans_append(span)\n external_links_append(\n ExternalLink(lststr, type_to_spans, span, 'ExternalLink'))\n return external_links\n # There are already some ExternalLink spans. Use the already existing\n # ones when the detected span is one of those.\n span_tuple_to_span_get = {(s[0], s[1]): s for s in spans}.get\n for m in EXTERNAL_LINK_FINDITER(self._ext_link_shadow):\n s, e = m.span()\n span = s, e = [s + ss, e + ss]\n old_span = span_tuple_to_span_get((s, e))\n if old_span is None:\n insort(spans, span)\n else:\n span = old_span\n external_links_append(\n ExternalLink(lststr, type_to_spans, span, 'ExternalLink'))\n return external_links"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef sections(self) -> List['Section']:\n sections = [] # type: List['Section']\n sections_append = sections.append\n type_to_spans = self._type_to_spans\n lststr = self._lststr\n ss, se = _span = self._span\n type_spans = type_to_spans.setdefault('Section', [])\n full_match = SECTIONS_FULLMATCH(self._shadow)\n section_spans = full_match.spans('section')\n levels = [len(eq) for eq in full_match.captures('equals')]\n if not type_spans:\n # All spans are new\n spans_append = type_spans.append\n for current_index, (current_level, (s, e)) in enumerate(\n 
zip(levels, section_spans), 1\n ):\n # Add text of the current_section to any parent section.\n # Note that section 0 is not a parent for any subsection.\n for section_index, section_level in enumerate(\n levels[current_index:], current_index\n ):\n if current_level and section_level > current_level:\n e = section_spans[section_index][1]\n else:\n break\n span = [ss + s, ss + e]\n spans_append(span)\n sections_append(\n Section(lststr, type_to_spans, span, 'Section'))\n return sections\n # There are already some spans. Instead of appending new spans\n # use them when the detected span already exists.\n span_tuple_to_span = {(s[0], s[1]): s for s in type_spans}.get\n for current_index, (current_level, (s, e)) in enumerate(\n zip(levels, section_spans), 1\n ):\n # Add text of the current_section to any parent section.\n # Note that section 0 is not a parent for any subsection.\n for section_index, section_level in enumerate(\n levels[current_index:], current_index\n ):\n if current_level and section_level > current_level:\n e = section_spans[section_index][1]\n else:\n break\n s, e = ss + s, ss + e\n old_span = span_tuple_to_span((s, e))\n if old_span is None:\n span = [s, e]\n insort(type_spans, span)\n else:\n span = old_span\n sections_append(Section(lststr, type_to_spans, span, 'Section'))\n return sections", "response": "Return a list of sections in current wikitext."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a list of found table objects.", "response": "def tables(self) -> List['Table']:\n \"\"\"Return a list of found table objects.\"\"\"\n tables = [] # type: List['Table']\n tables_append = tables.append\n type_to_spans = self._type_to_spans\n lststr = self._lststr\n shadow = self._shadow[:]\n ss, se = self._span\n spans = type_to_spans.setdefault('Table', [])\n if not spans:\n # All the added spans will be new.\n m = True # type: Any\n while m:\n m = False\n for m in TABLE_FINDITER(shadow):\n ms, me = m.span()\n # 
Ignore leading whitespace using len(m[1]).\n span = [ss + ms + len(m[1]), ss + me]\n spans.append(span)\n tables_append(Table(lststr, type_to_spans, span, 'Table'))\n shadow[ms:me] = b'_' * (me - ms)\n return tables\n # Some spans already exist. Try to use the existing ones\n # before appending new spans.\n span_tuple_to_span_get = {(s[0], s[1]): s for s in spans}.get\n m = True\n while m:\n m = False\n for m in TABLE_FINDITER(shadow):\n ms, me = m.span()\n # Ignore leading whitespace using len(m[1]).\n s, e = ss + ms + len(m[1]), ss + me\n old_span = span_tuple_to_span_get((s, e))\n if old_span is None:\n span = [s, e]\n insort(spans, span)\n else:\n span = old_span\n tables_append(Table(lststr, type_to_spans, span, 'Table'))\n shadow[ms:me] = b'_' * (me - ms)\n return tables"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns all tags with the given name.", "response": "def tags(self, name=None) -> List['Tag']:\n \"\"\"Return all tags with the given name.\"\"\"\n lststr = self._lststr\n type_to_spans = self._type_to_spans\n if name:\n if name in _tag_extensions:\n string = lststr[0]\n return [\n Tag(lststr, type_to_spans, span, 'ExtensionTag')\n for span in type_to_spans['ExtensionTag']\n if string.startswith('<' + name, span[0])]\n tags = [] # type: List['Tag']\n else:\n # There is no name, add all extension tags. 
Before using shadow.\n tags = [\n Tag(lststr, type_to_spans, span, 'ExtensionTag')\n for span in type_to_spans['ExtensionTag']]\n tags_append = tags.append\n # Get the left-most start tag, match it to right-most end tag\n # and so on.\n ss = self._span[0]\n shadow = self._shadow\n if name:\n # There is a name but it is not in TAG_EXTENSIONS.\n reversed_start_matches = reversed([m for m in regex_compile(\n START_TAG_PATTERN.replace(\n rb'{name}', rb'(?P' + name.encode() + rb')')\n ).finditer(shadow)])\n end_search = regex_compile(END_TAG_PATTERN .replace(\n b'{name}', name.encode())).search\n else:\n reversed_start_matches = reversed(\n [m for m in START_TAG_FINDITER(shadow)])\n shadow_copy = shadow[:]\n spans = type_to_spans.setdefault('Tag', [])\n span_tuple_to_span_get = {(s[0], s[1]): s for s in spans}.get\n spans_append = spans.append\n for start_match in reversed_start_matches:\n if start_match['self_closing']:\n # Don't look for the end tag\n s, e = start_match.span()\n span = [ss + s, ss + e]\n else:\n # look for the end-tag\n if name:\n # the end_search is already available\n # noinspection PyUnboundLocalVariable\n end_match = end_search(shadow_copy, start_match.end())\n else:\n # build end_search according to start tag name\n end_match = search(\n END_TAG_PATTERN.replace(\n b'{name}', start_match['name']),\n shadow_copy)\n if end_match:\n s, e = end_match.span()\n shadow_copy[s:e] = b'_' * (e - s)\n span = [ss + start_match.start(), ss + e]\n else:\n # Assume start-only tag.\n s, e = start_match.span()\n span = [ss + s, ss + e]\n old_span = span_tuple_to_span_get((span[0], span[1]))\n if old_span is None:\n spans_append(span)\n else:\n span = old_span\n tags_append(Tag(lststr, type_to_spans, span, 'Tag'))\n return sorted(tags, key=attrgetter('_span'))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _subspans(self, _type: str) -> Generator[int, None, None]:\n ss, se = self._span\n spans = 
self._type_to_spans[_type]\n # Do not yield self._span by bisecting for s < ss.\n # The second bisect is an optimization and should be on [se + 1],\n # but empty spans are not desired thus [se] is used.\n b = bisect(spans, [ss])\n for span in spans[b:bisect(spans, [se], b)]:\n if span[1] <= se:\n yield span", "response": "Yield all the sub-span indices excluding self._span."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the ancestors of the current node.", "response": "def ancestors(self, type_: Optional[str] = None) -> List['WikiText']:\n \"\"\"Return the ancestors of the current node.\n\n :param type_: the type of the desired ancestors as a string.\n Currently the following types are supported: {Template,\n ParserFunction, WikiLink, Comment, Parameter, ExtensionTag}.\n The default is None and means all the ancestors of any type above.\n \"\"\"\n if type_ is None:\n types = SPAN_PARSER_TYPES\n else:\n types = type_,\n lststr = self._lststr\n type_to_spans = self._type_to_spans\n ss, se = self._span\n ancestors = []\n ancestors_append = ancestors.append\n for type_ in types:\n cls = globals()[type_]\n spans = type_to_spans[type_]\n for span in spans[:bisect(spans, [ss])]:\n if se < span[1]:\n ancestors_append(cls(lststr, type_to_spans, span, type_))\n return sorted(ancestors, key=lambda i: ss - i._span[0])"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parent(self, type_: Optional[str] = None) -> Optional['WikiText']:\n ancestors = self.ancestors(type_)\n if ancestors:\n return ancestors[0]\n return None", "response": "Return the parent node of the current object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef mode(list_: List[T]) -> T:\n return max(set(list_), key=list_.count)", "response": "Return the most common item in the list."} {"SOURCE": "codesearchnet", "instruction": "Implement a 
function in Python 3 to\nreturn the first argument in the args that has the given name.", "response": "def get_arg(name: str, args: Iterable[Argument]) -> Optional[Argument]:\n \"\"\"Return the first argument in the args that has the given name.\n\n Return None if no such argument is found.\n\n As the computation of self.arguments is a little costly, this\n function was created so that other methods that have already computed\n the arguments use it instead of calling self.get_arg directly.\n \"\"\"\n for arg in args:\n if arg.name.strip(WS) == name.strip(WS):\n return arg\n return None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef normal_name(\n self,\n rm_namespaces=('Template',),\n capital_links=False,\n _code: str = None,\n *,\n code: str = None,\n capitalize=False\n ) -> str:\n \"\"\"Return normal form of self.name.\n\n - Remove comments.\n - Remove language code.\n - Remove namespace (\"template:\" or any of `rm_namespaces`).\n - Use space instead of underscore.\n - Remove consecutive spaces.\n - Use uppercase for the first letter if `capitalize`.\n - Remove #anchor.\n\n :param rm_namespaces: is used to provide additional localized\n namespaces for the template namespace. They will be removed from\n the result. Default is ('Template',).\n :param capitalize: If True, convert the first letter of the\n template's name to a capital letter. See\n [[mw:Manual:$wgCapitalLinks]] for more info.\n :param code: is the language code.\n :param capital_links: deprecated.\n :param _code: deprecated.\n\n Example:\n >>> Template(\n ... '{{ eN : tEmPlAtE : t_1 # b | a }}'\n ... 
).normal_name(code='en')\n 'T 1'\n \"\"\"\n if capital_links:\n warn('`capital_links` argument is deprecated,'\n ' use `capitalize` instead', DeprecationWarning)\n capitalize = capital_links\n if _code:\n warn('`positional_code` argument is deprecated,'\n ' use `code` instead', DeprecationWarning)\n code = _code\n # Remove comments\n name = COMMENT_SUB('', self.name).strip(WS)\n # Remove code\n if code:\n head, sep, tail = name.partition(':')\n if not head and sep:\n name = tail.strip(' ')\n head, sep, tail = name.partition(':')\n if code.lower() == head.strip(' ').lower():\n name = tail.strip(' ')\n # Remove namespace\n head, sep, tail = name.partition(':')\n if not head and sep:\n name = tail.strip(' ')\n head, sep, tail = name.partition(':')\n if head:\n ns = head.strip(' ').lower()\n for namespace in rm_namespaces:\n if namespace.lower() == ns:\n name = tail.strip(' ')\n break\n # Use space instead of underscore\n name = name.replace('_', ' ')\n if capitalize:\n # Use uppercase for the first letter\n n0 = name[0]\n if n0.islower():\n name = n0.upper() + name[1:]\n # Remove #anchor\n name, sep, tail = name.partition('#')\n return ' '.join(name.split())", "response": "Return the name of the current object in normal form."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\neliminating duplicate arguments by removing the first occurrences of duplicate arguments regardless of their value.", "response": "def rm_first_of_dup_args(self) -> None:\n \"\"\"Eliminate duplicate arguments by removing the first occurrences.\n\n Remove the first occurrences of duplicate arguments, regardless of\n their value. 
Result of the rendered wikitext should remain the same.\n Warning: Some meaningful data may be removed from wikitext.\n\n Also see `rm_dup_args_safe` function.\n \"\"\"\n names = set() # type: set\n for a in reversed(self.arguments):\n name = a.name.strip(WS)\n if name in names:\n del a[:len(a.string)]\n else:\n names.add(name)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef rm_dup_args_safe(self, tag: str = None) -> None:\n name_to_lastarg_vals = {} \\\n # type: Dict[str, Tuple[Argument, List[str]]]\n # Removing positional args affects their name. By reversing the list\n # we avoid encountering those kind of args.\n for arg in reversed(self.arguments):\n name = arg.name.strip(WS)\n if arg.positional:\n # Value of keyword arguments is automatically stripped by MW.\n val = arg.value\n else:\n # But it's not OK to strip whitespace in positional arguments.\n val = arg.value.strip(WS)\n if name in name_to_lastarg_vals:\n # This is a duplicate argument.\n if not val:\n # This duplicate argument is empty. 
It's safe to remove it.\n del arg[0:len(arg.string)]\n else:\n # Try to remove any of the detected duplicates of this\n # that are empty or their value equals to this one.\n lastarg, dup_vals = name_to_lastarg_vals[name]\n if val in dup_vals:\n del arg[0:len(arg.string)]\n elif '' in dup_vals:\n # This happens only if the last occurrence of name has\n # been an empty string; other empty values will\n # be removed as they are seen.\n # In other words index of the empty argument in\n # dup_vals is always 0.\n del lastarg[0:len(lastarg.string)]\n dup_vals.pop(0)\n else:\n # It was not possible to remove any of the duplicates.\n dup_vals.append(val)\n if tag:\n arg.value += tag\n else:\n name_to_lastarg_vals[name] = (arg, [val])", "response": "Removes duplicate arguments in a safe manner."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the value for the name argument.", "response": "def set_arg(\n self, name: str,\n value: str,\n positional: bool = None,\n before: str = None,\n after: str = None,\n preserve_spacing: bool = True\n ) -> None:\n \"\"\"Set the value for `name` argument. Add it if it doesn't exist.\n\n - Use `positional`, `before` and `after` keyword arguments only when\n adding a new argument.\n - If `before` is given, ignore `after`.\n - If neither `before` nor `after` are given and it's needed to add a\n new argument, then append the new argument to the end.\n - If `positional` is True, try to add the given value as a positional\n argument. 
Ignore `preserve_spacing` if positional is True.\n If it's None, do what seems more appropriate.\n \"\"\"\n args = list(reversed(self.arguments))\n arg = get_arg(name, args)\n # Updating an existing argument.\n if arg:\n if positional:\n arg.positional = positional\n if preserve_spacing:\n val = arg.value\n arg.value = val.replace(val.strip(WS), value)\n else:\n arg.value = value\n return\n # Adding a new argument\n if not name and positional is None:\n positional = True\n # Calculate the whitespace needed before arg-name and after arg-value.\n if not positional and preserve_spacing and args:\n before_names = []\n name_lengths = []\n before_values = []\n after_values = []\n for arg in args:\n aname = arg.name\n name_len = len(aname)\n name_lengths.append(name_len)\n before_names.append(STARTING_WS_MATCH(aname)[0])\n arg_value = arg.value\n before_values.append(STARTING_WS_MATCH(arg_value)[0])\n after_values.append(ENDING_WS_MATCH(arg_value)[0])\n pre_name_ws_mode = mode(before_names)\n name_length_mode = mode(name_lengths)\n post_value_ws_mode = mode(\n [SPACE_AFTER_SEARCH(self.string)[0]] + after_values[1:]\n )\n pre_value_ws_mode = mode(before_values)\n else:\n preserve_spacing = False\n # Calculate the string that needs to be added to the Template.\n if positional:\n # Ignore preserve_spacing for positional args.\n addstring = '|' + value\n else:\n if preserve_spacing:\n # noinspection PyUnboundLocalVariable\n addstring = (\n '|' + (pre_name_ws_mode + name.strip(WS)).\n ljust(name_length_mode) +\n '=' + pre_value_ws_mode + value + post_value_ws_mode\n )\n else:\n addstring = '|' + name + '=' + value\n # Place the addstring in the right position.\n if before:\n arg = get_arg(before, args)\n arg.insert(0, addstring)\n elif after:\n arg = get_arg(after, args)\n arg.insert(len(arg.string), addstring)\n else:\n if args and not positional:\n arg = args[0]\n arg_string = arg.string\n if preserve_spacing:\n # Insert after the last argument.\n # The addstring needs to be 
recalculated because we don't\n # want to change the whitespace before final braces.\n # noinspection PyUnboundLocalVariable\n arg[0:len(arg_string)] = (\n arg.string.rstrip(WS) + post_value_ws_mode +\n addstring.rstrip(WS) + after_values[0]\n )\n else:\n arg.insert(len(arg_string), addstring)\n else:\n # The template has no arguments or the new arg is\n # positional AND is to be added at the end of the template.\n self.insert(-2, addstring)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the last argument with the given name.", "response": "def get_arg(self, name: str) -> Optional[Argument]:\n \"\"\"Return the last argument with the given name.\n\n Return None if no argument with that name is found.\n \"\"\"\n return get_arg(name, reversed(self.arguments))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn True if there is an argument named name.", "response": "def has_arg(self, name: str, value: str = None) -> bool:\n \"\"\"Return True if there is an argument named `name`.\n\n Also check equality of values if `value` is provided.\n\n Note: If you just need to get an argument and you want to LBYL, it's\n better to call get_arg directly and then check if the returned value\n is None.\n \"\"\"\n for arg in reversed(self.arguments):\n if arg.name.strip(WS) == name.strip(WS):\n if value:\n if arg.positional:\n if arg.value == value:\n return True\n return False\n if arg.value.strip(WS) == value.strip(WS):\n return True\n return False\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndelete all arguments with the given name.", "response": "def del_arg(self, name: str) -> None:\n \"\"\"Delete all arguments with the given name.\"\"\"\n for arg in reversed(self.arguments):\n if arg.name.strip(WS) == name.strip(WS):\n del arg[:]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbuild a table of all equivalent format 
variations by scraping spatialreference. org.", "response": "def build_crs_table(savepath):\n \"\"\"\n Build crs table of all equivalent format variations by scraping spatialreference.org.\n Saves table as tab-delimited text file.\n NOTE: Might take a while.\n\n Arguments:\n\n - *savepath*: The absolute or relative filepath to which to save the crs table, including the \".txt\" extension. \n \"\"\"\n # create table\n outfile = open(savepath, \"wb\")\n \n # create fields\n fields = [\"codetype\", \"code\", \"proj4\", \"ogcwkt\", \"esriwkt\"]\n outfile.write(\"\\t\".join(fields) + \"\\n\")\n \n # make table from url requests\n for codetype in (\"epsg\", \"esri\", \"sr-org\"):\n print(codetype)\n \n # collect existing proj list\n print(\"fetching list of available codes\")\n codelist = []\n page = 1\n while True:\n try:\n link = 'http://spatialreference.org/ref/%s/?page=%s' %(codetype,page)\n html = urllib2.urlopen(link).read()\n codes = [match.groups()[0] for match in re.finditer(r'/ref/'+codetype+'/(\\d+)', html) ]\n if not codes: break\n print(\"page\",page)\n codelist.extend(codes)\n page += 1\n except:\n break\n\n print(\"fetching string formats for each projection\")\n for i,code in enumerate(codelist):\n \n # check if code exists\n link = 'http://spatialreference.org/ref/%s/%s/' %(codetype,code)\n urllib2.urlopen(link)\n \n # collect each projection format in a table row\n row = [codetype, code]\n for resulttype in (\"proj4\", \"ogcwkt\", \"esriwkt\"):\n try:\n link = 'http://spatialreference.org/ref/%s/%s/%s/' %(codetype,code,resulttype)\n result = urllib2.urlopen(link).read()\n row.append(result)\n except:\n pass\n\n print(\"projection %i of %i added\" %(i,len(codelist)) )\n outfile.write(\"\\t\".join(row) + \"\\n\")\n\n # close the file\n outfile.close()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef crscode_to_string(codetype, code, format):\n link = 'http://spatialreference.org/ref/%s/%s/%s/' 
%(codetype,code,format)\n result = urllib2.urlopen(link).read()\n if not isinstance(result, str):\n result = result.decode()\n return result", "response": "Lookup crscode on spatialreference.org and return in specified format."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the CS as a proj4 formatted string or dict.", "response": "def to_proj4(self, as_dict=False, toplevel=True):\n \"\"\"\n Returns the CS as a proj4 formatted string or dict.\n\n Arguments:\n\n - **as_dict** (optional): If True, returns the proj4 string as a dict (defaults to False).\n - **toplevel** (optional): If True, treats this CS as the final toplevel CS and adds the necessary proj4 elements (defaults to True).\n \"\"\"\n # don't parse axis to proj4, because in proj4, axis only applies to the cs, ie the projcs (not the geogcs, where wkt can specify with axis)\n # also proj4 cannot specify angular units\n if toplevel:\n string = \"+proj=longlat %s %s +no_defs\" % (self.datum.to_proj4(), self.prime_mer.to_proj4())\n else:\n string = \"%s %s\" % (self.datum.to_proj4(), self.prime_mer.to_proj4())\n if as_dict:\n return dict([\n entry.lstrip('+').split('=')\n for entry in string.split()\n if entry != \"+no_defs\"\n ])\n else:\n return string"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef to_ogc_wkt(self):\n return 'GEOGCS[\"%s\", %s, %s, %s, AXIS[\"Lon\", %s], AXIS[\"Lat\", %s]]' % (self.name, self.datum.to_ogc_wkt(), self.prime_mer.to_ogc_wkt(), self.angunit.to_ogc_wkt(), self.twin_ax[0].ogc_wkt, self.twin_ax[1].ogc_wkt )", "response": "Returns the CS as an OGC WKT formatted string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_esri_wkt(self):\n return 'GEOGCS[\"%s\", %s, %s, %s, AXIS[\"Lon\", %s], AXIS[\"Lat\", %s]]' % (self.name, self.datum.to_esri_wkt(), self.prime_mer.to_esri_wkt(), 
self.angunit.to_esri_wkt(), self.twin_ax[0].esri_wkt, self.twin_ax[1].esri_wkt )", "response": "Returns the CS as an ESRI WKT formatted string."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef to_proj4(self, as_dict=False):\n string = \"%s\" % self.proj.to_proj4()\n string += \" %s\" % self.geogcs.to_proj4(toplevel=False)\n string += \" \" + \" \".join(param.to_proj4() for param in self.params)\n string += \" %s\" % self.unit.to_proj4()\n string += \" +axis=\" + self.twin_ax[0].proj4 + self.twin_ax[1].proj4 + \"u\" # up set as default because only proj4 can set it I think...\n string += \" +no_defs\"\n \n if as_dict:\n return dict([\n entry.lstrip('+').split('=')\n for entry in string.split()\n if entry != \"+no_defs\"\n ])\n else:\n return string", "response": "Returns the CS as a proj4 formatted string or dict."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_ogc_wkt(self):\n string = 'PROJCS[\"%s\", %s, %s, ' % (self.name, self.geogcs.to_ogc_wkt(), self.proj.to_ogc_wkt() )\n string += \", \".join(param.to_ogc_wkt() for param in self.params)\n string += ', %s' % self.unit.to_ogc_wkt()\n string += ', AXIS[\"X\", %s], AXIS[\"Y\", %s]]' % (self.twin_ax[0].ogc_wkt, self.twin_ax[1].ogc_wkt )\n return string", "response": "Returns the CS as a OGC WKT formatted string."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the CS as an ESRI WKT formatted string.", "response": "def to_esri_wkt(self):\n \"\"\"\n Returns the CS as a ESRI WKT formatted string.\n \"\"\"\n string = 'PROJCS[\"%s\", %s, %s, ' % (self.name, self.geogcs.to_esri_wkt(), self.proj.to_esri_wkt() )\n string += \", \".join(param.to_esri_wkt() for param in self.params)\n string += ', %s' % self.unit.to_esri_wkt()\n string += ', AXIS[\"X\", %s], AXIS[\"Y\", %s]]' % (self.twin_ax[0].esri_wkt, self.twin_ax[1].esri_wkt )\n return 
string"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef find(ellipsname, crstype, strict=False):\n if not strict:\n ellipsname = ellipsname.lower().replace(\" \",\"_\")\n for itemname,item in globals().items():\n if itemname.startswith(\"_\") or itemname == 'Ellipsoid':\n continue\n try:\n if hasattr(item.name, crstype):\n itemname = getattr(item.name, crstype)\n if not strict:\n itemname = itemname.lower().replace(\" \",\"_\")\n if ellipsname == itemname:\n return item\n except:\n pass\n else:\n return None", "response": "Search for a single resource in the current module."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_url(url, format=None):\n # first get string from url\n string = urllib2.urlopen(url).read()\n \n if PY3 is True:\n # decode str into string\n string = string.decode('utf-8')\n\n # then determine parser\n if format:\n # user specified format\n format = format.lower().replace(\" \", \"_\")\n func = parse.__getattr__(\"from_%s\" % format)\n else:\n # unknown format\n func = parse.from_unknown_text\n\n # then load\n crs = func(string)\n return crs", "response": "Returns the crs object from a string interpreted as a specified format located at a given url site."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef from_file(filepath):\n if filepath.endswith(\".prj\"):\n string = open(filepath, \"r\").read()\n return parse.from_unknown_wkt(string)\n \n elif filepath.endswith((\".geojson\",\".json\")):\n raw = open(filepath).read()\n geoj = json.loads(raw)\n if \"crs\" in geoj:\n crsinfo = geoj[\"crs\"]\n \n if crsinfo[\"type\"] == \"name\":\n string = crsinfo[\"properties\"][\"name\"]\n return parse.from_unknown_text(string)\n \n elif crsinfo[\"type\"] == \"link\":\n url = crsinfo[\"properties\"][\"name\"]\n type = crsinfo[\"properties\"].get(\"type\")\n return from_url(url, 
format=type)\n \n else: raise FormatError(\"Invalid GeoJSON crs type: must be either 'name' or 'link'\")\n\n else:\n # assume default wgs84 as per the spec\n return parse.from_epsg_code(\"4326\")", "response": "Returns the crs object from a file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_epsg_code(code):\n # must go online (or look up local table) to get crs details\n code = str(code)\n proj4 = utils.crscode_to_string(\"epsg\", code, \"proj4\")\n crs = from_proj4(proj4)\n return crs", "response": "Load the CS object from an EPSG code."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading the CS object from an ESRI code.", "response": "def from_esri_code(code):\n \"\"\"\n Load crs object from esri code, via spatialreference.org.\n Parses based on the proj4 representation.\n\n Arguments:\n\n - *code*: The ESRI code as an integer.\n\n Returns:\n\n - A CS instance of the indicated type. \n \"\"\"\n # must go online (or look up local table) to get crs details\n code = str(code)\n proj4 = utils.crscode_to_string(\"esri\", code, \"proj4\")\n crs = from_proj4(proj4)\n return crs"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef from_sr_code(code):\n # must go online (or look up local table) to get crs details\n code = str(code)\n proj4 = utils.crscode_to_string(\"sr-org\", code, \"proj4\")\n crs = from_proj4(proj4)\n return crs", "response": "Load the CS object from the SR - ORG code via spatialreference. 
org."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _from_wkt(string, wkttype=None, strict=False):\n # TODO\n # - Make function for finding next elemt by name, instead of knowing its arg index position\n # - Maybe verify elem arg name\n\n # make sure valid wkttype\n if wkttype: wkttype = wkttype.lower()\n assert wkttype in (\"ogc\",\"esri\",None)\n \n # remove newlines and multi spaces\n string = \" \".join(string.split())\n \n # parse arguments into components\n def _consume_bracket(chars, char):\n \"char must be the opening bracket\"\n consumed = \"\"\n depth = 1\n while char and depth > 0:\n consumed += char\n char = next(chars, None)\n # update depth level\n if char == \"[\":\n depth += 1\n elif char == \"]\":\n depth -= 1\n consumed += char # consume the last closing char too\n return consumed\n \n def _consume_quote(chars, char, quotechar):\n \"char and quotechar must be the opening quote char\"\n consumed = \"\"\n # consume the first opening char\n consumed += char \n char = next(chars, None)\n # consume inside\n while char and char != quotechar:\n consumed += char\n char = next(chars, None)\n # consume the last closing char too\n consumed += char \n return consumed\n \n def _next_elem(chars, char):\n \"char must be the first char of the text that precedes brackets\"\n header = \"\"\n # skip until next header\n while not char.isalpha():\n char = next(chars, None)\n # first consume the element text header\n while char.isalpha():\n header += char\n char = next(chars, None)\n # skip until next brackets (in case of spaces)\n while char != \"[\":\n char = next(chars, None)\n # then consume the element bracket contents\n if char == \"[\":\n content = _consume_bracket(chars, char)\n char = next(chars, None)\n # split content into args list\n content = content[1:-1] # remove enclosing brackets\n content = _split_except(content)\n # recursively load all subelems\n for i,item in enumerate(content):\n if 
isinstance(item, str) and \"[\" in item:\n chars = (char for char in item)\n char = next(chars)\n item = _next_elem(chars, char)\n content[i] = item\n return header, content\n \n def _clean_value(string):\n string = string.strip()\n try: string = float(string)\n except: pass\n return string\n \n def _split_except(string):\n \"split the string on every comma, except not while inside quotes or square brackets\"\n chars = (char for char in string)\n char = next(chars)\n items = []\n consumed = \"\"\n while char:\n # dont split on quotes, just consume it\n if char in (\"'\", '\"'):\n consumed += _consume_quote(chars, char, char)\n # dont split inside brackets, just consume it\n elif char == \"[\":\n consumed += _consume_bracket(chars, char)\n # new splitchar found, add what has been consumed so far as an item, reset, and start consuming until next splitchar\n elif char == \",\":\n consumed = _clean_value(consumed)\n items.append(consumed)\n consumed = \"\"\n # consume normal char\n elif char:\n consumed += char\n # next\n char = next(chars, None)\n # append last item too\n consumed = _clean_value(consumed)\n items.append(consumed)\n return items\n \n # load into nested tuples and arglists\n crstuples = []\n chars = (char for char in string)\n char = next(chars)\n while char:\n header,content = _next_elem(chars, char)\n crstuples.append((header, content))\n char = next(chars, None)\n\n # autodetect wkttype if not specified\n if not wkttype:\n topheader,topcontent = crstuples[0]\n if topheader == \"PROJCS\":\n geogcsheader,geogcscontent = topcontent[1]\n elif topheader == \"GEOGCS\":\n geogcsheader,geogcscontent = topheader,topcontent\n\n # datum elem should be second under geogcs\n datumheader, datumcontent = geogcscontent[1]\n datumname = datumcontent[0].upper().strip('\"')\n \n # esri wkt datums all use \"D_\" before the datum name\n if datumname.startswith(\"D_\"):\n wkttype = \"esri\"\n else:\n wkttype = \"ogc\"\n\n # parse into actual crs objects\n def 
_parse_top(header, content):\n \"procedure for parsing the toplevel crs element and all its children\"\n if header.upper() == \"PROJCS\":\n \n # find name\n csname = content[0].strip('\"')\n \n # find geogcs elem (by running parse again)\n subheader, subcontent = content[1]\n geogcs = _parse_top(subheader, subcontent)\n \n # find projection elem\n for part in content:\n if isinstance(part, tuple):\n subheader,subcontent = part\n if subheader == \"PROJECTION\":\n break\n projname = subcontent[0].strip('\"')\n projclass = projections.find(projname, \"%s_wkt\" % wkttype, strict)\n if projclass:\n proj = projclass()\n else:\n raise NotImplementedError(\"Unsupported projection: The specified projection name %r could not be found in the list of supported projections\" % projname)\n \n # find params\n params = []\n for part in content:\n if isinstance(part, tuple):\n subheader,subcontent = part\n if subheader == \"PARAMETER\":\n name, value = subcontent[0].strip('\"'), subcontent[1]\n itemclass = parameters.find(name, \"%s_wkt\" % wkttype, strict)\n if itemclass:\n item = itemclass(value)\n params.append(item)\n \n # find unit\n for part in content:\n if isinstance(part, tuple):\n subheader,subcontent = part\n if subheader == \"UNIT\":\n break\n unitname,value = subcontent[0].strip('\"'), subcontent[1]\n unitclass = units.find(unitname, \"%s_wkt\" % wkttype, strict)\n if unitclass:\n unit = unitclass()\n else:\n unit = units.Unknown()\n\n unit.unitmultiplier.value = value # override default multiplier\n linunit = unit\n \n # find twin axis maybe\n## if len(content) >= 6:\n## twinax = (parameters.Axis(\n## else:\n## twinax = None\n \n # put it all together\n projcs = containers.ProjCS(csname, geogcs, proj, params, linunit) #, twinax)\n return projcs\n\n elif header.upper() == \"GEOGCS\":\n # name\n csname = content[0].strip('\"')\n \n # datum\n subheader, subcontent = content[1]\n \n ## datum name\n datumname = subcontent[0].strip('\"')\n datumclass = 
datums.find(datumname, \"%s_wkt\" % wkttype, strict)\n if datumclass:\n datum = datumclass()\n else:\n datum = datums.Unknown()\n \n ## datum ellipsoid\n subsubheader, subsubcontent = subcontent[1]\n ellipsname = subsubcontent[0].strip('\"')\n ellipsclass = ellipsoids.find(ellipsname, \"%s_wkt\" % wkttype, strict)\n if ellipsclass:\n ellipsoid = ellipsclass()\n else:\n ellipsoid = ellipsoids.Unknown()\n\n ellipsoid.semimaj_ax = parameters.SemiMajorRadius(subsubcontent[1])\n if subsubcontent[2] == 0:\n # WKT falsely sets inverse flattening to 0 for spheroids\n # but actually it cannot be 0, it is the flattening that is 0\n ellipsoid.flat = parameters.Flattening(subsubcontent[2])\n else:\n ellipsoid.inv_flat = parameters.InverseFlattening(subsubcontent[2])\n\n ## datum shift\n if wkttype == \"ogc\":\n for subsubheader,subsubcontent in subcontent[1:]:\n if subsubheader == \"TOWGS84\":\n datumshift = parameters.DatumShift(subsubcontent)\n break\n else:\n datumshift = None\n elif wkttype == \"esri\":\n # not used in esri wkt\n datumshift = None\n \n ## put it all together\n datum.ellips = ellipsoid\n datum.datumshift = datumshift\n \n # prime mer\n subheader, subcontent = content[2]\n prime_mer = parameters.PrimeMeridian(subcontent[1])\n \n # angunit\n subheader, subcontent = content[3]\n unitname,value = subcontent[0].strip('\"'), subcontent[1]\n unitclass = units.find(unitname, \"%s_wkt\" % wkttype, strict)\n if unitclass:\n unit = unitclass()\n else:\n unit = units.Unknown()\n unit.unitmultiplier.value = value # override default multiplier\n angunit = unit\n \n # twin axis\n # ...\n \n # put it all together\n geogcs = containers.GeogCS(csname, datum, prime_mer, angunit, twin_ax=None)\n return geogcs\n\n # toplevel collection\n header, content = crstuples[0]\n crs = _parse_top(header, content)\n \n # use args to create crs\n return crs", "response": "Parses a string into a CS instance of the specified type."} {"SOURCE": "codesearchnet", "instruction": "Implement a 
function in Python 3 to\nparse a proj4 formatted string or dict and returns a CS instance of the specified type.", "response": "def from_proj4(proj4, strict=False):\n \"\"\"\n Parse crs as proj4 formatted string or dict and return the resulting crs object.\n\n Arguments:\n\n - *proj4*: The proj4 representation as a string or dict.\n - *strict* (optional): When True, the parser is strict about names having to match\n exactly with upper and lowercases. Default is not strict (False).\n\n Returns:\n\n - A CS instance of the indicated type. \n \"\"\"\n # parse arguments into components\n # use args to create crs\n\n # TODO: SLIGTHLY MESSY STILL, CLEANUP..\n\n params = []\n\n if isinstance(proj4, dict):\n # add leading + sign as expected below, proj4 dicts do not have that\n partdict = dict([('+'+k,v) for k,v in proj4.items()])\n else: \n partdict = dict([part.split(\"=\") for part in proj4.split()\n if len(part.split(\"=\")) == 2 ])\n\n # INIT CODES\n # eg, +init=EPSG:1234\n if \"+init\" in partdict:\n\n # first, get the default proj4 string of the +init code\n codetype, code = partdict[\"+init\"].split(\":\")\n if codetype == \"EPSG\":\n initproj4 = utils.crscode_to_string(\"epsg\", code, \"proj4\")\n elif codetype == \"ESRI\":\n initproj4 = utils.crscode_to_string(\"esri\", code, \"proj4\")\n\n # make the default into param dict\n initpartdict = dict([part.split(\"=\") for part in initproj4.split()\n if len(part.split(\"=\")) == 2 ])\n\n # override the default with any custom params specified along with the +init code\n initpartdict.update(partdict)\n\n # rerun from_proj4() again on the derived proj4 params as if it was not made with the +init code\n del initpartdict[\"+init\"]\n string = \" \".join(\"%s=%s\" % (key,val) for key,val in initpartdict.items())\n return from_proj4(string)\n\n # DATUM\n\n # datum param is required\n if \"+datum\" in partdict:\n \n # get predefined datum def\n datumname = partdict[\"+datum\"]\n datumclass = datums.find(datumname, \"proj4\", 
strict)\n if datumclass:\n datum = datumclass()\n else:\n datum = datums.Unknown()\n\n else:\n datum = datums.Unknown()\n\n # ELLIPS\n\n # ellipse param is required\n ellips = None\n if \"+ellps\" in partdict:\n\n # get predefined ellips def\n ellipsname = partdict[\"+ellps\"]\n ellipsclass = ellipsoids.find(ellipsname, \"proj4\", strict)\n if ellipsclass:\n ellips = ellipsclass()\n\n if not ellips:\n ellips = ellipsoids.Unknown()\n\n # TO WGS 84 COEFFS\n if \"+towgs84\" in partdict:\n coeffs = partdict[\"+towgs84\"].split(\",\")\n datumshift = parameters.DatumShift(coeffs)\n\n # TODO: if no datum, use ellips + towgs84 params to create the correct datum\n # ...??\n\n # COMBINE DATUM AND ELLIPS\n\n ## create datum and ellips param objs\n\n # +ellps loads all the required ellipsoid parameters\n # here we set or overwrite the parameters manually\n if \"+a\" in partdict:\n # semimajor radius\n ellips.semimaj_ax = parameters.SemiMajorRadius(partdict[\"+a\"]) \n\n if \"+b\" in partdict:\n # semiminor radius\n ellips.semimin_ax = parameters.SemiMinorRadius(partdict[\"+b\"])\n\n if \"+f\" in partdict:\n # flattening\n ellips.flat = parameters.Flattening(partdict[\"+f\"])\n\n if \"+rf\" in partdict:\n # inverse flattening\n ellips.inv_flat = parameters.InverseFlattening(partdict[\"+rf\"])\n\n # check that ellipsoid is sufficiently defined\n if ellips.semimaj_ax and ellips.semimin_ax:\n # +a (semimajor radius) and +b (semiminor radius) is enough and can be used to calculate flattening\n # see https://en.wikipedia.org/wiki/Flattening\n pass\n elif ellips.semimaj_ax and ellips.inv_flat:\n # alternatively, it is okay with if +a (semimajor) and +f (flattening) are specified\n pass\n elif ellips.semimaj_ax and ellips.flat:\n # alternatively, semimajor and +rf is also acceptable (the reciprocal/inverse of +f)\n pass\n else:\n raise FormatError(\"The format string is missing the required +ellps element, or the alternative manual specification of the +a with +b or +f/+rf elements: 
\\n\\t %s\" % partdict)\n \n if \"+datum\" in partdict:\n datum.ellips = ellips\n\n elif \"+towgs84\" in partdict:\n datum.ellips = ellips\n datum.datumshift = datumshift\n\n else:\n datum.ellips = ellips\n\n # PRIME MERIDIAN\n\n # set default\n prime_mer = parameters.PrimeMeridian(0)\n\n # overwrite with user input\n if \"+pm\" in partdict:\n prime_mer = parameters.PrimeMeridian(partdict[\"+pm\"])\n\n # ANGULAR UNIT \n\n ## proj4 cannot set angular unit, so just set to default\n angunit = units.Degree() \n\n # GEOGCS (note, currently does not load axes)\n\n geogcs = containers.GeogCS(\"Unknown\", datum, prime_mer, angunit) #, twin_ax)\n\n # PROJECTION\n \n if \"+proj\" in partdict:\n\n # get predefined proj def\n projname = partdict[\"+proj\"]\n projclass = projections.find(projname, \"proj4\", strict)\n if projclass:\n proj = projclass()\n elif projname == \"longlat\":\n # proj4 special case, longlat as projection name means unprojected geogcs\n proj = None\n else:\n raise NotImplementedError(\"Unsupported projection: The specified projection name %r could not be found in the list of supported projections\" % projname)\n\n else:\n raise FormatError(\"The format string is missing the required +proj element\")\n\n if proj:\n\n # Because proj4 has no element hierarchy, using automatic element find() would\n # ...would not be very effective, as that would need a try-fail approach for each\n # ...element type (parameter, projection, datum, ellipsoid, unit).\n # ...Instead load each element individually.\n\n # CENTRAL MERIDIAN\n\n if \"+lon_0\" in partdict:\n val = partdict[\"+lon_0\"]\n obj = parameters.CentralMeridian(val)\n params.append(obj)\n\n # FALSE EASTING\n\n if \"+x_0\" in partdict:\n val = partdict[\"+x_0\"]\n obj = parameters.FalseEasting(val)\n params.append(obj)\n\n # FALSE NORTHING\n\n if \"+y_0\" in partdict:\n val = partdict[\"+y_0\"]\n obj = parameters.FalseNorthing(val)\n params.append(obj)\n\n # SCALING FACTOR\n \n if \"+k_0\" in partdict or \"+k\" 
in partdict:\n if \"+k_0\" in partdict: val = partdict[\"+k_0\"]\n elif \"+k\" in partdict: val = partdict[\"+k\"]\n obj = parameters.ScalingFactor(val)\n params.append(obj)\n\n # LATITUDE ORIGIN\n\n if \"+lat_0\" in partdict:\n val = partdict[\"+lat_0\"]\n obj = parameters.LatitudeOrigin(val)\n params.append(obj)\n\n # LATITUDE TRUE SCALE\n\n if \"+lat_ts\" in partdict:\n val = partdict[\"+lat_ts\"]\n obj = parameters.LatitudeTrueScale(val)\n params.append(obj)\n\n # LONGITUDE CENTER\n\n if \"+lonc\" in partdict:\n val = partdict[\"+lonc\"]\n obj = parameters.LongitudeCenter(val)\n params.append(obj)\n\n # AZIMUTH\n\n if \"+alpha\" in partdict:\n val = partdict[\"+alpha\"]\n obj = parameters.Azimuth(val)\n params.append(obj)\n\n # STD PARALLEL 1\n\n if \"+lat_1\" in partdict:\n val = partdict[\"+lat_1\"]\n obj = parameters.LatitudeFirstStndParallel(val)\n params.append(obj)\n \n # STD PARALLEL 2\n\n if \"+lat_2\" in partdict:\n val = partdict[\"+lat_2\"]\n obj = parameters.LatitudeSecondStndParallel(val)\n params.append(obj)\n \n # SATELLITE HEIGHT\n if \"+h\" in partdict:\n val = partdict[\"+h\"]\n obj = parameters.SatelliteHeight(val)\n params.append(obj)\n\n # TILT ANGLE\n if \"+tilt\" in partdict:\n val = partdict[\"+tilt\"]\n obj = parameters.TiltAngle(val)\n params.append(obj)\n\n # UNIT\n\n # get values\n if \"+units\" in partdict:\n # unit name takes precedence over to_meter\n unitname = partdict[\"+units\"]\n unitclass = units.find(unitname, \"proj4\", strict)\n if unitclass:\n unit = unitclass() # takes meter multiplier from name, ignoring any custom meter multiplier\n else:\n raise FormatError(\"The specified unit name %r does not appear to be a valid unit name\" % unitname)\n elif \"+to_meter\" in partdict:\n # no unit name specified, only to_meter conversion factor\n unit = units.Unknown()\n unit.metermultiplier.value = partdict[\"+to_meter\"]\n else:\n # if nothing specified, defaults to meter\n unit = units.Meter()\n\n # PROJCS\n\n projcs = 
containers.ProjCS(\"Unknown\", geogcs, proj, params, unit)\n return projcs\n\n else:\n # means projdef was None, ie unprojected longlat geogcs\n return geogcs"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetecting crs string format and parse into crs object with appropriate function.", "response": "def from_unknown_text(text, strict=False):\n \"\"\"\n Detect crs string format and parse into crs object with appropriate function.\n\n Arguments:\n\n - *text*: The crs text representation of unknown type. \n - *strict* (optional): When True, the parser is strict about names having to match\n exactly with upper and lowercases. Default is not strict (False).\n\n Returns:\n\n - CRS object.\n \"\"\"\n\n if text.startswith(\"+\"):\n crs = from_proj4(text, strict)\n\n elif text.startswith((\"PROJCS[\",\"GEOGCS[\")):\n crs = from_unknown_wkt(text, strict)\n\n #elif text.startswith(\"urn:\"):\n # crs = from_ogc_urn(text, strict)\n\n elif text.startswith(\"EPSG:\"):\n crs = from_epsg_code(text.split(\":\")[1])\n\n elif text.startswith(\"ESRI:\"):\n crs = from_esri_code(text.split(\":\")[1])\n\n elif text.startswith(\"SR-ORG:\"):\n crs = from_sr_code(text.split(\":\")[1])\n\n else: raise FormatError(\"Could not auto-detect the type of crs format, make sure it is one of the supported formats\")\n \n return crs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite the raw header content to the out stream", "response": "def write_to(self, out):\n \"\"\" Write the raw header content to the out stream\n\n Parameters:\n ----------\n out : {file object}\n The output stream\n \"\"\"\n\n out.write(bytes(self.header))\n out.write(self.record_data)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nread the raw VLR from the input stream and returns a new RawVLR object.", "response": "def read_from(cls, data_stream):\n \"\"\" Instantiate a RawVLR by reading the content from the\n data stream\n\n 
Parameters:\n ----------\n data_stream : {file object}\n The input stream\n Returns\n -------\n RawVLR\n The RawVLR read\n \"\"\"\n\n raw_vlr = cls()\n header = RawVLRHeader.from_stream(data_stream)\n raw_vlr.header = header\n raw_vlr.record_data = data_stream.read(header.record_length_after_header)\n return raw_vlr"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the 3 GeoTiff vlrs from the vlr_list and parses them into a nicer structure", "response": "def parse_geo_tiff_keys_from_vlrs(vlr_list: vlrlist.VLRList) -> List[GeoTiffKey]:\n \"\"\" Gets the 3 GeoTiff vlrs from the vlr_list and parse them into\n a nicer structure\n\n Parameters\n ----------\n vlr_list: pylas.vrls.vlrslist.VLRList list of vlrs from a las file\n\n Raises\n ------\n IndexError if any of the needed GeoTiffVLR is not found in the list\n\n Returns\n -------\n List of GeoTiff keys parsed from the VLRs\n\n \"\"\"\n geo_key_dir = vlr_list.get_by_id(\n GeoKeyDirectoryVlr.official_user_id(), GeoKeyDirectoryVlr.official_record_ids()\n )[0]\n geo_doubles = vlr_list.get_by_id(\n GeoDoubleParamsVlr.official_user_id(), GeoDoubleParamsVlr.official_record_ids()\n )[0]\n geo_ascii = vlr_list.get_by_id(\n GeoAsciiParamsVlr.official_user_id(), GeoAsciiParamsVlr.official_record_ids()\n )[0]\n return parse_geo_tiff(geo_key_dir, geo_doubles, geo_ascii)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_geo_tiff(\n key_dir_vlr: GeoKeyDirectoryVlr,\n double_vlr: GeoDoubleParamsVlr,\n ascii_vlr: GeoAsciiParamsVlr,\n) -> List[GeoTiffKey]:\n \"\"\" Parses the GeoTiff VLRs information into nicer structs\n \"\"\"\n geotiff_keys = []\n\n for k in key_dir_vlr.geo_keys:\n if k.tiff_tag_location == 0:\n value = k.value_offset\n elif k.tiff_tag_location == 34736:\n value = double_vlr.doubles[k.value_offset]\n elif k.tiff_tag_location == 34737:\n try:\n value = ascii_vlr.strings[k.value_offset][k.count :]\n except IndexError:\n 
# Maybe I'm just misunderstanding the specification :thinking:\n value = ascii_vlr.strings[0][k.value_offset : k.value_offset + k.count]\n else:\n logger.warning(\n \"GeoTiffKey with unknown tiff tag location ({})\".format(\n k.tiff_tag_location\n )\n )\n continue\n\n geotiff_keys.append(GeoTiffKey(k.id, value))\n return geotiff_keys", "response": "Parses the GeoTiff VLRs information into nicer structs\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_signedness_for_extra_dim(type_index):\n try:\n t = _extra_dims_style_2[type_index]\n if \"uint\" in t:\n return DimensionSignedness.UNSIGNED\n elif \"int\" in t:\n return DimensionSignedness.SIGNED\n else:\n return DimensionSignedness.FLOATING\n except IndexError:\n raise errors.UnknownExtraType(type_index)", "response": "Returns the signedness of the extra dimension type_index."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_id_for_extra_dim_type(type_str):\n try:\n return _type_to_extra_dim_id_style_1[type_str]\n except KeyError:\n try:\n return _type_to_extra_dim_id_style_2[type_str]\n except KeyError:\n raise errors.UnknownExtraType(type_str)", "response": "Returns the index of the extra dimension type in the LAS Specification."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef from_point_record(cls, other_point_record, new_point_format):\n array = np.zeros_like(other_point_record.array, dtype=new_point_format.dtype)\n new_record = cls(array, new_point_format)\n new_record.copy_fields_from(other_point_record)\n return new_record", "response": "Construct a new PackedPointRecord from an existing point record."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef copy_fields_from(self, other_record):\n for dim_name in self.dimensions_names:\n 
try:\n self[dim_name] = other_record[dim_name]\n except ValueError:\n pass", "response": "Copies the values of the current dimensions from other_record into self."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns all the dimensions names including the names of sub_fields and their corresponding packed fields", "response": "def all_dimensions_names(self):\n \"\"\" Returns all the dimensions names, including the names of sub_fields\n and their corresponding packed fields\n \"\"\"\n return frozenset(self.array.dtype.names + tuple(self.sub_fields_dict.keys()))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new point record with all dimensions initialized to zero.", "response": "def zeros(cls, point_format, point_count):\n \"\"\" Creates a new point record with all dimensions initialized to zero\n\n Parameters\n ----------\n point_format_id: int\n The point format id the point record should have\n point_count : int\n The number of point the point record should have\n\n Returns\n -------\n PackedPointRecord\n\n \"\"\"\n data = np.zeros(point_count, point_format.dtype)\n return cls(data, point_format)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconstructs a new instance of the class from the data in the given stream.", "response": "def from_stream(cls, stream, point_format, count):\n \"\"\" Construct the point record by reading the points from the stream\n \"\"\"\n points_dtype = point_format.dtype\n point_data_buffer = bytearray(stream.read(count * points_dtype.itemsize))\n\n try:\n data = np.frombuffer(point_data_buffer, dtype=points_dtype, count=count)\n except ValueError:\n expected_bytes_len = count * points_dtype.itemsize\n if len(point_data_buffer) % points_dtype.itemsize != 0:\n missing_bytes_len = expected_bytes_len - len(point_data_buffer)\n raise_not_enough_bytes_error(\n expected_bytes_len,\n missing_bytes_len,\n 
len(point_data_buffer),\n points_dtype,\n )\n else:\n actual_count = len(point_data_buffer) // points_dtype.itemsize\n logger.critical(\n \"Expected {} points, there are {} ({} missing)\".format(\n count, actual_count, count - actual_count\n )\n )\n data = np.frombuffer(\n point_data_buffer, dtype=points_dtype, count=actual_count\n )\n\n return cls(data, point_format)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_compressed_buffer(cls, compressed_buffer, point_format, count, laszip_vlr):\n point_dtype = point_format.dtype\n uncompressed = decompress_buffer(\n compressed_buffer, point_dtype, count, laszip_vlr\n )\n return cls(uncompressed, point_format)", "response": "Construct a new object from a compressed buffer."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef x(self):\n return scale_dimension(self.X, self.header.x_scale, self.header.x_offset)", "response": "Returns the x positions of the points in the log file as doubles\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef y(self):\n return scale_dimension(self.Y, self.header.y_scale, self.header.y_offset)", "response": "Returns the scaled y positions of the points as doubles\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the scaled z positions of the points as doubles", "response": "def z(self):\n \"\"\" Returns the scaled z positions of the points as doubles\n \"\"\"\n return scale_dimension(self.Z, self.header.z_scale, self.header.z_offset)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef points(self, value):\n if value.dtype != self.points.dtype:\n raise errors.IncompatibleDataFormat('Cannot set points with a different point format, convert first')\n new_point_record = record.PackedPointRecord(value, 
self.points_data.point_format)\n dims.raise_if_version_not_compatible_with_fmt(\n new_point_record.point_format.id, self.header.version\n )\n self.points_data = new_point_record\n self.update_header()", "response": "Setter for the points property"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a new extra dimension to the point record.", "response": "def add_extra_dim(self, name, type, description=\"\"):\n \"\"\" Adds a new extra dimension to the point record\n\n Parameters\n ----------\n name: str\n the name of the dimension\n type: str\n type of the dimension (eg 'uint8')\n description: str, optional\n a small description of the dimension\n \"\"\"\n name = name.replace(\" \", \"_\")\n type_id = extradims.get_id_for_extra_dim_type(type)\n extra_byte = ExtraBytesStruct(\n data_type=type_id, name=name.encode(), description=description.encode()\n )\n\n try:\n extra_bytes_vlr = self.vlrs.get(\"ExtraBytesVlr\")[0]\n except IndexError:\n extra_bytes_vlr = ExtraBytesVlr()\n self.vlrs.append(extra_bytes_vlr)\n finally:\n extra_bytes_vlr.extra_bytes_structs.append(extra_byte)\n self.points_data.add_extra_dims([(name, type)])"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwrites the data to a stream", "response": "def write_to(self, out_stream, do_compress=False):\n \"\"\" writes the data to a stream\n\n Parameters\n ----------\n out_stream: file object\n the destination stream, implementing the write method\n do_compress: bool, optional, default False\n Flag to indicate if you want the data to be compressed\n \"\"\"\n\n self.update_header()\n\n if (\n self.vlrs.get(\"ExtraBytesVlr\")\n and not self.points_data.extra_dimensions_names\n ):\n logger.error(\n \"Las contains an ExtraBytesVlr, but no extra bytes were found in the point_record, \"\n \"removing the vlr\"\n )\n self.vlrs.extract(\"ExtraBytesVlr\")\n\n if do_compress:\n laz_vrl = create_laz_vlr(self.points_data)\n 
self.vlrs.append(known.LasZipVlr(laz_vrl.data()))\n raw_vlrs = vlrlist.RawVLRList.from_list(self.vlrs)\n\n self.header.offset_to_point_data = (\n self.header.size + raw_vlrs.total_size_in_bytes()\n )\n self.header.point_format_id = uncompressed_id_to_compressed(\n self.header.point_format_id\n )\n self.header.number_of_vlr = len(raw_vlrs)\n\n points_bytes = compress_buffer(\n np.frombuffer(self.points_data.array, np.uint8),\n laz_vrl.schema,\n self.header.offset_to_point_data,\n ).tobytes()\n\n else:\n raw_vlrs = vlrlist.RawVLRList.from_list(self.vlrs)\n self.header.number_of_vlr = len(raw_vlrs)\n self.header.offset_to_point_data = (\n self.header.size + raw_vlrs.total_size_in_bytes()\n )\n points_bytes = self.points_data.raw_bytes()\n\n self.header.write_to(out_stream)\n self._raise_if_not_expected_pos(out_stream, self.header.size)\n raw_vlrs.write_to(out_stream)\n self._raise_if_not_expected_pos(out_stream, self.header.offset_to_point_data)\n out_stream.write(points_bytes)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite the las data into a file.", "response": "def write_to_file(self, filename, do_compress=None):\n \"\"\" Writes the las data into a file\n\n Parameters\n ----------\n filename : str\n The file where the data should be written.\n do_compress: bool, optional, default None\n if None the extension of the filename will be used\n to determine if the data should be compressed\n otherwise the do_compress flag indicate if the data should be compressed\n \"\"\"\n is_ext_laz = filename.split(\".\")[-1] == \"laz\"\n if is_ext_laz and do_compress is None:\n do_compress = True\n with open(filename, mode=\"wb\") as out:\n self.write_to(out, do_compress=do_compress)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef write(self, destination, do_compress=None):\n if isinstance(destination, str):\n self.write_to_file(destination)\n else:\n if do_compress is None:\n 
do_compress = False\n self.write_to(destination, do_compress=do_compress)", "response": "Writes to a stream or file object containing the data of the current object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _build_point_formats_dtypes(point_format_dimensions, dimensions_dict):\n return {\n fmt_id: _point_format_to_dtype(point_fmt, dimensions_dict)\n for fmt_id, point_fmt in point_format_dimensions.items()\n }", "response": "Builds the dict mapping point format id to numpy. dtype\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _build_unpacked_point_formats_dtypes(\n point_formats_dimensions, composed_fields_dict, dimensions_dict\n):\n \"\"\" Builds the dict mapping point format id to numpy.dtype\n In the dtypes, bit fields are unpacked and can be accessed directly\n \"\"\"\n unpacked_dtypes = {}\n for fmt_id, dim_names in point_formats_dimensions.items():\n composed_dims, dtype = composed_fields_dict[fmt_id], []\n for dim_name in dim_names:\n if dim_name in composed_dims:\n dtype.extend((f.name, f.type) for f in composed_dims[dim_name])\n else:\n dtype.append(dimensions_dict[dim_name])\n unpacked_dtypes[fmt_id] = np.dtype(dtype)\n return unpacked_dtypes", "response": "Builds the dict mapping point format id to numpy. 
dtype where bit fields are unpacked and bit dimensions can be accessed directly."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntries to find a matching point format id for the input numpy dtype.", "response": "def np_dtype_to_point_format(dtype, unpacked=False):\n \"\"\" Tries to find a matching point format id for the input numpy dtype\n To match, the input dtype has to be 100% equal to a point format dtype\n so all names & dimensions types must match\n\n Parameters:\n ----------\n dtype : numpy.dtype\n The input dtype\n unpacked : bool, optional\n If True, match against the unpacked point format dtypes (default is False)\n\n Raises\n ------\n errors.IncompatibleDataFormat\n If no compatible point format was found\n\n Returns\n -------\n int\n The compatible point format found\n \"\"\"\n\n all_dtypes = (\n ALL_POINT_FORMATS_DTYPE if not unpacked else UNPACKED_POINT_FORMATS_DTYPES\n )\n for format_id, fmt_dtype in all_dtypes.items():\n if fmt_dtype == dtype:\n return format_id\n else:\n raise errors.IncompatibleDataFormat(\n \"Data type of array is not compatible with any point format (array dtype: {})\".format(\n dtype\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef min_file_version_for_point_format(point_format_id):\n for version, point_formats in sorted(VERSION_TO_POINT_FMT.items()):\n if point_format_id in point_formats:\n return version\n else:\n raise errors.PointFormatNotSupported(point_format_id)", "response": "Returns the minimum file version that supports the given point format id"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_point_fmt_compatible_with_version(point_format_id, file_version):\n try:\n return point_format_id in VERSION_TO_POINT_FMT[str(file_version)]\n except KeyError:\n raise errors.FileVersionNotSupported(file_version)", "response": "Checks if the 
point format id is compatible with the file version."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_by_id(self, user_id=\"\", record_ids=(None,)):\n if user_id != \"\" and record_ids != (None,):\n return [\n vlr\n for vlr in self.vlrs\n if vlr.user_id == user_id and vlr.record_id in record_ids\n ]\n else:\n return [\n vlr\n for vlr in self.vlrs\n if vlr.user_id == user_id or vlr.record_id in record_ids\n ]", "response": "Function to get a list of vlrs matching the user_id and record_ids."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get(self, vlr_type):\n return [v for v in self.vlrs if v.__class__.__name__ == vlr_type]", "response": "Returns the list of vlrs of the requested type"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef extract(self, vlr_type):\n kept_vlrs, extracted_vlrs = [], []\n for vlr in self.vlrs:\n if vlr.__class__.__name__ == vlr_type:\n extracted_vlrs.append(vlr)\n else:\n kept_vlrs.append(vlr)\n self.vlrs = kept_vlrs\n return extracted_vlrs", "response": "Returns the list of vlrs of the requested type"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreads vlrs and parses them if possible from the stream", "response": "def read_from(cls, data_stream, num_to_read):\n \"\"\" Reads vlrs and parse them if possible from the stream\n\n Parameters\n ----------\n data_stream : io.BytesIO\n stream to read from\n num_to_read : int\n number of vlrs to be read\n\n Returns\n -------\n pylas.vlrs.vlrlist.VLRList\n List of vlrs\n\n \"\"\"\n vlrlist = cls()\n for _ in range(num_to_read):\n raw = RawVLR.read_from(data_stream)\n try:\n vlrlist.append(vlr_factory(raw))\n except UnicodeDecodeError:\n logger.error(\"Failed to decode VLR: {}\".format(raw))\n\n return vlrlist"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what 
the following Python 3 function does\ndef files_have_same_point_format_id(las_files):\n point_format_found = {las.header.point_format_id for las in las_files}\n return len(point_format_found) == 1", "response": "Returns True if all the files have the same point format id"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef files_have_same_dtype(las_files):\n dtypes = {las.points.dtype for las in las_files}\n return len(dtypes) == 1", "response": "Returns true if all the files have the same numpy datatype"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _raise_if_wrong_file_signature(stream):\n file_sig = stream.read(len(headers.LAS_FILE_SIGNATURE))\n if file_sig != headers.LAS_FILE_SIGNATURE:\n raise errors.PylasError(\n \"File Signature ({}) is not {}\".format(file_sig, headers.LAS_FILE_SIGNATURE)\n )", "response": "Checks that the file signature is correct."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef read_header(self):\n self.stream.seek(self.start_pos)\n return headers.HeaderFactory().read_from_stream(self.stream)", "response": "Reads the header of the las file and returns it\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef read_vlrs(self):\n self.stream.seek(self.start_pos + self.header.size)\n return VLRList.read_from(self.stream, num_to_read=self.header.number_of_vlr)", "response": "Reads and returns the vlrs of the file"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef read(self):\n vlrs = self.read_vlrs()\n self._warn_if_not_at_expected_pos(\n self.header.offset_to_point_data, \"end of vlrs\", \"start of points\"\n )\n self.stream.seek(self.start_pos + self.header.offset_to_point_data)\n\n try:\n points = 
self._read_points(vlrs)\n except (RuntimeError, errors.LazPerfNotFound) as e:\n logger.error(\"LazPerf failed to decompress ({}), trying laszip.\".format(e))\n self.stream.seek(self.start_pos)\n self.__init__(io.BytesIO(laszip_decompress(self.stream)))\n return self.read()\n\n if points.point_format.has_waveform_packet:\n self.stream.seek(\n self.start_pos + self.header.start_of_waveform_data_packet_record\n )\n if self.header.global_encoding.are_waveform_flag_equal():\n raise errors.PylasError(\n \"Incoherent values for internal and external waveform flags, both are {})\".format(\n \"set\"\n if self.header.global_encoding.waveform_internal\n else \"unset\"\n )\n )\n if self.header.global_encoding.waveform_internal:\n # TODO: Find out what to do with these\n _, _ = self._read_internal_waveform_packet()\n elif self.header.global_encoding.waveform_external:\n logger.info(\n \"Waveform data is in an external file, you'll have to load it yourself\"\n )\n\n if self.header.version >= \"1.4\":\n evlrs = self.read_evlrs()\n return las14.LasData(\n header=self.header, vlrs=vlrs, points=points, evlrs=evlrs\n )\n\n return las12.LasData(header=self.header, vlrs=vlrs, points=points)", "response": "Reads the whole las data and returns a LasData object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _read_points(self, vlrs):\n try:\n extra_dims = vlrs.get(\"ExtraBytesVlr\")[0].type_of_extra_dims()\n except IndexError:\n extra_dims = None\n\n point_format = PointFormat(self.header.point_format_id, extra_dims=extra_dims)\n if self.header.are_points_compressed:\n laszip_vlr = vlrs.pop(vlrs.index(\"LasZipVlr\"))\n points = self._read_compressed_points_data(laszip_vlr, point_format)\n else:\n points = record.PackedPointRecord.from_stream(\n self.stream, point_format, self.header.point_count\n )\n return points", "response": "private function to handle reading of the points record parts\n is the number of points in the 
las file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _read_compressed_points_data(self, laszip_vlr, point_format):\n offset_to_chunk_table = struct.unpack(\">> las = read_las(\"pylastests/simple.las\")\n >>> las.classification\n array([1, 1, 1, ..., 1, 1, 1], dtype=uint8)\n\n Parameters\n ----------\n source : str or io.BytesIO\n The source to read data from\n\n closefd: bool\n if True and the source is a stream, the function will close it\n after it is done reading\n\n\n Returns\n -------\n pylas.lasdatas.base.LasBase\n The object you can interact with to get access to the LAS points & VLRs\n \"\"\"\n with open_las(source, closefd=closefd) as reader:\n return reader.read()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a File from an existing header.", "response": "def create_from_header(header):\n \"\"\" Creates a File from an existing header,\n allocating the array of point according to the provided header.\n The input header is copied.\n\n\n Parameters\n ----------\n header : existing header to be used to create the file\n\n Returns\n -------\n pylas.lasdatas.base.LasBase\n \"\"\"\n header = copy.copy(header)\n header.point_count = 0\n points = record.PackedPointRecord.empty(PointFormat(header.point_format_id))\n if header.version >= \"1.4\":\n return las14.LasData(header=header, points=points)\n return las12.LasData(header=header, points=points)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_las(*, point_format_id=0, file_version=None):\n if file_version is not None:\n dims.raise_if_version_not_compatible_with_fmt(point_format_id, file_version)\n else:\n file_version = dims.min_file_version_for_point_format(point_format_id)\n\n header = headers.HeaderFactory.new(file_version)\n header.point_format_id = point_format_id\n\n if file_version >= \"1.4\":\n 
return las14.LasData(header=header)\n return las12.LasData(header=header)", "response": "Function to create a new empty las data object"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef convert(source_las, *, point_format_id=None, file_version=None):\n if point_format_id is None:\n point_format_id = source_las.points_data.point_format.id\n\n if file_version is None:\n file_version = max(\n source_las.header.version,\n dims.min_file_version_for_point_format(point_format_id),\n )\n else:\n file_version = str(file_version)\n dims.raise_if_version_not_compatible_with_fmt(point_format_id, file_version)\n\n header = headers.HeaderFactory.convert_header(source_las.header, file_version)\n header.point_format_id = point_format_id\n\n point_format = PointFormat(\n point_format_id, source_las.points_data.point_format.extra_dims\n )\n points = record.PackedPointRecord.from_point_record(\n source_las.points_data, point_format\n )\n\n try:\n evlrs = source_las.evlrs\n except ValueError:\n evlrs = []\n\n if file_version >= \"1.4\":\n\n las = las14.LasData(\n header=header, vlrs=source_las.vlrs, points=points, evlrs=evlrs\n )\n else:\n if evlrs:\n logger.warning(\n \"The source contained {} EVLRs,\"\n \" they will be lost as version {} does not support them\".format(\n len(evlrs), file_version\n )\n )\n las = las12.LasData(header=header, vlrs=source_las.vlrs, points=points)\n\n return las", "response": "Converts a Las from one point format to another point format."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmerges multiple las files into one", "response": "def merge_las(*las_files):\n \"\"\" Merges multiple las files into one\n\n merged = merge_las(las_1, las_2)\n merged = merge_las([las_1, las_2, las_3])\n\n Parameters\n ----------\n las_files: Iterable of LasData or LasData\n\n Returns\n -------\n pylas.lasdatas.base.LasBase\n The result of the merging\n\n \"\"\"\n if 
len(las_files) == 1:\n las_files = las_files[0]\n\n if not las_files:\n raise ValueError(\"No files to merge\")\n\n if not utils.files_have_same_dtype(las_files):\n raise ValueError(\"All files must have the same point format\")\n\n header = las_files[0].header\n num_pts_merged = sum(len(las.points) for las in las_files)\n\n # scaled x, y, z have to be set manually\n # to be sure to have a good offset in the header\n merged = create_from_header(header)\n # TODO extra dimensions should be managed better here\n\n for dim_name, dim_type in las_files[0].points_data.point_format.extra_dims:\n merged.add_extra_dim(dim_name, dim_type)\n\n merged.points = np.zeros(num_pts_merged, merged.points.dtype)\n merged_x = np.zeros(num_pts_merged, np.float64)\n merged_y = np.zeros(num_pts_merged, np.float64)\n merged_z = np.zeros(num_pts_merged, np.float64)\n\n offset = 0\n for i, las in enumerate(las_files, start=1):\n slc = slice(offset, offset + len(las.points))\n merged.points[slc] = las.points\n merged_x[slc] = las.x\n merged_y[slc] = las.y\n merged_z[slc] = las.z\n merged['point_source_id'][slc] = i\n offset += len(las.points)\n\n merged.x = merged_x\n merged.y = merged_y\n merged.z = merged_z\n\n return merged"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrite the given las into memory using BytesIO and read it again", "response": "def write_then_read_again(las, do_compress=False):\n \"\"\" writes the given las into memory using BytesIO and \n reads it again, returning the newly read file.\n\n Mostly used for testing purposes, without having to write to disk\n \"\"\"\n out = io.BytesIO()\n las.write(out, do_compress=do_compress)\n out.seek(0)\n return read_las(out)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef date(self):\n try:\n return datetime.date(self.creation_year, 1, 1) + datetime.timedelta(\n self.creation_day_of_year - 1\n )\n except ValueError:\n return None", 
"response": "Returns the creation date stored in the las file\n\n Returns None if the date fields are unset or invalid"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the creation date of the file as a python date object", "response": "def date(self, date):\n \"\"\" Sets the date of file creation from a python date object\n \"\"\"\n self.creation_year = date.year\n self.creation_day_of_year = date.timetuple().tm_yday"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the minimum x, y, z values as a numpy array", "response": "def mins(self):\n \"\"\" Returns the minimum values of x, y, z as a numpy array\n \"\"\"\n return np.array([self.x_min, self.y_min, self.z_min])"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the x_min, y_min, z_min attributes from a numpy array", "response": "def mins(self, value):\n \"\"\" Sets the minimum values of x, y, z as a numpy array\n \"\"\"\n self.x_min, self.y_min, self.z_min = value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef maxs(self):\n return np.array([self.x_max, self.y_max, self.z_max])", "response": "Returns the maximum values of x y z as a numpy array"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the x y z max values as a numpy array", "response": "def maxs(self, value):\n \"\"\" Sets the maximum values of x, y, z as a numpy array\n \"\"\"\n self.x_max, self.y_max, self.z_max = value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef scales(self):\n return np.array([self.x_scale, self.y_scale, self.z_scale])", "response": "Returns the scaling values of x y z as a numpy array"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the offsets values of x y z
as a numpy array", "response": "def offsets(self):\n \"\"\" Returns the offsets values of x, y, z as a numpy array\n \"\"\"\n return np.array([self.x_offset, self.y_offset, self.z_offset])"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef header_class_for_version(cls, version):\n try:\n return cls._version_to_header[str(version)]\n except KeyError:\n raise errors.FileVersionNotSupported(version)", "response": "Return the header class for the given version."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreading the next las version header field from the given stream and returns it as a str.", "response": "def peek_file_version(cls, stream):\n \"\"\" seeks to the position of the las version header fields\n in the stream and returns it as a str\n\n Parameters\n ----------\n stream io.BytesIO\n\n Returns\n -------\n str\n file version read from the stream\n\n \"\"\"\n old_pos = stream.tell()\n stream.seek(cls._offset_to_major_version)\n major = int.from_bytes(stream.read(ctypes.sizeof(ctypes.c_uint8)), \"little\")\n minor = int.from_bytes(stream.read(ctypes.sizeof(ctypes.c_uint8)), \"little\")\n stream.seek(old_pos)\n return \"{}.{}\".format(major, minor)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef convert_header(cls, old_header, new_version):\n new_header_class = cls.header_class_for_version(new_version)\n\n b = bytearray(old_header)\n b += b\"\\x00\" * (ctypes.sizeof(new_header_class) - len(b))\n new_header = new_header_class.from_buffer(b)\n new_header.version = str(new_version)\n\n return new_header", "response": "Converts a header instance to another version."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nunpack a sub field using its mask", "response": "def unpack(source_array, mask, dtype=np.uint8):\n \"\"\" Unpack sub field using its mask\n\n Parameters:\n 
----------\n source_array : numpy.ndarray\n The source array\n mask : mask (ie: 0b00001111)\n Mask of the sub field to be extracted from the source array\n Returns\n -------\n numpy.ndarray\n The sub field array\n \"\"\"\n lsb = least_significant_bit(mask)\n return ((source_array & mask) >> lsb).astype(dtype)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef pack(array, sub_field_array, mask, inplace=False):\n lsb = least_significant_bit(mask)\n max_value = int(mask >> lsb)\n if sub_field_array.max() > max_value:\n raise OverflowError(\n \"value ({}) is greater than allowed (max: {})\".format(\n sub_field_array.max(), max_value\n )\n )\n if inplace:\n array[:] = array & ~mask\n array[:] = array | ((sub_field_array << lsb) & mask).astype(array.dtype)\n else:\n array = array & ~mask\n return array | ((sub_field_array << lsb) & mask).astype(array.dtype)", "response": "Packs a sub field array into another array using a mask"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef lost_dimensions(point_fmt_in, point_fmt_out):\n\n unpacked_dims_in = PointFormat(point_fmt_in).dtype\n unpacked_dims_out = PointFormat(point_fmt_out).dtype\n\n out_dims = unpacked_dims_out.fields\n completely_lost = []\n for dim_name in unpacked_dims_in.names:\n if dim_name not in out_dims:\n completely_lost.append(dim_name)\n return completely_lost", "response": "Returns a list of the names of the dimensions that will be lost\n when converting from point_fmt_in to point_fmt_out."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef dtype(self):\n dtype = self._access_dict(dims.ALL_POINT_FORMATS_DTYPE, self.id)\n dtype = self._dtype_add_extra_dims(dtype)\n return dtype", "response": "Returns the numpy. 
dtype used to store the point records in a numpy array."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef unpacked_dtype(self):\n dtype = self._access_dict(dims.UNPACKED_POINT_FORMATS_DTYPES, self.id)\n dtype = self._dtype_add_extra_dims(dtype)\n return dtype", "response": "Returns the numpy. dtype used to store the point records in a numpy array."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a dictionary of the sub fields for this point format", "response": "def sub_fields(self):\n \"\"\" Returns a dict of the sub fields for this point format\n\n Returns\n -------\n Dict[str, Tuple[str, SubField]]\n maps a sub field name to its composed dimension with additional information\n\n \"\"\"\n sub_fields_dict = {}\n for composed_dim_name, sub_fields in self.composed_fields.items():\n for sub_field in sub_fields:\n sub_fields_dict[sub_field.name] = (composed_dim_name, sub_field)\n return sub_fields_dict"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef num_extra_bytes(self):\n return sum(np.dtype(extra_dim[1]).itemsize for extra_dim in self.extra_dims)", "response": "Returns the number of extra bytes in the base record set."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef has_waveform_packet(self):\n dimensions = set(self.dimension_names)\n return all(name in dimensions for name in dims.WAVEFORM_FIELDS_NAMES)", "response": "Returns True if the point format has waveform packet dimensions"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main(port, ip, command, loglevel):\n numeric_level = getattr(logging, loglevel.upper(), None)\n if not isinstance(numeric_level, int):\n raise ValueError('Invalid log level: %s' % loglevel)\n \n logging.basicConfig(level=numeric_level)\n\n click.echo(\"Demo of 
satel_integra library\")\n if command == \"demo\":\n demo(ip, port)", "response": "Console script for satel_integra."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef checksum(command):\n crc = 0x147A\n for b in command:\n # rotate (crc 1 bit left)\n crc = ((crc << 1) & 0xFFFF) | (crc & 0x8000) >> 15\n crc = crc ^ 0xFFFF\n crc = (crc + (crc >> 8) + b) & 0xFFFF\n return crc", "response": "Function to calculate checksum as per Satel manual."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef print_hex(data):\n hex_msg = \"\"\n for c in data:\n hex_msg += \"\\\\x\" + format(c, \"02x\")\n _LOGGER.debug(hex_msg)", "response": "Debugging method to print out frames in hex."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef verify_and_strip(resp):\n if resp[0:2] != b'\\xFE\\xFE':\n _LOGGER.error(\"Houston, we got problem:\")\n print_hex(resp)\n raise Exception(\"Wrong header - got %X%X\" % (resp[0], resp[1]))\n if resp[-2:] != b'\\xFE\\x0D':\n raise Exception(\"Wrong footer - got %X%X\" % (resp[-2], resp[-1]))\n output = resp[2:-2].replace(b'\\xFE\\xF0', b'\\xFE')\n\n c = checksum(bytearray(output[0:-2]))\n\n if (256 * output[-2:-1][0] + output[-1:][0]) != c:\n raise Exception(\"Wrong checksum - got %d expected %d\" % (\n (256 * output[-2:-1][0] + output[-1:][0]), c))\n\n return output[0:-2]", "response": "Verify checksum and strip header and footer of received frame."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef list_set_bits(r, expected_length):\n set_bit_numbers = []\n bit_index = 0x1\n assert (len(r) == expected_length + 1)\n\n for b in r[1:]:\n for i in range(8):\n if ((b >> i) & 1) == 1:\n set_bit_numbers.append(bit_index)\n bit_index += 1\n\n return set_bit_numbers", "response": "Return list of positions of bits set to 
one in given data."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef generate_query(command):\n data = bytearray(command)\n c = checksum(data)\n data.append(c >> 8)\n data.append(c & 0xFF)\n data.replace(b'\\xFE', b'\\xFE\\xF0')\n\n data = bytearray.fromhex(\"FEFE\") + data + bytearray.fromhex(\"FE0D\")\n return data", "response": "Add header, checksum and footer to command data."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef demo(host, port):\n # logging.basicConfig(level=logging.DEBUG)\n\n loop = asyncio.get_event_loop()\n stl = AsyncSatel(host,\n port,\n loop,\n [1, 2, 3, 4, 5, 6, 7, 8, 12, 13, 14, 15, 16, 17, 18, 19,\n 20, 21, 22, 23, 25, 26, 27, 28, 29, 30],\n [8, 9, 10]\n )\n\n loop.run_until_complete(stl.connect())\n loop.create_task(stl.arm(\"3333\", 1))\n loop.create_task(stl.disarm(\"3333\"))\n loop.create_task(stl.keep_alive())\n loop.create_task(stl.monitor_status())\n\n loop.run_forever()\n loop.close()", "response": "Basic demo of the monitoring capabilities."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconnecting to the alarm system.", "response": "async def connect(self):\n \"\"\"Make a TCP connection to the alarm system.\"\"\"\n _LOGGER.debug(\"Connecting...\")\n\n try:\n self._reader, self._writer = await asyncio.open_connection(\n self._host, self._port, loop=self._loop)\n _LOGGER.debug(\"sucess connecting...\")\n\n except Exception as e:\n _LOGGER.warning(\n \"Exception during connecting: %s.\", e)\n self._writer = None\n self._reader = None\n return False\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\nasync def start_monitoring(self):\n data = generate_query(\n b'\\x7F\\x01\\xDC\\x99\\x80\\x00\\x04\\x00\\x00\\x00\\x00\\x00\\x00')\n\n await self._send_data(data)\n resp = await self._read_data()\n\n if resp is 
None:\n _LOGGER.warning(\"Start monitoring - no data!\")\n return\n\n if resp[1:2] != b'\\xFF':\n _LOGGER.warning(\"Monitoring not accepted.\")", "response": "Start monitoring for interesting events."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nprocesses the message and return a status dict that contains the state of each output that has changed.", "response": "def _output_changed(self, msg):\n \"\"\"0x17 outputs state 0x17 + 16/32 bytes\"\"\"\n\n status = {\"outputs\": {}}\n\n output_states = list_set_bits(msg, 32)\n self.violated_outputs = output_states\n _LOGGER.debug(\"Output states: %s, monitored outputs: %s\",\n output_states, self._monitored_outputs)\n for output in self._monitored_outputs:\n status[\"outputs\"][output] = \\\n 1 if output in output_states else 0\n\n _LOGGER.debug(\"Returning status: %s\", status)\n\n if self._output_changed_callback:\n self._output_changed_callback(status)\n\n return status"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsend an ARM command to the alarm.", "response": "async def arm(self, code, partition_list, mode=0):\n \"\"\"Send arming command to the alarm. 
Modes allowed: from 0 till 3.\"\"\"\n _LOGGER.debug(\"Sending arm command, mode: %s!\", mode)\n while len(code) < 16:\n code += 'F'\n\n code_bytes = bytearray.fromhex(code)\n mode_command = 0x80 + mode\n data = generate_query(mode_command.to_bytes(1, 'big')\n + code_bytes\n + partition_bytes(partition_list))\n\n await self._send_data(data)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsends disarm command to the server.", "response": "async def disarm(self, code, partition_list):\n \"\"\"Send command to disarm.\"\"\"\n _LOGGER.info(\"Sending disarm command.\")\n while len(code) < 16:\n code += 'F'\n\n code_bytes = bytearray.fromhex(code)\n\n data = generate_query(b'\\x84' + code_bytes\n + partition_bytes(partition_list))\n\n await self._send_data(data)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\nasync def clear_alarm(self, code, partition_list):\n _LOGGER.info(\"Sending clear the alarm command.\")\n while len(code) < 16:\n code += 'F'\n\n code_bytes = bytearray.fromhex(code)\n\n data = generate_query(b'\\x85' + code_bytes\n + partition_bytes(partition_list))\n\n await self._send_data(data)", "response": "Send clear alarm command."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\nasync def set_output(self, code, output_id, state):\n \"\"\"0x88 outputs on\n + 8 bytes - user code\n + 16/32 bytes - output list\n If function is accepted, function result can be\n checked by observe the system state \"\"\"\n _LOGGER.debug(\"Turn on, output: %s, code: %s\", output_id, code)\n while len(code) < 16:\n code += 'F'\n\n code_bytes = bytearray.fromhex(code)\n mode_command = 0x88 if state else 0x89\n data = generate_query(mode_command.to_bytes(1, 'big') +\n code_bytes +\n output_bytes(output_id))\n await self._send_data(data)", "response": "Send output turn on command to the alarm."} {"SOURCE": "codesearchnet", "instruction": "Can you 
generate the documentation for the following Python 3 function\nasync def keep_alive(self):\n while True:\n await asyncio.sleep(self._keep_alive_timeout)\n if self.closed:\n return\n # Command to read status of the alarm\n data = generate_query(b'\\xEE\\x01\\x01')\n await self._send_data(data)", "response": "A workaround for Satel Integra disconnecting after 25s.\n\n Every interval it sends some random question to the device, ignoring\n answer - just to keep connection alive."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nstart monitoring of the alarm status.", "response": "async def monitor_status(self, alarm_status_callback=None,\n zone_changed_callback=None,\n output_changed_callback=None):\n \"\"\"Start monitoring of the alarm status.\n\n Send command to satel integra to start sending updates. Read in a\n loop and call respective callbacks when received messages.\n \"\"\"\n self._alarm_status_callback = alarm_status_callback\n self._zone_changed_callback = zone_changed_callback\n self._output_changed_callback = output_changed_callback\n\n _LOGGER.info(\"Starting monitor_status loop\")\n\n while not self.closed:\n _LOGGER.debug(\"Iteration... \")\n while not self.connected:\n _LOGGER.info(\"Not connected, re-connecting... \")\n await self.connect()\n if not self.connected:\n _LOGGER.warning(\"Not connected, sleeping for 10s... 
\")\n await asyncio.sleep(self._reconnection_timeout)\n continue\n await self.start_monitoring()\n if not self.connected:\n _LOGGER.warning(\"Start monitoring failed, sleeping for 10s...\")\n await asyncio.sleep(self._reconnection_timeout)\n continue\n while True:\n await self._update_status()\n _LOGGER.debug(\"Got status!\")\n if not self.connected:\n _LOGGER.info(\"Got connection broken, reconnecting!\")\n break\n _LOGGER.info(\"Closed, quit monitoring.\")"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nstops monitoring and close connection.", "response": "def close(self):\n \"\"\"Stop monitoring and close connection.\"\"\"\n _LOGGER.debug(\"Closing...\")\n self.closed = True\n if self.connected:\n self._writer.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\npurges all matching user_id.", "response": "def purge_db(self):\n \"\"\"\n Clear all matching our user_id.\n \"\"\"\n with self.engine.begin() as db:\n purge_user(db, self.user_id)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef guess_type(self, path, allow_directory=True):\n if path.endswith('.ipynb'):\n return 'notebook'\n elif allow_directory and self.dir_exists(path):\n return 'directory'\n else:\n return 'file'", "response": "Guess the type of a file or directory."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_file_id(self, path):\n with self.engine.begin() as db:\n try:\n file_id = get_file_id(db, self.user_id, path)\n except NoSuchFile:\n self.no_such_entity(path)\n\n return file_id", "response": "Get the id of a file in the database."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_notebook(self, path, content, format):\n with self.engine.begin() as db:\n try:\n record = get_file(\n db,\n 
self.user_id,\n path,\n content,\n self.crypto.decrypt,\n )\n except NoSuchFile:\n self.no_such_entity(path)\n\n return self._notebook_model_from_db(record, content)", "response": "Get a notebook from the database."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbuilds a notebook model from a database record.", "response": "def _notebook_model_from_db(self, record, content):\n \"\"\"\n Build a notebook model from database record.\n \"\"\"\n path = to_api_path(record['parent_name'] + record['name'])\n model = base_model(path)\n model['type'] = 'notebook'\n model['last_modified'] = model['created'] = record['created_at']\n if content:\n content = reads_base64(record['content'])\n self.mark_trusted_cells(content, path)\n model['content'] = content\n model['format'] = 'json'\n self.validate_notebook_model(model)\n return model"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_directory(self, path, content, format):\n with self.engine.begin() as db:\n try:\n record = get_directory(\n db, self.user_id, path, content\n )\n except NoSuchDirectory:\n if self.file_exists(path):\n # TODO: It's awkward/expensive to have to check this to\n # return a 400 instead of 404. 
Consider just 404ing.\n self.do_400(\"Wrong type: %s\" % path)\n else:\n self.no_such_entity(path)\n\n return self._directory_model_from_db(record, content)", "response": "Get a directory from the database."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts file records from database to file models.", "response": "def _convert_file_records(self, file_records):\n \"\"\"\n Apply _notebook_model_from_db or _file_model_from_db to each entry\n in file_records, depending on the result of `guess_type`.\n \"\"\"\n for record in file_records:\n type_ = self.guess_type(record['name'], allow_directory=False)\n if type_ == 'notebook':\n yield self._notebook_model_from_db(record, False)\n elif type_ == 'file':\n yield self._file_model_from_db(record, False, None)\n else:\n self.do_500(\"Unknown file type %s\" % type_)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nbuilding a directory model from database directory record.", "response": "def _directory_model_from_db(self, record, content):\n \"\"\"\n Build a directory model from database directory record.\n \"\"\"\n model = base_directory_model(to_api_path(record['name']))\n if content:\n model['format'] = 'json'\n model['content'] = list(\n chain(\n self._convert_file_records(record['files']),\n (\n self._directory_model_from_db(subdir, False)\n for subdir in record['subdirs']\n ),\n )\n )\n return model"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nbuilding a file model from a database record.", "response": "def _file_model_from_db(self, record, content, format):\n \"\"\"\n Build a file model from database record.\n \"\"\"\n # TODO: Most of this is shared with _notebook_model_from_db.\n path = to_api_path(record['parent_name'] + record['name'])\n model = base_model(path)\n model['type'] = 'file'\n model['last_modified'] = model['created'] = record['created_at']\n if content:\n bcontent = record['content']\n 
model['content'], model['format'], model['mimetype'] = from_b64(\n path,\n bcontent,\n format,\n )\n return model"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _save_notebook(self, db, model, path):\n nb_contents = from_dict(model['content'])\n self.check_and_sign(nb_contents, path)\n save_file(\n db,\n self.user_id,\n path,\n writes_base64(nb_contents),\n self.crypto.encrypt,\n self.max_file_size_bytes,\n )\n # It's awkward that this writes to the model instead of returning.\n self.validate_notebook_model(model)\n return model.get('message')", "response": "Save a notebook.\n\n Returns a validation message."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _save_file(self, db, model, path):\n save_file(\n db,\n self.user_id,\n path,\n to_b64(model['content'], model.get('format', None)),\n self.crypto.encrypt,\n self.max_file_size_bytes,\n )\n return None", "response": "Save a non - notebook file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef rename_file(self, old_path, path):\n with self.engine.begin() as db:\n try:\n if self.file_exists(old_path):\n rename_file(db, self.user_id, old_path, path)\n elif self.dir_exists(old_path):\n rename_directory(db, self.user_id, old_path, path)\n else:\n self.no_such_entity(path)\n except (FileExists, DirectoryExists):\n self.already_exists(path)\n except RenameRoot as e:\n self.do_409(str(e))", "response": "Rename file or directory from old_path to path."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndeleting file or directory.", "response": "def delete_file(self, path):\n \"\"\"\n Delete object corresponding to path.\n \"\"\"\n if self.file_exists(path):\n self._delete_non_directory(path)\n elif self.dir_exists(path):\n self._delete_directory(path)\n else:\n self.no_such_entity(path)"} {"SOURCE": 
"codesearchnet", "instruction": "How would you code a function in Python 3 to\napply preprocessing steps to file/notebook content that we're going to write to the database. Applies ``encrypt_func`` to ``content`` and checks that the result is smaller than ``max_size_bytes``.", "response": "def preprocess_incoming_content(content, encrypt_func, max_size_bytes):\n \"\"\"\n Apply preprocessing steps to file/notebook content that we're going to\n write to the database.\n\n Applies ``encrypt_func`` to ``content`` and checks that the result is\n smaller than ``max_size_bytes``.\n \"\"\"\n encrypted = encrypt_func(content)\n if max_size_bytes != UNLIMITED and len(encrypted) > max_size_bytes:\n raise FileTooLarge()\n return encrypted"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a new user to the database if they don't already exist.", "response": "def ensure_db_user(db, user_id):\n \"\"\"\n Add a new user if they don't already exist.\n \"\"\"\n with ignore_unique_violation():\n db.execute(\n users.insert().values(id=user_id),\n )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef purge_user(db, user_id):\n db.execute(files.delete().where(\n files.c.user_id == user_id\n ))\n db.execute(directories.delete().where(\n directories.c.user_id == user_id\n ))\n db.execute(users.delete().where(\n users.c.id == user_id\n ))", "response": "Delete a user and all of its resources."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _is_in_directory(table, user_id, db_dirname):\n return and_(\n table.c.parent_name == db_dirname,\n table.c.user_id == user_id,\n )", "response": "Return a WHERE clause that matches entries in a directory."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef files_in_directory(db, user_id, db_dirname):\n fields = _file_default_fields()\n rows = db.execute(\n
select(\n fields,\n ).where(\n _is_in_directory(files, user_id, db_dirname),\n ).order_by(\n files.c.user_id,\n files.c.parent_name,\n files.c.name,\n files.c.created_at,\n ).distinct(\n files.c.user_id, files.c.parent_name, files.c.name,\n )\n )\n return [to_dict_no_content(fields, row) for row in rows]", "response": "Return a list of files in a directory."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef directories_in_directory(db, user_id, db_dirname):\n fields = _directory_default_fields()\n rows = db.execute(\n select(\n fields,\n ).where(\n _is_in_directory(directories, user_id, db_dirname),\n )\n )\n return [to_dict_no_content(fields, row) for row in rows]", "response": "Return a list of directories in a directory."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_directory(db, user_id, api_dirname, content):\n db_dirname = from_api_dirname(api_dirname)\n if not _dir_exists(db, user_id, db_dirname):\n raise NoSuchDirectory(api_dirname)\n if content:\n files = files_in_directory(\n db,\n user_id,\n db_dirname,\n )\n subdirectories = directories_in_directory(\n db,\n user_id,\n db_dirname,\n )\n else:\n files, subdirectories = None, None\n\n # TODO: Consider using namedtuples for these return values.\n return {\n 'name': db_dirname,\n 'files': files,\n 'subdirs': subdirectories,\n }", "response": "Get the names of all files and directories that are direct children of a directory."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _file_where(user_id, api_path):\n directory, name = split_api_filepath(api_path)\n return and_(\n files.c.name == name,\n files.c.user_id == user_id,\n files.c.parent_name == directory,\n )", "response": "Return a WHERE clause matching the given API path and user_id."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function
in Python 3 that\nreturns a SELECT statement that returns the latest N versions of a file.", "response": "def _select_file(user_id, api_path, fields, limit):\n \"\"\"\n Return a SELECT statement that returns the latest N versions of a file.\n \"\"\"\n query = select(fields).where(\n _file_where(user_id, api_path),\n ).order_by(\n _file_creation_order(),\n )\n if limit is not None:\n query = query.limit(limit)\n\n return query"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _file_default_fields():\n return [\n files.c.name,\n files.c.created_at,\n files.c.parent_name,\n ]", "response": "Return a list of default fields for a file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_file(db, user_id, api_path, query_fields, decrypt_func):\n result = db.execute(\n _select_file(user_id, api_path, query_fields, limit=1),\n ).first()\n\n if result is None:\n raise NoSuchFile(api_path)\n\n if files.c.content in query_fields:\n return to_dict_with_content(query_fields, result, decrypt_func)\n else:\n return to_dict_no_content(query_fields, result)", "response": "Get file data for the given user_id path and query_fields."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets file data for the given user_id and path.", "response": "def get_file(db, user_id, api_path, include_content, decrypt_func):\n \"\"\"\n Get file data for the given user_id and path.\n\n Include content only if include_content=True.\n \"\"\"\n query_fields = _file_default_fields()\n if include_content:\n query_fields.append(files.c.content)\n\n return _get_file(db, user_id, api_path, query_fields, decrypt_func)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_file_id(db, user_id, api_path):\n return _get_file(\n db,\n user_id,\n api_path,\n [files.c.id],\n unused_decrypt_func,\n 
)['id']", "response": "Get the file id for the given user_id and api_path."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_file(db, user_id, api_path):\n result = db.execute(\n files.delete().where(\n _file_where(user_id, api_path)\n )\n )\n\n rowcount = result.rowcount\n if not rowcount:\n raise NoSuchFile(api_path)\n\n return rowcount", "response": "Delete a file.\n\n TODO: Consider making this a soft delete."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef file_exists(db, user_id, path):\n try:\n get_file(\n db,\n user_id,\n path,\n include_content=False,\n decrypt_func=unused_decrypt_func,\n )\n return True\n except NoSuchFile:\n return False", "response": "Check if a file exists."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrenaming a file in the database.", "response": "def rename_file(db, user_id, old_api_path, new_api_path):\n \"\"\"\n Rename a file.\n \"\"\"\n\n # Overwriting existing files is disallowed.\n if file_exists(db, user_id, new_api_path):\n raise FileExists(new_api_path)\n\n old_dir, old_name = split_api_filepath(old_api_path)\n new_dir, new_name = split_api_filepath(new_api_path)\n if old_dir != new_dir:\n raise ValueError(\n dedent(\n \"\"\"\n Can't rename object to new directory.\n Old Path: {old_api_path}\n New Path: {new_api_path}\n \"\"\".format(\n old_api_path=old_api_path,\n new_api_path=new_api_path\n )\n )\n )\n\n db.execute(\n files.update().where(\n _file_where(user_id, old_api_path),\n ).values(\n name=new_name,\n created_at=func.now(),\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsave a file. 
TODO: Update-then-insert is probably cheaper than insert-then-update.", "response": "def save_file(db, user_id, path, content, encrypt_func, max_size_bytes):\n \"\"\"\n Save a file.\n\n TODO: Update-then-insert is probably cheaper than insert-then-update.\n \"\"\"\n content = preprocess_incoming_content(\n content,\n encrypt_func,\n max_size_bytes,\n )\n directory, name = split_api_filepath(path)\n with db.begin_nested() as savepoint:\n try:\n res = db.execute(\n files.insert().values(\n name=name,\n user_id=user_id,\n parent_name=directory,\n content=content,\n )\n )\n except IntegrityError as error:\n # The file already exists, so overwrite its content with the newer\n # version.\n if is_unique_violation(error):\n savepoint.rollback()\n res = db.execute(\n files.update().where(\n _file_where(user_id, path),\n ).values(\n content=content,\n created_at=func.now(),\n )\n )\n else:\n # Unknown error. Reraise\n raise\n\n return res"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngenerating a generator of decrypted files.", "response": "def generate_files(engine, crypto_factory, min_dt=None, max_dt=None,\n logger=None):\n \"\"\"\n Create a generator of decrypted files.\n\n Files are yielded in ascending order of their timestamp.\n\n This function selects all current notebooks (optionally, falling within a\n datetime range), decrypts them, and returns a generator yielding dicts,\n each containing a decoded notebook and metadata including the user,\n filepath, and timestamp.\n\n Parameters\n ----------\n engine : SQLAlchemy.engine\n Engine encapsulating database connections.\n crypto_factory : function[str -> Any]\n A function from user_id to an object providing the interface required\n by PostgresContentsManager.crypto. 
Results of this will be used for\n decryption of the selected notebooks.\n min_dt : datetime.datetime, optional\n Minimum last modified datetime at which a file will be included.\n max_dt : datetime.datetime, optional\n Last modified datetime at and after which a file will be excluded.\n logger : Logger, optional\n \"\"\"\n return _generate_notebooks(files, files.c.created_at,\n engine, crypto_factory, min_dt, max_dt, logger)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef purge_remote_checkpoints(db, user_id):\n db.execute(\n remote_checkpoints.delete().where(\n remote_checkpoints.c.user_id == user_id,\n )\n )", "response": "Delete all remote checkpoints for the given user_id."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef generate_checkpoints(engine, crypto_factory, min_dt=None, max_dt=None,\n logger=None):\n \"\"\"\n Create a generator of decrypted remote checkpoints.\n\n Checkpoints are yielded in ascending order of their timestamp.\n\n This function selects all notebook checkpoints (optionally, falling within\n a datetime range), decrypts them, and returns a generator yielding dicts,\n each containing a decoded notebook and metadata including the user,\n filepath, and timestamp.\n\n Parameters\n ----------\n engine : SQLAlchemy.engine\n Engine encapsulating database connections.\n crypto_factory : function[str -> Any]\n A function from user_id to an object providing the interface required\n by PostgresContentsManager.crypto. 
Results of this will be used for\n decryption of the selected notebooks.\n min_dt : datetime.datetime, optional\n Minimum last modified datetime at which a file will be included.\n max_dt : datetime.datetime, optional\n Last modified datetime at and after which a file will be excluded.\n logger : Logger, optional\n \"\"\"\n return _generate_notebooks(remote_checkpoints,\n remote_checkpoints.c.last_modified,\n engine, crypto_factory, min_dt, max_dt, logger)", "response": "Generates a generator of decrypted remote checkpoints."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _generate_notebooks(table, timestamp_column,\n engine, crypto_factory, min_dt, max_dt, logger):\n \"\"\"\n See docstrings for `generate_files` and `generate_checkpoints`.\n\n Parameters\n ----------\n table : SQLAlchemy.Table\n Table to fetch notebooks from, `files` or `remote_checkpoints.\n timestamp_column : SQLAlchemy.Column\n `table`'s column storing timestamps, `created_at` or `last_modified`.\n engine : SQLAlchemy.engine\n Engine encapsulating database connections.\n crypto_factory : function[str -> Any]\n A function from user_id to an object providing the interface required\n by PostgresContentsManager.crypto. 
Results of this will be used for\n decryption of the selected notebooks.\n min_dt : datetime.datetime\n Minimum last modified datetime at which a file will be included.\n max_dt : datetime.datetime\n Last modified datetime at and after which a file will be excluded.\n logger : Logger\n \"\"\"\n where_conds = []\n if min_dt is not None:\n where_conds.append(timestamp_column >= min_dt)\n if max_dt is not None:\n where_conds.append(timestamp_column < max_dt)\n if table is files:\n # Only select files that are notebooks\n where_conds.append(files.c.name.like(u'%.ipynb'))\n\n # Query for notebooks satisfying the conditions.\n query = select([table]).order_by(timestamp_column)\n for cond in where_conds:\n query = query.where(cond)\n result = engine.execute(query)\n\n # Decrypt each notebook and yield the result.\n for nb_row in result:\n try:\n # The decrypt function depends on the user\n user_id = nb_row['user_id']\n decrypt_func = crypto_factory(user_id).decrypt\n\n nb_dict = to_dict_with_content(table.c, nb_row, decrypt_func)\n if table is files:\n # Correct for files schema differing somewhat from checkpoints.\n nb_dict['path'] = nb_dict['parent_name'] + nb_dict['name']\n nb_dict['last_modified'] = nb_dict['created_at']\n\n # For 'content', we use `reads_base64` directly. 
If the db content\n # format is changed from base64, the decoding should be changed\n # here as well.\n yield {\n 'id': nb_dict['id'],\n 'user_id': user_id,\n 'path': to_api_path(nb_dict['path']),\n 'last_modified': nb_dict['last_modified'],\n 'content': reads_base64(nb_dict['content']),\n }\n except CorruptedFile:\n if logger is not None:\n logger.warning(\n 'Corrupted file with id %d in table %s.'\n % (nb_row['id'], table.name)\n )", "response": "Generate a list of notebooks from a table."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets all file ids for a user.", "response": "def select_file_ids(db, user_id):\n \"\"\"\n Get all file ids for a user.\n \"\"\"\n return list(\n db.execute(\n select([files.c.id])\n .where(files.c.user_id == user_id)\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nselect all checkpoint ids for a user.", "response": "def select_remote_checkpoint_ids(db, user_id):\n \"\"\"\n Get all file ids for a user.\n \"\"\"\n return list(\n db.execute(\n select([remote_checkpoints.c.id])\n .where(remote_checkpoints.c.user_id == user_id)\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef reencrypt_user_content(engine,\n user_id,\n old_decrypt_func,\n new_encrypt_func,\n logger):\n \"\"\"\n Re-encrypt all of the files and checkpoints for a single user.\n \"\"\"\n logger.info(\"Begin re-encryption for user %s\", user_id)\n with engine.begin() as db:\n # NOTE: Doing both of these operations in one transaction depends for\n # correctness on the fact that the creation of new checkpoints always\n # involves writing new data into the database from Python, rather than\n # simply copying data inside the DB.\n\n # If we change checkpoint creation so that it does an in-database copy,\n # then we need to split this transaction to ensure that\n # file-reencryption is complete before checkpoint-reencryption starts.\n\n # 
If that doesn't happen, it will be possible for a user to create a\n # new checkpoint in a transaction that hasn't seen the completed\n # file-reencryption process, but we might not see that checkpoint here,\n # which means that we would never update the content of that checkpoint\n # to the new encryption key.\n logger.info(\"Re-encrypting files for %s\", user_id)\n for (file_id,) in select_file_ids(db, user_id):\n reencrypt_row_content(\n db,\n files,\n file_id,\n old_decrypt_func,\n new_encrypt_func,\n logger,\n )\n\n logger.info(\"Re-encrypting checkpoints for %s\", user_id)\n for (cp_id,) in select_remote_checkpoint_ids(db, user_id):\n reencrypt_row_content(\n db,\n remote_checkpoints,\n cp_id,\n old_decrypt_func,\n new_encrypt_func,\n logger,\n )\n logger.info(\"Finished re-encryption for user %s\", user_id)", "response": "Re - encrypt all of the files and checkpoints for a single user."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nderive a single Fernet key from a secret key and a user ID.", "response": "def derive_single_fernet_key(password, user_id):\n \"\"\"\n Convert a secret key and a user ID into an encryption key to use with a\n ``cryptography.fernet.Fernet``.\n\n Taken from\n https://cryptography.io/en/latest/fernet/#using-passwords-with-fernet\n\n Parameters\n ----------\n password : unicode\n ascii-encodable key to derive\n user_id : unicode\n ascii-encodable user_id to use as salt\n \"\"\"\n password = ascii_unicode_to_bytes(password)\n user_id = ascii_unicode_to_bytes(user_id)\n\n kdf = PBKDF2HMAC(\n algorithm=hashes.SHA256(),\n length=32,\n salt=user_id,\n iterations=100000,\n backend=default_backend(),\n )\n return base64.urlsafe_b64encode(kdf.derive(password))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef derive_fallback_fernet_keys(passwords, user_id):\n # Normally I wouldn't advocate for these kinds of assertions, but we really\n 
# really really don't want to mess up deriving encryption keys.\n assert isinstance(passwords, (list, tuple)), \\\n \"Expected list or tuple of keys, got %s.\" % type(passwords)\n\n def derive_single_allow_none(k):\n if k is None:\n return None\n return derive_single_fernet_key(k, user_id).decode('ascii')\n\n return list(map(derive_single_allow_none, passwords))", "response": "Derive a list of per - user Fernet keys from a list of master keys and a user_id."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate and return a function suitable for passing as a crypto_factory to the supplied user_id and salted with the supplied password.", "response": "def single_password_crypto_factory(password):\n \"\"\"\n Create and return a function suitable for passing as a crypto_factory to\n ``pgcontents.utils.sync.reencrypt_all_users``\n\n The factory here returns a ``FernetEncryption`` that uses a key derived\n from ``password`` and salted with the supplied user_id.\n \"\"\"\n @memoize_single_arg\n def factory(user_id):\n return FernetEncryption(\n Fernet(derive_single_fernet_key(password, user_id))\n )\n return factory"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_name(column_like):\n if isinstance(column_like, Column):\n return column_like.name\n elif isinstance(column_like, Cast):\n return column_like.clause.name", "response": "Get the name from a column - like SQLAlchemy expression."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert a SQLAlchemy row that does not contain a content field to a dict.", "response": "def to_dict_no_content(fields, row):\n \"\"\"\n Convert a SQLAlchemy row that does not contain a 'content' field to a dict.\n\n If row is None, return None.\n\n Raises AssertionError if there is a field named 'content' in ``fields``.\n \"\"\"\n assert(len(fields) == len(row))\n\n field_names = 
list(map(_get_name, fields))\n assert 'content' not in field_names, \"Unexpected content field.\"\n\n return dict(zip(field_names, row))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_dict_with_content(fields, row, decrypt_func):\n assert(len(fields) == len(row))\n\n field_names = list(map(_get_name, fields))\n assert 'content' in field_names, \"Missing content field.\"\n\n result = dict(zip(field_names, row))\n result['content'] = decrypt_func(result['content'])\n return result", "response": "Convert a SQLAlchemy row that contains a content field to a dict."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a checkpoint of the current state of a notebook.", "response": "def create_notebook_checkpoint(self, nb, path):\n \"\"\"Create a checkpoint of the current state of a notebook\n\n Returns a checkpoint_id for the new checkpoint.\n \"\"\"\n b64_content = writes_base64(nb)\n with self.engine.begin() as db:\n return save_remote_checkpoint(\n db,\n self.user_id,\n path,\n b64_content,\n self.crypto.encrypt,\n self.max_file_size_bytes,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_file_checkpoint(self, content, format, path):\n try:\n b64_content = to_b64(content, format)\n except ValueError as e:\n self.do_400(str(e))\n with self.engine.begin() as db:\n return save_remote_checkpoint(\n db,\n self.user_id,\n path,\n b64_content,\n self.crypto.encrypt,\n self.max_file_size_bytes,\n )", "response": "Create a checkpoint of the current state of a file."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndeletes a checkpoint for a file", "response": "def delete_checkpoint(self, checkpoint_id, path):\n \"\"\"delete a checkpoint for a file\"\"\"\n with self.engine.begin() as db:\n return delete_single_remote_checkpoint(\n 
db, self.user_id, path, checkpoint_id,\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_checkpoint_content(self, checkpoint_id, path):\n with self.engine.begin() as db:\n return get_remote_checkpoint(\n db,\n self.user_id,\n path,\n checkpoint_id,\n self.crypto.decrypt,\n )['content']", "response": "Get the content of a checkpoint."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef list_checkpoints(self, path):\n with self.engine.begin() as db:\n return list_remote_checkpoints(db, self.user_id, path)", "response": "Return a list of checkpoints for a given file"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef rename_all_checkpoints(self, old_path, new_path):\n with self.engine.begin() as db:\n return move_remote_checkpoints(\n db,\n self.user_id,\n old_path,\n new_path,\n )", "response": "Rename all checkpoints for old_path to new_path."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete_all_checkpoints(self, path):\n with self.engine.begin() as db:\n delete_remote_checkpoints(db, self.user_id, path)", "response": "Delete all checkpoints for the given path."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef purge_db(self):\n with self.engine.begin() as db:\n purge_remote_checkpoints(db, self.user_id)", "response": "Purge all database records for the current user."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _resolve_path(path, manager_dict):\n path = normalize_api_path(path)\n parts = path.split('/')\n\n # Try to find a sub-manager for the first subdirectory.\n mgr = manager_dict.get(parts[0])\n if mgr is not None:\n return parts[0], mgr, '/'.join(parts[1:])\n\n # Try to 
use the root manager, if one was supplied.\n mgr = manager_dict.get('')\n if mgr is not None:\n return '', mgr, path\n\n raise HTTPError(\n 404,\n \"Couldn't resolve path [{path}] and \"\n \"no root manager supplied!\".format(path=path)\n )", "response": "Resolve a path based on a dictionary of manager prefixes."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets an argument from kwargs and args.", "response": "def _get_arg(argname, args, kwargs):\n \"\"\"\n Get an argument, either from kwargs or from the first entry in args.\n Raises a TypeError if argname not in kwargs and len(args) == 0.\n\n Mutates kwargs in place if the value is found in kwargs.\n \"\"\"\n try:\n return kwargs.pop(argname), args\n except KeyError:\n pass\n try:\n return args[0], args[1:]\n except IndexError:\n raise TypeError(\"No value passed for %s\" % argname)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _apply_prefix(prefix, model):\n if not isinstance(model, dict):\n raise TypeError(\"Expected dict for model, got %s\" % type(model))\n\n # We get unwanted leading/trailing slashes if prefix or model['path'] are\n # '', both of which are legal values.\n model['path'] = '/'.join((prefix, model['path'])).strip('/')\n if model['type'] in ('notebook', 'file'):\n return model\n\n if model['type'] != 'directory':\n raise ValueError(\"Unknown model type %s.\" % type(model))\n\n content = model.get('content', None)\n if content is not None:\n for sub_model in content:\n _apply_prefix(prefix, sub_model)\n\n return model", "response": "Apply a prefix to all path entries in the given model."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalls when the manager directory changes.", "response": "def _managers_changed(self, name, old, new):\n \"\"\"\n Strip slashes from directories before updating.\n \"\"\"\n for key in new:\n if '/' in key:\n 
raise ValueError(\n \"Expected directory names w/o slashes. Got [%s]\" % key\n )\n self.managers = {k.strip('/'): v for k, v in new.items()}"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a single entry in the database.", "response": "def get(self, path, content=True, type=None, format=None):\n \"\"\"\n Special case handling for listing root dir.\n \"\"\"\n path = normalize_api_path(path)\n if path:\n return self.__get(path, content=content, type=type, format=format)\n if not content:\n return base_directory_model('')\n\n extra_content = self._extra_root_dirs()\n rm = self.root_manager\n if rm is None:\n root_model = base_directory_model('')\n root_model.update(\n format='json',\n content=extra_content,\n )\n else:\n root_model = rm.get(\n path,\n content=content,\n type=type,\n format=format,\n )\n # Append the extra directories.\n root_model['content'].extend(extra_content)\n return root_model"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nensures that the manager at the given path can t be deleted.", "response": "def delete(self, path):\n \"\"\"\n Ensure that roots of our managers can't be deleted. This should be\n enforced by https://github.com/ipython/ipython/pull/8168, but rogue\n implementations might override this behavior.\n \"\"\"\n path = normalize_api_path(path)\n if path in self.managers:\n raise HTTPError(\n 400, \"Can't delete root of %s\" % self.managers[path]\n )\n return self.__delete(path)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nnormalize the API path.", "response": "def normalize_api_path(api_path):\n \"\"\"\n Resolve paths with '..' 
to normalized paths, raising an error if the final\n result is outside root.\n \"\"\"\n normalized = posixpath.normpath(api_path.strip('/'))\n if normalized == '.':\n normalized = ''\n elif normalized.startswith('..'):\n raise PathOutsideRoot(normalized)\n return normalized"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef split_api_filepath(path):\n parts = path.rsplit('/', 1)\n if len(parts) == 1:\n name = parts[0]\n dirname = '/'\n else:\n name = parts[1]\n dirname = parts[0] + '/'\n\n return from_api_dirname(dirname), name", "response": "Split an API file path into directory and name."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef writes_base64(nb, version=NBFORMAT_VERSION):\n return b64encode(writes(nb, version=version).encode('utf-8'))", "response": "Write a notebook as base64."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreading a notebook from base64.", "response": "def reads_base64(nb, as_version=NBFORMAT_VERSION):\n \"\"\"\n Read a notebook from base64.\n \"\"\"\n try:\n return reads(b64decode(nb).decode('utf-8'), as_version=as_version)\n except Exception as e:\n raise CorruptedFile(e)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _decode_unknown_from_base64(path, bcontent):\n content = b64decode(bcontent)\n try:\n return (content.decode('utf-8'), 'text')\n except UnicodeError:\n pass\n return bcontent.decode('ascii'), 'base64'", "response": "Decode base64 data of unknown format."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndecode base64 content for a file.", "response": "def from_b64(path, bcontent, format):\n \"\"\"\n Decode base64 content for a file.\n\n format:\n If 'text', the contents will be decoded as UTF-8.\n If 'base64', do nothing.\n If not specified, try to decode as UTF-8, and fall back 
to base64\n\n Returns a triple of decoded_content, format, and mimetype.\n \"\"\"\n decoders = {\n 'base64': lambda path, bcontent: (bcontent.decode('ascii'), 'base64'),\n 'text': _decode_text_from_base64,\n None: _decode_unknown_from_base64,\n }\n\n try:\n content, real_format = decoders[format](path, bcontent)\n except HTTPError:\n # Pass through HTTPErrors, since we intend for them to bubble all the\n # way back to the API layer.\n raise\n except Exception as e:\n # Anything else should be wrapped in a CorruptedFile, since it likely\n # indicates misconfiguration of encryption.\n raise CorruptedFile(e)\n\n default_mimes = {\n 'text': 'text/plain',\n 'base64': 'application/octet-stream',\n }\n mimetype = mimetypes.guess_type(path)[0] or default_mimes[real_format]\n\n return content, real_format, mimetype"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef prefix_dirs(path):\n _dirname = posixpath.dirname\n path = path.strip('/')\n out = []\n while path != '':\n path = _dirname(path)\n out.append(path)\n return reversed(out)", "response": "Returns an iterable of all prefix directories of path descending from root."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsplits an iterable of models into a list of file paths and a list of directories.", "response": "def _separate_dirs_files(models):\n \"\"\"\n Split an iterable of models into a list of file paths and a list of\n directory paths.\n \"\"\"\n dirs = []\n files = []\n for model in models:\n if model['type'] == 'directory':\n dirs.append(model['path'])\n else:\n files.append(model['path'])\n return dirs, files"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef walk_files(mgr):\n for dir_, subdirs, files in walk(mgr):\n for file_ in files:\n yield file_", "response": "Iterate over all files in the hierarchy of the manager."} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef walk_files_with_content(mgr):\n for _, _, files in walk(mgr):\n for f in files:\n yield mgr.get(f, content=True)", "response": "Iterate over all files in the hierarchy with content"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef reencrypt_single_user(engine, user_id, old_crypto, new_crypto, logger):\n # Use FallbackCrypto so that we're re-entrant if we halt partway through.\n crypto = FallbackCrypto([new_crypto, old_crypto])\n\n reencrypt_user_content(\n engine=engine,\n user_id=user_id,\n old_decrypt_func=crypto.decrypt,\n new_encrypt_func=crypto.encrypt,\n logger=logger,\n )", "response": "Re - encrypt all files and checkpoints for a single user."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef unencrypt_single_user(engine, user_id, old_crypto, logger):\n reencrypt_user_content(\n engine=engine,\n user_id=user_id,\n old_decrypt_func=old_crypto.decrypt,\n new_encrypt_func=lambda s: s,\n logger=logger,\n )", "response": "Unencrypt all files and checkpoints for a single user."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef upgrade(db_url, revision):\n with temp_alembic_ini(ALEMBIC_DIR_LOCATION, db_url) as alembic_ini:\n subprocess.check_call(\n ['alembic', '-c', alembic_ini, 'upgrade', revision]\n )", "response": "Upgrade the given database to revision."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_author_string(self, links=False):\n saved_args = locals()\n saved_args = saved_args['links']\n \"\"\"Returns list of authors as a comma-separated\n string (with 'and' before last author).\"\"\"\n\n def format_author(author):\n if links and author.person.slug:\n return '%s' % (author.person.slug, 
author.person.full_name)\n return author.person.full_name\n\n if links == True or links == False:\n authors = map(format_author, self.authors.all())\n else:\n authors = map(format_author, saved_args)\n\n if not authors:\n return \"\"\n elif len(authors) == 1:\n # If this is the only author, just return author name\n return authors[0]\n\n return \", \".join(authors[0:-1]) + \" and \" + authors[-1]", "response": "Returns a comma - separated string of the authors."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a comma - separated string containing the author type of the user.", "response": "def get_author_type_string(self):\n \"\"\"Returns list of authors as a comma-separated string\n sorted by author type (with 'and' before last author).\"\"\"\n\n authorTypeString = ''\n aStringA = ''\n aStringB = ''\n aStringC = ''\n aStringD = ''\n\n authors = dict((k, list(v)) for k, v in groupby(self.authors.all(), lambda a: a.type))\n for author in authors:\n if author == 'author':\n aStringA += 'Written by ' + self.get_author_string(authors['author'])\n if author == 'photographer':\n aStringB += 'Photos by ' + self.get_author_string(authors['photographer'])\n if author == 'illustrator':\n aStringC += 'Illustrations by ' + self.get_author_string(authors['illustrator'])\n if author == 'videographer':\n aStringD += 'Videos by ' + self.get_author_string(authors['videographer'])\n if aStringA != '':\n authorTypeString += aStringA\n if aStringB != '':\n authorTypeString += ', ' + aStringB\n if aStringC != '':\n authorTypeString += ', ' + aStringC\n if aStringD != '':\n authorTypeString += ', ' + aStringD\n return authorTypeString"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef sanitize_block(self, block):\n\n embed_type = block.get('type', None)\n data = block.get('data', {})\n serializer = self.serializers.get(embed_type, None)\n\n if serializer is None:\n return block\n\n 
block['data'] = serializer.to_internal_value(data)\n\n return block", "response": "Santizes the data for the given block."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nqueuing an instance to be fetched from the database.", "response": "def queue_instance(self, embed_type, data):\n \"\"\"Queue an instance to be fetched from the database.\"\"\"\n\n serializer = self.serializers.get(embed_type, None)\n\n if serializer is None:\n return\n\n instance_id = serializer.get_id(data)\n\n if embed_type not in self.ids:\n self.ids[embed_type] = []\n\n self.ids[embed_type].append(instance_id)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load_instances(self, embed_type, ids):\n\n serializer = self.serializers.get(embed_type, None)\n\n if serializer is None:\n return\n\n self.instances[embed_type] = serializer.fetch(ids)", "response": "Fetch all queued instances of type embed_type save results\n to self. 
instances"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef insert_instance(self, block):\n\n embed_type = block.get('type', None)\n data = block.get('data', {})\n serializer = self.serializers.get(embed_type, None)\n\n if serializer is None:\n return block\n\n try:\n instance_id = serializer.get_id(data)\n instance = self.instances[embed_type][instance_id]\n data[embed_type] = serializer.serialize(instance)\n except:\n data[embed_type] = None\n\n block['data'] = data\n\n return block", "response": "Insert a fetched instance into embed block."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load_data(self):\n\n for embed_type in self.ids.keys():\n self.load_instances(embed_type, self.ids[embed_type])", "response": "Load data in bulk for each embed block."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nperform validation of the widget data", "response": "def validate(self, data):\n \"\"\"Perform validation of the widget data\"\"\"\n\n from dispatch.theme import ThemeManager\n\n errors = {}\n\n if data.get('widget') is not None:\n\n try:\n widget = ThemeManager.Widgets.get(data['widget'])\n except WidgetNotFound as e:\n errors['widget'] = str(e)\n else:\n for field in widget.fields:\n\n field_data = data['data'].get(field.name)\n\n if field_data is not None:\n try:\n field.validate(field_data)\n except InvalidField as e:\n errors[field.name] = str(e)\n elif field.required:\n errors[field.name] = '%s is required' % field.label\n\n if errors:\n raise ValidationError(errors)\n\n return data"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef admin(request):\n context = {\n 'api_url': settings.API_URL,\n 'app_js_bundle': 'manager-%s.js' % dispatch.__version__,\n 'app_css_bundle': 'manager-%s.css' % dispatch.__version__\n }\n \n return 
render_to_response('manager/index.html', context)", "response": "Render HTML entry point for manager app."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_json(self):\n result = {}\n\n for field in self.fields:\n result[field.name] = field.to_json(self.data.get(field.name))\n\n return result", "response": "Return JSON representation of this template"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nhide authenticated_fields if request context is missing or user is not authenticated", "response": "def hide_authenticated_fields(self):\n \"\"\"Hides authenticated_fields if request context is missing or\n user is not authenticated\"\"\"\n authenticated_fields = getattr(self.Meta, 'authenticated_fields', [])\n\n if not self.is_authenticated():\n for field in authenticated_fields:\n self.fields.pop(field)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nexcluding fields that are included in the queryparameters", "response": "def exclude_fields(self):\n \"\"\"Excludes fields that are included in the queryparameters\"\"\"\n request = self.context.get('request')\n if request:\n exclude = request.query_params.get('exclude', None)\n if exclude is None: return\n \n excluded_fields = exclude.split(',')\n for field in excluded_fields:\n self.fields.pop(field)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the latest article with the given primary key.", "response": "def get(self, *args, **kwargs):\n \"\"\"Get the latest article with the given primary key.\"\"\"\n if 'pk' in kwargs:\n kwargs['parent'] = kwargs['pk']\n kwargs['head'] = True\n del kwargs['pk']\n\n \"\"\"If the url requested includes the querystring parameters 'version' and 'preview_id',\n get the article with the specified version and preview_id.\n\n Otherwise, get the published version of the article.\n \"\"\"\n\n if 'request' in 
kwargs:\n\t \trequest = kwargs['request']\n\t \tversion = request.GET.get('version', None)\n\t \tpreview_id = request.GET.get('preview_id', None)\n\n\t \tif (version is not None) and (preview_id is not None):\n\t \t\tkwargs['revision_id'] = version\n\t \t\tkwargs['preview_id'] = preview_id\n\t \t\tdel kwargs['is_published']\n\n\t \tdel kwargs['request']\n\n return super(PublishableManager, self).get(*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_queryset(self):\n\n # Get base queryset from DispatchPublishableMixin\n queryset = self.get_publishable_queryset()\n\n queryset = queryset.order_by('-updated_at')\n\n # Optionally filter by a query parameter\n q = self.request.query_params.get('q')\n\n if q:\n queryset = queryset.filter(title__icontains=q)\n\n return queryset", "response": "Only display unpublished content to authenticated users filter by\n query parameter if present."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\noverride the default get_attribute method to convert None values to False.", "response": "def get_attribute(self, instance):\n \"\"\"Overrides the default get_attribute method to convert None values to False.\"\"\"\n\n attr = super(NullBooleanField, self).get_attribute(instance)\n return True if attr else False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate_widget(widget):\n\n if not has_valid_id(widget):\n raise InvalidWidget(\"%s must contain a valid 'id' attribute\" % widget.__name__)\n\n if not has_valid_name(widget):\n raise InvalidWidget(\"%s must contain a valid 'name' attribute\" % widget.__name__)\n\n if not has_valid_template(widget):\n raise InvalidWidget(\"%s must contain a valid 'template' attribute\" % widget.__name__)\n\n if not hasattr(widget, 'zones') or not widget.zones:\n raise InvalidWidget(\"%s must be compatible 
with at least one zone\" % widget.__name__)", "response": "Checks that the given widget contains the required fields."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef validate_zone(zone):\n\n if not has_valid_id(zone):\n raise InvalidZone(\"%s must contain a valid 'id' attribute\" % zone.__name__)\n\n if not has_valid_name(zone):\n raise InvalidZone(\"%s must contain a valid 'name' attribute\" % zone.__name__)", "response": "Checks that the given zone contains the required fields."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn True if id is a valid UUID False otherwise.", "response": "def is_valid_uuid(id):\n \"\"\"Return True if id is a valid UUID, False otherwise.\"\"\"\n if not isinstance(id, basestring):\n return False\n\n try:\n val = UUID(id, version=4)\n except ValueError:\n return False\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the user s permissions.", "response": "def get_permissions(self):\n \"\"\"Returns the user's permissions.\"\"\"\n\n permissions = ''\n if self.groups.filter(name='Admin').exists() or self.is_superuser:\n permissions = 'admin'\n\n return permissions"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmodify the user s permissions.", "response": "def modify_permissions(self, permissions):\n \"\"\"Modify the user's permissions.\"\"\"\n\n group = Group.objects.get(name='Admin')\n\n if permissions == 'admin':\n self.groups.add(group)\n else:\n self.groups.remove(group)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nraises a ValidationError if data does not match the author format.", "response": "def AuthorValidator(data):\n \"\"\"Raise a ValidationError if data does not match the author format.\"\"\"\n if not isinstance(data, list):\n # Convert single instance to a list\n data = [data]\n\n for 
author in data:\n if 'person' not in author:\n raise ValidationError('An author must contain a person.')\n if 'type' in author and not isinstance(author['type'], basestring):\n # If type is defined, it should be a string\n raise ValidationError('The author type must be a string.')"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsave widget data for this zone.", "response": "def save(self, validated_data):\n \"\"\"Save widget data for this zone.\"\"\"\n\n (zone, created) = ZoneModel.objects.get_or_create(zone_id=self.id)\n\n zone.widget_id = validated_data['widget']\n zone.data = validated_data['data']\n\n # Call widget before-save hook on nested widgets\n for key in list(zone.data.keys()):\n if isinstance(zone.data[key], dict) and ('id' in zone.data[key].keys()) and ('data' in zone.data[key].keys()):\n zone.data[key]['data'] = self.before_save(zone.data[key]['id'], zone.data[key]['data'])\n\n # Call widget before-save hook\n zone.data = self.before_save(zone.widget_id, zone.data)\n\n return zone.save()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning data from each field.", "response": "def get_data(self):\n \"\"\"Returns data from each field.\"\"\"\n result = {}\n\n for field in self.fields:\n result[field.name] = self.data.get(field.name)\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef prepare_data(self):\n result = {}\n\n for field in self.fields:\n data = self.data.get(field.name)\n result[field.name] = field.prepare_data(data)\n\n return result", "response": "Prepare widget data for template."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef render(self, data=None, add_context=None):\n template = loader.get_template(self.template)\n\n if not data:\n data = self.context(self.prepare_data())\n\n if add_context is not None:\n for key, 
value in add_context.iteritems():\n if key in self.accepted_keywords:\n data[key] = value\n\n return template.render(data)", "response": "Renders the widget as HTML."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef content_to_html(content, article_id):\n\n def render_node(html, node, index):\n \"\"\"Renders node as HTML\"\"\"\n if node['type'] == 'paragraph':\n return html + '<p>%s</p>' % node['data']\n else:\n\n if node['type'] == 'ad':\n id = 'div-gpt-ad-1443288719995-' + str(10 + index) + '-' + str(article_id)\n dfp_type = 'Intra_Article_' + str(index + 1)\n size = 'banner'\n if node['data'] == 'mobile':\n size = 'box'\n newString = '
'\n return html + '%s\\n' % newString\n try:\n if node['type'] == 'poll':\n node['type'] = 'widget'\n node['data']['data'] = node['data']\n\n return html + embeds.render(node['type'], node['data'])\n except EmbedException:\n return html\n\n html = ''\n index = 0\n for node in content:\n html = render_node(html, node, index)\n if (node['type'] == 'ad'):\n index += 1\n # return mark_safe(reduce(render_node, content, ''))\n return mark_safe(html)", "response": "Converts a list of content to HTML."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting article content to JSON", "response": "def content_to_json(content):\n \"\"\"Returns article/page content as JSON\"\"\"\n\n def render_node(node):\n \"\"\"Renders node as JSON\"\"\"\n\n if node['type'] == 'paragraph':\n return node\n else:\n return {\n 'type': node['type'],\n 'data': embeds.to_json(node['type'], node['data'])\n }\n\n return map(render_node, content)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the settings for this integration as a dictionary.", "response": "def get_settings(cls, show_hidden=False):\n \"\"\"\n Retrieves the settings for this integration as a dictionary.\n\n Removes all hidden fields if show_hidden=False\n \"\"\"\n settings = Integration.objects.get_settings(cls.ID)\n\n if not show_hidden:\n for field in cls.HIDDEN_FIELDS:\n settings.pop(field, None)\n\n return settings"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef callback(cls, user, query):\n\n # Get settings for this integration\n settings = cls.get_settings(show_hidden=True)\n\n fb = Facebook()\n\n payload = {\n 'client_id': settings['client_id'],\n 'client_secret': settings['client_secret'],\n 'code': query['code'],\n 'redirect_uri': cls.REDIRECT_URI\n }\n\n try:\n\n # Authenticate with Facebook\n fb.get_access_token(payload)\n\n # Fetch pages belonging to authenticated user\n pages = fb.list_pages('me')\n\n 
except FacebookAPIError as e:\n raise IntegrationCallbackError(str(e))\n\n return {\n 'pages': pages\n }", "response": "Receive OAuth callback request from Facebook."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning settings for given integration as a dictionary.", "response": "def get_settings(self, integration_id):\n \"\"\"Return settings for given integration as a dictionary.\"\"\"\n\n try:\n integration = self.get(integration_id=integration_id)\n return json.loads(integration.settings)\n except (self.model.DoesNotExist, ValueError):\n return {}"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update_settings(self, integration_id, settings):\n\n (integration, created) = self.get_or_create(integration_id=integration_id)\n\n try:\n current_settings = json.loads(integration.settings)\n except ValueError:\n current_settings = {}\n\n current_settings.update(settings)\n\n integration.settings = json.dumps(current_settings)\n\n integration.save()", "response": "Updates settings for given integration."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nhandling requests to the user signup page.", "response": "def signup(request, uuid=None):\n \"\"\"Handles requests to the user signup page.\"\"\"\n\n invite = get_object_or_404(Invite.objects.all(), id=uuid)\n\n if invite.expiration_date < timezone.now():\n invite.delete()\n raise Http404('This page does not exist.')\n\n if request.method == 'POST':\n form = SignUpForm(request.POST)\n if form.is_valid():\n user = form.save(commit=False)\n\n user.email = invite.email\n user.person = invite.person\n\n user.save()\n\n if invite.permissions == 'admin':\n group = Group.objects.get(name='Admin')\n user.groups.add(group)\n\n invite.delete()\n\n return redirect('dispatch-admin')\n else:\n return render(\n request,\n 'registration/signup.html',\n {\n 'form': form,\n 'email': invite.email\n }\n )\n\n else:\n 
form = SignUpForm()\n\n return render(\n request,\n 'registration/signup.html',\n {\n 'form': form,\n 'email': invite.email\n }\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the HTML produced from enclosing each item in in a tag of type tagName.", "response": "def maptag(tagname, contents):\n \"\"\"Returns the HTML produced from enclosing each item in\n `contents` in a tag of type `tagname`\"\"\"\n return u''.join(tag(tagname, item) for item in contents)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef zone(zone_id, **kwargs):\n\n try:\n zone = ThemeManager.Zones.get(zone_id)\n except ZoneNotFound:\n return ''\n\n try:\n return zone.widget.render(add_context=kwargs)\n except (WidgetNotFound, AttributeError):\n pass\n\n return ''", "response": "Renders the contents of the zone with given zone_id."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef save(self, revision=True, *args, **kwargs):\n\n if revision:\n # If this is a revision, set it to be the head of the list and increment the revision id\n self.head = True\n self.revision_id += 1\n\n previous_revision = self.get_previous_revision()\n\n if not self.is_parent():\n # If this is a revision, delete the old head of the list.\n type(self).objects \\\n .filter(parent=self.parent, head=True) \\\n .update(head=None)\n\n # Clear the instance id to force Django to save a new instance.\n # Both fields (pk, id) required for this to work -- something to do with model inheritance\n self.pk = None\n self.id = None\n\n # New version is unpublished by default\n self.is_published = None\n\n # Set created_at to current time, but only for first version\n if not self.created_at:\n self.created_at = timezone.now()\n self.updated_at = timezone.now()\n\n if revision:\n self.updated_at = timezone.now()\n\n super(Publishable, self).save(*args, **kwargs)\n\n # Update the parent foreign 
key\n if not self.parent:\n self.parent = self\n super(Publishable, self).save(update_fields=['parent'])\n\n if revision:\n # Set latest version for all articles\n type(self).objects \\\n .filter(parent=self.parent) \\\n .update(latest_version=self.revision_id)\n \n self.latest_version = self.revision_id\n\n return self", "response": "Save this Publishable instance."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef save_featured_image(self, data):\n\n attachment = self.featured_image\n\n if data is None:\n if attachment:\n attachment.delete()\n\n self.featured_image = None\n return\n\n if data['image_id'] is None:\n if attachment:\n attachment.delete()\n\n self.featured_image = None\n return\n\n if not attachment:\n attachment = ImageAttachment()\n\n attachment.image_id = data.get('image_id', attachment.image_id)\n attachment.caption = data.get('caption', None)\n attachment.credit = data.get('credit', None)\n\n instance_type = str(type(self)).lower()\n\n setattr(attachment, instance_type, self)\n\n attachment.save()\n\n self.featured_image = attachment", "response": "Handles saving the featured image."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef save_subsection(self, subsection_id):\n Article.objects.filter(parent_id=self.parent.id).update(subsection_id=subsection_id)", "response": "Save the subsection to the parent article"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the file extension.", "response": "def get_extension(self):\n \"\"\"Returns the file extension.\"\"\"\n ext = os.path.splitext(self.img.name)[1]\n if ext:\n # Remove period from extension\n return ext[1:]\n return ext"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_medium_url(self):\n if self.is_gif():\n return self.get_absolute_url()\n return '%s%s-%s.jpg' 
% (settings.MEDIA_URL, self.get_name(), 'medium')", "response": "Returns the medium size image URL."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef save_thumbnail(self, image, size, name, label, file_type):\n width, height = size\n (imw, imh) = image.size\n\n # If image is larger than thumbnail size, resize image\n if (imw > width) or (imh > height):\n image.thumbnail(size, Img.ANTIALIAS)\n\n # Attach new thumbnail label to image filename\n name = \"%s-%s.jpg\" % (name, label)\n\n # Image.save format takes JPEG not jpg\n if file_type in self.JPG_FORMATS:\n file_type = 'JPEG'\n\n # Write new thumbnail to StringIO object\n image_io = StringIO.StringIO()\n image.save(image_io, format=file_type, quality=75)\n\n # Convert StringIO object to Django File object\n thumb_file = InMemoryUploadedFile(image_io, None, name, 'image/jpeg', image_io.len, None)\n\n # Save the new file to the default storage system\n default_storage.save(name, thumb_file)", "response": "Processes and saves a resized thumbnail version of the image."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef init_app(self, app):\n\n app.config.setdefault('MYSQL_HOST', 'localhost')\n app.config.setdefault('MYSQL_USER', None)\n app.config.setdefault('MYSQL_PASSWORD', None)\n app.config.setdefault('MYSQL_DB', None)\n app.config.setdefault('MYSQL_PORT', 3306)\n app.config.setdefault('MYSQL_UNIX_SOCKET', None)\n app.config.setdefault('MYSQL_CONNECT_TIMEOUT', 10)\n app.config.setdefault('MYSQL_READ_DEFAULT_FILE', None)\n app.config.setdefault('MYSQL_USE_UNICODE', True)\n app.config.setdefault('MYSQL_CHARSET', 'utf8')\n app.config.setdefault('MYSQL_SQL_MODE', None)\n app.config.setdefault('MYSQL_CURSORCLASS', None)\n\n if hasattr(app, 'teardown_appcontext'):\n app.teardown_appcontext(self.teardown)", "response": "Initialize the application for use with this ArcGIS MySQL instance."} 
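The `init_app` snippet above seeds Flask config keys with `dict.setdefault`, so values the user has already set are never overwritten. A minimal, self-contained sketch of that pattern (the function name and sample keys here are illustrative, not from the source):

```python
def apply_defaults(config, defaults):
    """Fill in missing config keys without overwriting existing ones."""
    for key, value in defaults.items():
        # setdefault only assigns when the key is absent
        config.setdefault(key, value)
    return config


# A user-supplied value survives; missing keys get defaults.
cfg = {'MYSQL_HOST': 'db.example.com'}
apply_defaults(cfg, {'MYSQL_HOST': 'localhost', 'MYSQL_PORT': 3306})
print(cfg['MYSQL_HOST'])  # still 'db.example.com'
print(cfg['MYSQL_PORT'])  # 3306
```

This is why `init_app` can be called on an app whose config was loaded beforehand: defaults act as fallbacks, not overrides.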
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef connection(self):\n\n ctx = _app_ctx_stack.top\n if ctx is not None:\n if not hasattr(ctx, 'mysql_db'):\n ctx.mysql_db = self.connect\n return ctx.mysql_db", "response": "Attempts to connect to the MySQL server. Returns None if unsuccessful or None if unsuccessful."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwraps a fileobj in a bandwidth limited stream and returns a BandwidthLimitedStream object.", "response": "def get_bandwith_limited_stream(self, fileobj, transfer_coordinator,\n enabled=True):\n \"\"\"Wraps a fileobj in a bandwidth limited stream wrapper\n\n :type fileobj: file-like obj\n :param fileobj: The file-like obj to wrap\n\n :type transfer_coordinator: s3transfer.futures.TransferCoordinator\n param transfer_coordinator: The coordinator for the general transfer\n that the wrapped stream is a part of\n\n :type enabled: boolean\n :param enabled: Whether bandwidth limiting should be enabled to start\n \"\"\"\n stream = BandwidthLimitedStream(\n fileobj, self._leaky_bucket, transfer_coordinator,\n self._time_utils)\n if not enabled:\n stream.disable_bandwidth_limiting()\n return stream"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef read(self, amount):\n if not self._bandwidth_limiting_enabled:\n return self._fileobj.read(amount)\n\n # We do not want to be calling consume on every read as the read\n # amounts can be small causing the lock of the leaky bucket to\n # introduce noticeable overhead. 
So instead we keep track of\n # how many bytes we have seen and only call consume once we pass a\n # certain threshold.\n self._bytes_seen += amount\n if self._bytes_seen < self._bytes_threshold:\n return self._fileobj.read(amount)\n\n self._consume_through_leaky_bucket()\n return self._fileobj.read(amount)", "response": "Read a specified amount of bytes from the file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef consume(self, amt, request_token):\n with self._lock:\n time_now = self._time_utils.time()\n if self._consumption_scheduler.is_scheduled(request_token):\n return self._release_requested_amt_for_scheduled_request(\n amt, request_token, time_now)\n elif self._projected_to_exceed_max_rate(amt, time_now):\n self._raise_request_exceeded_exception(\n amt, request_token, time_now)\n else:\n return self._release_requested_amt(amt, time_now)", "response": "Consumes an amount of bytes from the queue."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nschedule a wait time to be able to consume an amount of bytes in the request that is used to identify the request.", "response": "def schedule_consumption(self, amt, token, time_to_consume):\n \"\"\"Schedules a wait time to be able to consume an amount\n\n :type amt: int\n :param amt: The amount of bytes scheduled to be consumed\n\n :type token: RequestToken\n :param token: The token associated to the consumption\n request that is used to identify the request.\n\n :type time_to_consume: float\n :param time_to_consume: The desired time it should take for that\n specific request amount to be consumed in regardless of previously\n scheduled consumption requests\n\n :rtype: float\n :returns: The amount of time to wait for the specific request before\n actually consuming the specified amount.\n \"\"\"\n self._total_wait += time_to_consume\n self._tokens_to_scheduled_consumption[token] = {\n 'wait_duration': self._total_wait,\n 
'time_to_consume': time_to_consume,\n }\n return self._total_wait"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef process_scheduled_consumption(self, token):\n scheduled_retry = self._tokens_to_scheduled_consumption.pop(token)\n self._total_wait = max(\n self._total_wait - scheduled_retry['time_to_consume'], 0)", "response": "Processes a scheduled consumption request that has completed\n ."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the projected rate using a provided amount and time.", "response": "def get_projected_rate(self, amt, time_at_consumption):\n \"\"\"Get the projected rate using a provided amount and time\n\n :type amt: int\n :param amt: The proposed amount to consume\n\n :type time_at_consumption: float\n :param time_at_consumption: The proposed time to consume at\n\n :rtype: float\n :returns: The consumption rate if that amt and time were consumed\n \"\"\"\n if self._last_time is None:\n return 0.0\n return self._calculate_exponential_moving_average_rate(\n amt, time_at_consumption)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nrecords the consumption rate based off amount and time point .", "response": "def record_consumption_rate(self, amt, time_at_consumption):\n \"\"\"Record the consumption rate based off amount and time point\n\n :type amt: int\n :param amt: The amount that got consumed\n\n :type time_at_consumption: float\n :param time_at_consumption: The time at which the amount was consumed\n \"\"\"\n if self._last_time is None:\n self._last_time = time_at_consumption\n self._current_rate = 0.0\n return\n self._current_rate = self._calculate_exponential_moving_average_rate(\n amt, time_at_consumption)\n self._last_time = time_at_consumption"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndownloads the contents of an object to a 
file.", "response": "def download_file(self, bucket, key, filename, extra_args=None,\n expected_size=None):\n \"\"\"Downloads the object's contents to a file\n\n :type bucket: str\n :param bucket: The name of the bucket to download from\n\n :type key: str\n :param key: The name of the key to download from\n\n :type filename: str\n :param filename: The name of a file to download to.\n\n :type extra_args: dict\n :param extra_args: Extra arguments that may be passed to the\n client operation\n\n :type expected_size: int\n :param expected_size: The expected size in bytes of the download. If\n provided, the downloader will not call HeadObject to determine the\n object's size and use the provided value instead. The size is\n needed to determine whether to do a multipart download.\n\n :rtype: s3transfer.futures.TransferFuture\n :returns: Transfer future representing the download\n \"\"\"\n self._start_if_needed()\n if extra_args is None:\n extra_args = {}\n self._validate_all_known_args(extra_args)\n transfer_id = self._transfer_monitor.notify_new_transfer()\n download_file_request = DownloadFileRequest(\n transfer_id=transfer_id, bucket=bucket, key=key,\n filename=filename, extra_args=extra_args,\n expected_size=expected_size,\n )\n logger.debug(\n 'Submitting download file request: %s.', download_file_request)\n self._download_request_queue.put(download_file_request)\n call_args = CallArgs(\n bucket=bucket, key=key, filename=filename, extra_args=extra_args,\n expected_size=expected_size)\n future = self._get_transfer_future(transfer_id, call_args)\n return future"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef poll_for_result(self, transfer_id):\n self._transfer_states[transfer_id].wait_till_done()\n exception = self._transfer_states[transfer_id].exception\n if exception:\n raise exception\n return None", "response": "Poll for the result of a transfer."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 
3 function that\ncalculates the range parameter for a single multipart downloads or copies", "response": "def calculate_range_parameter(part_size, part_index, num_parts,\n total_size=None):\n \"\"\"Calculate the range parameter for multipart downloads/copies\n\n :type part_size: int\n :param part_size: The size of the part\n\n :type part_index: int\n :param part_index: The index for which this parts starts. This index starts\n at zero\n\n :type num_parts: int\n :param num_parts: The total number of parts in the transfer\n\n :returns: The value to use for Range parameter on downloads or\n the CopySourceRange parameter for copies\n \"\"\"\n # Used to calculate the Range parameter\n start_range = part_index * part_size\n if part_index == num_parts - 1:\n end_range = ''\n if total_size is not None:\n end_range = str(total_size - 1)\n else:\n end_range = start_range + part_size - 1\n range_param = 'bytes=%s-%s' % (start_range, end_range)\n return range_param"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nretrieve the callbacks from a subscriber.", "response": "def get_callbacks(transfer_future, callback_type):\n \"\"\"Retrieves callbacks from a subscriber\n\n :type transfer_future: s3transfer.futures.TransferFuture\n :param transfer_future: The transfer future the subscriber is associated\n to.\n\n :type callback_type: str\n :param callback_type: The type of callback to retrieve from the subscriber.\n Valid types include:\n * 'queued'\n * 'progress'\n * 'done'\n\n :returns: A list of callbacks for the type specified. 
All callbacks are\n preinjected with the transfer future.\n \"\"\"\n callbacks = []\n for subscriber in transfer_future.meta.call_args.subscribers:\n callback_name = 'on_' + callback_type\n if hasattr(subscriber, callback_name):\n callbacks.append(\n functools.partial(\n getattr(subscriber, callback_name),\n future=transfer_future\n )\n )\n return callbacks"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_filtered_dict(original_dict, whitelisted_keys):\n filtered_dict = {}\n for key, value in original_dict.items():\n if key in whitelisted_keys:\n filtered_dict[key] = value\n return filtered_dict", "response": "Returns a dictionary that only contains keys that are included in the whitelist\nCTYPE."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfinalizes the counter Once finalized the callback is invoked once the count reaches zero.", "response": "def finalize(self):\n \"\"\"Finalize the counter\n\n Once finalized, the counter never be incremented and the callback\n can be invoked once the count reaches zero\n \"\"\"\n with self._lock:\n self._is_finalized = True\n if self._count == 0:\n self._callback()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck to see if a file is a special UNIX file.", "response": "def is_special_file(cls, filename):\n \"\"\"Checks to see if a file is a special UNIX file.\n\n It checks if the file is a character special device, block special\n device, FIFO, or socket.\n\n :param filename: Name of the file\n\n :returns: True if the file is a special file. 
False, if is not.\n \"\"\"\n # If it does not exist, it must be a new file so it cannot be\n # a special file.\n if not os.path.exists(filename):\n return False\n mode = os.stat(filename).st_mode\n # Character special device.\n if stat.S_ISCHR(mode):\n return True\n # Block special device\n if stat.S_ISBLK(mode):\n return True\n # Named pipe / FIFO\n if stat.S_ISFIFO(mode):\n return True\n # Socket.\n if stat.S_ISSOCK(mode):\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef from_filename(cls, filename, start_byte, chunk_size, callbacks=None,\n enable_callbacks=True):\n \"\"\"Convenience factory function to create from a filename.\n\n :type start_byte: int\n :param start_byte: The first byte from which to start reading.\n\n :type chunk_size: int\n :param chunk_size: The max chunk size to read. Trying to read\n pass the end of the chunk size will behave like you've\n reached the end of the file.\n\n :type full_file_size: int\n :param full_file_size: The entire content length associated\n with ``fileobj``.\n\n :type callbacks: function(amount_read)\n :param callbacks: Called whenever data is read from this object.\n\n :type enable_callbacks: bool\n :param enable_callbacks: Indicate whether to invoke callback\n during read() calls.\n\n :rtype: ``ReadFileChunk``\n :return: A new instance of ``ReadFileChunk``\n\n \"\"\"\n f = open(filename, 'rb')\n f.seek(start_byte)\n file_size = os.fstat(f.fileno()).st_size\n return cls(f, chunk_size, file_size, callbacks, enable_callbacks)", "response": "Convenience constructor to create a new read file chunk from a filename."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef acquire(self, tag, blocking=True):\n logger.debug(\"Acquiring %s\", tag)\n if not self._semaphore.acquire(blocking):\n raise NoResourcesAvailable(\"Cannot acquire tag '%s'\" % tag)", "response": "Acquire the semaphore with the specified tag."} 
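The `acquire` snippet above raises `NoResourcesAvailable` when a non-blocking acquire fails, instead of returning `False` like a plain semaphore. A minimal self-contained sketch of such a wrapper (the class name and constructor are assumptions; only the exception name and `acquire` signature appear in the source):

```python
import threading


class NoResourcesAvailable(Exception):
    pass


class TaggedSemaphore:
    """Hypothetical wrapper: a counting semaphore whose acquire/release
    calls carry a tag, useful for logging which transfer holds a slot."""

    def __init__(self, count):
        self._semaphore = threading.Semaphore(count)

    def acquire(self, tag, blocking=True):
        # Mirror the snippet above: raise instead of returning False
        if not self._semaphore.acquire(blocking):
            raise NoResourcesAvailable("Cannot acquire tag '%s'" % tag)

    def release(self, tag):
        self._semaphore.release()


sem = TaggedSemaphore(1)
sem.acquire('transfer-1')               # succeeds, takes the only slot
try:
    sem.acquire('transfer-2', blocking=False)
except NoResourcesAvailable:
    pass                                # no slots left, so this raises
sem.release('transfer-1')
```

Raising on failure keeps call sites simple: callers that pass `blocking=False` don't need to check a boolean return value.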
{"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrelease the semaphore with the given tag and token.", "response": "def release(self, tag, acquire_token):\n \"\"\"Release the semaphore\n\n :param tag: A tag identifying what is releasing the semaphore\n :param acquire_token: The token returned from when the semaphore was\n acquired. Note that this is not really needed to directly use this\n class but is needed for API compatibility with the\n SlidingWindowSemaphore implementation.\n \"\"\"\n logger.debug(\"Releasing acquire %s/%s\" % (tag, acquire_token))\n self._semaphore.release()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef adjust_chunksize(self, current_chunksize, file_size=None):\n chunksize = current_chunksize\n if file_size is not None:\n chunksize = self._adjust_for_max_parts(chunksize, file_size)\n return self._adjust_for_chunksize_limits(chunksize)", "response": "Adjust the chunksize to the maximum number of parts that can be uploaded."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nqueue a file IO write task for submission to the IO executor.", "response": "def queue_file_io_task(self, fileobj, data, offset):\n \"\"\"Queue IO write for submission to the IO executor.\n\n This method accepts an IO executor and information about the\n downloaded data, and handles submitting this to the IO executor.\n\n This method may defer submission to the IO executor if necessary.\n\n \"\"\"\n self._transfer_coordinator.submit(\n self._io_executor,\n self.get_io_write_task(fileobj, data, offset)\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_io_write_task(self, fileobj, data, offset):\n return IOWriteTask(\n self._transfer_coordinator,\n main_kwargs={\n 'fileobj': fileobj,\n 'data': data,\n 'offset': offset,\n }\n )", "response": "Returns an IOWriteTask object that 
can be submitted to the IO executor for the requested set of data\n ."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a class for managing a specific type of download for a given file.", "response": "def _get_download_output_manager_cls(self, transfer_future, osutil):\n \"\"\"Retrieves a class for managing output for a download\n\n :type transfer_future: s3transfer.futures.TransferFuture\n :param transfer_future: The transfer future for the request\n\n :type osutil: s3transfer.utils.OSUtils\n :param osutil: The os utility associated to the transfer\n\n :rtype: class of DownloadOutputManager\n :returns: The appropriate class to use for managing a specific type of\n input for downloads.\n \"\"\"\n download_manager_resolver_chain = [\n DownloadSpecialFilenameOutputManager,\n DownloadFilenameOutputManager,\n DownloadSeekableOutputManager,\n DownloadNonSeekableOutputManager,\n ]\n\n fileobj = transfer_future.meta.call_args.fileobj\n for download_manager_cls in download_manager_resolver_chain:\n if download_manager_cls.is_compatible(fileobj, osutil):\n return download_manager_cls\n raise RuntimeError(\n 'Output %s of type: %s is not supported.' 
% (\n fileobj, type(fileobj)))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _submit(self, client, config, osutil, request_executor, io_executor,\n transfer_future, bandwidth_limiter=None):\n \"\"\"\n :param client: The client associated with the transfer manager\n\n :type config: s3transfer.manager.TransferConfig\n :param config: The transfer config associated with the transfer\n manager\n\n :type osutil: s3transfer.utils.OSUtil\n :param osutil: The os utility associated to the transfer manager\n\n :type request_executor: s3transfer.futures.BoundedExecutor\n :param request_executor: The request executor associated with the\n transfer manager\n\n :type io_executor: s3transfer.futures.BoundedExecutor\n :param io_executor: The io executor associated with the\n transfer manager\n\n :type transfer_future: s3transfer.futures.TransferFuture\n :param transfer_future: The transfer future associated with the\n transfer request that tasks are being submitted for\n\n :type bandwidth_limiter: s3transfer.bandwidth.BandwidthLimiter\n :param bandwidth_limiter: The bandwidth limiter to use when\n downloading streams\n \"\"\"\n if transfer_future.meta.size is None:\n # If a size was not provided figure out the size for the\n # user.\n response = client.head_object(\n Bucket=transfer_future.meta.call_args.bucket,\n Key=transfer_future.meta.call_args.key,\n **transfer_future.meta.call_args.extra_args\n )\n transfer_future.meta.provide_transfer_size(\n response['ContentLength'])\n\n download_output_manager = self._get_download_output_manager_cls(\n transfer_future, osutil)(osutil, self._transfer_coordinator,\n io_executor)\n\n # If it is greater than threshold do a ranged download, otherwise\n # do a regular GetObject download.\n if transfer_future.meta.size < config.multipart_threshold:\n self._submit_download_request(\n client, config, osutil, request_executor, io_executor,\n download_output_manager, 
transfer_future, bandwidth_limiter)\n else:\n self._submit_ranged_download_request(\n client, config, osutil, request_executor, io_executor,\n download_output_manager, transfer_future, bandwidth_limiter)", "response": "Submits the download request, using a single GetObject download when the transfer size is below the multipart threshold and a ranged (multipart) download otherwise."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndownloads an object and places content into io queue.", "response": "def _main(self, client, bucket, key, fileobj, extra_args, callbacks,\n max_attempts, download_output_manager, io_chunksize,\n start_index=0, bandwidth_limiter=None):\n \"\"\"Downloads an object and places content into io queue\n\n :param client: The client to use when calling GetObject\n :param bucket: The bucket to download from\n :param key: The key to download from\n :param fileobj: The file handle to write content to\n :param extra_args: Any extra arguments to include in GetObject request\n :param callbacks: List of progress callbacks to invoke on download\n :param max_attempts: The number of retries to do when downloading\n :param download_output_manager: The download output manager associated\n with the current download.\n :param io_chunksize: The size of each io chunk to read from the\n download stream and queue in the io queue.\n :param start_index: The location in the file to start writing the\n content of the key to.\n :param bandwidth_limiter: The bandwidth limiter to use when throttling\n the downloading of data in streams.\n \"\"\"\n last_exception = None\n for i in range(max_attempts):\n try:\n response = client.get_object(\n Bucket=bucket, Key=key, **extra_args)\n streaming_body = StreamReaderProgress(\n response['Body'], callbacks)\n if bandwidth_limiter:\n streaming_body = \\\n bandwidth_limiter.get_bandwith_limited_stream(\n streaming_body, self._transfer_coordinator)\n\n current_index = start_index\n chunks = DownloadChunkIterator(streaming_body, io_chunksize)\n for chunk in chunks:\n # If the transfer is done because of a cancellation\n # or 
error somewhere else, stop trying to submit more\n # data to be written and break out of the download.\n if not self._transfer_coordinator.done():\n self._handle_io(\n download_output_manager, fileobj, chunk,\n current_index\n )\n current_index += len(chunk)\n else:\n return\n return\n except S3_RETRYABLE_DOWNLOAD_ERRORS as e:\n logger.debug(\"Retrying exception caught (%s), \"\n \"retrying request, (attempt %s / %s)\", e, i,\n max_attempts, exc_info=True)\n last_exception = e\n # Also invoke the progress callbacks to indicate that we\n # are trying to download the stream again and all progress\n # for this GetObject has been lost.\n invoke_progress_callbacks(\n callbacks, start_index - current_index)\n continue\n raise RetriesExceededError(last_exception)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _main(self, fileobj, data, offset):\n fileobj.seek(offset)\n fileobj.write(data)", "response": "Seeks to the given offset in the file object and writes the data there."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrequest any available writes given new incoming data.", "response": "def request_writes(self, offset, data):\n \"\"\"Request any available writes given new incoming data.\n\n You call this method by providing new data along with the\n offset associated with the data. If that new data unlocks\n any contiguous writes that can now be submitted, this\n method will return all applicable writes.\n\n This is done with 1 method call so you don't have to\n make two method calls (put(), get()) which acquires a lock\n each method call.\n\n \"\"\"\n if offset < self._next_offset:\n # This is a request for a write that we've already\n # seen. 
This can happen in the event of a retry\n # where if we retry at offset N/2, we'll requeue\n # offsets 0-N/2 again.\n return []\n writes = []\n if offset in self._pending_offsets:\n # We've already queued this offset so this request is\n # a duplicate. In this case we should ignore\n # this request and prefer what's already queued.\n return []\n heapq.heappush(self._writes, (offset, data))\n self._pending_offsets.add(offset)\n while self._writes and self._writes[0][0] == self._next_offset:\n next_write = heapq.heappop(self._writes)\n writes.append({'offset': next_write[0], 'data': next_write[1]})\n self._pending_offsets.remove(next_write[0])\n self._next_offset += len(next_write[1])\n return writes"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef upload(self, fileobj, bucket, key, extra_args=None, subscribers=None):\n if extra_args is None:\n extra_args = {}\n if subscribers is None:\n subscribers = []\n self._validate_all_known_args(extra_args, self.ALLOWED_UPLOAD_ARGS)\n call_args = CallArgs(\n fileobj=fileobj, bucket=bucket, key=key, extra_args=extra_args,\n subscribers=subscribers\n )\n extra_main_kwargs = {}\n if self._bandwidth_limiter:\n extra_main_kwargs['bandwidth_limiter'] = self._bandwidth_limiter\n return self._submit_transfer(\n call_args, UploadSubmissionTask, extra_main_kwargs)", "response": "Uploads a file to S3."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef download(self, bucket, key, fileobj, extra_args=None,\n subscribers=None):\n \"\"\"Downloads a file from S3\n\n :type bucket: str\n :param bucket: The name of the bucket to download from\n\n :type key: str\n :param key: The name of the key to download from\n\n :type fileobj: str or seekable file-like object\n :param fileobj: The name of a file to download or a seekable file-like\n object to download. 
It is recommended to use a filename because\n file-like objects may result in higher memory usage.\n\n :type extra_args: dict\n :param extra_args: Extra arguments that may be passed to the\n client operation\n\n :type subscribers: list(s3transfer.subscribers.BaseSubscriber)\n :param subscribers: The list of subscribers to be invoked in the\n order provided based on the event emit during the process of\n the transfer request.\n\n :rtype: s3transfer.futures.TransferFuture\n :returns: Transfer future representing the download\n \"\"\"\n if extra_args is None:\n extra_args = {}\n if subscribers is None:\n subscribers = []\n self._validate_all_known_args(extra_args, self.ALLOWED_DOWNLOAD_ARGS)\n call_args = CallArgs(\n bucket=bucket, key=key, fileobj=fileobj, extra_args=extra_args,\n subscribers=subscribers\n )\n extra_main_kwargs = {'io_executor': self._io_executor}\n if self._bandwidth_limiter:\n extra_main_kwargs['bandwidth_limiter'] = self._bandwidth_limiter\n return self._submit_transfer(\n call_args, DownloadSubmissionTask, extra_main_kwargs)", "response": "Downloads a file from S3 and returns a TransferFuture representing the download."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef copy(self, copy_source, bucket, key, extra_args=None,\n subscribers=None, source_client=None):\n \"\"\"Copies a file in S3\n\n :type copy_source: dict\n :param copy_source: The name of the source bucket, key name of the\n source object, and optional version ID of the source object. The\n dictionary format is:\n ``{'Bucket': 'bucket', 'Key': 'key', 'VersionId': 'id'}``. 
Note\n that the ``VersionId`` key is optional and may be omitted.\n\n :type bucket: str\n :param bucket: The name of the bucket to copy to\n\n :type key: str\n :param key: The name of the key to copy to\n\n :type extra_args: dict\n :param extra_args: Extra arguments that may be passed to the\n client operation\n\n :type subscribers: a list of subscribers\n :param subscribers: The list of subscribers to be invoked in the\n order provided based on the event emit during the process of\n the transfer request.\n\n :type source_client: botocore or boto3 Client\n :param source_client: The client to be used for operation that\n may happen at the source object. For example, this client is\n used for the head_object that determines the size of the copy.\n If no client is provided, the transfer manager's client is used\n as the client for the source object.\n\n :rtype: s3transfer.futures.TransferFuture\n :returns: Transfer future representing the copy\n \"\"\"\n if extra_args is None:\n extra_args = {}\n if subscribers is None:\n subscribers = []\n if source_client is None:\n source_client = self._client\n self._validate_all_known_args(extra_args, self.ALLOWED_COPY_ARGS)\n call_args = CallArgs(\n copy_source=copy_source, bucket=bucket, key=key,\n extra_args=extra_args, subscribers=subscribers,\n source_client=source_client\n )\n return self._submit_transfer(call_args, CopySubmissionTask)", "response": "Copies a file in S3 to a new object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndeletes an object in a bucket.", "response": "def delete(self, bucket, key, extra_args=None, subscribers=None):\n \"\"\"Delete an S3 object.\n\n :type bucket: str\n :param bucket: The name of the bucket.\n\n :type key: str\n :param key: The name of the S3 object to delete.\n\n :type extra_args: dict\n :param extra_args: Extra arguments that may be passed to the\n DeleteObject call.\n\n :type subscribers: list\n :param subscribers: A list of 
subscribers to be invoked during the\n process of the transfer request. Note that the ``on_progress``\n callback is not invoked during object deletion.\n\n :rtype: s3transfer.futures.TransferFuture\n :return: Transfer future representing the deletion.\n\n \"\"\"\n if extra_args is None:\n extra_args = {}\n if subscribers is None:\n subscribers = []\n self._validate_all_known_args(extra_args, self.ALLOWED_DELETE_ARGS)\n call_args = CallArgs(\n bucket=bucket, key=key, extra_args=extra_args,\n subscribers=subscribers\n )\n return self._submit_transfer(call_args, DeleteSubmissionTask)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef shutdown(self, cancel=False, cancel_msg=''):\n self._shutdown(cancel, cancel, cancel_msg)", "response": "Shuts down the TransferManager."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncancels all inprogress transfers by calling cancel on each transfer coordinator.", "response": "def cancel(self, msg='', exc_type=CancelledError):\n \"\"\"Cancels all inprogress transfers\n\n This cancels the inprogress transfers by calling cancel() on all\n tracked transfer coordinators.\n\n :param msg: The message to pass on to each transfer coordinator that\n gets cancelled.\n\n :param exc_type: The type of exception to set for the cancellation\n \"\"\"\n for transfer_coordinator in self.tracked_transfer_coordinators:\n transfer_coordinator.cancel(msg, exc_type)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nwaiting until there are no inprogress transfers and raise the exception if any exception is raised.", "response": "def wait(self):\n \"\"\"Wait until there are no more inprogress transfers\n\n This will not stop when failures are encountered and not propagate any\n of these errors from failed transfers, but it can be interrupted with\n a KeyboardInterrupt.\n \"\"\"\n try:\n transfer_coordinator = 
None\n for transfer_coordinator in self.tracked_transfer_coordinators:\n transfer_coordinator.result()\n except KeyboardInterrupt:\n logger.debug('Received KeyboardInterrupt in wait()')\n # If Keyboard interrupt is raised while waiting for\n # the result, then exit out of the wait and raise the\n # exception\n if transfer_coordinator:\n logger.debug(\n 'On KeyboardInterrupt was waiting for %s',\n transfer_coordinator)\n raise\n except Exception:\n # A general exception could have been thrown because\n # of result(). We just want to ignore this and continue\n # because we at least know that the transfer coordinator\n # has completed.\n pass"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nread a specific amount of data from a file-like object and return it.", "response": "def _read(self, fileobj, amount, truncate=True):\n \"\"\"\n Reads a specific amount of data from a stream and returns it. If there\n is any data in initial_data, that will be popped out first.\n\n :type fileobj: A file-like object that implements read\n :param fileobj: The stream to read from.\n\n :type amount: int\n :param amount: The number of bytes to read from the stream.\n\n :type truncate: bool\n :param truncate: Whether or not to truncate initial_data after\n reading from it.\n\n :return: Generator which generates part bodies from the initial data.\n \"\"\"\n # If the initial data is empty, we simply read from the fileobj\n if len(self._initial_data) == 0:\n return fileobj.read(amount)\n\n # If the requested number of bytes is less than the amount of\n # initial data, pull entirely from initial data.\n if amount <= len(self._initial_data):\n data = self._initial_data[:amount]\n # Truncate initial data so we don't hang onto the data longer\n # than we need.\n if truncate:\n self._initial_data = self._initial_data[amount:]\n return data\n\n # At this point there is some initial data left, but not enough to\n # satisfy the number of bytes requested. 
Pull out the remaining\n # initial data and read the rest from the fileobj.\n amount_to_read = amount - len(self._initial_data)\n data = self._initial_data + fileobj.read(amount_to_read)\n\n # Zero out initial data so we don't hang onto the data any more.\n if truncate:\n self._initial_data = b''\n return data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwrapping the data with the interrupt reader and the file chunk reader.", "response": "def _wrap_data(self, data, callbacks, close_callbacks):\n \"\"\"\n Wraps data with the interrupt reader and the file chunk reader.\n\n :type data: bytes\n :param data: The data to wrap.\n\n :type callbacks: list\n :param callbacks: The callbacks associated with the transfer future.\n\n :type close_callbacks: list\n :param close_callbacks: The callbacks to be called when closing the\n wrapper for the data.\n\n :return: Fully wrapped data.\n \"\"\"\n fileobj = self._wrap_fileobj(six.BytesIO(data))\n return self._osutil.open_file_chunk_reader_from_fileobj(\n fileobj=fileobj, chunk_size=len(data), full_file_size=len(data),\n callbacks=callbacks, close_callbacks=close_callbacks)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_upload_input_manager_cls(self, transfer_future):\n upload_manager_resolver_chain = [\n UploadFilenameInputManager,\n UploadSeekableInputManager,\n UploadNonSeekableInputManager\n ]\n\n fileobj = transfer_future.meta.call_args.fileobj\n for upload_manager_cls in upload_manager_resolver_chain:\n if upload_manager_cls.is_compatible(fileobj):\n return upload_manager_cls\n raise RuntimeError(\n 'Input %s of type: %s is not supported.' 
% (\n fileobj, type(fileobj)))", "response": "Returns the class for managing a specific type of file for an upload based on file object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _submit(self, client, config, osutil, request_executor,\n transfer_future, bandwidth_limiter=None):\n \"\"\"\n :param client: The client associated with the transfer manager\n\n :type config: s3transfer.manager.TransferConfig\n :param config: The transfer config associated with the transfer\n manager\n\n :type osutil: s3transfer.utils.OSUtil\n :param osutil: The os utility associated to the transfer manager\n\n :type request_executor: s3transfer.futures.BoundedExecutor\n :param request_executor: The request executor associated with the\n transfer manager\n\n :type transfer_future: s3transfer.futures.TransferFuture\n :param transfer_future: The transfer future associated with the\n transfer request that tasks are being submitted for\n \"\"\"\n upload_input_manager = self._get_upload_input_manager_cls(\n transfer_future)(\n osutil, self._transfer_coordinator, bandwidth_limiter)\n\n # Determine the size if it was not provided\n if transfer_future.meta.size is None:\n upload_input_manager.provide_transfer_size(transfer_future)\n\n # Do a multipart upload if needed, otherwise do a regular put object.\n if not upload_input_manager.requires_multipart_upload(\n transfer_future, config):\n self._submit_upload_request(\n client, config, osutil, request_executor, transfer_future,\n upload_input_manager)\n else:\n self._submit_multipart_request(\n client, config, osutil, request_executor, transfer_future,\n upload_input_manager)", "response": "Submits the upload request, performing a multipart upload when the input requires it and a single PutObject otherwise."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _main(self, client, fileobj, bucket, key, upload_id, part_number,\n extra_args):\n \"\"\"\n :param client: The client to use when calling PutObject\n 
:param fileobj: The file to upload.\n :param bucket: The name of the bucket to upload to\n :param key: The name of the key to upload to\n :param upload_id: The id of the upload\n :param part_number: The number representing the part of the multipart\n upload\n :param extra_args: A dictionary of any extra arguments that may be\n used in the upload.\n\n :rtype: dict\n :returns: A dictionary representing a part::\n\n {'Etag': etag_value, 'PartNumber': part_number}\n\n This value can be appended to a list to be used to complete\n the multipart upload.\n \"\"\"\n with fileobj as body:\n response = client.upload_part(\n Bucket=bucket, Key=key,\n UploadId=upload_id, PartNumber=part_number,\n Body=body, **extra_args)\n etag = response['ETag']\n return {'ETag': etag, 'PartNumber': part_number}", "response": "Uploads a single part of a multipart upload by calling UploadPart and returns a dict containing the part's ETag and part number."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsets the exception on the future.", "response": "def set_exception(self, exception):\n \"\"\"Sets the exception on the future.\"\"\"\n if not self.done():\n raise TransferNotDoneError(\n 'set_exception can only be called once the transfer is '\n 'complete.')\n self._coordinator.set_exception(exception, override=True)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef set_result(self, result):\n with self._lock:\n self._exception = None\n self._result = result\n self._status = 'success'", "response": "Set a result for the TransferFuture."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_exception(self, exception, override=False):\n with self._lock:\n if not self.done() or override:\n self._exception = exception\n self._status = 'failed'", "response": "Set the exception that caused the transfer to fail."} {"SOURCE": 
"codesearchnet", "instruction": "Implement a function in Python 3 to\nwait until the TransferFuture is done and returns the result.", "response": "def result(self):\n \"\"\"Waits until TransferFuture is done and returns the result\n\n If the TransferFuture succeeded, it will return the result. If the\n TransferFuture failed, it will raise the exception associated to the\n failure.\n \"\"\"\n # Doing a wait() with no timeout cannot be interrupted in python2 but\n # can be interrupted in python3 so we just wait with the largest\n # possible value integer value, which is on the scale of billions of\n # years...\n self._done_event.wait(MAXINT)\n\n # Once done waiting, raise an exception if present or return the\n # final result.\n if self._exception:\n raise self._exception\n return self._result"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncancel the TransferFuture with the given message.", "response": "def cancel(self, msg='', exc_type=CancelledError):\n \"\"\"Cancels the TransferFuture\n\n :param msg: The message to attach to the cancellation\n :param exc_type: The type of exception to set for the cancellation\n \"\"\"\n with self._lock:\n if not self.done():\n should_announce_done = False\n logger.debug('%s cancel(%s) called', self, msg)\n self._exception = exc_type(msg)\n if self._status == 'not-started':\n should_announce_done = True\n self._status = 'cancelled'\n if should_announce_done:\n self.announce_done()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef submit(self, executor, task, tag=None):\n logger.debug(\n \"Submitting task %s to executor %s for transfer request: %s.\" % (\n task, executor, self.transfer_id)\n )\n future = executor.submit(task, tag=tag)\n # Add this created future to the list of associated future just\n # in case it is needed during cleanups.\n self.add_associated_future(future)\n future.add_done_callback(\n 
FunctionContainer(self.remove_associated_future, future))\n return future", "response": "Submits a task to an executor and returns a future representing the submitted task."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_done_callback(self, function, *args, **kwargs):\n with self._done_callbacks_lock:\n self._done_callbacks.append(\n FunctionContainer(function, *args, **kwargs)\n )", "response": "Add a done callback to be invoked when transfer is done"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd a callback to call upon failure", "response": "def add_failure_cleanup(self, function, *args, **kwargs):\n \"\"\"Adds a callback to call upon failure\"\"\"\n with self._failure_cleanups_lock:\n self._failure_cleanups.append(\n FunctionContainer(function, *args, **kwargs))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef announce_done(self):\n if self.status != 'success':\n self._run_failure_cleanups()\n self._done_event.set()\n self._run_done_callbacks()", "response": "Announce that the transfer is done running and run associated callbacks."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef submit(self, task, tag=None, block=True):\n semaphore = self._semaphore\n # If a tag was provided, use the semaphore associated to that\n # tag.\n if tag:\n semaphore = self._tag_semaphores[tag]\n\n # Call acquire on the semaphore.\n acquire_token = semaphore.acquire(task.transfer_id, block)\n # Create a callback to invoke when task is done in order to call\n # release on the semaphore.\n release_callback = FunctionContainer(\n semaphore.release, task.transfer_id, acquire_token)\n # Submit the task to the underlying executor.\n future = ExecutorFuture(self._executor.submit(task))\n # Add the Semaphore.release() callback to the future such that\n # it is invoked once 
the future completes.\n future.add_done_callback(release_callback)\n return future", "response": "Submits a task to the underlying executor, gated by a semaphore (per tag if one is provided) that is released once the task completes."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_done_callback(self, fn):\n # The done callback for concurrent.futures.Future will always pass\n # the future in as the only argument. So we need to create the\n # proper signature wrapper that will invoke the callback provided.\n def done_callback(future_passed_to_callback):\n return fn()\n self._future.add_done_callback(done_callback)", "response": "Adds a callback to be completed once the future is done."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsubmits a delete object task for the given node.", "response": "def _submit(self, client, request_executor, transfer_future, **kwargs):\n \"\"\"\n :param client: The client associated with the transfer manager\n\n :type config: s3transfer.manager.TransferConfig\n :param config: The transfer config associated with the transfer\n manager\n\n :type osutil: s3transfer.utils.OSUtil\n :param osutil: The os utility associated to the transfer manager\n\n :type request_executor: s3transfer.futures.BoundedExecutor\n :param request_executor: The request executor associated with the\n transfer manager\n\n :type transfer_future: s3transfer.futures.TransferFuture\n :param transfer_future: The transfer future associated with the\n transfer request that tasks are being submitted for\n \"\"\"\n call_args = transfer_future.meta.call_args\n\n self._transfer_coordinator.submit(\n request_executor,\n DeleteObjectTask(\n transfer_coordinator=self._transfer_coordinator,\n main_kwargs={\n 'client': client,\n 'bucket': call_args.bucket,\n 'key': call_args.key,\n 'extra_args': call_args.extra_args,\n },\n is_final=True\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsubmits the copy request for an object.", 
"response": "def _submit(self, client, config, osutil, request_executor,\n transfer_future):\n \"\"\"\n :param client: The client associated with the transfer manager\n\n :type config: s3transfer.manager.TransferConfig\n :param config: The transfer config associated with the transfer\n manager\n\n :type osutil: s3transfer.utils.OSUtil\n :param osutil: The os utility associated to the transfer manager\n\n :type request_executor: s3transfer.futures.BoundedExecutor\n :param request_executor: The request executor associated with the\n transfer manager\n\n :type transfer_future: s3transfer.futures.TransferFuture\n :param transfer_future: The transfer future associated with the\n transfer request that tasks are being submitted for\n \"\"\"\n # Determine the size if it was not provided\n if transfer_future.meta.size is None:\n # If a size was not provided figure out the size for the\n # user. Note that we will only use the client provided to\n # the TransferManager. If the object is outside of the region\n # of the client, they may have to provide the file size themselves\n # with a completely new client.\n call_args = transfer_future.meta.call_args\n head_object_request = \\\n self._get_head_object_request_from_copy_source(\n call_args.copy_source)\n extra_args = call_args.extra_args\n\n # Map any values that may be used in the head object that is\n # used in the copy object\n for param, value in extra_args.items():\n if param in self.EXTRA_ARGS_TO_HEAD_ARGS_MAPPING:\n head_object_request[\n self.EXTRA_ARGS_TO_HEAD_ARGS_MAPPING[param]] = value\n\n response = call_args.source_client.head_object(\n **head_object_request)\n transfer_future.meta.provide_transfer_size(\n response['ContentLength'])\n\n # If it is greater than threshold do a multipart copy, otherwise\n # do a regular copy object.\n if transfer_future.meta.size < config.multipart_threshold:\n self._submit_copy_request(\n client, config, osutil, request_executor, transfer_future)\n else:\n 
self._submit_multipart_request(\n client, config, osutil, request_executor, transfer_future)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _main(self, client, copy_source, bucket, key, upload_id, part_number,\n extra_args, callbacks, size):\n \"\"\"\n :param client: The client to use when calling PutObject\n :param copy_source: The CopySource parameter to use\n :param bucket: The name of the bucket to upload to\n :param key: The name of the key to upload to\n :param upload_id: The id of the upload\n :param part_number: The number representing the part of the multipart\n upload\n :param extra_args: A dictionary of any extra arguments that may be\n used in the upload.\n :param callbacks: List of callbacks to call after copy part\n :param size: The size of the transfer. This value is passed into\n the callbacks\n\n :rtype: dict\n :returns: A dictionary representing a part::\n\n {'Etag': etag_value, 'PartNumber': part_number}\n\n This value can be appended to a list to be used to complete\n the multipart upload.\n \"\"\"\n response = client.upload_part_copy(\n CopySource=copy_source, Bucket=bucket, Key=key,\n UploadId=upload_id, PartNumber=part_number, **extra_args)\n for callback in callbacks:\n callback(bytes_transferred=size)\n etag = response['CopyPartResult']['ETag']\n return {'ETag': etag, 'PartNumber': part_number}", "response": "This method is used to upload a new part of a multipart object. 
It calls the client's upload_part_copy method, invokes the progress callbacks with the transferred size, and returns the part's ETag and part number."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef from_filename(cls, filename, start_byte, chunk_size, callback=None,\n enable_callback=True):\n \"\"\"Convenience factory function to create from a filename.\n\n :type start_byte: int\n :param start_byte: The first byte from which to start reading.\n\n :type chunk_size: int\n :param chunk_size: The max chunk size to read. Trying to read\n past the end of the chunk size will behave like you've\n reached the end of the file.\n\n :type full_file_size: int\n :param full_file_size: The entire content length associated\n with ``fileobj``.\n\n :type callback: function(amount_read)\n :param callback: Called whenever data is read from this object.\n\n :type enable_callback: bool\n :param enable_callback: Indicate whether to invoke callback\n during read() calls.\n\n :rtype: ``ReadFileChunk``\n :return: A new instance of ``ReadFileChunk``\n\n \"\"\"\n f = open(filename, 'rb')\n file_size = os.fstat(f.fileno()).st_size\n return cls(f, start_byte, chunk_size, file_size, callback,\n enable_callback)", "response": "Convenience constructor to create a new read file object from a filename."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef upload_file(self, filename, bucket, key,\n callback=None, extra_args=None):\n \"\"\"Upload a file to an S3 object.\n\n Variants have also been injected into S3 client, Bucket and Object.\n You don't have to use S3Transfer.upload_file() directly.\n \"\"\"\n if extra_args is None:\n extra_args = {}\n self._validate_all_known_args(extra_args, self.ALLOWED_UPLOAD_ARGS)\n events = self._client.meta.events\n events.register_first('request-created.s3',\n disable_upload_callbacks,\n unique_id='s3upload-callback-disable')\n 
events.register_last('request-created.s3',\n enable_upload_callbacks,\n unique_id='s3upload-callback-enable')\n if self._osutil.get_file_size(filename) >= \\\n self._config.multipart_threshold:\n self._multipart_upload(filename, bucket, key, callback, extra_args)\n else:\n self._put_object(filename, bucket, key, callback, extra_args)", "response": "Uploads a file to an S3 object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef download_file(self, bucket, key, filename, extra_args=None,\n callback=None):\n \"\"\"Download an S3 object to a file.\n\n Variants have also been injected into S3 client, Bucket and Object.\n You don't have to use S3Transfer.download_file() directly.\n \"\"\"\n # This method will issue a ``head_object`` request to determine\n # the size of the S3 object. This is used to determine if the\n # object is downloaded in parallel.\n if extra_args is None:\n extra_args = {}\n self._validate_all_known_args(extra_args, self.ALLOWED_DOWNLOAD_ARGS)\n object_size = self._object_size(bucket, key, extra_args)\n temp_filename = filename + os.extsep + random_file_extension()\n try:\n self._download_file(bucket, key, temp_filename, object_size,\n extra_args, callback)\n except Exception:\n logger.debug(\"Exception caught in download_file, removing partial \"\n \"file: %s\", temp_filename, exc_info=True)\n self._osutil.remove_file(temp_filename)\n raise\n else:\n self._osutil.rename_file(temp_filename, filename)", "response": "Download an object to a file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _main(self, client, bucket, key, extra_args):\n # Create the multipart upload.\n response = client.create_multipart_upload(\n Bucket=bucket, Key=key, **extra_args)\n upload_id = response['UploadId']\n\n # Add a cleanup if the multipart upload fails at any point.\n self._transfer_coordinator.add_failure_cleanup(\n 
client.abort_multipart_upload, Bucket=bucket, Key=key,\n UploadId=upload_id\n )\n return upload_id", "response": "This method creates a multipart upload and returns the upload id."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _main(self, client, bucket, key, upload_id, parts, extra_args):\n client.complete_multipart_upload(\n Bucket=bucket, Key=key, UploadId=upload_id,\n MultipartUpload={'Parts': parts},\n **extra_args)", "response": "This method is called by the CompleteMultipartUploadTask class to complete the multipart upload."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse(file_path, content=None):\n try:\n # Parso reads files in binary mode and converts to unicode\n # using python_bytes_to_unicode() function. As a result,\n # we no longer have information about original file encoding and\n # output of module.get_content() can't be converted back to bytes\n # For now we can make a compromise by reading the file ourselves\n # and passing content to parse() function.\n if content is None:\n with open(file_path) as f:\n content = f.read()\n py_tree = _parser.parse(\n content, path=file_path, error_recovery=False)\n return ParsoPythonFile(file_path, py_tree)\n except parso.parser.ParserSyntaxError as ex:\n logging.error(\"Failed to parse %s:%d '%s'\", file_path,\n ex.error_leaf.line, ex.error_leaf.get_code())", "response": "Parse a file and return a ParsoPythonFile object with the file_path and content."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _iter_step_func_decorators(self):\n func_defs = [func for func in self.py_tree.iter_funcdefs()] + [func for cls in self.py_tree.iter_classdefs() for func in cls.iter_funcdefs()]\n for func in func_defs:\n for decorator in func.get_decorators():\n if decorator.children[1].value 
== 'step':\n yield func, decorator\n break", "response": "Iterate over functions with step decorator in parsed file"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _step_decorator_args(self, decorator):\n args = decorator.children[3:-2]\n step = None\n if len(args) == 1:\n try:\n step = ast.literal_eval(args[0].get_code())\n except (ValueError, SyntaxError):\n pass\n if isinstance(step, six.string_types+(list,)):\n return step\n logging.error(\"Decorator step accepts either a string or a list of strings - %s:%d\",\n self.file_path, decorator.start_pos[0])\n else:\n logging.error(\"Decorator step accepts only one argument - %s:%d\",\n self.file_path, decorator.start_pos[0])", "response": "Get the arguments passed to step decorators\n "} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\niterates over all the steps in the parsed file.", "response": "def iter_steps(self):\n \"\"\"Iterate over steps in the parsed file.\"\"\"\n for func, decorator in self._iter_step_func_decorators():\n step = self._step_decorator_args(decorator)\n if step:\n span = self._span_from_pos(decorator.start_pos, func.end_pos)\n yield step, func.name.value, span"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding the ast node which contains the text.", "response": "def _find_step_node(self, step_text):\n \"\"\"Find the ast node which contains the text.\"\"\"\n for func, decorator in self._iter_step_func_decorators():\n step = self._step_decorator_args(decorator)\n arg_node = decorator.children[3]\n if step == step_text:\n return arg_node, func\n elif isinstance(step, list) and step_text in step:\n idx = step.index(step_text)\n step_node = arg_node.children[1].children[idx * 2]\n return step_node, func\n return None, None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfind the step with old_text and change it to new_text. The step function 
parameters are also changed according to move_param_from_idx. Each entry in this list should specify parameter position from old.", "response": "def refactor_step(self, old_text, new_text, move_param_from_idx):\n \"\"\"\n Find the step with old_text and change it to new_text. The step function\n parameters are also changed according to move_param_from_idx.\n Each entry in this list should specify parameter position from old.\n \"\"\"\n diffs = []\n step, func = self._find_step_node(old_text)\n if step is None:\n return diffs\n step_diff = self._refactor_step_text(step, old_text, new_text)\n diffs.append(step_diff)\n params_list_node = func.children[2]\n moved_params = self._move_param_nodes(\n params_list_node.children, move_param_from_idx)\n if params_list_node.children is not moved_params:\n # Record original parameter list span excluding braces\n params_span = self._span_from_pos(\n params_list_node.children[0].end_pos,\n params_list_node.children[-1].start_pos)\n params_list_node.children = moved_params\n # Get code for moved parameters excluding braces\n param_code = ''.join(p.get_code() for p in moved_params[1:-1])\n diffs.append((params_span, param_code))\n return diffs"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse(file_path, content=None):\n try:\n if content is None:\n with open(file_path) as f:\n content = f.read()\n py_tree = RedBaron(content)\n return RedbaronPythonFile(file_path, py_tree)\n except Exception as ex:\n # Trim parsing error message to only include failure location\n msg = str(ex)\n marker = \"<---- here\\n\"\n marker_pos = msg.find(marker)\n if marker_pos > 0:\n msg = msg[:marker_pos + len(marker)]\n logging.error(\"Failed to parse {}: {}\".format(file_path, msg))", "response": "Parse a file and return a RedbaronPythonFile object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\niterating over all step functions in parsed file.", "response": "def 
_iter_step_func_decorators(self):\n \"\"\"Find functions with step decorator in parsed file.\"\"\" \n for node in self.py_tree.find_all('def'):\n for decorator in node.decorators:\n if decorator.name.value == 'step':\n yield node, decorator\n break"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _step_decorator_args(self, decorator):\n args = decorator.call.value\n step = None\n if len(args) == 1:\n try:\n step = args[0].value.to_python()\n except (ValueError, SyntaxError):\n pass\n if isinstance(step, six.string_types + (list,)):\n return step\n logging.error(\"Decorator step accepts either a string or a list of \\\n strings - %s\",\n self.file_path)\n else:\n logging.error(\"Decorator step accepts only one argument - %s\",\n self.file_path)", "response": "Get arguments passed to step decorators converted to python objects."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef iter_steps(self):\n for func, decorator in self._iter_step_func_decorators():\n step = self._step_decorator_args(decorator)\n if step:\n yield step, func.name, self._span_for_node(func, True)", "response": "Iterate over all the steps in the parsed file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding the ast node which contains the text.", "response": "def _find_step_node(self, step_text):\n \"\"\"Find the ast node which contains the text.\"\"\"\n for func, decorator in self._iter_step_func_decorators():\n step = self._step_decorator_args(decorator)\n arg_node = decorator.call.value[0].value\n if step == step_text:\n return arg_node, func\n elif isinstance(step, list) and step_text in step:\n step_node = arg_node[step.index(step_text)]\n return step_node, func\n return None, None"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfinding the step with old_text and change it to new_text. 
The step function parameters are also changed according to move_param_from_idx. Each entry in this list should specify parameter position from old", "response": "def refactor_step(self, old_text, new_text, move_param_from_idx):\n \"\"\"\n Find the step with old_text and change it to new_text.\n The step function parameters are also changed according\n to move_param_from_idx. Each entry in this list should\n specify parameter position from old\n \"\"\"\n diffs = []\n step, func = self._find_step_node(old_text)\n if step is None:\n return diffs\n step_diff = self._refactor_step_text(step, old_text, new_text)\n diffs.append(step_diff)\n moved_params = self._move_params(func.arguments, move_param_from_idx)\n if func.arguments is not moved_params:\n params_span = self._span_for_node(func.arguments, False)\n func.arguments = moved_params\n diffs.append((params_span, func.arguments.dumps()))\n return diffs"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef select_python_parser(parser=None):\n if parser == 'redbaron' or os.environ.get('GETGAUGE_USE_0_3_3_PARSER'):\n PythonFile.Class = RedbaronPythonFile\n else:\n PythonFile.Class = ParsoPythonFile", "response": "Select default parser for loading and refactoring steps."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef list(self, teamId, max=None, **request_parameters):\n check_type(teamId, basestring, may_be_none=False)\n check_type(max, int)\n\n params = dict_from_items_with_values(\n request_parameters,\n teamId=teamId,\n max=max,\n )\n\n # API request - get items\n items = self._session.get_items(API_ENDPOINT, params=params)\n\n # Yield team membership objects created from the returned items JSON\n # objects\n for item in items:\n yield self._object_factory(OBJECT_TYPE, item)", "response": "This method returns a generator that yields all team memberships for a given team."} {"SOURCE": 
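The `select_python_parser` record above switches the active parser implementation by rebinding a class attribute based on an argument or an environment variable. A minimal runnable sketch of that pattern follows; the two backend classes here are illustrative stand-ins, not the library's real `ParsoPythonFile`/`RedbaronPythonFile` classes:

```python
import os

class ParsoBackend:
    """Stand-in for the default (parso-based) parser backend."""
    pass

class RedbaronBackend:
    """Stand-in for the legacy (redbaron-based) parser backend."""
    pass

class PythonFile:
    Class = ParsoBackend  # currently selected backend

def select_python_parser(parser=None):
    """Select the backend class used for loading and refactoring steps."""
    if parser == 'redbaron' or os.environ.get('GETGAUGE_USE_0_3_3_PARSER'):
        PythonFile.Class = RedbaronBackend
    else:
        PythonFile.Class = ParsoBackend

os.environ.pop('GETGAUGE_USE_0_3_3_PARSER', None)  # ensure a clean default
select_python_parser('redbaron')
explicit_choice = PythonFile.Class   # RedbaronBackend
select_python_parser()
default_choice = PythonFile.Class    # ParsoBackend
```

Because callers go through `PythonFile.Class`, the swap takes effect globally without touching call sites.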
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create(self, teamId, personId=None, personEmail=None,\n isModerator=False, **request_parameters):\n \"\"\"Add someone to a team by Person ID or email address.\n\n Add someone to a team by Person ID or email address; optionally making\n them a moderator.\n\n Args:\n teamId(basestring): The team ID.\n personId(basestring): The person ID.\n personEmail(basestring): The email address of the person.\n isModerator(bool): Set to True to make the person a team moderator.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n TeamMembership: A TeamMembership object with the details of the\n created team membership.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(teamId, basestring, may_be_none=False)\n check_type(personId, basestring)\n check_type(personEmail, basestring)\n check_type(isModerator, bool)\n\n post_data = dict_from_items_with_values(\n request_parameters,\n teamId=teamId,\n personId=personId,\n personEmail=personEmail,\n isModerator=isModerator,\n )\n\n # API request\n json_data = self._session.post(API_ENDPOINT, json=post_data)\n\n # Return a team membership object created from the response JSON data\n return self._object_factory(OBJECT_TYPE, json_data)", "response": "Creates a new team membership object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update(self, membershipId, isModerator=None, **request_parameters):\n check_type(membershipId, basestring, may_be_none=False)\n check_type(isModerator, bool)\n\n put_data = dict_from_items_with_values(\n request_parameters,\n isModerator=isModerator,\n )\n\n # API request\n json_data = self._session.put(API_ENDPOINT + '/' + membershipId,\n json=put_data)\n\n # Return a team 
membership object created from the response JSON data\n return self._object_factory(OBJECT_TYPE, json_data)", "response": "Updates a team membership by ID."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete(self, membershipId):\n check_type(membershipId, basestring, may_be_none=False)\n\n # API request\n self._session.delete(API_ENDPOINT + '/' + membershipId)", "response": "Delete a team membership by ID."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_catfact():\n response = requests.get(CAT_FACTS_URL, verify=False)\n response.raise_for_status()\n json_data = response.json()\n return json_data['fact']", "response": "Get a cat fact from catfact. ninja and return it as a string."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef POST(self):\n # Get the POST data sent from Webex Teams\n json_data = web.data()\n print(\"\\nWEBHOOK POST RECEIVED:\")\n print(json_data, \"\\n\")\n\n # Create a Webhook object from the JSON data\n webhook_obj = Webhook(json_data)\n # Get the room details\n room = api.rooms.get(webhook_obj.data.roomId)\n # Get the message details\n message = api.messages.get(webhook_obj.data.id)\n # Get the sender's details\n person = api.people.get(message.personId)\n\n print(\"NEW MESSAGE IN ROOM '{}'\".format(room.title))\n print(\"FROM '{}'\".format(person.displayName))\n print(\"MESSAGE '{}'\\n\".format(message.text))\n\n # This is a VERY IMPORTANT loop prevention control step.\n # If you respond to all messages... 
You will respond to the messages\n # that the bot posts and thereby create a loop condition.\n me = api.people.me()\n if message.personId == me.id:\n # Message was sent by me (bot); do not respond.\n return 'OK'\n else:\n # Message was sent by someone else; parse message and respond.\n if \"/CAT\" in message.text:\n print(\"FOUND '/CAT'\")\n # Get a cat fact\n cat_fact = get_catfact()\n print(\"SENDING CAT FACT '{}'\".format(cat_fact))\n # Post the fact to the room where the request was received\n api.messages.create(room.id, text=cat_fact)\n return 'OK'", "response": "Respond to inbound webhook JSON HTTP POSTs from Webex Teams."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nlisting room memberships. By default, lists memberships for rooms to which the authenticated user belongs. Use query parameters to filter the response. Use `roomId` to list memberships for a room, by ID. Use either `personId` or `personEmail` to filter the results. This method supports Webex Teams's implementation of RFC5988 Web Linking to provide pagination support. It returns a generator container that incrementally yields all memberships returned by the query. The generator will automatically request additional 'pages' of responses from Webex as needed until all responses have been returned. The container makes the generator safe for reuse. A new API call will be made, using the same parameters that were specified when the generator was created, every time a new iterator is requested from the container. Args: roomId(basestring): Limit results to a specific room, by ID. personId(basestring): Limit results to a specific person, by ID. personEmail(basestring): Limit results to a specific person, by email address. max(int): Limit the maximum number of items returned from the Webex Teams service per request. **request_parameters: Additional request parameters (provides support for parameters that may be added in the future). 
Returns: GeneratorContainer: A GeneratorContainer which, when iterated, yields the memberships returned by the Webex Teams query. Raises: TypeError: If the parameter types are incorrect. ApiError: If the Webex Teams cloud returns an error.", "response": "def list(self, roomId=None, personId=None, personEmail=None, max=None,\n **request_parameters):\n \"\"\"List room memberships.\n\n By default, lists memberships for rooms to which the authenticated user\n belongs.\n\n Use query parameters to filter the response.\n\n Use `roomId` to list memberships for a room, by ID.\n\n Use either `personId` or `personEmail` to filter the results.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all memberships returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. 
A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n roomId(basestring): Limit results to a specific room, by ID.\n personId(basestring): Limit results to a specific person, by ID.\n personEmail(basestring): Limit results to a specific person, by\n email address.\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the memberships returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(roomId, basestring)\n check_type(personId, basestring)\n check_type(personEmail, basestring)\n check_type(max, int)\n\n params = dict_from_items_with_values(\n request_parameters,\n roomId=roomId,\n personId=personId,\n personEmail=personEmail,\n max=max,\n )\n\n # API request - get items\n items = self._session.get_items(API_ENDPOINT, params=params)\n\n # Yield membership objects created from the returned items JSON objects\n for item in items:\n yield self._object_factory(OBJECT_TYPE, item)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete(self, membershipId):\n check_type(membershipId, basestring)\n\n # API request\n self._session.delete(API_ENDPOINT + '/' + membershipId)", "response": "Delete a membership by ID."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a string ( bytes str or unicode ) to unicode.", "response": "def to_unicode(string):\n \"\"\"Convert a string (bytes, str or unicode) to unicode.\"\"\"\n assert isinstance(string, basestring)\n if sys.version_info[0] >= 
3:\n if isinstance(string, bytes):\n return string.decode('utf-8')\n else:\n return string\n else:\n if isinstance(string, str):\n return string.decode('utf-8')\n else:\n return string"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting a string (bytes, str or unicode) to bytes.", "response": "def to_bytes(string):\n \"\"\"Convert a string (bytes, str or unicode) to bytes.\"\"\"\n assert isinstance(string, basestring)\n if sys.version_info[0] >= 3:\n if isinstance(string, str):\n return string.encode('utf-8')\n else:\n return string\n else:\n if isinstance(string, unicode):\n return string.encode('utf-8')\n else:\n return string"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate_base_url(base_url):\n parsed_url = urllib.parse.urlparse(base_url)\n if parsed_url.scheme and parsed_url.netloc:\n return parsed_url.geturl()\n else:\n error_message = \"base_url must contain a valid scheme (protocol \" \\\n \"specifier) and network location (hostname)\"\n raise ValueError(error_message)", "response": "Verify that base_url specifies a protocol and network location."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nchecks to see if string is a validly-formatted web url.", "response": "def is_web_url(string):\n \"\"\"Check to see if string is a validly-formatted web url.\"\"\"\n assert isinstance(string, basestring)\n parsed_url = urllib.parse.urlparse(string)\n return (\n (\n parsed_url.scheme.lower() == 'http'\n or parsed_url.scheme.lower() == 'https'\n )\n and parsed_url.netloc\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nopens the file and returns an EncodableFile tuple.", "response": "def open_local_file(file_path):\n \"\"\"Open the file and return an EncodableFile tuple.\"\"\"\n assert isinstance(file_path, basestring)\n assert is_local_file(file_path)\n file_name = 
os.path.basename(file_path)\n file_object = open(file_path, 'rb')\n content_type = mimetypes.guess_type(file_name)[0] or 'text/plain'\n return EncodableFile(file_name=file_name,\n file_object=file_object,\n content_type=content_type)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nchecks that the object o is an instance of one of the acceptable types or None.", "response": "def check_type(o, acceptable_types, may_be_none=True):\n \"\"\"Object is an instance of one of the acceptable types or None.\n\n Args:\n o: The object to be inspected.\n acceptable_types: A type or tuple of acceptable types.\n may_be_none(bool): Whether or not the object may be None.\n\n Raises:\n TypeError: If the object is None and may_be_none=False, or if the\n object is not an instance of one of the acceptable types.\n\n \"\"\"\n if not isinstance(acceptable_types, tuple):\n acceptable_types = (acceptable_types,)\n\n if may_be_none and o is None:\n # Object is None, and that is OK!\n pass\n elif isinstance(o, acceptable_types):\n # Object is an instance of an acceptable type.\n pass\n else:\n # Object is something else.\n error_message = (\n \"We were expecting to receive an instance of one of the following \"\n \"types: {types}{none}; but instead we received {o} which is a \"\n \"{o_type}.\".format(\n types=\", \".join([repr(t.__name__) for t in acceptable_types]),\n none=\"or 'None'\" if may_be_none else \"\",\n o=o,\n o_type=repr(type(o).__name__)\n )\n )\n raise TypeError(error_message)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef dict_from_items_with_values(*dictionaries, **items):\n dict_list = list(dictionaries)\n dict_list.append(items)\n result = {}\n for d in dict_list:\n for key, value in d.items():\n if value is not None:\n result[key] = value\n return result", "response": "Creates a dict with the inputted items ; pruning any that are None."} {"SOURCE": "codesearchnet", "instruction": 
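The `dict_from_items_with_values` helper above is what the API wrappers in these records use to build request-parameter dicts: it merges any number of dictionaries with keyword items and prunes every `None` value. A self-contained copy with a usage example (the sample keys are illustrative, not tied to any particular endpoint):

```python
def dict_from_items_with_values(*dictionaries, **items):
    """Merge dicts and keyword items into one dict, pruning None values."""
    dict_list = list(dictionaries)
    dict_list.append(items)  # keyword items are applied last, so they win
    result = {}
    for d in dict_list:
        for key, value in d.items():
            if value is not None:
                result[key] = value
    return result

# None values are dropped, so optional parameters can be passed through
# unconditionally without polluting the request payload.
params = dict_from_items_with_values({'roomId': 'abc', 'max': None},
                                     personEmail=None, max=50)
# params == {'roomId': 'abc', 'max': 50}
```

This is why every `list`/`create`/`update` method in the records can forward all its optional arguments directly: unset (`None`) parameters simply never reach the API request.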
"Here you have a function in Python 3, explain what it does\ndef check_response_code(response, expected_response_code):\n if response.status_code == expected_response_code:\n pass\n elif response.status_code == RATE_LIMIT_RESPONSE_CODE:\n raise RateLimitError(response)\n else:\n raise ApiError(response)", "response": "Checks the response code against the expected response code ; raise ApiError if they do not match."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef json_dict(json_data):\n if isinstance(json_data, dict):\n return json_data\n elif isinstance(json_data, basestring):\n return json.loads(json_data, object_hook=OrderedDict)\n else:\n raise TypeError(\n \"'json_data' must be a dictionary or valid JSON string; \"\n \"received: {!r}\".format(json_data)\n )", "response": "Given a dictionary or JSON string ; return a Python dictionary with the contents of the JSON object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlist rooms. By default, lists rooms to which the authenticated user belongs. This method supports Webex Teams's implementation of RFC5988 Web Linking to provide pagination support. It returns a generator container that incrementally yields all rooms returned by the query. The generator will automatically request additional 'pages' of responses from Webex as needed until all responses have been returned. The container makes the generator safe for reuse. A new API call will be made, using the same parameters that were specified when the generator was created, every time a new iterator is requested from the container. Args: teamId(basestring): Limit the rooms to those associated with a team, by ID. type(basestring): 'direct' returns all 1-to-1 rooms. `group` returns all group rooms. If not specified or values not matched, will return all room types. 
sortBy(basestring): Sort results by room ID (`id`), most recent activity (`lastactivity`), or most recently created (`created`). max(int): Limit the maximum number of items returned from the Webex Teams service per request. **request_parameters: Additional request parameters (provides support for parameters that may be added in the future). Returns: GeneratorContainer: A GeneratorContainer which, when iterated, yields the rooms returned by the Webex Teams query. Raises: TypeError: If the parameter types are incorrect. ApiError: If the Webex Teams cloud returns an error.", "response": "def list(self, teamId=None, type=None, sortBy=None, max=None,\n **request_parameters):\n \"\"\"List rooms.\n\n By default, lists rooms to which the authenticated user belongs.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all rooms returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n teamId(basestring): Limit the rooms to those associated with a\n team, by ID.\n type(basestring): 'direct' returns all 1-to-1 rooms. `group`\n returns all group rooms. 
If not specified or values not\n matched, will return all room types.\n sortBy(basestring): Sort results by room ID (`id`), most recent\n activity (`lastactivity`), or most recently created\n (`created`).\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the rooms returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(teamId, basestring)\n check_type(type, basestring)\n check_type(sortBy, basestring)\n check_type(max, int)\n\n params = dict_from_items_with_values(\n request_parameters,\n teamId=teamId,\n type=type,\n sortBy=sortBy,\n max=max,\n )\n\n # API request - get items\n items = self._session.get_items(API_ENDPOINT, params=params)\n\n # Yield room objects created from the returned items JSON objects\n for item in items:\n yield self._object_factory(OBJECT_TYPE, item)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create(self, title, teamId=None, **request_parameters):\n check_type(title, basestring)\n check_type(teamId, basestring)\n\n post_data = dict_from_items_with_values(\n request_parameters,\n title=title,\n teamId=teamId,\n )\n\n # API request\n json_data = self._session.post(API_ENDPOINT, json=post_data)\n\n # Return a room object created from the response JSON data\n return self._object_factory(OBJECT_TYPE, json_data)", "response": "Create a new room."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update(self, roomId, title=None, **request_parameters):\n check_type(roomId, basestring, may_be_none=False)\n check_type(roomId, 
basestring)\n check_type(title, basestring)\n\n put_data = dict_from_items_with_values(\n request_parameters,\n title=title,\n )\n\n # API request\n json_data = self._session.put(API_ENDPOINT + '/' + roomId,\n json=put_data)\n\n # Return a room object created from the response JSON data\n return self._object_factory(OBJECT_TYPE, json_data)", "response": "Updates the details for a room by ID."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndeleting a room. Args: roomId(basestring): The ID of the room to be deleted. Raises: TypeError: If the parameter types are incorrect. ApiError: If the Webex Teams cloud returns an error.", "response": "def delete(self, roomId):\n \"\"\"Delete a room.\n\n Args:\n roomId(basestring): The ID of the room to be deleted.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(roomId, basestring, may_be_none=False)\n\n # API request\n self._session.delete(API_ENDPOINT + '/' + roomId)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlist all licenses for a given organization.", "response": "def list(self, orgId=None, **request_parameters):\n \"\"\"List all licenses for a given organization.\n\n If no orgId is specified, the default is the organization of the\n authenticated user.\n\n Args:\n orgId(basestring): Specify the organization, by ID.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the licenses returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(orgId, basestring)\n\n params = dict_from_items_with_values(\n request_parameters,\n orgId=orgId,\n )\n\n # API request - get items\n items = self._session.get_items(API_ENDPOINT, 
params=params)\n\n # Yield license objects created from the returned JSON objects\n for item in items:\n yield self._object_factory(OBJECT_TYPE, item)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef created(self):\n created = self._json_data.get('created')\n if created:\n return WebexTeamsDateTime.strptime(created)\n else:\n return None", "response": "Creation date and time in ISO8601 format."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef generator_container(generator_function):\n\n @functools.wraps(generator_function)\n def generator_container_wrapper(*args, **kwargs):\n \"\"\"Store a generator call in a container and return the container.\n\n Args:\n *args: The arguments passed to the generator function.\n **kwargs: The keyword arguments passed to the generator function.\n\n Returns:\n GeneratorContainer: A container wrapping the call to the generator.\n\n \"\"\"\n return GeneratorContainer(generator_function, *args, **kwargs)\n\n return generator_container_wrapper", "response": "Decorator for generating a new generator function in a container."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nprocessing incoming requests to the events URI.", "response": "def webex_teams_webhook_events():\n \"\"\"Processes incoming requests to the '/events' URI.\"\"\"\n if request.method == 'GET':\n return (\"\"\"\n \n \n \n Webex Teams Bot served via Flask\n \n \n
\n Your Flask web server is up and running!\n \n \n Here is a nice Cat Fact for you:\n \n \n {}
\n \n \n \"\"\".format(get_catfact()))\n elif request.method == 'POST':\n \"\"\"Respond to inbound webhook JSON HTTP POST from Webex Teams.\"\"\"\n\n # Get the POST data sent from Webex Teams\n json_data = request.json\n print(\"\\n\")\n print(\"WEBHOOK POST RECEIVED:\")\n print(json_data)\n print(\"\\n\")\n\n # Create a Webhook object from the JSON data\n webhook_obj = Webhook(json_data)\n # Get the room details\n room = api.rooms.get(webhook_obj.data.roomId)\n # Get the message details\n message = api.messages.get(webhook_obj.data.id)\n # Get the sender's details\n person = api.people.get(message.personId)\n\n print(\"NEW MESSAGE IN ROOM '{}'\".format(room.title))\n print(\"FROM '{}'\".format(person.displayName))\n print(\"MESSAGE '{}'\\n\".format(message.text))\n\n # This is a VERY IMPORTANT loop prevention control step.\n # If you respond to all messages... You will respond to the messages\n # that the bot posts and thereby create a loop condition.\n me = api.people.me()\n if message.personId == me.id:\n # Message was sent by me (bot); do not respond.\n return 'OK'\n\n else:\n # Message was sent by someone else; parse message and respond.\n if \"/CAT\" in message.text:\n print(\"FOUND '/CAT'\")\n # Get a cat fact\n cat_fact = get_catfact()\n print(\"SENDING CAT FACT '{}'\".format(cat_fact))\n # Post the fact to the room where the request was received\n api.messages.create(room.id, text=cat_fact)\n return 'OK'"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nattempt to get the access token from the environment variable.", "response": "def _get_access_token():\n \"\"\"Attempt to get the access token from the environment.\n\n Try using the current and legacy environment variables. 
If the access token\n is found in a legacy environment variable, raise a deprecation warning.\n\n Returns:\n The access token found in the environment (str), or None.\n \"\"\"\n access_token = os.environ.get(ACCESS_TOKEN_ENVIRONMENT_VARIABLE)\n if access_token:\n return access_token\n\n else:\n for access_token_variable in LEGACY_ACCESS_TOKEN_ENVIRONMENT_VARIABLES:\n access_token = os.environ.get(access_token_variable)\n if access_token:\n env_var_deprecation_warning = PendingDeprecationWarning(\n \"Use of the `{legacy}` environment variable will be \"\n \"deprecated in the future. Please update your \"\n \"environment(s) to use the new `{new}` environment \"\n \"variable.\".format(\n legacy=access_token_variable,\n new=ACCESS_TOKEN_ENVIRONMENT_VARIABLE,\n )\n )\n warnings.warn(env_var_deprecation_warning)\n return access_token"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a new webhook.", "response": "def create(self, name, targetUrl, resource, event,\n filter=None, secret=None, **request_parameters):\n \"\"\"Create a webhook.\n\n Args:\n name(basestring): A user-friendly name for this webhook.\n targetUrl(basestring): The URL that receives POST requests for\n each event.\n resource(basestring): The resource type for the webhook.\n event(basestring): The event type for the webhook.\n filter(basestring): The filter that defines the webhook scope.\n secret(basestring): The secret used to generate payload signature.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Webhook: A Webhook object with the details of the created webhook.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(name, basestring, may_be_none=False)\n check_type(targetUrl, basestring, may_be_none=False)\n check_type(resource, basestring, may_be_none=False)\n check_type(event, basestring, 
may_be_none=False)\n check_type(filter, basestring)\n check_type(secret, basestring)\n\n post_data = dict_from_items_with_values(\n request_parameters,\n name=name,\n targetUrl=targetUrl,\n resource=resource,\n event=event,\n filter=filter,\n secret=secret,\n )\n\n # API request\n json_data = self._session.post(API_ENDPOINT, json=post_data)\n\n # Return a webhook object created from the response JSON data\n return self._object_factory(OBJECT_TYPE, json_data)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupdating a Webhook by ID.", "response": "def update(self, webhookId, name=None, targetUrl=None,\n **request_parameters):\n \"\"\"Update a webhook, by ID.\n\n Args:\n webhookId(basestring): The webhook ID.\n name(basestring): A user-friendly name for this webhook.\n targetUrl(basestring): The URL that receives POST requests for\n each event.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Webhook: A Webhook object with the updated Webex Teams webhook\n details.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(webhookId, basestring, may_be_none=False)\n check_type(name, basestring)\n check_type(targetUrl, basestring)\n\n put_data = dict_from_items_with_values(\n request_parameters,\n name=name,\n targetUrl=targetUrl,\n )\n\n # API request\n json_data = self._session.put(API_ENDPOINT + '/' + webhookId,\n json=put_data)\n\n # Return a webhook object created from the response JSON data\n return self._object_factory(OBJECT_TYPE, json_data)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndeletes a webhook by ID.", "response": "def delete(self, webhookId):\n \"\"\"Delete a webhook, by ID.\n\n Args:\n webhookId(basestring): The ID of the webhook to be deleted.\n\n Raises:\n TypeError: If the parameter types are 
incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(webhookId, basestring, may_be_none=False)\n\n # API request\n self._session.delete(API_ENDPOINT + '/' + webhookId)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfixes the next URL in the Link headers of the Webex Teams API endpoint.", "response": "def _fix_next_url(next_url):\n \"\"\"Remove max=null parameter from URL.\n\n Patch for Webex Teams Defect: 'next' URL returned in the Link headers of\n the responses contain an errant 'max=null' parameter, which causes the\n next request (to this URL) to fail if the URL is requested as-is.\n\n This patch parses the next_url to remove the max=null parameter.\n\n Args:\n next_url(basestring): The 'next' URL to be parsed and cleaned.\n\n Returns:\n basestring: The clean URL to be used for the 'next' request.\n\n Raises:\n AssertionError: If the parameter types are incorrect.\n ValueError: If 'next_url' does not contain a valid API endpoint URL\n (scheme, netloc and path).\n\n \"\"\"\n next_url = str(next_url)\n parsed_url = urllib.parse.urlparse(next_url)\n\n if not parsed_url.scheme or not parsed_url.netloc or not parsed_url.path:\n raise ValueError(\n \"'next_url' must be a valid API endpoint URL, minimally \"\n \"containing a scheme, netloc and path.\"\n )\n\n if parsed_url.query:\n query_list = parsed_url.query.split('&')\n if 'max=null' in query_list:\n query_list.remove('max=null')\n warnings.warn(\"`max=null` still present in next-URL returned \"\n \"from Webex Teams\", RuntimeWarning)\n new_query = '&'.join(query_list)\n parsed_url = list(parsed_url)\n parsed_url[4] = new_query\n\n return urllib.parse.urlunparse(parsed_url)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef wait_on_rate_limit(self, value):\n check_type(value, bool, may_be_none=False)\n self._wait_on_rate_limit = value", "response": "Enable or disable 
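The `max=null` patch in the `_fix_next_url` record above can be sketched standalone with only the standard library. This is a minimal sketch, not the SDK's code; `strip_null_max` is a hypothetical helper name:

```python
import urllib.parse

def strip_null_max(next_url):
    # Drop the errant 'max=null' query parameter from a 'next' URL,
    # in the spirit of the _fix_next_url patch described above.
    parsed = urllib.parse.urlparse(next_url)
    params = [p for p in parsed.query.split('&') if p and p != 'max=null']
    return urllib.parse.urlunparse(parsed._replace(query='&'.join(params)))

print(strip_null_max("https://api.example.com/v1/messages?roomId=abc&max=null"))
# -> https://api.example.com/v1/messages?roomId=abc
```

`ParseResult` is a named tuple, so `_replace` rebuilds it with only the query component changed; URLs without the errant parameter pass through unchanged.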
automatic rate-limit handling."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update_headers(self, headers):\n check_type(headers, dict, may_be_none=False)\n self._req_session.headers.update(headers)", "response": "Updates the HTTP headers used for requests in this session."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntakes a relative or absolute URL and returns an absolute URL.", "response": "def abs_url(self, url):\n \"\"\"Given a relative or absolute URL; return an absolute URL.\n\n Args:\n url(basestring): A relative or absolute URL.\n\n Returns:\n str: An absolute URL.\n\n \"\"\"\n parsed_url = urllib.parse.urlparse(url)\n if not parsed_url.scheme and not parsed_url.netloc:\n # url is a relative URL; combine with base_url\n return urllib.parse.urljoin(str(self.base_url), str(url))\n else:\n # url is already an absolute URL; return as is\n return url"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef request(self, method, url, erc, **kwargs):\n # Ensure the url is an absolute URL\n abs_url = self.abs_url(url)\n\n # Update request kwargs with session defaults\n kwargs.setdefault('timeout', self.single_request_timeout)\n\n while True:\n # Make the HTTP request to the API endpoint\n response = self._req_session.request(method, abs_url, **kwargs)\n\n try:\n # Check the response code for error conditions\n check_response_code(response, erc)\n except RateLimitError as e:\n # Catch rate-limit errors\n # Wait and retry if automatic rate-limit handling is enabled\n if self.wait_on_rate_limit:\n warnings.warn(RateLimitWarning(response))\n time.sleep(e.retry_after)\n continue\n else:\n # Re-raise the RateLimitError\n raise\n else:\n return response", "response": "Core HTTP request method for this session. It wraps the requests package, applying the session's default timeout and automatic rate-limit handling (wait and retry) before returning the response. 
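The wait-and-retry rate-limit handling in the `request` record above follows this pattern. A minimal sketch with stand-in names — `RateLimited` and `request_with_retry` are hypothetical, not SDK APIs:

```python
import time

class RateLimited(Exception):
    # Stand-in for the SDK's RateLimitError; carries the Retry-After delay.
    def __init__(self, retry_after):
        self.retry_after = retry_after

def request_with_retry(send, max_attempts=3):
    # Wait-and-retry loop in the spirit of the session code above:
    # on a rate-limit error, sleep for retry_after seconds, then retry.
    for _ in range(max_attempts):
        try:
            return send()
        except RateLimited as exc:
            time.sleep(exc.retry_after)
    raise RuntimeError("still rate-limited after retries")

attempts = []
def fake_send():
    # Simulated endpoint: rate-limited twice, then succeeds.
    attempts.append(1)
    if len(attempts) < 3:
        raise RateLimited(retry_after=0)
    return "200 OK"

result = request_with_retry(fake_send)
print(result, len(attempts))  # -> 200 OK 3
```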
"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsend a GET request to the Webex Teams API endpoint.", "response": "def get(self, url, params=None, **kwargs):\n \"\"\"Sends a GET request.\n\n Args:\n url(basestring): The URL of the API endpoint.\n params(dict): The parameters for the HTTP GET request.\n **kwargs:\n erc(int): The expected (success) response code for the request.\n others: Passed on to the requests package.\n\n Raises:\n ApiError: If anything other than the expected response code is\n returned by the Webex Teams API endpoint.\n\n \"\"\"\n check_type(url, basestring, may_be_none=False)\n check_type(params, dict)\n\n # Expected response code\n erc = kwargs.pop('erc', EXPECTED_RESPONSE_CODE['GET'])\n\n response = self.request('GET', url, erc, params=params, **kwargs)\n return extract_and_parse_json(response)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a generator that GETs and yields pages of data.", "response": "def get_pages(self, url, params=None, **kwargs):\n \"\"\"Return a generator that GETs and yields pages of data.\n\n Provides native support for RFC5988 Web Linking.\n\n Args:\n url(basestring): The URL of the API endpoint.\n params(dict): The parameters for the HTTP GET request.\n **kwargs:\n erc(int): The expected (success) response code for the request.\n others: Passed on to the requests package.\n\n Raises:\n ApiError: If anything other than the expected response code is\n returned by the Webex Teams API endpoint.\n\n \"\"\"\n check_type(url, basestring, may_be_none=False)\n check_type(params, dict)\n\n # Expected response code\n erc = kwargs.pop('erc', EXPECTED_RESPONSE_CODE['GET'])\n\n # First request\n response = self.request('GET', url, erc, params=params, **kwargs)\n\n while True:\n yield extract_and_parse_json(response)\n\n if response.links.get('next'):\n next_url = 
response.links.get('next').get('url')\n\n # Patch for Webex Teams 'max=null' in next URL bug.\n # Testing shows that patch is no longer needed; raising a\n # warning if it is still taking effect;\n # considering for future removal\n next_url = _fix_next_url(next_url)\n\n # Subsequent requests\n response = self.request('GET', next_url, erc, **kwargs)\n\n else:\n break"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_items(self, url, params=None, **kwargs):\n # Get generator for pages of JSON data\n pages = self.get_pages(url, params=params, **kwargs)\n\n for json_page in pages:\n assert isinstance(json_page, dict)\n\n items = json_page.get('items')\n\n if items is None:\n error_message = \"'items' key not found in JSON data: \" \\\n \"{!r}\".format(json_page)\n raise MalformedResponse(error_message)\n\n else:\n for item in items:\n yield item", "response": "Returns a generator that GETs and yields individual JSON items."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef put(self, url, json=None, data=None, **kwargs):\n check_type(url, basestring, may_be_none=False)\n\n # Expected response code\n erc = kwargs.pop('erc', EXPECTED_RESPONSE_CODE['PUT'])\n\n response = self.request('PUT', url, erc, json=json, data=data,\n **kwargs)\n return extract_and_parse_json(response)", "response": "Sends a PUT request to the Webex Teams API endpoint."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsending a DELETE request to the Webex Teams API endpoint.", "response": "def delete(self, url, **kwargs):\n \"\"\"Sends a DELETE request.\n\n Args:\n url(basestring): The URL of the API endpoint.\n **kwargs:\n erc(int): The expected (success) response code for the request.\n others: Passed on to the requests package.\n\n Raises:\n ApiError: If anything other than the expected response code is\n returned by the Webex 
Teams API endpoint.\n\n \"\"\"\n check_type(url, basestring, may_be_none=False)\n\n # Expected response code\n erc = kwargs.pop('erc', EXPECTED_RESPONSE_CODE['DELETE'])\n\n self.request('DELETE', url, erc, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a new guest issuer using the provided issuer token.", "response": "def create(self, subject, displayName, issuerToken, expiration, secret):\n \"\"\"Create a new guest issuer using the provided issuer token.\n\n This function returns a guest issuer with an api access token.\n\n Args:\n subject(basestring): Unique and public identifier\n displayName(basestring): Display Name of the guest user\n issuerToken(basestring): Issuer token from developer hub\n expiration(basestring): Expiration time as a unix timestamp\n secret(basestring): The secret used to sign your guest issuers\n\n Returns:\n GuestIssuerToken: A Guest Issuer with a valid access token.\n\n Raises:\n TypeError: If the parameter types are incorrect\n ApiError: If the webex teams cloud returns an error.\n \"\"\"\n check_type(subject, basestring)\n check_type(displayName, basestring)\n check_type(issuerToken, basestring)\n check_type(expiration, basestring)\n check_type(secret, basestring)\n\n payload = {\n \"sub\": subject,\n \"name\": displayName,\n \"iss\": issuerToken,\n \"exp\": expiration\n }\n\n key = base64.b64decode(secret)\n jwt_token = jwt.encode(payload, key, algorithm='HS256')\n\n url = self._session.base_url + API_ENDPOINT + \"/\" + \"login\"\n headers = {\n 'Authorization': \"Bearer \" + jwt_token.decode('utf-8')\n }\n response = requests.post(url, headers=headers)\n check_response_code(response, EXPECTED_RESPONSE_CODE['GET'])\n\n return self._object_factory(OBJECT_TYPE, response.json())"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nlisting messages in a room. Each message will include content attachments if present. 
The list API sorts the messages in descending order by creation date. This method supports Webex Teams's implementation of RFC5988 Web Linking to provide pagination support. It returns a generator container that incrementally yields all messages returned by the query. The generator will automatically request additional 'pages' of responses from Webex as needed until all responses have been returned. The container makes the generator safe for reuse. A new API call will be made, using the same parameters that were specified when the generator was created, every time a new iterator is requested from the container. Args: roomId(basestring): List messages for a room, by ID. mentionedPeople(basestring): List messages where the caller is mentioned by specifying \"me\" or the caller `personId`. before(basestring): List messages sent before a date and time, in ISO8601 format. beforeMessage(basestring): List messages sent before a message, by ID. max(int): Limit the maximum number of items returned from the Webex Teams service per request. **request_parameters: Additional request parameters (provides support for parameters that may be added in the future). Returns: GeneratorContainer: A GeneratorContainer which, when iterated, yields the messages returned by the Webex Teams query. Raises: TypeError: If the parameter types are incorrect. ApiError: If the Webex Teams cloud returns an error.", "response": "def list(self, roomId, mentionedPeople=None, before=None,\n beforeMessage=None, max=None, **request_parameters):\n \"\"\"Lists messages in a room.\n\n Each message will include content attachments if present.\n\n The list API sorts the messages in descending order by creation date.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all messages returned by the\n query. 
The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n roomId(basestring): List messages for a room, by ID.\n mentionedPeople(basestring): List messages where the caller is\n mentioned by specifying \"me\" or the caller `personId`.\n before(basestring): List messages sent before a date and time, in\n ISO8601 format.\n beforeMessage(basestring): List messages sent before a message,\n by ID.\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the messages returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(roomId, basestring, may_be_none=False)\n check_type(mentionedPeople, basestring)\n check_type(before, basestring)\n check_type(beforeMessage, basestring)\n check_type(max, int)\n\n params = dict_from_items_with_values(\n request_parameters,\n roomId=roomId,\n mentionedPeople=mentionedPeople,\n before=before,\n beforeMessage=beforeMessage,\n max=max,\n )\n\n # API request - get items\n items = self._session.get_items(API_ENDPOINT, params=params)\n\n # Yield message objects created from the returned items JSON objects\n for item in items:\n yield self._object_factory(OBJECT_TYPE, item)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a new message in the Webex Teams cloud.", "response": "def create(self, roomId=None, toPersonId=None, 
toPersonEmail=None,\n text=None, markdown=None, files=None, **request_parameters):\n \"\"\"Post a message, and optionally an attachment, to a room.\n\n The files parameter is a list, which accepts multiple values to allow\n for future expansion, but currently only one file may be included with\n the message.\n\n Args:\n roomId(basestring): The room ID.\n toPersonId(basestring): The ID of the recipient when sending a\n private 1:1 message.\n toPersonEmail(basestring): The email address of the recipient when\n sending a private 1:1 message.\n text(basestring): The message, in plain text. If `markdown` is\n specified this parameter may be optionally used to provide\n alternate text for UI clients that do not support rich text.\n markdown(basestring): The message, in markdown format.\n files(`list`): A list of public URL(s) or local path(s) to files to\n be posted into the room. Only one file is allowed per message.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Message: A Message object with the details of the created message.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n ValueError: If the files parameter is a list of length > 1, or if\n the string in the list (the only element in the list) does not\n contain a valid URL or path to a local file.\n\n \"\"\"\n check_type(roomId, basestring)\n check_type(toPersonId, basestring)\n check_type(toPersonEmail, basestring)\n check_type(text, basestring)\n check_type(markdown, basestring)\n check_type(files, list)\n if files:\n if len(files) != 1:\n raise ValueError(\"The length of the `files` list is greater \"\n \"than one (1). 
The files parameter is a \"\n \"list, which accepts multiple values to \"\n \"allow for future expansion, but currently \"\n \"only one file may be included with the \"\n \"message.\")\n check_type(files[0], basestring)\n\n post_data = dict_from_items_with_values(\n request_parameters,\n roomId=roomId,\n toPersonId=toPersonId,\n toPersonEmail=toPersonEmail,\n text=text,\n markdown=markdown,\n files=files,\n )\n\n # API request\n if not files or is_web_url(files[0]):\n # Standard JSON post\n json_data = self._session.post(API_ENDPOINT, json=post_data)\n\n elif is_local_file(files[0]):\n # Multipart MIME post\n try:\n post_data['files'] = open_local_file(files[0])\n multipart_data = MultipartEncoder(post_data)\n headers = {'Content-type': multipart_data.content_type}\n json_data = self._session.post(API_ENDPOINT,\n headers=headers,\n data=multipart_data)\n finally:\n post_data['files'].file_object.close()\n\n else:\n raise ValueError(\"The `files` parameter does not contain a valid \"\n \"URL or path to a local file.\")\n\n # Return a message object created from the response JSON data\n return self._object_factory(OBJECT_TYPE, json_data)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes a message from the Webex Teams cloud.", "response": "def delete(self, messageId):\n \"\"\"Delete a message.\n\n Args:\n messageId(basestring): The ID of the message to be deleted.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(messageId, basestring, may_be_none=False)\n\n # API request\n self._session.delete(API_ENDPOINT + '/' + messageId)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nlisting people. This method supports Webex Teams's implementation of RFC5988 Web Linking to provide pagination support. It returns a generator container that incrementally yields all people returned by the query. 
The generator will automatically request additional 'pages' of responses from Webex as needed until all responses have been returned. The container makes the generator safe for reuse. A new API call will be made, using the same parameters that were specified when the generator was created, every time a new iterator is requested from the container. Args: email(basestring): The e-mail address of the person to be found. displayName(basestring): The complete or beginning portion of the displayName to be searched. id(basestring): List people by ID. Accepts up to 85 person IDs separated by commas. orgId(basestring): The organization ID. max(int): Limit the maximum number of items returned from the Webex Teams service per request. **request_parameters: Additional request parameters (provides support for parameters that may be added in the future). Returns: GeneratorContainer: A GeneratorContainer which, when iterated, yields the people returned by the Webex Teams query. Raises: TypeError: If the parameter types are incorrect. ApiError: If the Webex Teams cloud returns an error.", "response": "def list(self, email=None, displayName=None, id=None, orgId=None, max=None,\n **request_parameters):\n \"\"\"List people\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. It returns a generator\n container that incrementally yields all people returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. 
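The reusable generator container that these `list` docstrings describe (and that the `generator_container` decorator record near the top builds) reduces to this pattern — a minimal sketch, not the SDK's class; `pages` is a stand-in for a paginated API query:

```python
class GeneratorContainer:
    # Store the generator function and its call arguments; build a
    # fresh generator each time iteration starts, so the container
    # is safe to iterate more than once.
    def __init__(self, generator_function, *args, **kwargs):
        self.generator_function = generator_function
        self.args = args
        self.kwargs = kwargs

    def __iter__(self):
        return self.generator_function(*self.args, **self.kwargs)

def pages(n):
    # Stand-in for a paginated API query.
    yield from range(n)

container = GeneratorContainer(pages, 3)
first = list(container)
second = list(container)  # safe: a new generator is built each time
print(first, second)  # -> [0, 1, 2] [0, 1, 2]
```

A bare generator would be exhausted after the first `list()` call; rebuilding it in `__iter__` is what makes the container reusable.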
A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n email(basestring): The e-mail address of the person to be found.\n displayName(basestring): The complete or beginning portion of\n the displayName to be searched.\n id(basestring): List people by ID. Accepts up to 85 person IDs\n separated by commas.\n orgId(basestring): The organization ID.\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the people returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(id, basestring)\n check_type(email, basestring)\n check_type(displayName, basestring)\n check_type(orgId, basestring)\n check_type(max, int)\n\n params = dict_from_items_with_values(\n request_parameters,\n id=id,\n email=email,\n displayName=displayName,\n orgId=orgId,\n max=max,\n )\n\n # API request - get items\n items = self._session.get_items(API_ENDPOINT, params=params)\n\n # Yield person objects created from the returned items JSON objects\n for item in items:\n yield self._object_factory(OBJECT_TYPE, item)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create(self, emails, displayName=None, firstName=None, lastName=None,\n avatar=None, orgId=None, roles=None, licenses=None,\n **request_parameters):\n \"\"\"Create a new user account for a given organization\n\n Only an admin can create a new user account.\n\n Args:\n emails(`list`): Email address(es) of the person (list of strings).\n displayName(basestring): Full name of the person.\n 
firstName(basestring): First name of the person.\n lastName(basestring): Last name of the person.\n avatar(basestring): URL to the person's avatar in PNG format.\n orgId(basestring): ID of the organization to which this\n person belongs.\n roles(`list`): Roles of the person (list of strings containing\n the role IDs to be assigned to the person).\n licenses(`list`): Licenses allocated to the person (list of\n strings - containing the license IDs to be allocated to the\n person).\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Person: A Person object with the details of the created person.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(emails, list, may_be_none=False)\n check_type(displayName, basestring)\n check_type(firstName, basestring)\n check_type(lastName, basestring)\n check_type(avatar, basestring)\n check_type(orgId, basestring)\n check_type(roles, list)\n check_type(licenses, list)\n\n post_data = dict_from_items_with_values(\n request_parameters,\n emails=emails,\n displayName=displayName,\n firstName=firstName,\n lastName=lastName,\n avatar=avatar,\n orgId=orgId,\n roles=roles,\n licenses=licenses,\n )\n\n # API request\n json_data = self._session.post(API_ENDPOINT, json=post_data)\n\n # Return a person object created from the returned JSON object\n return self._object_factory(OBJECT_TYPE, json_data)", "response": "Create a new user account for a given organization."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget a person s details by ID.", "response": "def get(self, personId):\n \"\"\"Get a person's details, by ID.\n\n Args:\n personId(basestring): The ID of the person to be retrieved.\n\n Returns:\n Person: A Person object with the details of the requested person.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n 
ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(personId, basestring, may_be_none=False)\n\n # API request\n json_data = self._session.get(API_ENDPOINT + '/' + personId)\n\n # Return a person object created from the response JSON data\n return self._object_factory(OBJECT_TYPE, json_data)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef update(self, personId, emails=None, displayName=None, firstName=None,\n lastName=None, avatar=None, orgId=None, roles=None,\n licenses=None, **request_parameters):\n \"\"\"Update details for a person, by ID.\n\n Only an admin can update a person's details.\n\n Email addresses for a person cannot be changed via the Webex Teams API.\n\n Include all details for the person. This action expects all user\n details to be present in the request. A common approach is to first GET\n the person's details, make changes, then PUT both the changed and\n unchanged values.\n\n Args:\n personId(basestring): The person ID.\n emails(`list`): Email address(es) of the person (list of strings).\n displayName(basestring): Full name of the person.\n firstName(basestring): First name of the person.\n lastName(basestring): Last name of the person.\n avatar(basestring): URL to the person's avatar in PNG format.\n orgId(basestring): ID of the organization to which this\n person belongs.\n roles(`list`): Roles of the person (list of strings containing\n the role IDs to be assigned to the person).\n licenses(`list`): Licenses allocated to the person (list of\n strings - containing the license IDs to be allocated to the\n person).\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Person: A Person object with the updated details.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(emails, list)\n 
check_type(displayName, basestring)\n check_type(firstName, basestring)\n check_type(lastName, basestring)\n check_type(avatar, basestring)\n check_type(orgId, basestring)\n check_type(roles, list)\n check_type(licenses, list)\n\n put_data = dict_from_items_with_values(\n request_parameters,\n emails=emails,\n displayName=displayName,\n firstName=firstName,\n lastName=lastName,\n avatar=avatar,\n orgId=orgId,\n roles=roles,\n licenses=licenses,\n )\n\n # API request\n json_data = self._session.put(API_ENDPOINT + '/' + personId,\n json=put_data)\n\n # Return a person object created from the returned JSON object\n return self._object_factory(OBJECT_TYPE, json_data)", "response": "Update details for a person by ID."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete a person from the system.", "response": "def delete(self, personId):\n \"\"\"Remove a person from the system.\n\n Only an admin can remove a person.\n\n Args:\n personId(basestring): The ID of the person to be deleted.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(personId, basestring, may_be_none=False)\n\n # API request\n self._session.delete(API_ENDPOINT + '/' + personId)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the details of the person accessing the API.", "response": "def me(self):\n \"\"\"Get the details of the person accessing the API.\n\n Raises:\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n # API request\n json_data = self._session.get(API_ENDPOINT + '/me')\n\n # Return a person object created from the response JSON data\n return self._object_factory(OBJECT_TYPE, json_data)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef list(self, **request_parameters):\n # API request - get items\n items = self._session.get_items(\n API_ENDPOINT,\n 
params=request_parameters\n )\n\n # Yield role objects created from the returned JSON objects\n for item in items:\n yield self._object_factory(OBJECT_TYPE, item)", "response": "List all roles.\n\n Args:\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the roles returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nlisting teams to which the authenticated user belongs. This method supports Webex Teams's implementation of RFC5988 Web Linking to provide pagination support. It returns a generator container that incrementally yields all teams returned by the query. The generator will automatically request additional 'pages' of responses from Webex as needed until all responses have been returned. The container makes the generator safe for reuse. A new API call will be made, using the same parameters that were specified when the generator was created, every time a new iterator is requested from the container. Args: max(int): Limit the maximum number of items returned from the Webex Teams service per request. **request_parameters: Additional request parameters (provides support for parameters that may be added in the future). Returns: GeneratorContainer: A GeneratorContainer which, when iterated, yields the teams returned by the Webex Teams query. Raises: TypeError: If the parameter types are incorrect. ApiError: If the Webex Teams cloud returns an error.", "response": "def list(self, max=None, **request_parameters):\n \"\"\"List teams to which the authenticated user belongs.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. 
It returns a generator\n container that incrementally yields all teams returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the teams returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(max, int)\n\n params = dict_from_items_with_values(\n request_parameters,\n max=max,\n )\n\n # API request - get items\n items = self._session.get_items(API_ENDPOINT, params=params)\n\n # Yield team objects created from the returned items JSON objects\n for item in items:\n yield self._object_factory(OBJECT_TYPE, item)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create(self, name, **request_parameters):\n check_type(name, basestring, may_be_none=False)\n\n post_data = dict_from_items_with_values(\n request_parameters,\n name=name,\n )\n\n # API request\n json_data = self._session.post(API_ENDPOINT, json=post_data)\n\n # Return a team object created from the response JSON data\n return self._object_factory(OBJECT_TYPE, json_data)", "response": "Create a new team."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nupdates the details for a team by ID.", "response": "def update(self, teamId, name=None, 
**request_parameters):\n \"\"\"Update details for a team, by ID.\n\n Args:\n teamId(basestring): The team ID.\n name(basestring): A user-friendly name for the team.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n Team: A Team object with the updated Webex Teams team details.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(teamId, basestring, may_be_none=False)\n check_type(name, basestring)\n\n put_data = dict_from_items_with_values(\n request_parameters,\n name=name,\n )\n\n # API request\n json_data = self._session.put(API_ENDPOINT + '/' + teamId,\n json=put_data)\n\n # Return a team object created from the response JSON data\n return self._object_factory(OBJECT_TYPE, json_data)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndeletes a team, by ID.", "response": "def delete(self, teamId):\n \"\"\"Delete a team.\n\n Args:\n teamId(basestring): The ID of the team to be deleted.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(teamId, basestring, may_be_none=False)\n\n # API request\n self._session.delete(API_ENDPOINT + '/' + teamId)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlist events in the Webex Teams organization.", "response": "def list(self, resource=None, type=None, actorId=None, _from=None, to=None,\n max=None, **request_parameters):\n \"\"\"List events.\n\n List events in your organization. Several query parameters are\n available to filter the response.\n\n Note: `from` is a keyword in Python and may not be used as a variable\n name, so we had to use `_from` instead.\n\n This method supports Webex Teams's implementation of RFC5988 Web\n Linking to provide pagination support. 
It returns a generator\n container that incrementally yields all events returned by the\n query. The generator will automatically request additional 'pages' of\n responses from Webex as needed until all responses have been returned.\n The container makes the generator safe for reuse. A new API call will\n be made, using the same parameters that were specified when the\n generator was created, every time a new iterator is requested from the\n container.\n\n Args:\n resource(basestring): Limit results to a specific resource type.\n Possible values: \"messages\", \"memberships\".\n type(basestring): Limit results to a specific event type. Possible\n values: \"created\", \"updated\", \"deleted\".\n actorId(basestring): Limit results to events performed by this\n person, by ID.\n _from(basestring): Limit results to events which occurred after a\n date and time, in ISO8601 format (yyyy-MM-dd'T'HH:mm:ss.SSSZ).\n to(basestring): Limit results to events which occurred before a\n date and time, in ISO8601 format (yyyy-MM-dd'T'HH:mm:ss.SSSZ).\n max(int): Limit the maximum number of items returned from the Webex\n Teams service per request.\n **request_parameters: Additional request parameters (provides\n support for parameters that may be added in the future).\n\n Returns:\n GeneratorContainer: A GeneratorContainer which, when iterated,\n yields the events returned by the Webex Teams query.\n\n Raises:\n TypeError: If the parameter types are incorrect.\n ApiError: If the Webex Teams cloud returns an error.\n\n \"\"\"\n check_type(resource, basestring)\n check_type(type, basestring)\n check_type(actorId, basestring)\n check_type(_from, basestring)\n check_type(to, basestring)\n check_type(max, int)\n\n params = dict_from_items_with_values(\n request_parameters,\n resource=resource,\n type=type,\n actorId=actorId,\n _from=_from,\n to=to,\n max=max,\n )\n\n if _from:\n params[\"from\"] = params.pop(\"_from\")\n\n # API request - get items\n items = 
self._session.get_items(API_ENDPOINT, params=params)\n\n # Yield event objects created from the returned items JSON objects\n for item in items:\n yield self._object_factory(OBJECT_TYPE, item)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _serialize(cls, data):\n if hasattr(data, \"__hash__\") and callable(data.__hash__):\n # If the data is already hashable (should be immutable) return it\n return data\n elif isinstance(data, list):\n # Freeze the elements of the list and return as a tuple\n return tuple((cls._serialize(item) for item in data))\n elif isinstance(data, dict):\n # Freeze the elements of the dictionary, sort them, and return\n # them as a list of tuples\n key_value_tuples = [\n (key, cls._serialize(value))\n for key, value in data.items()\n ]\n key_value_tuples.sort()\n return tuple(key_value_tuples)\n else:\n raise TypeError(\n \"Unable to freeze {} data type.\".format(type(data))\n )", "response": "Serialize data to a frozen tuple."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get(self, client_id, client_secret, code, redirect_uri):\n check_type(client_id, basestring, may_be_none=False)\n check_type(client_secret, basestring, may_be_none=False)\n check_type(code, basestring, may_be_none=False)\n check_type(redirect_uri, basestring, may_be_none=False)\n\n post_data = dict_from_items_with_values(\n grant_type=\"authorization_code\",\n client_id=client_id,\n client_secret=client_secret,\n code=code,\n redirect_uri=redirect_uri,\n )\n\n # API request\n response = requests.post(self._endpoint_url, data=post_data,\n **self._request_kwargs)\n check_response_code(response, EXPECTED_RESPONSE_CODE['POST'])\n json_data = extract_and_parse_json(response)\n\n # Return an access_token object created from the response JSON data\n return self._object_factory(OBJECT_TYPE, json_data)", "response": "Get an access token from the Webex API."} 
{"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef lastActivity(self):\n last_activity = self._json_data.get('lastActivity')\n if last_activity:\n return WebexTeamsDateTime.strptime(last_activity)\n else:\n return None", "response": "The date and time of the person's last activity."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef post_events_service(request):\n\n # Get the POST data sent from Webex Teams\n json_data = request.json\n log.info(\"\\n\")\n log.info(\"WEBHOOK POST RECEIVED:\")\n log.info(json_data)\n log.info(\"\\n\")\n\n # Create a Webhook object from the JSON data\n webhook_obj = Webhook(json_data)\n\n # Get the room details\n room = api.rooms.get(webhook_obj.data.roomId)\n\n # Get the message details\n message = api.messages.get(webhook_obj.data.id)\n\n # Get the sender's details\n person = api.people.get(message.personId)\n\n log.info(\"NEW MESSAGE IN ROOM '{}'\".format(room.title))\n log.info(\"FROM '{}'\".format(person.displayName))\n log.info(\"MESSAGE '{}'\\n\".format(message.text))\n\n # This is a VERY IMPORTANT loop prevention control step.\n # If you respond to all messages... 
You will respond to the messages\n # that the bot posts and thereby create a loop condition.\n me = api.people.me()\n if message.personId == me.id:\n # Message was sent by me (bot); do not respond.\n return {'Message': 'OK'}\n\n else:\n # Message was sent by someone else; parse message and respond.\n if \"/CAT\" in message.text:\n log.info(\"FOUND '/CAT'\")\n\n # Get a cat fact\n catfact = get_catfact()\n log.info(\"SENDING CAT FACT'{}'\".format(catfact))\n\n # Post the fact to the room where the request was received\n api.messages.create(room.id, text=catfact)\n return {'Message': 'OK'}", "response": "Respond to inbound webhook JSON HTTP POST from Webex Teams."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the ngrok public URL from the local client API.", "response": "def get_ngrok_public_url():\r\n \"\"\"Get the ngrok public HTTP URL from the local client API.\"\"\"\r\n try:\r\n response = requests.get(url=NGROK_CLIENT_API_BASE_URL + \"/tunnels\",\r\n headers={'content-type': 'application/json'})\r\n response.raise_for_status()\r\n\r\n except requests.exceptions.RequestException:\r\n print(\"Could not connect to the ngrok client API; \"\r\n \"assuming not running.\")\r\n return None\r\n\r\n else:\r\n for tunnel in response.json()[\"tunnels\"]:\r\n if tunnel.get(\"public_url\", \"\").startswith(\"http://\"):\r\n print(\"Found ngrok public HTTP URL:\", tunnel[\"public_url\"])\r\n return tunnel[\"public_url\"]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_webhooks_with_name(api, name):\r\n for webhook in api.webhooks.list():\r\n if webhook.name == name:\r\n print(\"Deleting Webhook:\", webhook.name, webhook.targetUrl)\r\n api.webhooks.delete(webhook.id)", "response": "Delete a webhook with the given name."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_ngrok_webhook(api, ngrok_public_url):\r\n 
print(\"Creating Webhook...\")\r\n webhook = api.webhooks.create(\r\n name=WEBHOOK_NAME,\r\n targetUrl=urljoin(ngrok_public_url, WEBHOOK_URL_SUFFIX),\r\n resource=WEBHOOK_RESOURCE,\r\n event=WEBHOOK_EVENT,\r\n )\r\n print(webhook)\r\n print(\"Webhook successfully created.\")\r\n return webhook", "response": "Create a Webex Teams webhook pointing to the public ngrok URL."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main():\r\n api = WebexTeamsAPI()\r\n delete_webhooks_with_name(api, name=WEBHOOK_NAME)\r\n public_url = get_ngrok_public_url()\r\n if public_url is not None:\r\n create_ngrok_webhook(api, public_url)", "response": "Delete any existing webhook with the configured name and recreate it pointing to the current ngrok public URL."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef read(self):\n\t\tdata = [0]*6\n\t\tfor i in range(6):\n\t\t\tdata[i] = random.uniform(-2048, 2048)\n\t\taccel = data[:3]\n\t\tmag = data[3:]\n\t\treturn (accel, mag)", "response": "Reads the accel and mag from the device."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef console():\n\n parser = argparse.ArgumentParser(description=console.__doc__)\n parser.add_argument('--device', default='/dev/ttyUSB0',\n help='port to read DSMR data from')\n parser.add_argument('--host', default=None,\n help='alternatively connect using TCP host.')\n parser.add_argument('--port', default=None,\n help='TCP port to use for connection')\n parser.add_argument('--version', default='2.2', choices=['2.2', '4'],\n help='DSMR version (2.2, 4)')\n parser.add_argument('--verbose', '-v', action='count')\n\n args = parser.parse_args()\n\n if args.verbose:\n level = logging.DEBUG\n else:\n level = logging.ERROR\n logging.basicConfig(level=level)\n\n loop = asyncio.get_event_loop()\n\n def print_callback(telegram):\n \"\"\"Callback that prints telegram values.\"\"\"\n for obiref, obj in telegram.items():\n if 
obj:\n print(obj.value, obj.unit)\n print()\n\n # create tcp or serial connection depending on args\n if args.host and args.port:\n create_connection = partial(create_tcp_dsmr_reader,\n args.host, args.port, args.version,\n print_callback, loop=loop)\n else:\n create_connection = partial(create_dsmr_reader,\n args.device, args.version,\n print_callback, loop=loop)\n\n try:\n # connect and keep connected until interrupted by ctrl-c\n while True:\n # create serial or tcp connection\n conn = create_connection()\n transport, protocol = loop.run_until_complete(conn)\n # wait until connection is closed\n loop.run_until_complete(protocol.wait_closed())\n # wait 5 seconds before attempting reconnect\n loop.run_until_complete(asyncio.sleep(5))\n except KeyboardInterrupt:\n # cleanup connection after user initiated shutdown\n transport.close()\n loop.run_until_complete(asyncio.sleep(0))\n finally:\n loop.close()", "response": "Output DSMR data to console."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef read(self):\n with serial.Serial(**self.serial_settings) as serial_handle:\n while True:\n data = serial_handle.readline()\n self.telegram_buffer.append(data.decode('ascii'))\n\n for telegram in self.telegram_buffer.get_all():\n try:\n yield self.telegram_parser.parse(telegram)\n except InvalidChecksumError as e:\n logger.warning(str(e))\n except ParseError as e:\n logger.error('Failed to parse telegram: %s', e)", "response": "Reads complete DSMR telegrams from the serial interface and parses them into MbusObjects and CosemObjects."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nread complete DSMR telegrams from the serial interface, parse them into CosemObjects and MbusObjects, and push them to a queue for asynchronous processing.", "response": "def read(self, queue):\n \"\"\"\n Read complete DSMR telegrams from the serial interface and parse them\n into CosemObject's and 
MbusObject's.\n\n Instead of being a generator, values are pushed to provided queue for\n asynchronous processing.\n\n :rtype: None\n \"\"\"\n # create Serial StreamReader\n conn = serial_asyncio.open_serial_connection(**self.serial_settings)\n reader, _ = yield from conn\n\n while True:\n # Read line if available or give control back to loop until new\n # data has arrived.\n data = yield from reader.readline()\n self.telegram_buffer.append(data.decode('ascii'))\n\n for telegram in self.telegram_buffer.get_all():\n try:\n # Push new parsed telegram onto queue.\n queue.put_nowait(\n self.telegram_parser.parse(telegram)\n )\n except ParseError as e:\n logger.warning('Failed to parse telegram: %s', e)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_dsmr_protocol(dsmr_version, telegram_callback, loop=None):\n\n if dsmr_version == '2.2':\n specification = telegram_specifications.V2_2\n serial_settings = SERIAL_SETTINGS_V2_2\n elif dsmr_version == '4':\n specification = telegram_specifications.V4\n serial_settings = SERIAL_SETTINGS_V4\n elif dsmr_version == '5':\n specification = telegram_specifications.V5\n serial_settings = SERIAL_SETTINGS_V5\n else:\n raise NotImplementedError(\"No telegram parser found for version: %s\",\n dsmr_version)\n\n protocol = partial(DSMRProtocol, loop, TelegramParser(specification),\n telegram_callback=telegram_callback)\n\n return protocol, serial_settings", "response": "Creates a DSMR asyncio protocol."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create_dsmr_reader(port, dsmr_version, telegram_callback, loop=None):\n protocol, serial_settings = create_dsmr_protocol(\n dsmr_version, telegram_callback, loop=None)\n serial_settings['url'] = port\n\n conn = create_serial_connection(loop, protocol, **serial_settings)\n return conn", "response": "Creates a DSMR asyncio protocol coroutine using serial 
port."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_tcp_dsmr_reader(host, port, dsmr_version,\n telegram_callback, loop=None):\n \"\"\"Creates a DSMR asyncio protocol coroutine using TCP connection.\"\"\"\n protocol, _ = create_dsmr_protocol(\n dsmr_version, telegram_callback, loop=None)\n conn = loop.create_connection(protocol, host, port)\n return conn", "response": "Creates a TCP DSMR reader coroutine using TCP connection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef data_received(self, data):\n data = data.decode('ascii')\n self.log.debug('received data: %s', data)\n self.telegram_buffer.append(data)\n\n for telegram in self.telegram_buffer.get_all():\n self.handle_telegram(telegram)", "response": "Add incoming data to the internal buffer."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nstops when connection is lost.", "response": "def connection_lost(self, exc):\n \"\"\"Stop when connection is lost.\"\"\"\n if exc:\n self.log.exception('disconnected due to exception')\n else:\n self.log.info('disconnected because of close/abort.')\n self._closed.set()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef handle_telegram(self, telegram):\n self.log.debug('got telegram: %s', telegram)\n\n try:\n parsed_telegram = self.telegram_parser.parse(telegram)\n except InvalidChecksumError as e:\n self.log.warning(str(e))\n except ParseError:\n self.log.exception(\"failed to parse telegram\")\n else:\n self.telegram_callback(parsed_telegram)", "response": "Parse the telegram and send it to the callback."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse(self, telegram_data):\n\n if self.apply_checksum_validation \\\n and 
self.telegram_specification['checksum_support']:\n self.validate_checksum(telegram_data)\n\n telegram = {}\n\n for signature, parser in self.telegram_specification['objects'].items():\n match = re.search(signature, telegram_data, re.DOTALL)\n\n # Some signatures are optional and may not be present,\n # so only parse lines that match\n if match:\n telegram[signature] = parser.parse(match.group(0))\n\n return telegram", "response": "Parse telegram from string to dict."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validate_checksum(telegram):\n\n # Extract the part for which the checksum applies.\n checksum_contents = re.search(r'\\/.+\\!', telegram, re.DOTALL)\n\n # Extract the hexadecimal checksum value itself.\n # The line ending '\\r\\n' for the checksum line can be ignored.\n checksum_hex = re.search(r'((?<=\\!)[0-9A-Z]{4})+', telegram)\n\n if not checksum_contents or not checksum_hex:\n raise ParseError(\n 'Failed to perform CRC validation because the telegram is '\n 'incomplete. The checksum and/or content values are missing.'\n )\n\n calculated_crc = CRC16().calculate(checksum_contents.group(0))\n expected_crc = int(checksum_hex.group(0), base=16)\n\n if calculated_crc != expected_crc:\n raise InvalidChecksumError(\n \"Invalid telegram. 
The CRC checksum '{}' does not match the \"\n \"expected '{}'\".format(\n calculated_crc,\n expected_crc\n )\n )", "response": "Validate the CRC of the telegram."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _remove(self, telegram):\n # Remove data leading up to the telegram and the telegram itself.\n index = self._buffer.index(telegram) + len(telegram)\n\n self._buffer = self._buffer[index:]", "response": "Remove a telegram from the internal buffer and incomplete data preceding it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the version of the package from the given file by executing it and extracting the given name.", "response": "def get_version(file, name='__version__'):\n \"\"\"Get the version of the package from the given file by\n executing it and extracting the given `name`.\n \"\"\"\n path = os.path.realpath(file)\n version_ns = {}\n with io.open(path, encoding=\"utf8\") as f:\n exec(f.read(), {}, version_ns)\n return version_ns[name]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef ensure_python(specs):\n if not isinstance(specs, (list, tuple)):\n specs = [specs]\n v = sys.version_info\n part = '%s.%s' % (v.major, v.minor)\n for spec in specs:\n if part == spec:\n return\n try:\n if eval(part + spec):\n return\n except SyntaxError:\n pass\n raise ValueError('Python version %s unsupported' % part)", "response": "Ensures that a list of range specifiers for python are compatible."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfind all of the packages in the specified directory.", "response": "def find_packages(top=HERE):\n \"\"\"\n Find all of the packages.\n \"\"\"\n packages = []\n for d, dirs, _ in os.walk(top, followlinks=True):\n if os.path.exists(pjoin(d, '__init__.py')):\n packages.append(os.path.relpath(d, top).replace(os.path.sep, '.'))\n elif d 
!= top:\n # Do not look for packages in subfolders if current is not a package\n dirs[:] = []\n return packages"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a command class with the given prerelease class.", "response": "def create_cmdclass(prerelease_cmd=None, package_data_spec=None,\n data_files_spec=None):\n \"\"\"Create a command class with the given optional prerelease class.\n\n Parameters\n ----------\n prerelease_cmd: (name, Command) tuple, optional\n The command to run before releasing.\n package_data_spec: dict, optional\n A dictionary whose keys are the dotted package names and\n whose values are a list of glob patterns.\n data_files_spec: list, optional\n A list of (path, dname, pattern) tuples where the path is the\n `data_files` install path, dname is the source directory, and the\n pattern is a glob pattern.\n\n Notes\n -----\n We use specs so that we can find the files *after* the build\n command has run.\n\n The package data glob patterns should be relative paths from the package\n folder containing the __init__.py file, which is given as the package\n name.\n e.g. `dict(foo=['./bar/*', './baz/**'])`\n\n The data files directories should be absolute paths or relative paths\n from the root directory of the repository. Data files are specified\n differently from `package_data` because we need a separate path entry\n for each nested folder in `data_files`, and this makes it easier to\n parse.\n e.g. 
`('share/foo/bar', 'pkgname/bizz, '*')`\n \"\"\"\n wrapped = [prerelease_cmd] if prerelease_cmd else []\n if package_data_spec or data_files_spec:\n wrapped.append('handle_files')\n wrapper = functools.partial(_wrap_command, wrapped)\n handle_files = _get_file_handler(package_data_spec, data_files_spec)\n\n if 'bdist_egg' in sys.argv:\n egg = wrapper(bdist_egg, strict=True)\n else:\n egg = bdist_egg_disabled\n\n cmdclass = dict(\n build_py=wrapper(build_py, strict=is_repo),\n bdist_egg=egg,\n sdist=wrapper(sdist, strict=True),\n handle_files=handle_files,\n )\n\n if bdist_wheel:\n cmdclass['bdist_wheel'] = wrapper(bdist_wheel, strict=True)\n\n cmdclass['develop'] = wrapper(develop, strict=True)\n return cmdclass"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a command that calls the given function.", "response": "def command_for_func(func):\n \"\"\"Create a command that calls the given function.\"\"\"\n\n class FuncCommand(BaseCommand):\n\n def run(self):\n func()\n update_package_data(self.distribution)\n\n return FuncCommand"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run(cmd, **kwargs):\n log.info('> ' + list2cmdline(cmd))\n kwargs.setdefault('cwd', HERE)\n kwargs.setdefault('shell', os.name == 'nt')\n if not isinstance(cmd, (list, tuple)) and os.name != 'nt':\n cmd = shlex.split(cmd)\n cmd[0] = which(cmd[0])\n return subprocess.check_call(cmd, **kwargs)", "response": "Echo a command before running it. 
Defaults to repo as cwd"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the newest or oldest mtime for all files in a directory.", "response": "def recursive_mtime(path, newest=True):\n \"\"\"Gets the newest/oldest mtime for all files in a directory.\"\"\"\n if os.path.isfile(path):\n return mtime(path)\n current_extreme = None\n for dirname, dirnames, filenames in os.walk(path, topdown=False):\n for filename in filenames:\n mt = mtime(pjoin(dirname, filename))\n if newest: # Put outside of loop?\n if mt >= (current_extreme or mt):\n current_extreme = mt\n elif mt <= (current_extreme or mt):\n current_extreme = mt\n return current_extreme"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef ensure_targets(targets):\n\n class TargetsCheck(BaseCommand):\n def run(self):\n if skip_npm:\n log.info('Skipping target checks')\n return\n missing = [t for t in targets if not os.path.exists(t)]\n if missing:\n raise ValueError(('missing files: %s' % missing))\n\n return TargetsCheck", "response": "Returns a Command that checks that certain files exist."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _wrap_command(cmds, cls, strict=True):\n class WrappedCommand(cls):\n\n def run(self):\n if not getattr(self, 'uninstall', None):\n try:\n [self.run_command(cmd) for cmd in cmds]\n except Exception:\n if strict:\n raise\n else:\n pass\n # update package data\n update_package_data(self.distribution)\n\n result = cls.run(self)\n return result\n return WrappedCommand", "response": "Wrap a setup command in a new command class."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_file_handler(package_data_spec, data_files_spec):\n class FileHandler(BaseCommand):\n\n def run(self):\n package_data = self.distribution.package_data\n package_spec = 
package_data_spec or dict()\n\n for (key, patterns) in package_spec.items():\n package_data[key] = _get_package_data(key, patterns)\n\n self.distribution.data_files = _get_data_files(\n data_files_spec, self.distribution.data_files\n )\n\n return FileHandler", "response": "Get a file handler command."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nexpand data file specs into valid data files metadata.", "response": "def _get_data_files(data_specs, existing):\n \"\"\"Expand data file specs into valid data files metadata.\n\n Parameters\n ----------\n data_specs: list of tuples\n See [createcmdclass] for description.\n existing: list of tuples\n The existing distribution data_files metadata.\n\n Returns\n -------\n A valid list of data_files items.\n \"\"\"\n # Extract the existing data files into a staging object.\n file_data = defaultdict(list)\n for (path, files) in existing or []:\n file_data[path] = files\n\n # Extract the files and assign them to the proper data\n # files path.\n for (path, dname, pattern) in data_specs or []:\n dname = dname.replace(os.sep, '/')\n offset = len(dname) + 1\n\n files = _get_files(pjoin(dname, pattern))\n for fname in files:\n # Normalize the path.\n root = os.path.dirname(fname)\n full_path = '/'.join([path, root[offset:]])\n if full_path.endswith('/'):\n full_path = full_path[:-1]\n file_data[full_path].append(fname)\n\n # Construct the data files spec.\n data_files = []\n for (path, files) in file_data.items():\n data_files.append((path, files))\n return data_files"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nexpand file patterns to a list of package_data paths.", "response": "def _get_package_data(root, file_patterns=None):\n \"\"\"Expand file patterns to a list of `package_data` paths.\n\n Parameters\n -----------\n root: str\n The relative path to the package root from `HERE`.\n file_patterns: list or str, optional\n A list of glob patterns for the data file 
locations.\n The globs can be recursive if they include a `**`.\n They should be relative paths from the root or\n absolute paths. If not given, all files will be used.\n\n Note:\n Files in `node_modules` are ignored.\n \"\"\"\n if file_patterns is None:\n file_patterns = ['*']\n return _get_files(file_patterns, pjoin(HERE, root))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _compile_pattern(pat, ignore_case=True):\n if isinstance(pat, bytes):\n pat_str = pat.decode('ISO-8859-1')\n res_str = _translate_glob(pat_str)\n res = res_str.encode('ISO-8859-1')\n else:\n res = _translate_glob(pat)\n flags = re.IGNORECASE if ignore_case else 0\n return re.compile(res, flags=flags).match", "response": "Translate and compile a glob pattern to a regular expression matcher."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _iexplode_path(path):\n (head, tail) = os.path.split(path)\n if not head or (not tail and head == path):\n if head:\n yield head\n if tail or not head:\n yield tail\n return\n for p in _iexplode_path(head):\n yield p\n yield tail", "response": "Iterate over all the parts of a path."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _translate_glob(pat):\n translated_parts = []\n for part in _iexplode_path(pat):\n translated_parts.append(_translate_glob_part(part))\n os_sep_class = '[%s]' % re.escape(SEPARATORS)\n res = _join_translated(translated_parts, os_sep_class)\n return '{res}\\\\Z(?ms)'.format(res=res)", "response": "Translate a glob PATTERN to a regular expression."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _join_translated(translated_parts, os_sep_class):\n res = ''\n for part in translated_parts[:-1]:\n if part == '.*':\n # drop separator, since it is optional\n # (** matches ZERO or more dirs)\n res += 
part\n else:\n res += part + os_sep_class\n\n if translated_parts[-1] == '.*':\n # Final part is **\n res += '.+'\n # Follow stdlib/git convention of matching all sub files/directories:\n res += '({os_sep_class}?.*)?'.format(os_sep_class=os_sep_class)\n else:\n res += translated_parts[-1]\n return res", "response": "Join translated glob pattern parts."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _translate_glob_part(pat):\n # Code modified from Python 3 standard lib fnmatch:\n if pat == '**':\n return '.*'\n i, n = 0, len(pat)\n res = []\n while i < n:\n c = pat[i]\n i = i + 1\n if c == '*':\n # Match anything but path separators:\n res.append('[^%s]*' % SEPARATORS)\n elif c == '?':\n res.append('[^%s]?' % SEPARATORS)\n elif c == '[':\n j = i\n if j < n and pat[j] == '!':\n j = j + 1\n if j < n and pat[j] == ']':\n j = j + 1\n while j < n and pat[j] != ']':\n j = j + 1\n if j >= n:\n res.append('\\\\[')\n else:\n stuff = pat[i:j].replace('\\\\', '\\\\\\\\')\n i = j + 1\n if stuff[0] == '!':\n stuff = '^' + stuff[1:]\n elif stuff[0] == '^':\n stuff = '\\\\' + stuff\n res.append('[%s]' % stuff)\n else:\n res.append(re.escape(c))\n return ''.join(res)", "response": "Translate a glob PATTERN PART to a regular expression."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef truncate(self, table):\n truncate_sql, serial_key_sql = super(PostgresDbWriter, self).truncate(table)\n self.execute(truncate_sql)\n if serial_key_sql:\n self.execute(serial_key_sql)", "response": "Send DDL to truncate the specified table."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write_table(self, table):\n table_sql, serial_key_sql = super(PostgresDbWriter, self).write_table(table)\n for sql in serial_key_sql + table_sql:\n self.execute(sql)", "response": "Send DDL to create the 
specified table."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write_indexes(self, table):\n index_sql = super(PostgresDbWriter, self).write_indexes(table)\n for sql in index_sql:\n self.execute(sql)", "response": "Send DDL to create the specified table indexes\n\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsending DDL to create the specified table triggers", "response": "def write_triggers(self, table):\n \"\"\"Send DDL to create the specified `table` triggers\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n\n Returns None\n \"\"\"\n index_sql = super(PostgresDbWriter, self).write_triggers(table)\n for sql in index_sql:\n self.execute(sql)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef write_constraints(self, table):\n constraint_sql = super(PostgresDbWriter, self).write_constraints(table)\n for sql in constraint_sql:\n self.execute(sql)", "response": "Send DDL to create the specified table constraints\n\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef write_contents(self, table, reader):\n f = self.FileObjFaker(table, reader.read(table), self.process_row, self.verbose)\n self.copy_from(f, '\"%s\"' % table.name, ['\"%s\"' % c['name'] for c in table.columns])", "response": "Write the contents of table to self."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nexamine a row from MySQL and alters the values when necessary to be compatible with PostgreSQL.", "response": "def process_row(self, table, row):\n \"\"\"Examines row data from MySQL and alters\n the values when necessary to be compatible with\n sending to PostgreSQL via the copy command\n \"\"\"\n for index, column in 
enumerate(table.columns):\n            hash_key = hash(frozenset(column.items()))\n            column_type = self.column_types[hash_key] if hash_key in self.column_types else self.column_type(column)\n            if row[index] is None and ('timestamp' not in column_type or not column['default']):\n                row[index] = '\\\\N'\n            elif row[index] is None and column['default']:\n                if self.tz:\n                    row[index] = '1970-01-01T00:00:00.000000' + self.tz_offset\n                else:\n                    row[index] = '1970-01-01 00:00:00'\n            elif 'bit' in column_type:\n                row[index] = bin(ord(row[index]))[2:]\n            elif isinstance(row[index], str):\n                if column_type == 'bytea':\n                    row[index] = Binary(row[index]).getquoted()[1:-8] if row[index] else row[index]\n                elif 'text[' in column_type:\n                    row[index] = '{%s}' % ','.join('\"%s\"' % v.replace('\"', r'\\\"') for v in row[index].split(','))\n                else:\n                    row[index] = row[index].replace('\\\\', r'\\\\').replace('\\n', r'\\n').replace(\n                        '\\t', r'\\t').replace('\\r', r'\\r').replace('\\0', '')\n            elif column_type == 'boolean':\n                # We got here because you used a tinyint(1), if you didn't want a bool, don't use that type\n                row[index] = 't' if row[index] not in (None, 0) else 'f' if row[index] == 0 else row[index]\n            elif isinstance(row[index], (date, datetime)):\n                if isinstance(row[index], datetime) and self.tz:\n                    try:\n                        if row[index].tzinfo:\n                            row[index] = row[index].astimezone(self.tz).isoformat()\n                        else:\n                            row[index] = datetime(*row[index].timetuple()[:6], tzinfo=self.tz).isoformat()\n                    except Exception as e:\n                        print(e)\n                else:\n                    row[index] = row[index].isoformat()\n            elif isinstance(row[index], timedelta):\n                row[index] = datetime.utcfromtimestamp(_get_total_seconds(row[index])).time().isoformat()\n            else:\n                row[index] = AsIs(row[index]).getquoted()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef truncate(self, table):\n    truncate_sql, serial_key_sql = super(PostgresFileWriter, self).truncate(table)\n    self.f.write(\"\"\"\n-- TRUNCATE 
%(table_name)s;\n%(truncate_sql)s\n\"\"\" % {'table_name': table.name, 'truncate_sql': truncate_sql})\n\n if serial_key_sql:\n self.f.write(\"\"\"\n%(serial_key_sql)s\n\"\"\" % {\n 'serial_key_sql': serial_key_sql})", "response": "Write DDL to truncate the specified table."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nwrite DDL to create the specified table.", "response": "def write_table(self, table):\n \"\"\"Write DDL to create the specified `table`.\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n\n Returns None\n \"\"\"\n table_sql, serial_key_sql = super(PostgresFileWriter, self).write_table(table)\n if serial_key_sql:\n self.f.write(\"\"\"\n%(serial_key_sql)s\n\"\"\" % {\n 'serial_key_sql': '\\n'.join(serial_key_sql)\n })\n\n self.f.write(\"\"\"\n-- Table: %(table_name)s\n%(table_sql)s\n\"\"\" % {\n 'table_name': table.name,\n 'table_sql': '\\n'.join(table_sql),\n })"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef write_indexes(self, table):\n self.f.write('\\n'.join(super(PostgresFileWriter, self).write_indexes(table)))", "response": "Write DDL of table indexes to the output file\n\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nwrite DDL of table constraints to the output file", "response": "def write_constraints(self, table):\n \"\"\"Write DDL of `table` constraints to the output file\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n\n Returns None\n \"\"\"\n self.f.write('\\n'.join(super(PostgresFileWriter, self).write_constraints(table)))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nwrites TRIGGERs existing on table to the output file.", "response": "def 
write_triggers(self, table):\n \"\"\"Write TRIGGERs existing on `table` to the output file\n\n :Parameters:\n - `table`: an instance of a :py:class:`mysql2pgsql.lib.mysql_reader.MysqlReader.Table` object that represents the table to read/write.\n\n Returns None\n \"\"\"\n self.f.write('\\n'.join(super(PostgresFileWriter, self).write_triggers(table)))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef write_contents(self, table, reader):\n # start variable optimiztions\n pr = self.process_row\n f_write = self.f.write\n verbose = self.verbose\n # end variable optimiztions\n\n f_write(\"\"\"\n--\n-- Data for Name: %(table_name)s; Type: TABLE DATA;\n--\n\nCOPY \"%(table_name)s\" (%(column_names)s) FROM stdin;\n\"\"\" % {\n 'table_name': table.name,\n 'column_names': ', '.join(('\"%s\"' % col['name']) for col in table.columns)})\n if verbose:\n tt = time.time\n start_time = tt()\n prev_val_len = 0\n prev_row_count = 0\n for i, row in enumerate(reader.read(table), 1):\n row = list(row)\n pr(table, row)\n try:\n f_write(u'%s\\n' % (u'\\t'.join(row)))\n except UnicodeDecodeError:\n f_write(u'%s\\n' % (u'\\t'.join(r.decode('utf-8') for r in row)))\n if verbose:\n if (i % 20000) == 0:\n now = tt()\n elapsed = now - start_time\n val = '%.2f rows/sec [%s] ' % ((i - prev_row_count) / elapsed, i)\n print_row_progress('%s%s' % ((\"\\b\" * prev_val_len), val))\n prev_val_len = len(val) + 3\n start_time = now\n prev_row_count = i\n\n f_write(\"\\\\.\\n\\n\")\n if verbose:\n print('')", "response": "Writes the contents of the table to the output file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse_fntdata(_data, _config, _extra_data_receiver=None):\n\tdata = {}\n\tframe_data_list = []\n\n\tparse_common_info = parse(\"common lineHeight={line_height:d} base={base:d} scaleW={scale_w:d} scaleH={scale_h:d} pages={pages:d} packed={packed:d}\", 
_data[1])\n\tparse_page_info = parse(\"page id={id:d} file=\\\"{file}\\\"\", _data[2])\n\tparse_char_count = parse(\"chars count={count:d}\", _data[3])\n\traw_frames_data = {}\n\tfor index in range(0, parse_char_count[\"count\"]):\n\t\tparse_frame = parse(\"char id={id:d} x={x:d} y={y:d} width={width:d} height={height:d} xoffset={xoffset:d} yoffset={yoffset:d} xadvance={xadvance:d} page={page:d} chnl={chnl:d} letter=\\\"{letter}\\\"\", _data[index + 4])\n\n\t\tframe_data = {}\n\t\tframe_data[\"name\"] = \"{prefix}_{id}.png\".format(prefix=_config[\"prefix\"], id=parse_frame[\"id\"], letter=parse_frame[\"letter\"])\n\t\tframe_data[\"source_size\"] = (parse_frame[\"width\"], parse_frame[\"height\"])\n\t\tframe_data[\"rotated\"] = False\n\t\tframe_data[\"src_rect\"] = (parse_frame[\"x\"], parse_frame[\"y\"], parse_frame[\"x\"] + parse_frame[\"width\"], parse_frame[\"y\"] + parse_frame[\"height\"])\n\t\tframe_data[\"offset\"] = (0, 0)\n\n\t\tif parse_frame[\"width\"] <= 0 or parse_frame[\"height\"] <= 0:\n\t\t\tcontinue\n\n\t\tframe_data_list.append(frame_data)\n\n\t\tparse_frame_named_data = parse_frame.named.copy()\n\t\tparse_frame_named_data[\"texture\"] = frame_data[\"name\"]\n\n\t\traw_frames_data[parse_frame[\"id\"]] = parse_frame_named_data\n\n\tdata[\"texture\"] = parse_page_info[\"file\"]\n\tdata[\"frames\"] = frame_data_list\n\n\tif _extra_data_receiver is not None:\n\t\t_extra_data_receiver[\"common\"] = parse_common_info.named\n\t\t_extra_data_receiver[\"frames\"] = raw_frames_data\n\n\treturn data", "response": "Parse the FNT data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _pvr_head(_data):\n    return {\n        \"sig\": _data[:4],\n        \"compression_type\": struct.unpack(\"H\", _data[4:6])[0],\n        \"version\": struct.unpack(\"H\", _data[6:8])[0],\n        \"reserved\": struct.unpack(\"I\", _data[8:12])[0],\n        \"len\": struct.unpack(\"I\", _data[12:16])[0],\n    }", "response": "Parse the PVR header."} 
{"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns an approximate number of queued tasks in the queue.", "response": "def qsize(self, extra_predicate=None):\n \"\"\" Return an approximate number of queued tasks in the queue. \"\"\"\n count = self._query_queued('COUNT(*) AS count', extra_predicate=extra_predicate)\n return count[0].count"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nstart a new task handler from the queue.", "response": "def start(self, block=False, timeout=None, retry_interval=0.5, extra_predicate=None):\n \"\"\"\n Retrieve a task handler from the queue.\n\n If block is True, this function will block until it is able to retrieve a task.\n If block is True and timeout is a number it will block for at most seconds.\n retry_interval is the maximum time in seconds between successive retries.\n\n extra_predicate\n If extra_predicate is defined, it should be a tuple of (raw_predicate, predicate_args)\n raw_predicate will be prefixed by AND, and inserted into the WHERE condition in the queries.\n predicate_args will be sql escaped and formatted into raw_predicate.\n \"\"\"\n start = time.time()\n while 1:\n task_handler = self._dequeue_task(extra_predicate)\n if task_handler is None and block:\n if timeout is not None and (time.time() - start) > timeout:\n break\n time.sleep(retry_interval * (random.random() + 0.1))\n else:\n break\n return task_handler"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _build_extra_predicate(self, extra_predicate):\n if extra_predicate is None:\n return ''\n\n # if they don't have a supported format seq, wrap it for them\n if not isinstance(extra_predicate[1], (list, dict, tuple)):\n extra_predicate = [extra_predicate[0], (extra_predicate[1], )]\n\n extra_predicate = database.escape_query(*extra_predicate)\n\n return 'AND (' + extra_predicate + ')'", 
"response": "Build a WHERE clause for the extra predicate."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef simplejson_datetime_serializer(obj):\n if hasattr(obj, 'isoformat'):\n return obj.isoformat()\n else:\n raise TypeError('Object of type %s with value of %s is not JSON serializable' % (type(obj), repr(obj)))", "response": "This function is used to serialize dates and datetimes to ISO strings."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef reconnect(self):\n conn = _mysql.connect(**self._db_args)\n if conn is not None:\n self.close()\n self._db = conn", "response": "Closes the existing database connection and re - opens it."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef query(self, query, *parameters, **kwparameters):\n return self._query(query, parameters, kwparameters)", "response": "Query the database and return the rows or affected rows."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get(self, query, *parameters, **kwparameters):\n rows = self._query(query, parameters, kwparameters)\n if not rows:\n return None\n elif not isinstance(rows, list):\n raise MySQLError(\"Query is not a select query\")\n elif len(rows) > 1:\n raise MySQLError(\"Multiple rows returned for Database.get() query\")\n else:\n return rows[0]", "response": "Returns the first row returned for the given query."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef execute(self, query, *parameters, **kwparameters):\n return self.execute_lastrowid(query, *parameters, **kwparameters)", "response": "Executes the given query returning the lastrowid from the query."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following 
Python 3 code\ndef execute_lastrowid(self, query, *parameters, **kwparameters):\n self._execute(query, parameters, kwparameters)\n self._result = self._db.store_result()\n return self._db.insert_id()", "response": "Executes the given query returning the lastrowid from the query."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_connection(db=DATABASE):\n return database.connect(host=HOST, port=PORT, user=USER, password=PASSWORD, database=db)", "response": "Returns a new connection to the database."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run_benchmark():\n\n stopping = threading.Event()\n workers = [ InsertWorker(stopping) for _ in range(NUM_WORKERS) ]\n\n print('Launching %d workers' % NUM_WORKERS)\n\n [ worker.start() for worker in workers ]\n time.sleep(WORKLOAD_TIME)\n\n print('Stopping workload')\n\n stopping.set()\n [ worker.join() for worker in workers ]\n\n with get_connection() as conn:\n count = conn.get(\"SELECT COUNT(*) AS count FROM %s\" % TABLE).count\n\n print(\"%d rows inserted using %d workers\" % (count, NUM_WORKERS))\n print(\"%.1f rows per second\" % (count / float(WORKLOAD_TIME)))", "response": "Runs a set of InsertWorkers and record their performance."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconnects to the pool", "response": "def _pool_connect(self, agg):\n \"\"\" `agg` should be (host, port)\n Returns a live connection from the connection pool\n \"\"\"\n return self._pool.connect(agg[0], agg[1], self._user, self._password, self._database)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _connect(self):\n with self._lock:\n if self._aggregator:\n try:\n return self._pool_connect(self._aggregator)\n except PoolConnectionException:\n self._aggregator = None\n\n if not 
len(self._aggregators):\n with self._pool_connect(self._primary_aggregator) as conn:\n self._update_aggregator_list(conn)\n conn.expire()\n\n random.shuffle(self._aggregators)\n\n last_exception = None\n for aggregator in self._aggregators:\n self.logger.debug('Attempting connection with %s:%s' % (aggregator[0], aggregator[1]))\n\n try:\n conn = self._pool_connect(aggregator)\n # connection successful!\n self._aggregator = aggregator\n return conn\n except PoolConnectionException as e:\n # connection error\n last_exception = e\n else:\n # bad news bears... try again later\n self._aggregator = None\n self._aggregators = []\n\n raise last_exception", "response": "Returns an aggregator connection."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef lookup_by_number(errno):\n for key, val in globals().items():\n if errno == val:\n print(key)", "response": "Used for development only"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the number of connections and fairies in the pool.", "response": "def size(self):\n \"\"\" Returns the number of connections cached by the pool. 
\"\"\"\n return sum(q.qsize() for q in self._connections.values()) + len(self._fairies)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbuilds a simple expression ready to be added onto another query.", "response": "def simple_expression(joiner=', ', **fields):\n \"\"\" Build a simple expression ready to be added onto another query.\n\n >>> simple_expression(joiner=' AND ', name='bob', role='admin')\n \"`name`=%(_QB_name)s AND `name`=%(_QB_role)s\", { '_QB_name': 'bob', '_QB_role': 'admin' }\n \"\"\"\n expression, params = [], {}\n\n for field_name, value in sorted(fields.items(), key=lambda kv: kv[0]):\n key = '_QB_%s' % field_name\n expression.append('`%s`=%%(%s)s' % (field_name, key))\n params[key] = value\n\n return joiner.join(expression), params"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef update(table_name, **fields):\n prefix = \"UPDATE `%s` SET \" % table_name\n sets, params = simple_expression(', ', **fields)\n return prefix + sets, params", "response": "Build a update query.\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef ping(self):\n with self._db_conn() as conn:\n affected_rows = conn.query('''\n UPDATE %s\n SET last_contact=%%s\n WHERE id = %%s AND lock_hash = %%s\n ''' % self._manager.table_name, datetime.utcnow(), self._lock_id, self._lock_hash)\n\n return bool(affected_rows == 1)", "response": "Check if the lock is still active."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef connect(self, host='127.0.0.1', port=3306, user='root', password='', database=None):\n\n if database is None:\n raise exceptions.RequiresDatabase()\n\n self._db_args = { 'host': host, 'port': port, 'user': user, 'password': password, 'database': database }\n with self._db_conn() as conn:\n conn.query('SELECT 1')\n return self", 
"response": "Connect to the database specified"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef setup(self):\n with self._db_conn() as conn:\n for table_defn in self._tables.values():\n conn.execute(table_defn)\n return self", "response": "Initialize the required tables in the database."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndestroys the SQLStepQueue tables in the database.", "response": "def destroy(self):\n \"\"\" Destroy the SQLStepQueue tables in the database \"\"\"\n with self._db_conn() as conn:\n for table_name in self._tables:\n conn.execute('DROP TABLE IF EXISTS %s' % table_name)\n return self"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn True if the tables have been setup False otherwise.", "response": "def ready(self):\n \"\"\" Returns True if the tables have been setup, False otherwise \"\"\"\n with self._db_conn() as conn:\n tables = [row.t for row in conn.query('''\n SELECT table_name AS t FROM information_schema.tables\n WHERE table_schema=%s\n ''', self._db_args['database'])]\n return all([table_name in tables for table_name in self._tables])"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck to see if we are still active.", "response": "def valid(self):\n \"\"\" Check to see if we are still active. 
\"\"\"\n if self.finished is not None:\n return False\n\n with self._db_conn() as conn:\n row = conn.get('''\n SELECT (last_contact > %%(now)s - INTERVAL %%(ttl)s SECOND) AS valid\n FROM %s\n WHERE\n id = %%(task_id)s\n AND execution_id = %%(execution_id)s\n ''' % self._queue.table_name,\n now=datetime.utcnow(),\n ttl=self._queue.execution_ttl,\n task_id=self.task_id,\n execution_id=self.execution_id)\n\n return bool(row is not None and row.valid)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nnotify the queue that this task is still active.", "response": "def ping(self):\n \"\"\" Notify the queue that this task is still active. \"\"\"\n if self.finished is not None:\n raise AlreadyFinished()\n\n with self._db_conn() as conn:\n success = conn.query('''\n UPDATE %s\n SET\n last_contact=%%(now)s,\n update_count=update_count + 1\n WHERE\n id = %%(task_id)s\n AND execution_id = %%(execution_id)s\n AND last_contact > %%(now)s - INTERVAL %%(ttl)s SECOND\n ''' % self._queue.table_name,\n now=datetime.utcnow(),\n task_id=self.task_id,\n execution_id=self.execution_id,\n ttl=self._queue.execution_ttl)\n\n if success != 1:\n raise TaskDoesNotExist()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nload all the datetime isoformats into datetimes", "response": "def _load_steps(self, raw_steps):\n \"\"\" load steps -> basically load all the datetime isoformats into datetimes \"\"\"\n for step in raw_steps:\n if 'start' in step:\n step['start'] = parser.parse(step['start'])\n if 'stop' in step:\n step['stop'] = parser.parse(step['stop'])\n return raw_steps"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndisconnect from the websocket connection and joins the Thread.", "response": "def disconnect(self):\n \"\"\"Disconnects from the websocket connection and joins the Thread.\n\n :return:\n \"\"\"\n self.log.debug(\"disconnect(): Disconnecting from API..\")\n 
self.reconnect_required.clear()\n self.disconnect_called.set()\n if self.socket:\n self.socket.close()\n self.join(timeout=1)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef reconnect(self):\n # Reconnect attempt at self.reconnect_interval\n self.log.debug(\"reconnect(): Initialzion reconnect sequence..\")\n self.connected.clear()\n self.reconnect_required.set()\n if self.socket:\n self.socket.close()", "response": "Issues a reconnection by setting the reconnect_required event."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a websocket connection.", "response": "def _connect(self):\n \"\"\"Creates a websocket connection.\n\n :return:\n \"\"\"\n self.log.debug(\"_connect(): Initializing Connection..\")\n self.socket = websocket.WebSocketApp(\n self.url,\n on_open=self._on_open,\n on_message=self._on_message,\n on_error=self._on_error,\n on_close=self._on_close\n )\n\n if 'ca_certs' not in self.sslopt.keys():\n ssl_defaults = ssl.get_default_verify_paths()\n self.sslopt['ca_certs'] = ssl_defaults.cafile\n\n self.log.debug(\"_connect(): Starting Connection..\")\n self.socket.run_forever(sslopt=self.sslopt,\n http_proxy_host=self.http_proxy_host,\n http_proxy_port=self.http_proxy_port,\n http_proxy_auth=self.http_proxy_auth,\n http_no_proxy=self.http_no_proxy)\n\n # stop outstanding ping/pong timers\n self._stop_timers()\n while self.reconnect_required.is_set():\n if not self.disconnect_called.is_set():\n self.log.info(\"Attempting to connect again in %s seconds.\"\n % self.reconnect_interval)\n self.state = \"unavailable\"\n time.sleep(self.reconnect_interval)\n\n # We need to set this flag since closing the socket will\n # set it to False\n self.socket.keep_running = True\n self.socket.sock = None\n self.socket.run_forever(sslopt=self.sslopt,\n http_proxy_host=self.http_proxy_host,\n http_proxy_port=self.http_proxy_port,\n http_proxy_auth=self.http_proxy_auth,\n 
http_no_proxy=self.http_no_proxy)\n else:\n break"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nhandle and passes received data to the appropriate handlers.", "response": "def _on_message(self, ws, message):\n \"\"\"Handles and passes received data to the appropriate handlers.\n\n :return:\n \"\"\"\n self._stop_timers()\n\n raw, received_at = message, time.time()\n self.log.debug(\"_on_message(): Received new message %s at %s\",\n raw, received_at)\n try:\n data = json.loads(raw)\n except json.JSONDecodeError:\n # Something wrong with this data, log and discard\n return\n\n # Handle data\n if isinstance(data, dict):\n # This is a system message\n self._system_handler(data, received_at)\n else:\n # This is a list of data\n if data[1] == 'hb':\n self._heartbeat_handler()\n else:\n self._data_handler(data, received_at)\n\n # We've received data, reset timers\n self._start_timers()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nstops ping pong and connection timers.", "response": "def _stop_timers(self):\n \"\"\"Stops ping, pong and connection timers.\n\n :return:\n \"\"\"\n if self.ping_timer:\n self.ping_timer.cancel()\n\n if self.connection_timer:\n self.connection_timer.cancel()\n\n if self.pong_timer:\n self.pong_timer.cancel()\n self.log.debug(\"_stop_timers(): Timers stopped.\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsends a ping to the API and starts pong timers.", "response": "def send_ping(self):\n \"\"\"Sends a ping message to the API and starts pong timers.\n\n :return:\n \"\"\"\n self.log.debug(\"send_ping(): Sending ping to API..\")\n self.socket.send(json.dumps({'event': 'ping'}))\n self.pong_timer = Timer(self.pong_timeout, self._check_pong)\n self.pong_timer.start()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking if a Pong message was received in time.", "response": "def 
_check_pong(self):\n \"\"\"Checks if a Pong message was received.\n\n :return:\n \"\"\"\n self.pong_timer.cancel()\n if self.pong_received:\n self.log.debug(\"_check_pong(): Pong received in time.\")\n self.pong_received = False\n else:\n # reconnect\n self.log.debug(\"_check_pong(): Pong not received in time.\"\n \"Issuing reconnect..\")\n self.reconnect()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef send(self, api_key=None, secret=None, list_data=None, auth=False, **kwargs):\n if auth:\n nonce = str(int(time.time() * 10000000))\n auth_string = 'AUTH' + nonce\n auth_sig = hmac.new(secret.encode(), auth_string.encode(),\n hashlib.sha384).hexdigest()\n\n payload = {'event': 'auth', 'apiKey': api_key, 'authSig': auth_sig,\n 'authPayload': auth_string, 'authNonce': nonce}\n payload = json.dumps(payload)\n elif list_data:\n payload = json.dumps(list_data)\n else:\n payload = json.dumps(kwargs)\n self.log.debug(\"send(): Sending payload to API: %s\", payload)\n try:\n self.socket.send(payload)\n except websocket.WebSocketConnectionClosedException:\n self.log.error(\"send(): Did not send out payload %s - client not connected. 
\", kwargs)", "response": "Sends the given Payload to the API via the websocket connection."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npass data up to the client via a Queue.", "response": "def pass_to_client(self, event, data, *args):\n \"\"\"Passes data up to the client via a Queue().\n\n :param event:\n :param data:\n :param args:\n :return:\n \"\"\"\n self.q.put((event, data, *args))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _unpause(self):\n self.log.debug(\"_unpause(): Clearing paused() Flag!\")\n self.paused.clear()\n self.log.debug(\"_unpause(): Re-subscribing softly..\")\n self._resubscribe(soft=True)", "response": "Unpauses the connection.\n\n Send a message up to client that he should re-subscribe to all\n channels.\n\n :return:"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _system_handler(self, data, ts):\n self.log.debug(\"_system_handler(): Received a system message: %s\", data)\n # Unpack the data\n event = data.pop('event')\n if event == 'pong':\n self.log.debug(\"_system_handler(): Distributing %s to _pong_handler..\",\n data)\n self._pong_handler()\n elif event == 'info':\n self.log.debug(\"_system_handler(): Distributing %s to _info_handler..\",\n data)\n self._info_handler(data)\n elif event == 'error':\n self.log.debug(\"_system_handler(): Distributing %s to _error_handler..\",\n data)\n self._error_handler(data)\n elif event in ('subscribed', 'unsubscribed', 'conf', 'auth', 'unauth'):\n self.log.debug(\"_system_handler(): Distributing %s to \"\n \"_response_handler..\", data)\n self._response_handler(event, data, ts)\n else:\n self.log.error(\"Unhandled event: %s, data: %s\", event, data)", "response": "Handles system messages from the server."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
_response_handler(self, event, data, ts):\n self.log.debug(\"_response_handler(): Passing %s to client..\", data)\n self.pass_to_client(event, data, ts)", "response": "Handle responses to unsubscribe and conf commands."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _info_handler(self, data):\n\n def raise_exception():\n \"\"\"Log info code as error and raise a ValueError.\"\"\"\n self.log.error(\"%s: %s\", data['code'], info_message[data['code']])\n raise ValueError(\"%s: %s\" % (data['code'], info_message[data['code']]))\n\n if 'code' not in data and 'version' in data:\n self.log.info('Initialized Client on API Version %s', data['version'])\n return\n\n info_message = {20000: 'Invalid User given! Please make sure the given ID is correct!',\n 20051: 'Stop/Restart websocket server '\n '(please try to reconnect)',\n 20060: 'Refreshing data from the trading engine; '\n 'please pause any acivity.',\n 20061: 'Done refreshing data from the trading engine.'\n ' Re-subscription advised.'}\n\n codes = {20051: self.reconnect, 20060: self._pause,\n 20061: self._unpause}\n\n if 'version' in data:\n self.log.info(\"API version: %i\", data['version'])\n return\n\n try:\n self.log.info(info_message[data['code']])\n codes[data['code']]()\n except KeyError as e:\n self.log.exception(e)\n self.log.error(\"Unknown Info code %s!\", data['code'])\n raise", "response": "Handle INFO messages from the API and issues relevant actions."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _error_handler(self, data):\n errors = {10000: 'Unknown event',\n 10001: 'Generic error',\n 10008: 'Concurrency error',\n 10020: 'Request parameters error',\n 10050: 'Configuration setup failed',\n 10100: 'Failed authentication',\n 10111: 'Error in authentication request payload',\n 10112: 'Error in authentication request signature',\n 10113: 'Error in authentication request 
encryption',\n 10114: 'Error in authentication request nonce',\n 10200: 'Error in un-authentication request',\n 10300: 'Subscription Failed (generic)',\n 10301: 'Already Subscribed',\n 10302: 'Unknown channel',\n 10400: 'Subscription Failed (generic)',\n 10401: 'Not subscribed',\n 11000: 'Not ready, try again later',\n 20000: 'User is invalid!',\n 20051: 'Websocket server stopping',\n 20060: 'Websocket server resyncing',\n 20061: 'Websocket server resync complete'\n }\n try:\n self.log.error(errors[data['code']])\n except KeyError:\n self.log.error(\"Received unknown error Code in message %s! \"\n \"Reconnecting..\", data)", "response": "Handle error messages and log them accordingly."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nhandle data messages by passing them up to the client.", "response": "def _data_handler(self, data, ts):\n \"\"\"Handles data messages by passing them up to the client.\n\n :param data:\n :param ts:\n :return:\n \"\"\"\n # Pass the data up to the Client\n self.log.debug(\"_data_handler(): Passing %s to client..\",\n data)\n self.pass_to_client('data', data, ts)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _resubscribe(self, soft=False):\n # Restore non-default Bitfinex websocket configuration\n if self.bitfinex_config:\n self.send(**self.bitfinex_config)\n q_list = []\n while True:\n try:\n identifier, q = self.channel_configs.popitem(last=True if soft else False)\n except KeyError:\n break\n q_list.append((identifier, q.copy()))\n if identifier == 'auth':\n self.send(**q, auth=True)\n continue\n if soft:\n q['event'] = 'unsubscribe'\n self.send(**q)\n\n # Resubscribe for soft start.\n if soft:\n for identifier, q in reversed(q_list):\n self.channel_configs[identifier] = q\n self.send(**q)\n else:\n for identifier, q in q_list:\n self.channel_configs[identifier] = q", "response": "Resubscribes to all channels found in self. 
channel_configs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef join(self, timeout=None):\n self._stopped.set()\n super(QueueProcessor, self).join(timeout=timeout)", "response": "Set sentinel for run method and join thread."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nhandles responses to subscribe command. Registers a channel id with the client and assigns a data handler to it.", "response": "def _handle_subscribed(self, dtype, data, ts,):\n \"\"\"Handles responses to subscribe() commands.\n\n Registers a channel id with the client and assigns a data handler to it.\n\n :param dtype:\n :param data:\n :param ts:\n :return:\n \"\"\"\n self.log.debug(\"_handle_subscribed: %s - %s - %s\", dtype, data, ts)\n channel_name = data.pop('channel')\n channel_id = data.pop('chanId')\n config = data\n\n if 'pair' in config:\n symbol = config['pair']\n if symbol.startswith('t'):\n symbol = symbol[1:]\n elif 'symbol' in config:\n symbol = config['symbol']\n if symbol.startswith('t'):\n symbol = symbol[1:]\n elif 'key' in config:\n symbol = config['key'].split(':')[2][1:] #layout type:interval:tPair\n else:\n symbol = None\n\n if 'prec' in config and config['prec'].startswith('R'):\n channel_name = 'raw_' + channel_name\n\n self.channel_handlers[channel_id] = self._data_handlers[channel_name]\n\n # Create a channel_name, symbol tuple to identify channels of same type\n if 'key' in config:\n identifier = (channel_name, symbol, config['key'].split(':')[1])\n else:\n identifier = (channel_name, symbol)\n self.channel_handlers[channel_id] = identifier\n self.channel_directory[identifier] = channel_id\n self.channel_directory[channel_id] = identifier\n self.log.info(\"Subscription successful for channel %s\", identifier)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nhandling responses to unsubscribe() commands.", "response": "def 
_handle_unsubscribed(self, dtype, data, ts):\n \"\"\"Handles responses to unsubscribe() commands.\n\n Removes a channel id from the client.\n\n :param dtype:\n :param data:\n :param ts:\n :return:\n \"\"\"\n self.log.debug(\"_handle_unsubscribed: %s - %s - %s\", dtype, data, ts)\n channel_id = data.pop('chanId')\n\n # Unregister the channel from all internal attributes\n chan_identifier = self.channel_directory.pop(channel_id)\n self.channel_directory.pop(chan_identifier)\n self.channel_handlers.pop(channel_id)\n self.last_update.pop(channel_id)\n self.log.info(\"Successfully unsubscribed from %s\", chan_identifier)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _handle_auth(self, dtype, data, ts):\n # Contains keys status, chanId, userId, caps\n if dtype == 'unauth':\n raise NotImplementedError\n channel_id = data.pop('chanId')\n user_id = data.pop('userId')\n\n identifier = ('auth', user_id)\n self.channel_handlers[identifier] = channel_id\n self.channel_directory[identifier] = channel_id\n self.channel_directory[channel_id] = identifier", "response": "Handles authentication responses.\n\n :param dtype:\n :param data:\n :param ts:\n :return:"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _handle_conf(self, dtype, data, ts):\n self.log.debug(\"_handle_conf: %s - %s - %s\", dtype, data, ts)\n self.log.info(\"Configuration accepted: %s\", dtype)\n return", "response": "Handles configuration messages.\n\n :param dtype:\n :param data:\n :param ts:\n :return:"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nupdates the timestamp for the given channel id.", "response": "def update_timestamps(self, chan_id, ts):\n \"\"\"Updates the timestamp for the given channel id.\n\n :param chan_id:\n :param ts:\n :return:\n \"\"\"\n try:\n self.last_update[chan_id] = ts\n except KeyError:\n self.log.warning(\"Attempted 
ts update of channel %s, but channel \"\n \"not present anymore.\",\n self.channel_directory[chan_id])"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nhandling account related data.", "response": "def _handle_account(self, data, ts):\n \"\"\" Handles Account related data.\n\n translation table for channel names:\n Data Channels\n os - Orders\n hos - Historical Orders\n ps - Positions\n hts - Trades (snapshot)\n te - Trade Event\n tu - Trade Update\n ws - Wallets\n bu - Balance Info\n miu - Margin Info\n fiu - Funding Info\n fos - Offers\n hfos - Historical Offers\n fcs - Credits\n hfcs - Historical Credits\n fls - Loans\n hfls - Historical Loans\n htfs - Funding Trades\n n - Notifications (WIP)\n\n :param dtype:\n :param data:\n :param ts:\n :return:\n \"\"\"\n # channel_short, data\n chan_id, channel_short_name, *data = data\n entry = (channel_short_name, data, ts)\n self.account.put(entry)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _handle_ticker(self, dtype, data, ts):\n self.log.debug(\"_handle_ticker: %s - %s - %s\", dtype, data, ts)\n channel_id, *data = data\n channel_identifier = self.channel_directory[channel_id]\n\n entry = (data, ts)\n self.tickers[channel_identifier].put(entry)", "response": "Adds received ticker data to self. 
tickers dict."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _handle_book(self, dtype, data, ts):\n self.log.debug(\"_handle_book: %s - %s - %s\", dtype, data, ts)\n channel_id, *data = data\n log.debug(\"ts: %s\\tchan_id: %s\\tdata: %s\", ts, channel_id, data)\n channel_identifier = self.channel_directory[channel_id]\n entry = (data, ts)\n self.books[channel_identifier].put(entry)", "response": "Handle a new order book."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nhandles a raw order book.", "response": "def _handle_raw_book(self, dtype, data, ts):\n \"\"\"Updates the raw order books stored in self.raw_books[chan_id].\n\n :param dtype:\n :param data:\n :param ts:\n :return:\n \"\"\"\n self.log.debug(\"_handle_raw_book: %s - %s - %s\", dtype, data, ts)\n channel_id, *data = data\n channel_identifier = self.channel_directory[channel_id]\n entry = (data, ts)\n self.raw_books[channel_identifier].put(entry)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nhandle trades from the channel.", "response": "def _handle_trades(self, dtype, data, ts):\n \"\"\"Files trades in self._trades[chan_id].\n\n :param dtype:\n :param data:\n :param ts:\n :return:\n \"\"\"\n self.log.debug(\"_handle_trades: %s - %s - %s\", dtype, data, ts)\n channel_id, *data = data\n channel_identifier = self.channel_directory[channel_id]\n entry = (data, ts)\n self.trades[channel_identifier].put(entry)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nstore the data received via wss in self. 
candles [ chan_id ].", "response": "def _handle_candles(self, dtype, data, ts):\n \"\"\"Stores OHLC data received via wss in self.candles[chan_id].\n\n :param dtype:\n :param data:\n :param ts:\n :return:\n \"\"\"\n self.log.debug(\"_handle_candles: %s - %s - %s\", dtype, data, ts)\n channel_id, *data = data\n channel_identifier = self.channel_directory[channel_id]\n entry = (data, ts)\n self.candles[channel_identifier].put(entry)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef reset(self):\n self.conn.reconnect()\n while not self.conn.connected.is_set():\n log.info(\"reset(): Waiting for connection to be set up..\")\n time.sleep(1)\n\n for key in self.channel_configs:\n self.conn.send(**self.channel_configs[key])", "response": "Reset the client.\n\n :return:"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a queue containing all candles data for the given symbol pair.", "response": "def candles(self, pair, timeframe=None):\n \"\"\"Return a queue containing all received candles data.\n\n :param pair: str, Symbol pair to request data for\n :param timeframe: str\n :return: Queue()\n \"\"\"\n timeframe = '1m' if not timeframe else timeframe\n key = ('candles', pair, timeframe)\n return self.queue_processor.candles[key]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsends configuration to the server", "response": "def config(self, decimals_as_strings=True, ts_as_dates=False,\n sequencing=False, ts=False, **kwargs):\n \"\"\"Send configuration to websocket server\n\n :param decimals_as_strings: bool, turn on/off decimals as strings\n :param ts_as_dates: bool, decide to request timestamps as dates instead\n :param sequencing: bool, turn on sequencing\n\t:param ts: bool, request the timestamp to be appended to every array\n sent by the server\n :param kwargs:\n :return:\n \"\"\"\n flags = 0\n if decimals_as_strings:\n flags 
+= 8\n if ts_as_dates:\n flags += 32\n if ts:\n flags += 32768\n if sequencing:\n flags += 65536\n q = {'event': 'conf', 'flags': flags}\n q.update(kwargs)\n self.conn.bitfinex_config = q\n self.conn.send(**q)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsubscribes to the passed pair's ticker channel.", "response": "def subscribe_to_ticker(self, pair, **kwargs):\n \"\"\"Subscribe to the passed pair's ticker channel.\n\n :param pair: str, Symbol pair to request data for\n :param kwargs:\n :return:\n \"\"\"\n identifier = ('ticker', pair)\n self._subscribe('ticker', identifier, symbol=pair, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef subscribe_to_order_book(self, pair, **kwargs):\n identifier = ('book', pair)\n self._subscribe('book', identifier, symbol=pair, **kwargs)", "response": "Subscribe to the passed pair's order book channel."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef unsubscribe_from_order_book(self, pair, **kwargs):\n identifier = ('book', pair)\n self._unsubscribe('book', identifier, symbol=pair, **kwargs)", "response": "Unsubscribe from the passed pair's order book channel."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsubscribe to the passed pair's raw order book channel.", "response": "def subscribe_to_raw_order_book(self, pair, prec=None, **kwargs):\n \"\"\"Subscribe to the passed pair's raw order book channel.\n\n :param pair: str, Symbol pair to request data for\n :param prec:\n :param kwargs:\n :return:\n \"\"\"\n identifier = ('raw_book', pair)\n prec = 'R0' if prec is None else prec\n self._subscribe('book', identifier, pair=pair, prec=prec, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef unsubscribe_from_raw_order_book(self, pair, prec=None, **kwargs):\n 
identifier = ('raw_book', pair)\n prec = 'R0' if prec is None else prec\n self._unsubscribe('book', identifier, pair=pair, prec=prec, **kwargs)", "response": "Unsubscribe from the passed pair's raw order book channel."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsubscribing to the passed pair's trades channel.", "response": "def subscribe_to_trades(self, pair, **kwargs):\n \"\"\"Subscribe to the passed pair's trades channel.\n\n :param pair: str, Symbol pair to request data for\n :param kwargs:\n :return:\n \"\"\"\n identifier = ('trades', pair)\n self._subscribe('trades', identifier, symbol=pair, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef subscribe_to_candles(self, pair, timeframe=None, **kwargs):\n\n valid_tfs = ['1m', '5m', '15m', '30m', '1h', '3h', '6h', '12h', '1D',\n '7D', '14D', '1M']\n if timeframe:\n if timeframe not in valid_tfs:\n raise ValueError(\"timeframe must be any of %s\" % valid_tfs)\n else:\n timeframe = '1m'\n identifier = ('candles', pair, timeframe)\n pair = 't' + pair if not pair.startswith('t') else pair\n key = 'trade:' + timeframe + ':' + pair\n self._subscribe('candles', identifier, key=key, **kwargs)", "response": "Subscribe to the passed pair's OHLC data channel."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef unsubscribe_from_candles(self, pair, timeframe=None, **kwargs):\n\n valid_tfs = ['1m', '5m', '15m', '30m', '1h', '3h', '6h', '12h', '1D',\n '7D', '14D', '1M']\n if timeframe:\n if timeframe not in valid_tfs:\n raise ValueError(\"timeframe must be any of %s\" % valid_tfs)\n else:\n timeframe = '1m'\n identifier = ('candles', pair, timeframe)\n pair = 't' + pair if not pair.startswith('t') else pair\n key = 'trade:' + timeframe + ':' + pair\n\n self._unsubscribe('candles', identifier, key=key, **kwargs)", "response": "Unsubscribe from the passed pair's OHLC 
data channel."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef authenticate(self):\n if not self.key or not self.secret:\n raise ValueError(\"Must supply both key and secret key for API!\")\n self.channel_configs['auth'] = {'api_key': self.key, 'secret': self.secret}\n self.conn.send(api_key=self.key, secret=self.secret, auth=True)", "response": "Authenticate with the Bitfinex API."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef cancel_order(self, multi=False, **order_identifiers):\n if multi:\n self._send_auth_command('oc_multi', order_identifiers)\n else:\n self._send_auth_command('oc', order_identifiers)", "response": "Cancel one or multiple orders via Websocket."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parseEnvVars():\n\n # Auth\n authKey = os.getenv(\"WIOTP_AUTH_KEY\", None)\n authToken = os.getenv(\"WIOTP_AUTH_TOKEN\", None)\n\n # Also support WIOTP_API_KEY / WIOTP_API_TOKEN usage\n if authKey is None and authToken is None:\n authKey = os.getenv(\"WIOTP_API_KEY\", None)\n authToken = os.getenv(\"WIOTP_API_TOKEN\", None)\n\n # Identity\n appId = os.getenv(\"WIOTP_IDENTITY_APPID\", str(uuid.uuid4()))\n # Options\n domain = os.getenv(\"WIOTP_OPTIONS_DOMAIN\", None)\n logLevel = os.getenv(\"WIOTP_OPTIONS_LOGLEVEL\", \"info\")\n port = os.getenv(\"WIOTP_OPTIONS_MQTT_PORT\", None)\n transport = os.getenv(\"WIOTP_OPTIONS_MQTT_TRANSPORT\", None)\n caFile = os.getenv(\"WIOTP_OPTIONS_MQTT_CAFILE\", None)\n cleanStart = os.getenv(\"WIOTP_OPTIONS_MQTT_CLEANSTART\", \"True\")\n sessionExpiry = os.getenv(\"WIOTP_OPTIONS_MQTT_SESSIONEXPIRY\", \"3600\")\n keepAlive = os.getenv(\"WIOTP_OPTIONS_MQTT_KEEPALIVE\", \"60\")\n sharedSubs = os.getenv(\"WIOTP_OPTIONS_MQTT_SHAREDSUBSCRIPTION\", \"False\")\n verifyCert = os.getenv(\"WIOTP_OPTIONS_HTTP_VERIFY\", \"True\")\n\n if port is not None:\n try:\n port = 
int(port)\n except ValueError as e:\n raise ConfigurationException(\"WIOTP_OPTIONS_MQTT_PORT must be a number\")\n\n try:\n sessionExpiry = int(sessionExpiry)\n except ValueError as e:\n raise ConfigurationException(\"WIOTP_OPTIONS_MQTT_SESSIONEXPIRY must be a number\")\n\n try:\n keepAlive = int(keepAlive)\n except ValueError as e:\n raise ConfigurationException(\"WIOTP_OPTIONS_MQTT_KEEPALIVE must be a number\")\n\n if logLevel not in [\"error\", \"warning\", \"info\", \"debug\"]:\n raise ConfigurationException(\"WIOTP_OPTIONS_LOGLEVEL must be one of error, warning, info, debug\")\n else:\n # Convert log levels from string to int (we need to upper case our strings from the config)\n logLevel = logging.getLevelName(logLevel.upper())\n\n cfg = {\n \"identity\": {\"appId\": appId},\n \"options\": {\n \"domain\": domain,\n \"logLevel\": logLevel,\n \"mqtt\": {\n \"port\": port,\n \"transport\": transport,\n \"cleanStart\": cleanStart in [\"True\", \"true\", \"1\"],\n \"sessionExpiry\": sessionExpiry,\n \"keepAlive\": keepAlive,\n \"sharedSubscription\": sharedSubs in [\"True\", \"true\", \"1\"],\n \"caFile\": caFile,\n },\n \"http\": {\"verify\": verifyCert in [\"True\", \"true\", \"1\"]},\n },\n }\n\n # Quickstart doesn't support auth, so ensure we only add this if it's defined\n if authToken is not None:\n cfg[\"auth\"] = {\"key\": authKey, \"token\": authToken}\n\n return cfg", "response": "Parses environment variables into a Python dictionary suitable for passing to the \n device client constructor as the options parameter."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing a YAML configuration file into a Python dictionary suitable for passing to the device client constructor as the options parameter.", "response": "def parseConfigFile(configFilePath):\n \"\"\"\n Parse a yaml configuration file into a Python dictionary suitable for passing to the \n device client constructor as the `options` parameter\n \n # Example Configuration File\n \n 
identity:\n appId: myApp\n auth:\n key: a-23gh56-sdsdajhjnee\n token: Ab$76s)asj8_s5\n options:\n domain: internetofthings.ibmcloud.com\n logLevel: error|warning|info|debug\n mqtt:\n port: 8883\n transport: tcp\n cleanStart: false\n sessionExpiry: 3600\n keepAlive: 60\n sharedSubscription: false\n caFile: /path/to/certificateAuthorityFile.pem\n http:\n verify: true \n \"\"\"\n\n try:\n with open(configFilePath) as f:\n data = yaml.safe_load(f)\n except (OSError, IOError) as e:\n # In 3.3, IOError became an alias for OSError, and FileNotFoundError is a subclass of OSError\n reason = \"Error reading device configuration file '%s' (%s)\" % (configFilePath, e)\n raise ConfigurationException(reason)\n\n if \"options\" in data and \"logLevel\" in data[\"options\"]:\n if data[\"options\"][\"logLevel\"] not in [\"error\", \"warning\", \"info\", \"debug\"]:\n raise ConfigurationException(\"Optional setting options.logLevel must be one of error, warning, info, debug\")\n else:\n # Convert log levels from string to int (we need to upper case our strings from the config)\n data[\"options\"][\"logLevel\"] = logging.getLevelName(data[\"options\"][\"logLevel\"].upper())\n\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _onCommand(self, client, userdata, pahoMessage):\n try:\n command = Command(pahoMessage, self._messageCodecs)\n except InvalidEventException as e:\n self.logger.critical(str(e))\n else:\n self.logger.debug(\"Received device command '%s'\" % (command.command))\n if self.commandCallback:\n self.commandCallback(command)", "response": "Internal callback for device command messages"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _onDeviceCommand(self, client, userdata, pahoMessage):\n try:\n command = Command(pahoMessage, self._messageCodecs)\n except InvalidEventException as e:\n self.logger.critical(str(e))\n else:\n 
self.logger.debug(\"Received gateway command '%s'\" % (command.command))\n if self.deviceCommandCallback:\n self.deviceCommandCallback(command)", "response": "Internal callback for gateway command messages"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef find(self, nameFilter=None, typeFilter=None, enabledFilter=None, serviceId=None):\n\n queryParms = {}\n if nameFilter:\n queryParms[\"name\"] = nameFilter\n if typeFilter:\n queryParms[\"type\"] = typeFilter\n if enabledFilter:\n queryParms[\"enabled\"] = enabledFilter\n if serviceId:\n queryParms[\"serviceId\"] = serviceId\n\n return IterableConnectorList(self._apiClient, filters=queryParms)", "response": "Gets the list of Historian connectors in a given service."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create(self, name, serviceId, timezone, description, enabled):\n\n connector = {\n \"name\": name,\n \"description\": description,\n \"serviceId\": serviceId,\n \"timezone\": timezone,\n \"enabled\": enabled,\n }\n\n url = \"api/v0002/historianconnectors\"\n\n r = self._apiClient.post(url, data=connector)\n if r.status_code == 201:\n return Connector(apiClient=self._apiClient, **r.json())\n else:\n raise ApiException(r)", "response": "This method creates a new connector for the organization in the Watson IoT Platform."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate the connector with the specified uuid. if description is empty, the existing description will be removed. Parameters: - connector (String), Connnector Id which is a UUID - name (string) - Name of the service - timezone (json object) - Should have a valid structure for the service type. 
- description (string) - description of the service - enabled (boolean) - enabled Throws APIException on failure.", "response": "def update(self, connectorId, name, description, timezone, enabled):\n \"\"\"\n Updates the connector with the specified uuid.\n if description is empty, the existing description will be removed.\n Parameters:\n - connector (String), Connnector Id which is a UUID\n - name (string) - Name of the service\n - timezone (json object) - Should have a valid structure for the service type.\n - description (string) - description of the service\n - enabled (boolean) - enabled\n Throws APIException on failure.\n\n \"\"\"\n\n url = \"api/v0002/historianconnectors/%s\" % (connectorId)\n\n connectorBody = {}\n connectorBody[\"name\"] = name\n connectorBody[\"description\"] = description\n connectorBody[\"timezone\"] = timezone\n connectorBody[\"enabled\"] = enabled\n\n r = self._apiClient.put(url, data=connectorBody)\n if r.status_code == 200:\n return Connector(apiClient=self._apiClient, **r.json())\n else:\n raise ApiException(r)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a list of services that the Watson IoT Platform can connect to.", "response": "def find(self, nameFilter=None, typeFilter=None, bindingModeFilter=None, boundFilter=None):\n \"\"\"\n Gets the list of services that the Watson IoT Platform can connect to. 
\n The list can include a mixture of services that are either bound or unbound.\n \n Parameters:\n \n - nameFilter(string) - Filter the results by the specified name\n - typeFilter(string) - Filter the results by the specified type, Available values : cloudant, eventstreams\n - bindingModeFilter(string) - Filter the results by the specified binding mode, Available values : automatic, manual\n - boundFilter(boolean) - Filter the results by the bound flag \n \n Throws APIException on failure.\n \"\"\"\n\n queryParms = {}\n if nameFilter:\n queryParms[\"name\"] = nameFilter\n if typeFilter:\n queryParms[\"type\"] = typeFilter\n if bindingModeFilter:\n queryParms[\"bindingMode\"] = bindingModeFilter\n if boundFilter:\n queryParms[\"bound\"] = boundFilter\n\n return IterableServiceBindingsList(self._apiClient, filters=queryParms)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create(self, serviceBinding):\n if not isinstance(serviceBinding, ServiceBindingCreateRequest):\n if serviceBinding[\"type\"] == \"cloudant\":\n serviceBinding = CloudantServiceBindingCreateRequest(**serviceBinding)\n elif serviceBinding[\"type\"] == \"eventstreams\":\n serviceBinding = EventStreamsServiceBindingCreateRequest(**serviceBinding)\n else:\n raise Exception(\"Unsupported service binding type\")\n\n url = \"api/v0002/s2s/services\"\n\n r = self._apiClient.post(url, data=serviceBinding)\n if r.status_code == 201:\n return ServiceBinding(**r.json())\n else:\n raise ApiException(r)", "response": "This method creates a new external service."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update(self, serviceId, serviceName, credentials, description):\n\n url = \"api/v0002/s2s/services/%s\" % (serviceId)\n\n serviceBody = {}\n serviceBody[\"name\"] = serviceName\n serviceBody[\"description\"] = description\n serviceBody[\"credentials\"] = credentials\n\n r = 
self._apiClient.put(url, data=serviceBody)\n if r.status_code == 200:\n return ServiceBinding(**r.json())\n else:\n raise ApiException(r)", "response": "This method updates the service with the specified id and credentials and description."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef create(self, deviceType):\n\n r = self._apiClient.post(\"api/v0002/device/types\", deviceType)\n\n if r.status_code == 201:\n return DeviceType(apiClient=self._apiClient, **r.json())\n else:\n raise ApiException(r)", "response": "Register one or more new device types"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npublishing an event to Watson IoT Platform.", "response": "def publishEvent(self, event, msgFormat, data, qos=0, on_publish=None):\n \"\"\"\n Publish an event to Watson IoT Platform.\n\n # Parameters\n event (string): Name of this event\n msgFormat (string): Format of the data for this event\n data (dict): Data for this event\n qos (int): MQTT quality of service level to use (`0`, `1`, or `2`)\n on_publish(function): A function that will be called when receipt \n of the publication is confirmed. 
\n \n # Callback and QoS\n The use of the optional #on_publish function has different implications depending \n on the level of qos used to publish the event: \n \n - qos 0: the client has asynchronously begun to send the event\n - qos 1 and 2: the client has confirmation of delivery from the platform\n \"\"\"\n topic = \"iot-2/evt/{event}/fmt/{msg_format}\".format(event=event, msg_format=msgFormat)\n return self._publishEvent(topic, event, msgFormat, data, qos, on_publish)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nretrieving the organization - specific status of each of the services offered by the IBM Watson IoT Platform.", "response": "def serviceStatus(self):\n \"\"\"\n Retrieve the organization-specific status of each of the services offered by the IBM Watson IoT Platform.\n In case of failure it throws APIException\n \"\"\"\n\n r = self._apiClient.get(\"api/v0002/service-status\")\n\n if r.status_code == 200:\n return ServiceStatus(**r.json())\n else:\n raise ApiException(r)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create(self, devices):\n if not isinstance(devices, list):\n listOfDevices = [devices]\n returnAsAList = False\n else:\n listOfDevices = devices\n returnAsAList = True\n\n r = self._apiClient.post(\"api/v0002/bulk/devices/add\", listOfDevices)\n\n if r.status_code in [201, 202]:\n if returnAsAList:\n responseList = []\n for entry in r.json():\n responseList.append(DeviceCreateResponse(**entry))\n return responseList\n else:\n return DeviceCreateResponse(**r.json()[0])\n else:\n raise ApiException(r)", "response": "This method creates a new device in the bulk."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef update(self, deviceUid, metadata=None, deviceInfo=None, status=None):\n\n if not isinstance(deviceUid, DeviceUid) and isinstance(deviceUid, dict):\n deviceUid = 
DeviceUid(**deviceUid)\n\n deviceUrl = \"api/v0002/device/types/%s/devices/%s\" % (deviceUid.typeId, deviceUid.deviceId)\n\n data = {\"status\": status, \"deviceInfo\": deviceInfo, \"metadata\": metadata}\n\n r = self._apiClient.put(deviceUrl, data)\n if r.status_code == 200:\n return Device(apiClient=self._apiClient, **r.json())\n else:\n raise ApiException(r)", "response": "Update an existing device in the system"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete(self, devices):\n if not isinstance(devices, list):\n listOfDevices = [devices]\n else:\n listOfDevices = devices\n\n r = self._apiClient.post(\"api/v0002/bulk/devices/remove\", listOfDevices)\n\n if r.status_code in [200, 202]:\n return r.json()\n else:\n raise ApiException(r)", "response": "This method deletes one or more devices in bulk."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef find(self, status=None, connectedAfter=None):\n queryParms = {}\n if status:\n queryParms[\"status\"] = status\n if connectedAfter:\n queryParms[\"connectedAfter\"] = connectedAfter\n\n return IterableClientStatusList(self._apiClient, filters=queryParms)", "response": "Return an iterable list of client connection statuses, optionally filtered by status and last connection time."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nlist all device management extension packages", "response": "def list(self):\n \"\"\"\n List all device management extension packages\n \"\"\"\n url = \"api/v0002/mgmt/custom/bundle\"\n r = self._apiClient.get(url)\n\n if r.status_code == 200:\n return r.json()\n else:\n raise ApiException(r)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef create(self, dmeData):\n url = \"api/v0002/mgmt/custom/bundle\"\n r = self._apiClient.post(url, dmeData)\n\n if r.status_code == 201:\n return 
r.json()\n else:\n raise ApiException(r)", "response": "Create a new device management extension package"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndeletes a device management extension package It accepts bundleId (string) as parameters In case of failure it throws APIException", "response": "def delete(self, bundleId):\n \"\"\"\n Delete a device management extension package\n It accepts bundleId (string) as parameters\n In case of failure it throws APIException\n \"\"\"\n url = \"api/v0002/mgmt/custom/bundle/%s\" % (bundleId)\n r = self._apiClient.delete(url)\n\n if r.status_code == 204:\n return True\n else:\n raise ApiException(r)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update(self, bundleId, dmeData):\n url = \"api/v0002/mgmt/custom/bundle/%s\" % (bundleId)\n r = self._apiClient.put(url, dmeData)\n\n if r.status_code == 200:\n return r.json()\n else:\n raise ApiException(r)", "response": "This method updates a device management extension package"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nregisters a new thing. 
It accepts thingTypeId (string), thingId (string), name (string), description (string), aggregatedObjects (JSON) and metadata (JSON) as parameters In case of failure it throws APIException", "response": "def registerThing(self, thingTypeId, thingId, name = None, description = None, aggregatedObjects = None, metadata=None):\n \"\"\"\n Registers a new thing.\n It accepts thingTypeId (string), thingId (string), name (string), description (string), aggregatedObjects (JSON) and metadata (JSON) as parameters\n In case of failure it throws APIException\n \"\"\"\n thingsUrl = ApiClient.thingsUrl % (self.host, thingTypeId)\n payload = {'thingId' : thingId, 'name' : name, 'description' : description, 'aggregatedObjects' : aggregatedObjects, 'metadata': metadata}\n\n r = requests.post(thingsUrl, auth=self.credentials, data=json.dumps(payload), headers = {'content-type': 'application/json'}, verify=self.verify)\n status = r.status_code\n if status == 201:\n self.logger.debug(\"Thing Instance Created\")\n return r.json()\n elif status == 400:\n raise ibmiotf.APIException(400, \"Invalid request (No body, invalid JSON, unexpected key, bad value)\", r.json())\n elif status == 401:\n raise ibmiotf.APIException(401, \"The authentication token is empty or invalid\", None)\n elif status == 403:\n raise ibmiotf.APIException(403, \"The authentication method is invalid or the API key used does not exist\", None)\n elif status == 404:\n raise ibmiotf.APIException(404, \"The thing type with specified id does not exist\", None)\n elif status == 409:\n raise ibmiotf.APIException(409, \"A thing instance with the specified id already exists\", r.json())\n elif status == 500:\n raise ibmiotf.APIException(500, \"Unexpected error\", None)\n else:\n raise ibmiotf.APIException(None, \"Unexpected error\", None)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getThing(self, thingTypeId, thingId):\n thingUrl = ApiClient.thingUrl % 
(self.host, thingTypeId, thingId)\n\n r = requests.get(thingUrl, auth=self.credentials, verify=self.verify)\n status = r.status_code\n\n if status == 200:\n self.logger.debug(\"Thing instance was successfully retrieved\")\n return r.json()\n elif status == 304:\n raise ibmiotf.APIException(304, \"The state of the thing has not been modified (response to a conditional GET).\", None)\n elif status == 401:\n raise ibmiotf.APIException(401, \"The authentication token is empty or invalid\", None)\n elif status == 403:\n raise ibmiotf.APIException(403, \"The authentication method is invalid or the api key used does not exist\", None)\n elif status == 404:\n raise ibmiotf.APIException(404, \"A thing type with the specified id, or a thing with the specified id, does not exist.\", None)\n elif status == 500:\n raise ibmiotf.APIException(500, \"Unexpected error\", None)\n else:\n raise ibmiotf.APIException(None, \"Unexpected error\", None)", "response": "Gets the details of a thing. It accepts thingTypeId (string) and thingId (string) as parameters. In case of failure it throws APIException."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting details for multiple things of a type It accepts thingTypeId (string) and parameters In case of failure it throws APIException", "response": "def getThingsForType(self, thingTypeId, parameters = None):\n \"\"\"\n Gets details for multiple things of a type\n It accepts thingTypeId (string) and parameters\n In case of failure it throws APIException\n \"\"\"\n thingsUrl = ApiClient.thingsUrl % (self.host, thingTypeId)\n\n r = requests.get(thingsUrl, auth=self.credentials, params = parameters, verify=self.verify)\n status = r.status_code\n if status == 200:\n self.logger.debug(\"List of things was successfully retrieved\")\n return r.json()\n elif status == 401:\n raise ibmiotf.APIException(401, \"The authentication token is empty or invalid\", None)\n elif status == 403:\n raise ibmiotf.APIException(403, \"The authentication method is invalid or the api key used does 
not exist\", None)\n elif status == 404:\n raise ibmiotf.APIException(404, \"The thing type does not exist\", None)\n elif status == 500:\n raise ibmiotf.APIException(500, \"Unexpected error\", None)\n else:\n raise ibmiotf.APIException(None, \"Unexpected error\", None)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef removeThing(self, thingTypeId, thingId):\n thingUrl = ApiClient.thingUrl % (self.host, thingTypeId, thingId)\n\n r = requests.delete(thingUrl, auth=self.credentials, verify=self.verify)\n status = r.status_code\n if status == 204:\n self.logger.debug(\"Thing was successfully removed\")\n return True\n elif status == 401:\n raise ibmiotf.APIException(401, \"The authentication token is empty or invalid\", None)\n elif status == 403:\n raise ibmiotf.APIException(403, \"The authentication method is invalid or the api key used does not exist\", None)\n elif status == 404:\n raise ibmiotf.APIException(404, \"A thing type or thing instance with the specified id does not exist.\", None)\n elif status == 409:\n raise ibmiotf.APIException(409, \"The thing instance is aggregated into another thing instance.\", None)\n elif status == 500:\n raise ibmiotf.APIException(500, \"Unexpected error\", None)\n else:\n raise ibmiotf.APIException(None, \"Unexpected error\", None)", "response": "This method deletes an existing thing. 
It returns True if the thing was successfully removed, False otherwise."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef addDraftThingType(self, thingTypeId, name = None, description = None, schemaId = None, metadata = None):\n draftThingTypesUrl = ApiClient.draftThingTypesUrl % (self.host)\n payload = {'id' : thingTypeId, 'name' : name, 'description' : description, 'schemaId' : schemaId, 'metadata': metadata}\n\n r = requests.post(draftThingTypesUrl, auth=self.credentials, data=json.dumps(payload), headers = {'content-type': 'application/json'}, verify=self.verify)\n status = r.status_code\n if status == 201:\n self.logger.debug(\"The draft thing type is created\")\n return r.json()\n elif status == 400:\n raise ibmiotf.APIException(400, \"Invalid request (No body, invalid JSON, unexpected key, bad value)\", r.json())\n elif status == 401:\n raise ibmiotf.APIException(401, \"The authentication token is empty or invalid\", None)\n elif status == 403:\n raise ibmiotf.APIException(403, \"The authentication method is invalid or the api key used does not exist\", None)\n elif status == 409:\n raise ibmiotf.APIException(409, \"The draft thing type already exists\", r.json())\n elif status == 500:\n raise ibmiotf.APIException(500, \"Unexpected error\", None)\n else:\n raise ibmiotf.APIException(None, \"Unexpected error\", None)", "response": "Adds a new draft thing type. It accepts thingTypeId (string), name (string), description (string), schemaId (string) and metadata (JSON) as parameters. In case of failure it throws APIException."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate a thing type. 
It accepts thingTypeId (string), name (string), description (string), schemaId (string) and metadata(JSON) as the parameters In case of failure it throws APIException", "response": "def updateDraftThingType(self, thingTypeId, name, description, schemaId, metadata = None):\n \"\"\"\n Updates a thing type.\n It accepts thingTypeId (string), name (string), description (string), schemaId (string) and metadata(JSON) as the parameters\n In case of failure it throws APIException\n \"\"\"\n draftThingTypeUrl = ApiClient.draftThingTypeUrl % (self.host, thingTypeId)\n draftThingTypeUpdate = {'name' : name, 'description' : description, 'schemaId' : schemaId, 'metadata' : metadata}\n r = requests.put(draftThingTypeUrl, auth=self.credentials, data=json.dumps(draftThingTypeUpdate), headers = {'content-type': 'application/json'}, verify=self.verify)\n status = r.status_code\n if status == 200:\n self.logger.debug(\"Thing type was successfully modified\")\n return r.json()\n elif status == 401:\n raise ibmiotf.APIException(401, \"The authentication token is empty or invalid\", None)\n elif status == 403:\n raise ibmiotf.APIException(403, \"The authentication method is invalid or the api key used does not exist\", None)\n elif status == 404:\n raise ibmiotf.APIException(404, \"The Thing type does not exist\", None)\n elif status == 409:\n raise ibmiotf.APIException(409, \"The update could not be completed due to a conflict\", r.json())\n elif status == 500:\n raise ibmiotf.APIException(500, \"Unexpected error\", None)\n else:\n raise ibmiotf.APIException(None, \"Unexpected error\", None)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nretrieving all existing draft thing types. 
It accepts an optional query parameters (Dictionary) In case of failure it throws APIException", "response": "def getDraftThingTypes(self, parameters = None):\n \"\"\"\n Retrieves all existing draft thing types.\n It accepts an optional query parameters (Dictionary)\n In case of failure it throws APIException\n \"\"\"\n draftThingTypesUrl = ApiClient.draftThingTypesUrl % (self.host)\n r = requests.get(draftThingTypesUrl, auth=self.credentials, params = parameters, verify=self.verify)\n status = r.status_code\n if status == 200:\n self.logger.debug(\"Draft thing types successfully retrieved\")\n self.logger.debug(json.dumps(r.json()))\n return r.json()\n elif status == 401:\n raise ibmiotf.APIException(401, \"The authentication token is empty or invalid\", None)\n elif status == 403:\n raise ibmiotf.APIException(403, \"The authentication method is invalid or the api key used does not exist\", None)\n elif status == 500:\n raise ibmiotf.APIException(500, \"Unexpected error\", None)\n else:\n raise ibmiotf.APIException(None, \"Unexpected error\", None)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getDraftThingType(self, thingTypeId, parameters = None):\n draftThingTypeUrl = ApiClient.draftThingTypeUrl % (self.host, thingTypeId)\n r = requests.get(draftThingTypeUrl, auth=self.credentials, params = parameters, verify=self.verify)\n status = r.status_code\n if status == 200:\n self.logger.debug(\"Draft thing type successfully retrieved\")\n self.logger.debug(json.dumps(r.json()))\n return r.json()\n elif status == 304:\n raise ibmiotf.APIException(304, \"The state of the thing type has not been modified (response to a conditional GET).\", None)\n elif status == 401:\n raise ibmiotf.APIException(401, \"The authentication token is empty or invalid\", None)\n elif status == 403:\n raise ibmiotf.APIException(403, \"The authentication method is invalid or the api key used does not 
exist\", None)\n elif status == 404:\n raise ibmiotf.APIException(404, \"A draft thing type with the specified id does not exist.\", None)\n elif status == 500:\n raise ibmiotf.APIException(500, \"Unexpected error\", None)\n else:\n raise ibmiotf.APIException(None, \"Unexpected error\", None)", "response": "This method returns all existing draft thing types. It accepts a query parameter dictionary which contains the values for the parameters parameter. It returns a dictionary containing the values for the parameters parameter."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef deleteDraftThingType(self, thingTypeId):\n draftThingTypeUrl = ApiClient.draftThingTypeUrl % (self.host, thingTypeId)\n\n r = requests.delete(draftThingTypeUrl, auth=self.credentials, verify=self.verify)\n status = r.status_code\n if status == 204:\n self.logger.debug(\"Device type was successfully deleted\")\n return True\n elif status == 401:\n raise ibmiotf.APIException(401, \"The authentication token is empty or invalid\", None)\n elif status == 403:\n raise ibmiotf.APIException(403, \"The authentication method is invalid or the api key used does not exist\", None)\n elif status == 404:\n raise ibmiotf.APIException(404, \"A thing type with the specified id does not exist.\", None)\n elif status == 409:\n raise ibmiotf.APIException(409, \"The draft thing type with the specified id is currently being referenced by another object.\", None)\n elif status == 500:\n raise ibmiotf.APIException(500, \"Unexpected error\", None)\n else:\n raise ibmiotf.APIException(None, \"Unexpected error\", None)", "response": "This method deletes a draft Thing Type. 
It returns True if the thing type was deleted False otherwise."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef createSchema(self, schemaName, schemaFileName, schemaContents, description=None):\n req = ApiClient.allSchemasUrl % (self.host, \"/draft\")\n fields={\n 'schemaFile': (schemaFileName, schemaContents, 'application/json'),\n 'schemaType': 'json-schema',\n 'name': schemaName,\n }\n if description:\n fields[\"description\"] = description\n\n multipart_data = MultipartEncoder(fields=fields)\n resp = requests.post(req, auth=self.credentials, data=multipart_data,\n headers={'Content-Type': multipart_data.content_type}, verify=self.verify)\n if resp.status_code == 201:\n self.logger.debug(\"Schema created\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error creating a schema\", resp)\n return resp.json()[\"id\"], resp.json()", "response": "Create a schema for the org."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete a schema. Parameter: schemaId (string). Throws APIException on failure.", "response": "def deleteSchema(self, schemaId):\n \"\"\"\n Delete a schema. Parameter: schemaId (string). 
Throws APIException on failure.\n \"\"\"\n req = ApiClient.oneSchemaUrl % (self.host, \"/draft\", schemaId)\n resp = requests.delete(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 204:\n self.logger.debug(\"Schema deleted\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error deleting schema\", resp)\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef updateSchema(self, schemaId, schemaDefinition):\n req = ApiClient.oneSchemaUrl % (self.host, \"/draft\", schemaId)\n body = {\"schemaDefinition\": schemaDefinition}\n resp = requests.put(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"},\n data=json.dumps(body), verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"Schema updated\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error updating schema\", resp)\n return resp.json()", "response": "Update a schema. Returns a dict with the updated schema definition."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getSchemaContent(self, schemaId, draft=False):\n if draft:\n req = ApiClient.oneSchemaContentUrl % (self.host, \"/draft\", schemaId)\n else:\n req = ApiClient.oneSchemaContentUrl % (self.host, \"\", schemaId)\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"Schema content retrieved\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting schema content\", resp)\n return resp.json()", "response": "Get the content for a schema."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef updateSchemaContent(self, schemaId, schemaFile):\n req = ApiClient.oneSchemaContentUrl % (self.host, \"/draft\", schemaId)\n body = {\"schemaFile\": schemaFile}\n resp = requests.put(req, 
auth=self.credentials, headers={\"Content-Type\":\"application/json\"},\n data=json.dumps(body), verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"Schema content updated\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error updating schema content\", resp)\n return resp.json()", "response": "Updates the content for a specific schema."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef updateEventType(self, eventTypeId, name, schemaId, description=None):\n req = ApiClient.oneEventTypesUrl % (self.host, \"/draft\", eventTypeId)\n body = {\"name\" : name, \"schemaId\" : schemaId}\n if description:\n body[\"description\"] = description\n resp = requests.put(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"},\n data=json.dumps(body), verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"event type updated\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error updating event type\", resp)\n return resp.json()", "response": "Updates an event type."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef deleteEventType(self, eventTypeId):\n req = ApiClient.oneEventTypeUrl % (self.host, \"/draft\", eventTypeId)\n resp = requests.delete(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 204:\n self.logger.debug(\"event type deleted\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error deleting an event type\", resp)\n return resp", "response": "Delete an event type."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getEventType(self, eventTypeId, draft=False):\n if draft:\n req = ApiClient.oneEventTypeUrl % (self.host, \"/draft\", eventTypeId)\n else:\n req = ApiClient.oneEventTypeUrl % (self.host, \"\", eventTypeId)\n resp = requests.get(req, auth=self.credentials, 
verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"event type retrieved\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting an event type\", resp)\n return resp.json()", "response": "Gets an event type."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef createPhysicalInterface(self, name, description=None):\n req = ApiClient.allPhysicalInterfacesUrl % (self.host, \"/draft\")\n body = {\"name\" : name}\n if description:\n body[\"description\"] = description\n resp = requests.post(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"},\n data=json.dumps(body), verify=self.verify)\n if resp.status_code == 201:\n self.logger.debug(\"physical interface created\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error creating physical interface\", resp)\n return resp.json()[\"id\"], resp.json()", "response": "Method to create a physical interface."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate a physical interface.", "response": "def updatePhysicalInterface(self, physicalInterfaceId, name, schemaId, description=None):\n \"\"\"\n Update a physical interface.\n Parameters:\n - physicalInterfaceId (string)\n - name (string)\n - schemaId (string)\n - description (string, optional)\n Throws APIException on failure.\n \"\"\"\n req = ApiClient.onePhysicalInterfacesUrl % (self.host, \"/draft\", physicalInterfaceId)\n body = {\"name\" : name, \"schemaId\" : schemaId}\n if description:\n body[\"description\"] = description\n resp = requests.put(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"},\n data=json.dumps(body), verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"physical interface updated\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error updating physical interface\", resp)\n return resp.json()"} {"SOURCE": 
"codesearchnet", "instruction": "How would you code a function in Python 3 to\ndelete a physical interface.", "response": "def deletePhysicalInterface(self, physicalInterfaceId):\n \"\"\"\n Delete a physical interface.\n Parameters: physicalInterfaceId (string).\n Throws APIException on failure.\n \"\"\"\n req = ApiClient.onePhysicalInterfaceUrl % (self.host, \"/draft\", physicalInterfaceId)\n resp = requests.delete(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 204:\n self.logger.debug(\"physical interface deleted\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error deleting a physical interface\", resp)\n return resp"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget a physical interface.", "response": "def getPhysicalInterface(self, physicalInterfaceId, draft=False):\n \"\"\"\n Get a physical interface.\n Parameters:\n - physicalInterfaceId (string)\n - draft (boolean)\n Throws APIException on failure.\n \"\"\"\n if draft:\n req = ApiClient.onePhysicalInterfaceUrl % (self.host, \"/draft\", physicalInterfaceId)\n else:\n req = ApiClient.onePhysicalInterfaceUrl % (self.host, \"\", physicalInterfaceId)\n\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"physical interface retrieved\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting a physical interface\", resp)\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef createEvent(self, physicalInterfaceId, eventTypeId, eventId):\n req = ApiClient.allEventsUrl % (self.host, \"/draft\", physicalInterfaceId)\n body = {\"eventId\" : eventId, \"eventTypeId\" : eventTypeId}\n resp = requests.post(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"}, data=json.dumps(body),\n verify=self.verify)\n if resp.status_code == 201:\n self.logger.debug(\"Event 
mapping created\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error creating event mapping\", resp)\n return resp.json()", "response": "Create an event mapping for a specific event."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef deleteEvent(self, physicalInterfaceId, eventId):\n req = ApiClient.oneEventUrl % (self.host, \"/draft\", physicalInterfaceId, eventId)\n resp = requests.delete(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 204:\n self.logger.debug(\"Event mapping deleted\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error deleting event mapping\", resp)\n return resp", "response": "Delete an event mapping from a physical interface."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting all logical interfaces for an org.", "response": "def getLogicalInterfaces(self, draft=False, name=None, schemaId=None):\n \"\"\"\n Get all logical interfaces for an org.\n Parameters: draft (boolean).\n Returns:\n - list of ids\n - response object\n Throws APIException on failure.\n \"\"\"\n if draft:\n req = ApiClient.allLogicalInterfacesUrl % (self.host, \"/draft\")\n else:\n req = ApiClient.allLogicalInterfacesUrl % (self.host, \"\")\n\n if name or schemaId:\n req += \"?\"\n if name:\n req += \"name=\"+name\n if schemaId:\n if name:\n req += \"&\"\n req += \"schemaId=\"+schemaId\n\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"All logical interfaces retrieved\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting all logical interfaces\", resp)\n return [x[\"id\"] for x in resp.json()[\"results\"]], resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef updateLogicalInterface(self, logicalInterfaceId, name, schemaId, description=None):\n req 
= ApiClient.oneLogicalInterfaceUrl % (self.host, \"/draft\", logicalInterfaceId)\n body = {\"name\" : name, \"schemaId\" : schemaId, \"id\" : logicalInterfaceId}\n if description:\n body[\"description\"] = description\n resp = requests.put(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"},\n data=json.dumps(body), verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"Logical interface updated\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error updating logical interface\", resp)\n return resp.json()", "response": "Updates a logical interface."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndeletes a logical interface.", "response": "def deleteLogicalInterface(self, logicalInterfaceId):\n \"\"\"\n Deletes a logical interface.\n Parameters: logicalInterfaceId (string).\n Throws APIException on failure.\n \"\"\"\n req = ApiClient.oneLogicalInterfaceUrl % (self.host, \"/draft\", logicalInterfaceId)\n resp = requests.delete(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 204:\n self.logger.debug(\"logical interface deleted\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error deleting a logical interface\", resp)\n return resp"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget a specific logical interface.", "response": "def getLogicalInterface(self, logicalInterfaceId, draft=False):\n \"\"\"\n Gets a logical interface.\n Parameters:\n - logicalInterfaceId (string)\n - draft (boolean)\n Throws APIException on failure.\n \"\"\"\n if draft:\n req = ApiClient.oneLogicalInterfaceUrl % (self.host, \"/draft\", logicalInterfaceId)\n else:\n req = ApiClient.oneLogicalInterfaceUrl % (self.host, \"\", logicalInterfaceId)\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"logical interface retrieved\")\n else:\n raise 
ibmiotf.APIException(resp.status_code, \"HTTP error getting a logical interface\", resp)\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget a rule for a logical interface. Parameters: - logicalInterfaceId (string) - ruleId (string) - draft (boolean) Throws APIException on failure.", "response": "def getRuleForLogicalInterface(self, logicalInterfaceId, ruleId, draft=False):\n \"\"\"\n Gets a rule for a logical interface.\n Parameters:\n - logicalInterfaceId (string)\n - ruleId (string)\n - draft (boolean)\n Throws APIException on failure.\n \"\"\"\n if draft:\n req = ApiClient.oneRuleForLogicalInterfaceUrl % (self.host, \"/draft\", logicalInterfaceId, ruleId)\n else:\n req = ApiClient.oneRuleForLogicalInterfaceUrl % (self.host, \"\", logicalInterfaceId, ruleId)\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"logical interface rule retrieved\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting logical interface rule\", resp)\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd a rule to a logical interface.", "response": "def addRuleToLogicalInterface(self, logicalInterfaceId, name, condition, description=None, alias=None):\n \"\"\"\n Adds a rule to a logical interface.\n Parameters:\n - logicalInterfaceId (string)\n - name (string)\n - condition (string)\n - (description (string, optional)\n Returns: rule id (string), response (object).\n Throws APIException on failure.\n \"\"\"\n req = ApiClient.allRulesForLogicalInterfaceUrl % (self.host, \"/draft\", logicalInterfaceId)\n body = {\"name\" : name, \"condition\" : condition}\n if description:\n body[\"description\"] = description\n resp = requests.post(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"},\n data=json.dumps(body), verify=self.verify)\n if resp.status_code == 201:\n 
self.logger.debug(\"Logical interface rule created\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error creating logical interface rule\", resp)\n return resp.json()[\"id\"], resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef updateRuleOnLogicalInterface(self, logicalInterfaceId, ruleId, name, condition, description=None):\n req = ApiClient.oneRuleForLogicalInterfaceUrl % (self.host, \"/draft\", logicalInterfaceId, ruleId)\n body = {\"logicalInterfaceId\" : logicalInterfaceId, \"id\" : ruleId, \"name\" : name, \"condition\" : condition}\n if description:\n body[\"description\"] = description\n resp = requests.put(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"},\n data=json.dumps(body), verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"Logical interface rule updated\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error updating logical interface rule\", resp)\n return resp.json()", "response": "This method updates a rule on a logical interface."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef deleteRuleOnLogicalInterface(self, logicalInterfaceId, ruleId):\n req = ApiClient.oneRuleForLogicalInterfaceUrl % (self.host, \"/draft\", logicalInterfaceId, ruleId)\n resp = requests.delete(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 204:\n self.logger.debug(\"Logical interface rule deleted\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error deleting logical interface rule\", resp)\n return resp", "response": "Delete a rule from a logical interface"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a physical interface to a device type.", "response": "def addPhysicalInterfaceToDeviceType(self, typeId, physicalInterfaceId):\n \"\"\"\n Adds a physical interface 
to a device type.\n Parameters:\n - typeId (string) - the device type\n - physicalInterfaceId (string) - the id returned by the platform on creation of the physical interface\n Throws APIException on failure.\n \"\"\"\n req = ApiClient.oneDeviceTypePhysicalInterfaceUrl % (self.host, \"/draft\", typeId)\n body = {\"id\" : physicalInterfaceId}\n resp = requests.post(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"}, data=json.dumps(body),\n verify=self.verify)\n if resp.status_code == 201:\n self.logger.debug(\"Physical interface added to a device type\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error adding physical interface to a device type\", resp)\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the physical interface associated with a device type.", "response": "def getPhysicalInterfaceOnDeviceType(self, typeId, draft=False):\n \"\"\"\n Gets the physical interface associated with a device type.\n Parameters:\n - typeId (string) - the device type\n - draft (boolean)\n Throws APIException on failure.\n \"\"\"\n if draft:\n req = ApiClient.oneDeviceTypePhysicalInterfaceUrl % (self.host, \"/draft\", typeId)\n else:\n req = ApiClient.oneDeviceTypePhysicalInterfaceUrl % (self.host, \"\", typeId)\n resp = requests.get(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"},\n verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"Physical interface retrieved from a device type\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting physical interface on a device type\", resp)\n return resp.json()[\"id\"], resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getLogicalInterfacesOnDeviceType(self, typeId, draft=False):\n if draft:\n req = ApiClient.allDeviceTypeLogicalInterfacesUrl % (self.host, \"/draft\", typeId)\n 
else:\n req = ApiClient.allDeviceTypeLogicalInterfacesUrl % (self.host, \"\", typeId)\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"All device type logical interfaces retrieved\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting all device type logical interfaces\", resp)\n return [appintf[\"id\"] for appintf in resp.json()], resp.json()", "response": "Get all logical interfaces for a device type."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a logical interface to a device type.", "response": "def addLogicalInterfaceToDeviceType(self, typeId, logicalInterfaceId):\n \"\"\"\n Adds a logical interface to a device type.\n Parameters:\n - typeId (string) - the device type\n - logicalInterfaceId (string) - the id returned by the platform on creation of the logical interface\n - description (string) - optional (not used)\n Throws APIException on failure.\n \"\"\"\n req = ApiClient.allDeviceTypeLogicalInterfacesUrl % (self.host, \"/draft\", typeId)\n body = {\"id\" : logicalInterfaceId}\n# body = {\"name\" : \"required but not used!!!\", \"id\" : logicalInterfaceId, \"schemaId\" : schemaId}\n# if description:\n# body[\"description\"] = description\n resp = requests.post(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"}, data=json.dumps(body),\n verify=self.verify)\n if resp.status_code == 201:\n self.logger.debug(\"Logical interface added to a device type\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error adding logical interface to a device type\", resp)\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves a logical interface from a device type.", "response": "def removeLogicalInterfaceFromDeviceType(self, typeId, logicalInterfaceId):\n \"\"\"\n Removes a logical interface from a device type.\n 
Parameters:\n - typeId (string) - the device type\n - logicalInterfaceId (string) - the id returned by the platform on creation of the logical interface\n Throws APIException on failure.\n \"\"\"\n req = ApiClient.oneDeviceTypeLogicalInterfaceUrl % (self.host, typeId, logicalInterfaceId)\n resp = requests.delete(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 204:\n self.logger.debug(\"Logical interface removed from a device type\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error removing logical interface from a device type\", resp)\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets all the mappings for a device type.", "response": "def getMappingsOnDeviceType(self, typeId, draft=False):\n \"\"\"\n Get all the mappings for a device type.\n Parameters:\n - typeId (string) - the device type\n - draft (boolean) - draft or active\n Throws APIException on failure.\n \"\"\"\n if draft:\n req = ApiClient.allDeviceTypeMappingsUrl % (self.host, \"/draft\", typeId)\n else:\n req = ApiClient.allDeviceTypeMappingsUrl % (self.host, \"\", typeId)\n\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"All device type mappings retrieved\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting all device type mappings\", resp)\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the mappings for a logical interface from a device type. 
Parameters: - typeId (string) - the device type - logicalInterfaceId (string) - the platform returned id of the logical interface Throws APIException on failure.", "response": "def getMappingsOnDeviceTypeForLogicalInterface(self, typeId, logicalInterfaceId, draft=False):\n \"\"\"\n Gets the mappings for a logical interface from a device type.\n Parameters:\n - typeId (string) - the device type\n - logicalInterfaceId (string) - the platform returned id of the logical interface\n Throws APIException on failure.\n \"\"\"\n if draft:\n req = ApiClient.oneDeviceTypeMappingUrl % (self.host, \"/draft\", typeId, logicalInterfaceId)\n else:\n req = ApiClient.oneDeviceTypeMappingUrl % (self.host, \"\", typeId, logicalInterfaceId)\n\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"Mappings retrieved from the device type\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting mappings for a logical interface from a device type\", resp)\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validateDeviceTypeConfiguration(self, typeId):\n req = ApiClient.draftDeviceTypeUrl % (self.host, typeId)\n body = {\"operation\" : \"validate-configuration\"}\n resp = requests.patch(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"}, data=json.dumps(body),\n verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"Validation for device type configuration succeeded\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"Validation for device type configuration failed\", resp)\n return resp.json()", "response": "Validate the device type configuration."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nvalidates the logical interface configuration.", "response": "def validateLogicalInterfaceConfiguration(self, 
logicalInterfaceId):\n \"\"\"\n Validate the logical interface configuration.\n Parameters:\n - logicalInterfaceId (string)\n Throws APIException on failure.\n \"\"\"\n req = ApiClient.oneLogicalInterfaceUrl % (self.host, \"/draft\", logicalInterfaceId)\n body = {\"operation\" : \"validate-configuration\"}\n resp = requests.patch(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"}, data=json.dumps(body),\n verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"Validation for logical interface configuration succeeded\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"Validation for logical interface configuration failed\", resp)\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getDeviceStateForLogicalInterface(self, typeId, deviceId, logicalInterfaceId):\n req = ApiClient.deviceStateUrl % (self.host, typeId, deviceId, logicalInterfaceId)\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"State retrieved from the device type for a logical interface\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting state for a logical interface from a device type\", resp)\n return resp.json()", "response": "Get the state of a device type and logical interface."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the state for a logical interface for a thing. 
Parameters: - thingTypeId (string) - the platform thing type - thingId (string) - the platform thing id - logicalInterfaceId (string) - the platform returned id of the logical interface Throws APIException on failure.", "response": "def getThingStateForLogicalInterface(self, thingTypeId, thingId, logicalInterfaceId):\n \"\"\"\n Gets the state for a logical interface for a thing.\n Parameters:\n - thingTypeId (string) - the platform thing type\n - thingId (string) - the platform thing id\n - logicalInterfaceId (string) - the platform returned id of the logical interface\n Throws APIException on failure.\n \"\"\"\n req = ApiClient.thingStateUrl % (self.host, thingTypeId, thingId, logicalInterfaceId)\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"State retrieved from the thing type for a logical interface\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting state for a logical interface from a thing type\", resp)\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef resetThingStateForLogicalInterface(self, thingTypeId, thingId, logicalInterfaceId):\n req = ApiClient.thingStateUrl % (self.host, thingTypeId, thingId, logicalInterfaceId)\n body = {\"operation\" : \"reset-state\"}\n resp = requests.patch(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"}, data=json.dumps(body),\n verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"Thing state reset for a logical interface\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error resetting thing state for a logical interface\", resp)\n return resp.json()", "response": "This method resets the state of a thing for a logical interface."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget all logical interfaces for a thing type.", "response": "def 
getLogicalInterfacesOnThingType(self, thingTypeId, draft=False):\n \"\"\"\n Get all logical interfaces for a thing type.\n Parameters:\n - thingTypeId (string)\n - draft (boolean)\n Returns:\n - list of logical interface ids\n - HTTP response object\n Throws APIException on failure.\n \"\"\"\n if draft:\n req = ApiClient.allThingTypeLogicalInterfacesUrl % (self.host, \"/draft\", thingTypeId)\n else:\n req = ApiClient.allThingTypeLogicalInterfacesUrl % (self.host, \"\", thingTypeId)\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"All thing type logical interfaces retrieved\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting all thing type logical interfaces\", resp)\n return [appintf[\"id\"] for appintf in resp.json()], resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef addLogicalInterfaceToThingType(self, thingTypeId, logicalInterfaceId, schemaId = None, name = None):\n req = ApiClient.allThingTypeLogicalInterfacesUrl % (self.host, \"/draft\", thingTypeId)\n body = {\"id\" : logicalInterfaceId}\n# body = {\"name\" : name, \"id\" : logicalInterfaceId, \"schemaId\" : schemaId}\n# if description:\n# body[\"description\"] = description\n resp = requests.post(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"}, data=json.dumps(body),\n verify=self.verify)\n if resp.status_code == 201:\n self.logger.debug(\"The draft logical interface was successfully associated with the thing type.\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error adding logical interface to a thing type\", resp)\n return resp.json()", "response": "Adds a logical interface to a thing type."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nremoving a logical interface from a thing type. 
Parameters: - thingTypeId (string) - the thing type - logicalInterfaceId (string) - the id returned by the platform on creation of the logical interface Throws APIException on failure.", "response": "def removeLogicalInterfaceFromThingType(self, thingTypeId, logicalInterfaceId):\n \"\"\"\n Removes a logical interface from a thing type.\n Parameters:\n - thingTypeId (string) - the thing type\n - logicalInterfaceId (string) - the id returned by the platform on creation of the logical interface\n Throws APIException on failure.\n \"\"\"\n req = ApiClient.oneThingTypeLogicalInterfaceUrl % (self.host, thingTypeId, logicalInterfaceId)\n resp = requests.delete(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 204:\n self.logger.debug(\"Logical interface removed from a thing type\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error removing logical interface from a thing type\", resp)\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef getMappingsOnThingType(self, thingTypeId, draft=False):\n if draft:\n req = ApiClient.allThingTypeMappingsUrl % (self.host, \"/draft\", thingTypeId)\n else:\n req = ApiClient.allThingTypeMappingsUrl % (self.host, \"\", thingTypeId)\n\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"All thing type mappings retrieved\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting all thing type mappings\", resp)\n return resp.json()", "response": "Get all the mappings for a thing type."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the mappings for a logical interface from a thing type. 
Parameters: - thingTypeId (string) - the thing type - logicalInterfaceId (string) - the platform returned id of the logical interface Throws APIException on failure.", "response": "def getMappingsOnThingTypeForLogicalInterface(self, thingTypeId, logicalInterfaceId, draft=False):\n \"\"\"\n Gets the mappings for a logical interface from a thing type.\n Parameters:\n - thingTypeId (string) - the thing type\n - logicalInterfaceId (string) - the platform returned id of the logical interface\n Throws APIException on failure.\n \"\"\"\n if draft:\n req = ApiClient.oneThingTypeMappingUrl % (self.host, \"/draft\", thingTypeId, logicalInterfaceId)\n else:\n req = ApiClient.oneThingTypeMappingUrl % (self.host, \"\", thingTypeId, logicalInterfaceId)\n\n resp = requests.get(req, auth=self.credentials, verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"Mappings retrieved from the thing type\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error getting mappings for a logical interface from a thing type\", resp)\n return resp.json()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef updateMappingsOnDeviceType(self, thingTypeId, logicalInterfaceId, mappingsObject, notificationStrategy = \"never\"):\n req = ApiClient.oneThingTypeMappingUrl % (self.host, \"/draft\", thingTypeId, logicalInterfaceId)\n try:\n mappings = json.dumps({\n \"logicalInterfaceId\" : logicalInterfaceId,\n \"notificationStrategy\" : notificationStrategy,\n \"propertyMappings\" : mappingsObject\n })\n except Exception as exc:\n raise ibmiotf.APIException(-1, \"Exception formatting mappings object to JSON\", exc)\n resp = requests.put(req, auth=self.credentials, headers={\"Content-Type\":\"application/json\"}, data=mappings,\n verify=self.verify)\n if resp.status_code == 200:\n self.logger.debug(\"Thing type mappings updated for logical interface\")\n else:\n raise ibmiotf.APIException(resp.status_code, \"HTTP error 
updating thing type mappings for logical interface\", resp)\n return resp.json()", "response": "This method updates the mappings for a thing type on a specific logical interface."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef connect(self):\n self.logger.debug(\n \"Connecting... (address = %s, port = %s, clientId = %s, username = %s)\"\n % (self.address, self.port, self.clientId, self.username)\n )\n try:\n self.connectEvent.clear()\n self.client.connect(self.address, port=self.port, keepalive=self.keepAlive)\n self.client.loop_start()\n if not self.connectEvent.wait(timeout=30):\n self.client.loop_stop()\n self._logAndRaiseException(\n ConnectionException(\n \"Operation timed out connecting to IBM Watson IoT Platform: %s\" % (self.address)\n )\n )\n\n except socket.error as serr:\n self.client.loop_stop()\n self._logAndRaiseException(\n ConnectionException(\"Failed to connect to IBM Watson IoT Platform: %s - %s\" % (self.address, str(serr)))\n )", "response": "Connect the client to the IBM Watson IoT Platform using the underlying Paho MQTT client."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef disconnect(self):\n # self.logger.info(\"Closing connection to the IBM Watson IoT Platform\")\n self.client.disconnect()\n # If we don't call loop_stop() it appears we end up with a zombie thread which continues to process\n # network traffic, preventing any subsequent attempt to reconnect using connect()\n self.client.loop_stop()\n self.logger.info(\"Closed connection to the IBM Watson IoT Platform\")", "response": "Disconnect the client from IBM Watson IoT Platform"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncall when the client has log information.", "response": "def _onLog(self, mqttc, obj, level, string):\n \"\"\"\n Called when the client has log information. 
\n \n See [paho.mqtt.python#on_log](https://github.com/eclipse/paho.mqtt.python#on_log) for more information\n \n # Parameters\n mqttc (paho.mqtt.client.Client): The client instance for this callback\n obj (object): The private user data as set in Client() or user_data_set()\n level (int): The severity of the message, will be one of `MQTT_LOG_INFO`, \n `MQTT_LOG_NOTICE`, `MQTT_LOG_WARNING`, `MQTT_LOG_ERR`, and `MQTT_LOG_DEBUG`.\n string (string): The log message itself\n \n \"\"\"\n self.logger.debug(\"%d %s\" % (level, string))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _onConnect(self, mqttc, userdata, flags, rc):\n if rc == 0:\n self.connectEvent.set()\n self.logger.info(\"Connected successfully: %s\" % (self.clientId))\n\n # Restoring previous subscriptions\n with self._subLock:\n if len(self._subscriptions) > 0:\n for subscription in self._subscriptions:\n # We use the underlying mqttclient subscribe method rather than _subscribe because we are\n # claiming a lock on the subscriptions list and do not want anything else to modify it,\n # which that method does\n (result, mid) = self.client.subscribe(subscription, qos=self._subscriptions[subscription])\n if result != paho.MQTT_ERR_SUCCESS:\n self._logAndRaiseException(ConnectionException(\"Unable to subscribe to %s\" % subscription))\n self.logger.debug(\"Restored %s previous subscriptions\" % len(self._subscriptions))\n elif rc == 1:\n self._logAndRaiseException(ConnectionException(\"Incorrect protocol version\"))\n elif rc == 2:\n self._logAndRaiseException(ConnectionException(\"Invalid client identifier\"))\n elif rc == 3:\n self._logAndRaiseException(ConnectionException(\"Server unavailable\"))\n elif rc == 4:\n self._logAndRaiseException(\n ConnectionException(\"Bad username or password: (%s, %s)\" % (self.username, self.password))\n )\n elif rc == 5:\n self._logAndRaiseException(\n ConnectionException(\"Not authorized: (%s, %s, %s)\" % (self.clientId, 
self.username, self.password))\n )\n else:\n self._logAndRaiseException(ConnectionException(\"Unexpected connection failure: %s\" % (rc)))", "response": "Called when the broker responds to a connection request."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncalling when the client disconnects from the IBM Watson IoT Platform.", "response": "def _onDisconnect(self, mqttc, obj, rc):\n \"\"\"\n Called when the client disconnects from IBM Watson IoT Platform.\n \n See [paho.mqtt.python#on_disconnect](https://github.com/eclipse/paho.mqtt.python#on_disconnect) for more information\n \n # Parameters\n mqttc (paho.mqtt.client.Client): The client instance for this callback\n obj (object): The private user data as set in Client() or user_data_set()\n rc (int): indicates the disconnection state. If `MQTT_ERR_SUCCESS` (0), the callback was \n called in response to a `disconnect()` call. If any other value the disconnection was \n unexpected, such as might be caused by a network error.\n \n \"\"\"\n # Clear the event to indicate we're no longer connected\n self.connectEvent.clear()\n\n if rc != 0:\n self.logger.error(\"Unexpected disconnect from IBM Watson IoT Platform: %d\" % (rc))\n else:\n self.logger.info(\"Disconnected from IBM Watson IoT Platform\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _onPublish(self, mqttc, obj, mid):\n with self._messagesLock:\n if mid in self._onPublishCallbacks:\n midOnPublish = self._onPublishCallbacks.get(mid)\n del self._onPublishCallbacks[mid]\n if midOnPublish != None:\n midOnPublish()\n else:\n # record the fact that paho callback has already come through so it can be called inline\n # with the publish.\n self._onPublishCallbacks[mid] = None", "response": "Called when a message has been successfully published to the IBM Watson IoT Platform."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following 
Python 3 function does\ndef subscribeToDeviceEvents(self, typeId=\"+\", deviceId=\"+\", eventId=\"+\", msgFormat=\"+\", qos=0):\n if self._config.isQuickstart() and deviceId == \"+\":\n self.logger.warning(\n \"QuickStart applications do not support wildcard subscription to events from all devices\"\n )\n return 0\n\n topic = \"iot-2/type/%s/id/%s/evt/%s/fmt/%s\" % (typeId, deviceId, eventId, msgFormat)\n return self._subscribe(topic, qos)", "response": "Subscribe to device events."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsubscribe to device status messages", "response": "def subscribeToDeviceStatus(self, typeId=\"+\", deviceId=\"+\"):\n \"\"\"\n Subscribe to device status messages\n\n # Parameters\n typeId (string): typeId for the subscription, optional. Defaults to all device types (MQTT `+` wildcard)\n deviceId (string): deviceId for the subscription, optional. Defaults to all devices (MQTT `+` wildcard)\n\n # Returns\n int: If the subscription was successful then the return Message ID (mid) for the subscribe request\n will be returned. 
The mid value can be used to track the subscribe request by checking against\n the mid argument if you register a subscriptionCallback method.\n If the subscription fails then the return value will be `0`\n \"\"\"\n if self._config.isQuickstart() and deviceId == \"+\":\n self.logger.warning(\"QuickStart applications do not support wildcard subscription to device status\")\n return 0\n\n topic = \"iot-2/type/%s/id/%s/mon\" % (typeId, deviceId)\n return self._subscribe(topic, 0)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef subscribeToDeviceCommands(self, typeId=\"+\", deviceId=\"+\", commandId=\"+\", msgFormat=\"+\"):\n if self._config.isQuickstart():\n self.logger.warning(\"QuickStart applications do not support commands\")\n return 0\n\n topic = \"iot-2/type/%s/id/%s/cmd/%s/fmt/%s\" % (typeId, deviceId, commandId, msgFormat)\n return self._subscribe(topic, 0)", "response": "Subscribe to device command messages."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef publishCommand(self, typeId, deviceId, commandId, msgFormat, data=None, qos=0, on_publish=None):\n if self._config.isQuickstart():\n self.logger.warning(\"QuickStart applications do not support sending commands\")\n return False\n if not self.connectEvent.wait(timeout=10):\n return False\n else:\n topic = \"iot-2/type/%s/id/%s/cmd/%s/fmt/%s\" % (typeId, deviceId, commandId, msgFormat)\n\n # Raise an exception if there is no codec for this msgFormat\n if self.getMessageCodec(msgFormat) is None:\n raise MissingMessageEncoderException(msgFormat)\n\n payload = self.getMessageCodec(msgFormat).encode(data, datetime.now())\n result = self.client.publish(topic, payload=payload, qos=qos, retain=False)\n if result[0] == paho.MQTT_ERR_SUCCESS:\n # Because we are dealing with aync pub/sub model and callbacks it is possible that\n # the _onPublish() callback for this mid is called before we 
obtain the lock to place\n # the mid into the _onPublishCallbacks list.\n #\n # _onPublish knows how to handle a scenario where the mid is not present (no nothing)\n # in this scenario we will need to invoke the callback directly here, because at the time\n # the callback was invoked the mid was not yet in the list.\n with self._messagesLock:\n if result[1] in self._onPublishCallbacks:\n # paho callback beat this thread so call callback inline now\n del self._onPublishCallbacks[result[1]]\n if on_publish is not None:\n on_publish()\n else:\n # this thread beat paho callback so set up for call later\n self._onPublishCallbacks[result[1]] = on_publish\n return True\n else:\n return False", "response": "Publish a command to a device"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _onDeviceStatus(self, client, userdata, pahoMessage):\n try:\n status = Status(pahoMessage)\n self.logger.debug(\"Received %s action from %s\" % (status.action, status.clientId))\n if self.deviceStatusCallback:\n self.deviceStatusCallback(status)\n except InvalidEventException as e:\n self.logger.critical(str(e))", "response": "Internal callback for device status messages"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _onAppStatus(self, client, userdata, pahoMessage):\n try:\n status = Status(pahoMessage)\n self.logger.debug(\"Received %s action from %s\" % (status.action, status.clientId))\n if self.appStatusCallback:\n self.appStatusCallback(status)\n except InvalidEventException as e:\n self.logger.critical(str(e))", "response": "Internal callback for application status messages"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nparse environment variables into a Python dictionary suitable for passing to the device client constructor as the options parameter.", "response": "def parseEnvVars():\n \"\"\"\n Parse environment variables 
into a Python dictionary suitable for passing to the\n device client constructor as the `options` parameter\n\n - `WIOTP_IDENTITY_ORGID`\n - `WIOTP_IDENTITY_TYPEID`\n - `WIOTP_IDENTITY_DEVICEID`\n - `WIOTP_AUTH_TOKEN`\n - `WIOTP_OPTIONS_DOMAIN` (optional)\n - `WIOTP_OPTIONS_LOGLEVEL` (optional)\n - `WIOTP_OPTIONS_MQTT_PORT` (optional)\n - `WIOTP_OPTIONS_MQTT_TRANSPORT` (optional)\n - `WIOTP_OPTIONS_MQTT_CAFILE` (optional)\n - `WIOTP_OPTIONS_MQTT_CLEANSTART` (optional)\n - `WIOTP_OPTIONS_MQTT_SESSIONEXPIRY` (optional)\n - `WIOTP_OPTIONS_MQTT_KEEPALIVE` (optional)\n \"\"\"\n\n # Identity\n orgId = os.getenv(\"WIOTP_IDENTITY_ORGID\", None)\n typeId = os.getenv(\"WIOTP_IDENTITY_TYPEID\", None)\n deviceId = os.getenv(\"WIOTP_IDENTITY_DEVICEID\", None)\n # Auth\n authToken = os.getenv(\"WIOTP_AUTH_TOKEN\", None)\n # Options\n domain = os.getenv(\"WIOTP_OPTIONS_DOMAIN\", None)\n logLevel = os.getenv(\"WIOTP_OPTIONS_LOGLEVEL\", \"info\")\n port = os.getenv(\"WIOTP_OPTIONS_MQTT_PORT\", None)\n transport = os.getenv(\"WIOTP_OPTIONS_MQTT_TRANSPORT\", None)\n caFile = os.getenv(\"WIOTP_OPTIONS_MQTT_CAFILE\", None)\n cleanStart = os.getenv(\"WIOTP_OPTIONS_MQTT_CLEANSTART\", \"False\")\n sessionExpiry = os.getenv(\"WIOTP_OPTIONS_MQTT_SESSIONEXPIRY\", \"3600\")\n keepAlive = os.getenv(\"WIOTP_OPTIONS_MQTT_KEEPALIVE\", \"60\")\n\n if orgId is None:\n raise ConfigurationException(\"Missing WIOTP_IDENTITY_ORGID environment variable\")\n if typeId is None:\n raise ConfigurationException(\"Missing WIOTP_IDENTITY_TYPEID environment variable\")\n if deviceId is None:\n raise ConfigurationException(\"Missing WIOTP_IDENTITY_DEVICEID environment variable\")\n if orgId != \"quickstart\" and authToken is None:\n raise ConfigurationException(\"Missing WIOTP_AUTH_TOKEN environment variable\")\n if port is not None:\n try:\n port = int(port)\n except ValueError as e:\n raise ConfigurationException(\"WIOTP_OPTIONS_MQTT_PORT must be a 
number\")\n\n try:\n sessionExpiry = int(sessionExpiry)\n except ValueError as e:\n raise ConfigurationException(\"WIOTP_OPTIONS_MQTT_SESSIONEXPIRY must be a number\")\n\n try:\n keepAlive = int(keepAlive)\n except ValueError as e:\n raise ConfigurationException(\"WIOTP_OPTIONS_MQTT_KEEPAIVE must be a number\")\n\n if logLevel not in [\"error\", \"warning\", \"info\", \"debug\"]:\n raise ConfigurationException(\"WIOTP_OPTIONS_LOGLEVEL must be one of error, warning, info, debug\")\n else:\n # Convert log levels from string to int (we need to upper case our strings from the config)\n logLevel = logging.getLevelName(logLevel.upper())\n\n cfg = {\n \"identity\": {\"orgId\": orgId, \"typeId\": typeId, \"deviceId\": deviceId},\n \"options\": {\n \"domain\": domain,\n \"logLevel\": logLevel,\n \"mqtt\": {\n \"port\": port,\n \"transport\": transport,\n \"caFile\": caFile,\n \"cleanStart\": cleanStart in [\"True\", \"true\", \"1\"],\n \"sessionExpiry\": sessionExpiry,\n \"keepAlive\": keepAlive,\n },\n },\n }\n\n # Quickstart doesn't support auth, so ensure we only add this if it's defined\n if authToken is not None:\n cfg[\"auth\"] = {\"token\": authToken}\n\n return cfg"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the last cached message for a specific event from a specific device.", "response": "def get(self, deviceUid, eventId):\n \"\"\"\n Retrieves the last cached message for specified event from a specific device.\n \"\"\"\n\n if not isinstance(deviceUid, DeviceUid) and isinstance(deviceUid, dict):\n deviceUid = DeviceUid(**deviceUid)\n\n url = \"api/v0002/device/types/%s/devices/%s/events/%s\" % (deviceUid.typeId, deviceUid.deviceId, eventId)\n r = self._apiClient.get(url)\n\n if r.status_code == 200:\n return LastEvent(**r.json())\n else:\n raise ApiException(r)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getAll(self, deviceUid):\n\n if not 
isinstance(deviceUid, DeviceUid) and isinstance(deviceUid, dict):\n deviceUid = DeviceUid(**deviceUid)\n\n url = \"api/v0002/device/types/%s/devices/%s/events\" % (deviceUid.typeId, deviceUid.deviceId)\n r = self._apiClient.get(url)\n\n if r.status_code == 200:\n events = []\n for event in r.json():\n events.append(LastEvent(**event))\n return events\n else:\n raise ApiException(r)", "response": "Get all the last cached messages for a specific device."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _makeApiCall(self, parameters=None):\n r = self._apiClient.get(self._url, parameters)\n if r.status_code == 200:\n return r.json()\n else:\n raise Exception(\"HTTP %s %s\" % (r.status_code, r.text))", "response": "This method is used to make an API call to retrieve bulk devices and set the local_key of the device. It accepts a list of parameters"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef decode(message):\n try:\n data = json.loads(message.payload.decode(\"utf-8\"))\n except ValueError as e:\n raise InvalidEventException('Unable to parse JSON. 
payload=\"%s\" error=%s' % (message.payload, str(e)))\n\n timestamp = datetime.now(pytz.timezone(\"UTC\"))\n\n # TODO: Flatten JSON, convert into array of key/value pairs\n return Message(data, timestamp)", "response": "Convert a generic JSON message into a Message object"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndecode a message into a Message object.", "response": "def decode(message):\n '''\n The decoder understands the comma-separated format produced by the encoder and\n allocates the two values to the correct keys:\n data['hello'] = 'world'\n data['x'] = 10\n\n '''\n (hello, x) = message.payload.split(\",\")\n\n data = {}\n data['hello'] = hello\n data['x'] = x\n\n timestamp = datetime.now(pytz.timezone('UTC'))\n\n return Message(data, timestamp)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nretrieve the organization-specific status of each of the services offered by the IBM Watson IoT Platform.", "response": "def dataTransfer(self, start, end, detail=False):\n \"\"\"\n Retrieve the organization-specific status of each of the services offered by the IBM Watson IoT Platform.\n In case of failure it throws APIException\n \"\"\"\n\n r = self._apiClient.get(\n \"api/v0002/usage/data-traffic?start=%s&end=%s&detail=%s\"\n % (start.strftime(\"%Y-%m-%d\"), end.strftime(\"%Y-%m-%d\"), detail)\n )\n\n if r.status_code == 200:\n return DataTransferSummary(**r.json())\n else:\n raise ApiException(r)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef initiate(self, request):\n url = MgmtRequests.mgmtRequests\n r = self._apiClient.post(url, request)\n\n if r.status_code == 202:\n return r.json()\n else:\n raise ApiException(r)", "response": "Initiate a device management request."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete(self, requestId):\n url = MgmtRequests.mgmtSingleRequest % 
(requestId)\n r = self._apiClient.delete(url)\n\n if r.status_code == 204:\n return True\n else:\n raise ApiException(r)", "response": "Delete a single request from the device management service."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get(self, requestId):\n url = MgmtRequests.mgmtSingleRequest % (requestId)\n r = self._apiClient.get(url)\n\n if r.status_code == 200:\n return r.json()\n else:\n raise ApiException(r)", "response": "This method returns details of a device management request. It accepts requestId as a parameter"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets a list of device management request device statuses.", "response": "def getStatus(self, requestId, typeId=None, deviceId=None):\n \"\"\"\n Get a list of device management request device statuses.\n Get an individual device management request device status.\n \"\"\"\n if typeId is None or deviceId is None:\n url = MgmtRequests.mgmtRequestStatus % (requestId)\n r = self._apiClient.get(url)\n\n if r.status_code == 200:\n return r.json()\n else:\n raise ApiException(r)\n else:\n url = MgmtRequests.mgmtRequestSingleDeviceStatus % (requestId, typeId, deviceId)\n r = self._apiClient.get(url)\n\n if r.status_code == 200:\n return r.json()\n else:\n raise ApiException(r)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nforcing a flush of the index to storage. Renders index inaccessible.", "response": "def close(self):\n \"\"\"Force a flush of the index to storage. 
Renders index\n inaccessible.\"\"\"\n if self.handle:\n self.handle.destroy()\n self.handle = None\n else:\n raise IOError(\"Unclosable index\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef insert(self, id, coordinates, obj=None):\n p_mins, p_maxs = self.get_coordinate_pointers(coordinates)\n data = ctypes.c_ubyte(0)\n size = 0\n pyserialized = None\n if obj is not None:\n size, data, pyserialized = self._serialize(obj)\n core.rt.Index_InsertData(self.handle, id, p_mins, p_maxs,\n self.properties.dimension, data, size)", "response": "Inserts an entry into the index with the given coordinates."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the number of objects that intersect the given coordinates.", "response": "def count(self, coordinates):\n \"\"\"Return number of objects that intersect the given coordinates.\n\n :param coordinates: sequence or array\n This may be an object that satisfies the numpy array\n protocol, providing the index's dimension * 2 coordinate\n pairs representing the `mink` and `maxk` coordinates in\n each dimension defining the bounds of the query window.\n\n The following example queries the index for any objects\n that were stored in the index that intersect the bounds given in the\n coordinates::\n\n >>> from rtree import index\n >>> idx = index.Index()\n >>> idx.insert(4321,\n ... (34.3776829412, 26.7375853734, 49.3776829412,\n ... 41.7375853734),\n ... 
obj=42)\n\n >>> print(idx.count((0, 0, 60, 60)))\n 1\n\n \"\"\"\n p_mins, p_maxs = self.get_coordinate_pointers(coordinates)\n\n p_num_results = ctypes.c_uint64(0)\n\n core.rt.Index_Intersects_count(self.handle,\n p_mins,\n p_maxs,\n self.properties.dimension,\n ctypes.byref(p_num_results))\n\n return p_num_results.value"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef intersection(self, coordinates, objects=False):\n\n if objects:\n return self._intersection_obj(coordinates, objects)\n\n p_mins, p_maxs = self.get_coordinate_pointers(coordinates)\n\n p_num_results = ctypes.c_uint64(0)\n\n it = ctypes.pointer(ctypes.c_int64())\n\n core.rt.Index_Intersects_id(self.handle,\n p_mins,\n p_maxs,\n self.properties.dimension,\n ctypes.byref(it),\n ctypes.byref(p_num_results))\n return self._get_ids(it, p_num_results.value)", "response": "Return ids or objects in the index that intersect the given coordinates."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the k nearest objects to the given coordinates.", "response": "def nearest(self, coordinates, num_results=1, objects=False):\n \"\"\"Returns the ``k``-nearest objects to the given coordinates.\n\n :param coordinates: sequence or array\n This may be an object that satisfies the numpy array\n protocol, providing the index's dimension * 2 coordinate\n pairs representing the `mink` and `maxk` coordinates in\n each dimension defining the bounds of the query window.\n\n :param num_results: integer\n The number of results to return nearest to the given coordinates.\n If two index entries are equidistant, *both* are returned.\n This property means that :attr:`num_results` may return more\n items than specified\n\n :param objects: True / False / 'raw'\n If True, the nearest method will return index objects that\n were pickled when they were stored with each index entry, as\n well as the id and bounds of the index entries.\n If 
'raw', it will return the object as entered into the database\n without the :class:`rtree.index.Item` wrapper.\n\n Example of finding the three items nearest to this one::\n\n >>> from rtree import index\n >>> idx = index.Index()\n >>> idx.insert(4321, (34.37, 26.73, 49.37, 41.73), obj=42)\n >>> hits = idx.nearest((0, 0, 10, 10), 3, objects=True)\n \"\"\"\n if objects:\n return self._nearest_obj(coordinates, num_results, objects)\n p_mins, p_maxs = self.get_coordinate_pointers(coordinates)\n\n p_num_results = ctypes.pointer(ctypes.c_uint64(num_results))\n\n it = ctypes.pointer(ctypes.c_int64())\n\n core.rt.Index_NearestNeighbors_id(self.handle,\n p_mins,\n p_maxs,\n self.properties.dimension,\n ctypes.byref(it),\n p_num_results)\n\n return self._get_ids(it, p_num_results.contents.value)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the bounds of the index.", "response": "def get_bounds(self, coordinate_interleaved=None):\n \"\"\"Returns the bounds of the index\n\n :param coordinate_interleaved: If True, the coordinates are returned\n in the form [xmin, ymin, ..., kmin, xmax, ymax, ..., kmax],\n otherwise they are returned as\n [xmin, xmax, ymin, ymax, ..., ..., kmin, kmax]. 
If not specified,\n the :attr:`interleaved` member of the index is used, which\n defaults to True.\n \"\"\"\n if coordinate_interleaved is None:\n coordinate_interleaved = self.interleaved\n return _get_bounds(\n self.handle, core.rt.Index_GetBounds, coordinate_interleaved)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef delete(self, id, coordinates):\n p_mins, p_maxs = self.get_coordinate_pointers(coordinates)\n core.rt.Index_DeleteData(\n self.handle, id, p_mins, p_maxs, self.properties.dimension)", "response": "Deletes items from the index with the given id within the given coordinates."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef deinterleave(self, interleaved):\n assert len(interleaved) % 2 == 0, (\"must be a pairwise list\")\n dimension = len(interleaved) // 2\n di = []\n for i in range(dimension):\n di.extend([interleaved[i], interleaved[i + dimension]])\n return di", "response": "Return a deinterleaved copy of the given interleaved list."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef interleave(self, deinterleaved):\n assert len(deinterleaved) % 2 == 0, (\"must be a pairwise list\")\n # dimension = len(deinterleaved) / 2\n interleaved = []\n for i in range(2):\n interleaved.extend([deinterleaved[i + j]\n for j in range(0, len(deinterleaved), 2)])\n return interleaved", "response": "Interleave a set of two-element lists."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nplease override this method.", "response": "def destroy(self, context, returnError):\n \"\"\"please override\"\"\"\n returnError.contents.value = self.IllegalStateError\n raise NotImplementedError(\"You must override this method.\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 
function\ndef loadByteArray(self, context, page, resultLen, resultData, returnError):\n returnError.contents.value = self.IllegalStateError\n raise NotImplementedError(\"You must override this method.\")", "response": "please override this method."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef storeByteArray(self, context, page, len, data, returnError):\n returnError.contents.value = self.IllegalStateError\n raise NotImplementedError(\"You must override this method.\")", "response": "Store a byte array."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef deleteByteArray(self, context, page, returnError):\n returnError.contents.value = self.IllegalStateError\n raise NotImplementedError(\"You must override this method.\")", "response": "please override this method."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nplease override this method.", "response": "def flush(self, context, returnError):\n \"\"\"please override\"\"\"\n returnError.contents.value = self.IllegalStateError\n raise NotImplementedError(\"You must override this method.\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef insert(self, obj, coordinates):\n try:\n count = self._objects[id(obj)][0] + 1\n except KeyError:\n count = 1\n self._objects[id(obj)] = (count, obj)\n return super(RtreeContainer, self).insert(id(obj), coordinates, None)", "response": "Inserts an item into the index with the given coordinates."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef intersection(self, coordinates, bbox=False):\n if bbox == False:\n for id in super(RtreeContainer,\n self).intersection(coordinates, bbox):\n yield self._objects[id][1]\n elif bbox == True:\n for value in super(RtreeContainer,\n self).intersection(coordinates, bbox):\n value.object = 
self._objects[value.id][1]\n value.id = None\n yield value\n else:\n raise ValueError(\n \"valid values for the bbox argument are True and False\")", "response": "Return the ids or objects in the index that intersect the given coordinates."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndelete the item from the container within the specified coordinates.", "response": "def delete(self, obj, coordinates):\n \"\"\"Deletes the item from the container within the specified\n coordinates.\n\n :param obj: object\n Any object.\n\n :param coordinates: sequence or array\n Dimension * 2 coordinate pairs, representing the min\n and max coordinates in each dimension of the item to be\n deleted from the index. Their ordering will depend on the\n index's :attr:`interleaved` data member.\n These are not the coordinates of a space containing the\n item, but those of the item itself. Together with the\n id parameter, they determine which item will be deleted.\n This may be an object that satisfies the numpy array protocol.\n\n Example::\n\n >>> from rtree import index\n >>> idx = index.RtreeContainer()\n >>> idx.delete(object(),\n ... (34.3776829412, 26.7375853734, 49.3776829412,\n ... 
41.7375853734))\n Traceback (most recent call last):\n ...\n IndexError: object is not in the index\n\n \"\"\"\n try:\n count = self._objects[id(obj)][0] - 1\n except KeyError:\n raise IndexError('object is not in the index')\n if count == 0:\n del self._objects[id(obj)]\n else:\n self._objects[id(obj)] = (count, obj)\n return super(RtreeContainer, self).delete(id(obj), coordinates)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check_return(result, func, cargs):\n \"Error checking for Error calls\"\n if result != 0:\n s = rt.Error_GetLastErrorMsg().decode()\n msg = 'LASError in \"%s\": %s' % \\\n (func.__name__, s)\n rt.Error_Reset()\n raise RTreeError(msg)\n return True", "response": "Error checking for Error calls"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef check_void_done(result, func, cargs):\n \"Error checking for void* returns that might be empty with no error\"\n if rt.Error_GetErrorCount():\n s = rt.Error_GetLastErrorMsg().decode()\n msg = 'Error in \"%s\": %s' % (func.__name__, s)\n rt.Error_Reset()\n raise RTreeError(msg)\n return result", "response": "Error checking for void * returns that might be empty with no error"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef load(self):\n\n if isinstance(self.application, str):\n return util.import_app(self.application)\n else:\n return self.application", "response": "Attempt an import of the specified application"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef init_app(self, app):\n if not hasattr(app, 'extensions'):\n app.extensions = {}\n\n if 'common' in app.extensions:\n raise RuntimeError(\"Flask-Common extension already initialized\")\n\n app.extensions['common'] = self\n self.app = app\n\n if 'COMMON_FILESERVER_DISABLED' not in app.config:\n with 
app.test_request_context():\n\n # Configure WhiteNoise.\n app.wsgi_app = WhiteNoise(app.wsgi_app, root=url_for('static', filename='')[1:])\n\n self.cache = Cache(app, config={'CACHE_TYPE': app.config.get(\"COMMON_CACHE_TYPE\", 'simple')})\n\n @app.before_request\n def before_request_callback():\n request.start_time = maya.now()\n\n @app.after_request\n def after_request_callback(response):\n if 'COMMON_POWERED_BY_DISABLED' not in current_app.config:\n response.headers['X-Powered-By'] = 'Flask'\n if 'COMMON_PROCESSED_TIME_DISABLED' not in current_app.config:\n response.headers['X-Processed-Time'] = maya.now().epoch - request.start_time.epoch\n return response\n\n @app.route('/favicon.ico')\n def favicon():\n return redirect(url_for('static', filename='favicon.ico'), code=301)", "response": "Initializes the Flask application with Common."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nserve the Flask application.", "response": "def serve(self, workers=None, **kwargs):\n \"\"\"Serves the Flask application.\"\"\"\n if self.app.debug:\n print(crayons.yellow('Booting Flask development server...'))\n self.app.run()\n\n else:\n print(crayons.yellow('Booting Gunicorn...'))\n\n # Start the web server.\n server = GunicornServer(\n self.app, workers=workers or number_of_gunicorn_workers(),\n worker_class='egg:meinheld#gunicorn_worker', **kwargs)\n server.run()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef to_native(self, value):\n context_request = None\n if self.context:\n context_request = self.context.get('request', None)\n return build_versatileimagefield_url_set(\n value,\n self.sizes,\n request=context_request\n )", "response": "Convert the value to a native version of the url field."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef crop_on_centerpoint(self, image, width, height, ppoi=(0.5, 0.5)):\n ppoi_x_axis = 
int(image.size[0] * ppoi[0])\n ppoi_y_axis = int(image.size[1] * ppoi[1])\n center_pixel_coord = (ppoi_x_axis, ppoi_y_axis)\n # Calculate the aspect ratio of `image`\n orig_aspect_ratio = float(\n image.size[0]\n ) / float(\n image.size[1]\n )\n crop_aspect_ratio = float(width) / float(height)\n\n # Figure out if we're trimming from the left/right or top/bottom\n if orig_aspect_ratio >= crop_aspect_ratio:\n # `image` is wider than what's needed,\n # crop from left/right sides\n orig_crop_width = int(\n (crop_aspect_ratio * float(image.size[1])) + 0.5\n )\n orig_crop_height = image.size[1]\n crop_boundary_top = 0\n crop_boundary_bottom = orig_crop_height\n crop_boundary_left = center_pixel_coord[0] - (orig_crop_width // 2)\n crop_boundary_right = crop_boundary_left + orig_crop_width\n if crop_boundary_left < 0:\n crop_boundary_left = 0\n crop_boundary_right = crop_boundary_left + orig_crop_width\n elif crop_boundary_right > image.size[0]:\n crop_boundary_right = image.size[0]\n crop_boundary_left = image.size[0] - orig_crop_width\n\n else:\n # `image` is taller than what's needed,\n # crop from top/bottom sides\n orig_crop_width = image.size[0]\n orig_crop_height = int(\n (float(image.size[0]) / crop_aspect_ratio) + 0.5\n )\n crop_boundary_left = 0\n crop_boundary_right = orig_crop_width\n crop_boundary_top = center_pixel_coord[1] - (orig_crop_height // 2)\n crop_boundary_bottom = crop_boundary_top + orig_crop_height\n if crop_boundary_top < 0:\n crop_boundary_top = 0\n crop_boundary_bottom = crop_boundary_top + orig_crop_height\n elif crop_boundary_bottom > image.size[1]:\n crop_boundary_bottom = image.size[1]\n crop_boundary_top = image.size[1] - orig_crop_height\n # Cropping the image from the original image\n cropped_image = image.crop(\n (\n crop_boundary_left,\n crop_boundary_top,\n crop_boundary_right,\n crop_boundary_bottom\n )\n )\n # Resizing the newly cropped image to the size specified\n # (as determined by `width`x`height`)\n return 
cropped_image.resize(\n (width, height),\n Image.ANTIALIAS\n )", "response": "Crop an image on the centerpoint."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nprocesses an image and return a BytesIO instance of the image cropped to width and height.", "response": "def process_image(self, image, image_format, save_kwargs,\n width, height):\n \"\"\"\n Return a BytesIO instance of `image` cropped to `width` and `height`.\n\n Cropping will first reduce an image down to its longest side\n and then crop inwards centered on the Primary Point of Interest\n (as specified by `self.ppoi`)\n \"\"\"\n imagefile = BytesIO()\n palette = image.getpalette()\n cropped_image = self.crop_on_centerpoint(\n image,\n width,\n height,\n self.ppoi\n )\n\n # Using ImageOps.fit on GIFs can introduce issues with their palette\n # Solution derived from: http://stackoverflow.com/a/4905209/1149774\n if image_format == 'GIF':\n cropped_image.putpalette(palette)\n\n cropped_image.save(\n imagefile,\n **save_kwargs\n )\n\n return imagefile"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef process_image(self, image, image_format, save_kwargs,\n width, height):\n \"\"\"\n Return a BytesIO instance of `image` that fits in a bounding box.\n\n Bounding box dimensions are `width`x`height`.\n \"\"\"\n imagefile = BytesIO()\n image.thumbnail(\n (width, height),\n Image.ANTIALIAS\n )\n image.save(\n imagefile,\n **save_kwargs\n )\n return imagefile", "response": "Process an image and return a BytesIO instance of the image that fits in a bounding box."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef process_image(self, image, image_format, save_kwargs={}):\n imagefile = BytesIO()\n inv_image = ImageOps.invert(image)\n inv_image.save(\n imagefile,\n **save_kwargs\n )\n return imagefile", "response": "Return a BytesIO instance of image with 
inverted colors."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nensure data is prepped properly before handing off to ImageField.", "response": "def to_python(self, data):\n \"\"\"Ensure data is prepped properly before handing off to ImageField.\"\"\"\n if data is not None:\n if hasattr(data, 'open'):\n data.open()\n return super(VersatileImageFormField, self).to_python(data)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nprocess the field's placeholder image.", "response": "def process_placeholder_image(self):\n \"\"\"\n Process the field's placeholder image.\n\n Ensures the placeholder image has been saved to the same storage class\n as the field in a top level folder with a name specified by\n settings.VERSATILEIMAGEFIELD_SETTINGS['placeholder_directory_name']\n\n This should be called by the VersatileImageFileDescriptor __get__.\n If self.placeholder_image_name is already set it just returns right away.\n \"\"\"\n if self.placeholder_image_name:\n return\n\n placeholder_image_name = None\n placeholder_image = self.placeholder_image\n if placeholder_image:\n if isinstance(placeholder_image, OnStoragePlaceholderImage):\n name = placeholder_image.path\n else:\n name = placeholder_image.image_data.name\n placeholder_image_name = os.path.join(\n VERSATILEIMAGEFIELD_PLACEHOLDER_DIRNAME, name\n )\n if not self.storage.exists(placeholder_image_name):\n self.storage.save(\n placeholder_image_name,\n placeholder_image.image_data\n )\n self.placeholder_image_name = placeholder_image_name"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef pre_save(self, model_instance, add):\n file = super(VersatileImageField, self).pre_save(model_instance, add)\n self.update_ppoi_field(model_instance)\n return file", "response": "Return the field's value just before saving."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the 
following Python 3 function\ndef update_ppoi_field(self, instance, *args, **kwargs):\n # Nothing to update if the field doesn't have a ppoi\n # dimension field.\n if not self.ppoi_field:\n return\n\n # getattr will call the VersatileImageFileDescriptor's __get__ method,\n # which coerces the assigned value into an instance of\n # self.attr_class(VersatileImageFieldFile in this case).\n file = getattr(instance, self.attname)\n\n # file should be an instance of VersatileImageFieldFile or should be\n # None.\n ppoi = None\n if file and not isinstance(file, tuple):\n if hasattr(file, 'ppoi'):\n ppoi = file.ppoi\n\n # Update the ppoi field.\n if self.ppoi_field:\n setattr(instance, self.ppoi_field, ppoi)", "response": "Update the field's ppoi field."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsaves the data sent from the MultiValueField forms that set ppoi values.", "response": "def save_form_data(self, instance, data):\n \"\"\"\n Handle data sent from MultiValueField forms that set ppoi values.\n\n `instance`: The model instance that is being altered via a form\n `data`: The data sent from the form to this field which can be either:\n * `None`: This is unset data from an optional field\n * A two-position tuple: (image_form_data, ppoi_data)\n * `image_form_data` options:\n * `None` the file for this field is unchanged\n * `False` unassign the file from the field\n * `ppoi_data` data structure:\n * `%(x_coordinate)sx%(y_coordinate)s`: The ppoi data to\n assign to the unchanged file\n\n \"\"\"\n to_assign = data\n if data and isinstance(data, tuple):\n # This value is coming from a MultiValueField\n if data[0] is None:\n # This means the file hasn't changed but we need to\n # update the ppoi\n current_field = getattr(instance, self.name)\n if data[1]:\n current_field.ppoi = data[1]\n to_assign = current_field\n elif data[0] is False:\n # This means the 'Clear' checkbox was checked so we\n # need to empty the field\n 
to_assign = ''\n else:\n # This means there is a new upload so we need to unpack\n # the tuple and assign the first position to the field\n # attribute\n to_assign = data[0]\n super(VersatileImageField, self).save_form_data(instance, to_assign)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a formfield with the default values set for the VersatileImageField.", "response": "def formfield(self, **kwargs):\n \"\"\"Return a formfield.\"\"\"\n # This is a fairly standard way to set up some defaults\n # while letting the caller override them.\n defaults = {}\n if self.ppoi_field:\n defaults['form_class'] = SizedImageCenterpointClickDjangoAdminField\n if kwargs.get('widget') is AdminFileWidget:\n # Ensuring default admin widget is skipped (in favor of using\n # SizedImageCenterpointClickDjangoAdminField's default widget as\n # the default widget choice for use in the admin).\n # This is for two reasons:\n # 1. To prevent 'typical' admin users (those who want to use\n # the PPOI 'click' widget by default) from having to\n # specify a formfield_overrides for each ModelAdmin class\n # used by each model that has a VersatileImageField.\n # 2. 
If a VersatileImageField does not have a ppoi_field specified\n # it will 'fall back' to a ClearableFileInput anyways.\n # If admin users do, in fact, want to force use of the\n # AdminFileWidget they can simply subclass AdminFileWidget and\n # specify it in their ModelAdmin.formfield_overrides (though,\n # if that's the case, why are they using VersatileImageField in\n # the first place?)\n del kwargs['widget']\n defaults.update(kwargs)\n return super(VersatileImageField, self).formfield(**defaults)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef value_to_string(self, obj):\n if DJANGO_VERSION > (1, 9):\n value = self.value_from_object(obj)\n else:\n value = self._get_val_from_obj(obj)\n return self.get_prep_value(value)", "response": "Prepare field for serialization."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nprint out a Yum - style progress bar.", "response": "def cli_progress_bar(start, end, bar_length=50):\n \"\"\"\n Prints out a Yum-style progress bar (via sys.stdout.write).\n `start`: The 'current' value of the progress bar.\n `end`: The '100%' value of the progress bar.\n `bar_length`: The size of the overall progress bar.\n\n Example output with start=20, end=100, bar_length=50:\n [###########----------------------------------------] 20/100 (100%)\n\n Intended to be used in a loop. 
Example:\n end = 100\n for i in range(end):\n cli_progress_bar(i, end)\n\n Based on an implementation found here:\n http://stackoverflow.com/a/13685020/1149774\n \"\"\"\n percent = float(start) / end\n hashes = '#' * int(round(percent * bar_length))\n spaces = '-' * (bar_length - len(hashes))\n stdout.write(\n \"\\r[{0}] {1}/{2} ({3}%)\".format(\n hashes + spaces,\n start,\n end,\n int(round(percent * 100))\n )\n )\n stdout.flush()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _prewarm_versatileimagefield(size_key, versatileimagefieldfile):\n versatileimagefieldfile.create_on_demand = True\n try:\n url = get_url_from_image_key(versatileimagefieldfile, size_key)\n except Exception:\n success = False\n url_or_filepath = versatileimagefieldfile.name\n logger.exception('Thumbnail generation failed',\n extra={'path': url_or_filepath})\n else:\n success = True\n url_or_filepath = url\n return (success, url_or_filepath)", "response": "Pre-warm the image and report whether it was successfully created."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef warm(self):\n num_images_pre_warmed = 0\n failed_to_create_image_path_list = []\n total = self.queryset.count() * len(self.size_key_list)\n for a, instance in enumerate(self.queryset, start=1):\n for b, size_key in enumerate(self.size_key_list, start=1):\n success, url_or_filepath = self._prewarm_versatileimagefield(\n size_key,\n reduce(getattr, self.image_attr.split(\".\"), instance)\n )\n if success is True:\n num_images_pre_warmed += 1\n if self.verbose:\n cli_progress_bar(num_images_pre_warmed, total)\n else:\n failed_to_create_image_path_list.append(url_or_filepath)\n\n if a * b == total:\n stdout.write('\\n')\n\n stdout.flush()\n return (num_images_pre_warmed, failed_to_create_image_path_list)", "response": "Warm the storage class with the contents of the VersatileImageField field."} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate a brief explanation for the following Python 3 code\ndef autodiscover():\n from importlib import import_module\n from django.apps import apps\n from django.utils.module_loading import module_has_submodule\n\n for app_config in apps.get_app_configs():\n # Attempt to import the app's module.\n\n try:\n before_import_sizedimage_registry = copy.copy(\n versatileimagefield_registry._sizedimage_registry\n )\n before_import_filter_registry = copy.copy(\n versatileimagefield_registry._filter_registry\n )\n import_module('%s.versatileimagefield' % app_config.name)\n except Exception:\n # Reset the versatileimagefield_registry to the state before the\n # last import as this import will have to reoccur on the next\n # request and this could raise NotRegistered and AlreadyRegistered\n # exceptions (see django ticket #8245).\n versatileimagefield_registry._sizedimage_registry = \\\n before_import_sizedimage_registry\n versatileimagefield_registry._filter_registry = \\\n before_import_filter_registry\n\n # Decide whether to bubble up this error. If the app just\n # doesn't have the module in question, we can ignore the error\n # attempting to import it, otherwise we want it to bubble up.\n if module_has_submodule(app_config.module, 'versatileimagefield'):\n raise", "response": "Autodiscover the versatile image field modules."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef register_sizer(self, attr_name, sizedimage_cls):\n if attr_name.startswith(\n '_'\n ) or attr_name in self.unallowed_sizer_names:\n raise UnallowedSizerName(\n \"`%s` is an unallowed Sizer name. 
Sizer names cannot begin \"\n \"with an underscore or be named any of the \"\n \"following: %s.\" % (\n attr_name,\n ', '.join([\n name\n for name in self.unallowed_sizer_names\n ])\n )\n )\n if not issubclass(sizedimage_cls, SizedImage):\n raise InvalidSizedImageSubclass(\n 'Only subclasses of versatileimagefield.datastructures.'\n 'SizedImage may be registered with register_sizer'\n )\n\n if attr_name in self._sizedimage_registry:\n raise AlreadyRegistered(\n 'A SizedImage class is already registered to the `%s` '\n 'attribute. If you would like to override this attribute, '\n 'use the unregister method' % attr_name\n )\n else:\n self._sizedimage_registry[attr_name] = sizedimage_cls", "response": "Registers a new SizedImage subclass with the given attribute name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef register_filter(self, attr_name, filterimage_cls):\n if attr_name.startswith('_'):\n raise UnallowedFilterName(\n '`%s` is an unallowed Filter name. Filter names cannot begin '\n 'with an underscore.' % attr_name\n )\n if not issubclass(filterimage_cls, FilteredImage):\n raise InvalidFilteredImageSubclass(\n 'Only subclasses of FilteredImage may be registered as '\n 'filters with VersatileImageFieldRegistry'\n )\n\n if attr_name in self._filter_registry:\n raise AlreadyRegistered(\n 'A ProcessedImageMixIn class is already registered to the `%s`'\n ' attribute. 
If you would like to override this attribute, '\n 'use the unregister method' % attr_name\n )\n else:\n self._filter_registry[attr_name] = filterimage_cls", "response": "Register a new FilteredImage subclass with the given attribute name."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef unregister_filter(self, attr_name):\n if attr_name not in self._filter_registry:\n raise NotRegistered(\n 'No FilteredImage subclass is registered to %s' % attr_name\n )\n else:\n del self._filter_registry[attr_name]", "response": "Unregister the FilteredImage subclass currently assigned to attr_name."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef url(self):\n if not self.name and self.field.placeholder_image_name:\n return self.storage.url(self.field.placeholder_image_name)\n\n return super(VersatileImageMixIn, self).url", "response": "Returns the appropriate URL for this VersatileImageFieldFile."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef ppoi(self, value):\n ppoi = validate_ppoi(\n value,\n return_converted_tuple=True\n )\n if ppoi is not False:\n self._ppoi_value = ppoi\n self.build_filters_and_sizers(ppoi, self.create_on_demand)", "response": "Primary Point of Interest (ppoi) setter."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef build_filters_and_sizers(self, ppoi_value, create_on_demand):\n name = self.name\n if not name and self.field.placeholder_image_name:\n name = self.field.placeholder_image_name\n self.filters = FilterLibrary(\n name,\n self.storage,\n versatileimagefield_registry,\n ppoi_value,\n create_on_demand\n )\n for (\n attr_name,\n sizedimage_cls\n ) in iteritems(versatileimagefield_registry._sizedimage_registry):\n setattr(\n self,\n attr_name,\n sizedimage_cls(\n path_to_image=name,\n storage=self.storage,\n 
create_on_demand=create_on_demand,\n ppoi=ppoi_value\n )\n )", "response": "Build the filters and sizers for a field."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the location where filtered images are stored.", "response": "def get_filtered_root_folder(self):\n \"\"\"Return the location where filtered images are stored.\"\"\"\n folder, filename = os.path.split(self.name)\n return os.path.join(folder, VERSATILEIMAGEFIELD_FILTERED_DIRNAME, '')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_sized_root_folder(self):\n folder, filename = os.path.split(self.name)\n return os.path.join(VERSATILEIMAGEFIELD_SIZED_DIRNAME, folder, '')", "response": "Return the location where sized images are stored."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the location where filtered + sized images are stored.", "response": "def get_filtered_sized_root_folder(self):\n \"\"\"Return the location where filtered + sized images are stored.\"\"\"\n sized_root_folder = self.get_sized_root_folder()\n return os.path.join(\n sized_root_folder,\n VERSATILEIMAGEFIELD_FILTERED_DIRNAME\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete_matching_files_from_storage(self, root_folder, regex):\n if not self.name: # pragma: no cover\n return\n try:\n directory_list, file_list = self.storage.listdir(root_folder)\n except OSError: # pragma: no cover\n pass\n else:\n folder, filename = os.path.split(self.name)\n basename, ext = os.path.splitext(filename)\n for f in file_list:\n if not f.startswith(basename) or not f.endswith(ext): # pragma: no cover\n continue\n tag = f[len(basename):-len(ext)]\n assert f == basename + tag + ext\n if regex.match(tag) is not None:\n file_location = os.path.join(root_folder, f)\n self.storage.delete(file_location)\n cache.delete(\n self.storage.url(file_location)\n )\n 
print(\n \"Deleted {file} (created from: {original})\".format(\n file=os.path.join(root_folder, f),\n original=self.name\n )\n )", "response": "Delete files in root_folder which match regex before file ext."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate_ppoi_tuple(value):\n valid = True\n while valid is True:\n if len(value) == 2 and isinstance(value, tuple):\n for x in value:\n if x >= 0 and x <= 1:\n pass\n else:\n valid = False\n break\n else:\n valid = False\n return valid", "response": "Validates that a tuple is a valid PPOI."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts a string with format x y x y to a two position tuple.", "response": "def validate_ppoi(value, return_converted_tuple=False):\n \"\"\"\n Converts, validates and optionally returns a string with formatting:\n '%(x_axis)dx%(y_axis)d' into a two position tuple.\n\n If a tuple is passed to `value` it is also validated.\n\n Both x_axis and y_axis must be floats or ints greater\n than 0 and less than 1.\n \"\"\"\n\n valid_ppoi = True\n to_return = None\n if isinstance(value, tuple):\n valid_ppoi = validate_ppoi_tuple(value)\n if valid_ppoi:\n to_return = value\n else:\n tup = tuple()\n try:\n string_split = [\n float(segment.strip())\n for segment in value.split('x')\n if float(segment.strip()) >= 0 and float(segment.strip()) <= 1\n ]\n except Exception:\n valid_ppoi = False\n else:\n tup = tuple(string_split)\n\n valid_ppoi = validate_ppoi_tuple(tup)\n\n if valid_ppoi:\n to_return = tup\n if not valid_ppoi:\n raise ValidationError(\n message=INVALID_CENTERPOINT_ERROR_MESSAGE % str(value),\n code='invalid_ppoi'\n )\n else:\n if to_return and return_converted_tuple is True:\n return to_return"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef preprocess(self, image, image_format):\n save_kwargs = 
{'format': image_format}\n\n # Ensuring image is properly rotated\n if hasattr(image, '_getexif'):\n exif_datadict = image._getexif() # returns None if no EXIF data\n if exif_datadict is not None:\n exif = dict(exif_datadict.items())\n orientation = exif.get(EXIF_ORIENTATION_KEY, None)\n if orientation == 3:\n image = image.transpose(Image.ROTATE_180)\n elif orientation == 6:\n image = image.transpose(Image.ROTATE_270)\n elif orientation == 8:\n image = image.transpose(Image.ROTATE_90)\n\n # Ensure any embedded ICC profile is preserved\n save_kwargs['icc_profile'] = image.info.get('icc_profile')\n\n if hasattr(self, 'preprocess_%s' % image_format):\n image, addl_save_kwargs = getattr(\n self,\n 'preprocess_%s' % image_format\n )(image=image)\n save_kwargs.update(addl_save_kwargs)\n\n return image, save_kwargs", "response": "Preprocesses an image and returns a PIL Image instance and a dictionary of keyword arguments to be used when saving the image."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreceives a PIL Image instance of a GIF and return a 2 - tuple with the image and kwargs.", "response": "def preprocess_GIF(self, image, **kwargs):\n \"\"\"\n Receive a PIL Image instance of a GIF and return 2-tuple.\n\n Args:\n * [0]: Original Image instance (passed to `image`)\n * [1]: Dict with a transparency key (to GIF transparency layer)\n \"\"\"\n if 'transparency' in image.info:\n save_kwargs = {'transparency': image.info['transparency']}\n else:\n save_kwargs = {}\n return (image, save_kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreceiving a PIL Image instance of a JPEG and returns a 2 - tuple with the image and kwargs.", "response": "def preprocess_JPEG(self, image, **kwargs):\n \"\"\"\n Receive a PIL Image instance of a JPEG and returns 2-tuple.\n\n Args:\n * [0]: Image instance, converted to RGB\n * [1]: Dict with a quality key (mapped to the value of `QUAL` as\n defined by the 
`VERSATILEIMAGEFIELD_JPEG_RESIZE_QUALITY`\n setting)\n \"\"\"\n save_kwargs = {\n 'progressive': VERSATILEIMAGEFIELD_PROGRESSIVE_JPEG,\n 'quality': QUAL\n }\n if image.mode != 'RGB':\n image = image.convert('RGB')\n return (image, save_kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef retrieve_image(self, path_to_image):\n image = self.storage.open(path_to_image, 'rb')\n file_ext = path_to_image.rsplit('.')[-1]\n image_format, mime_type = get_image_metadata_from_file_ext(file_ext)\n\n return (\n Image.open(image),\n file_ext,\n image_format,\n mime_type\n )", "response": "Return a PIL Image instance stored at path_to_image."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef save_image(self, imagefile, save_path, file_ext, mime_type):\n file_to_save = InMemoryUploadedFile(\n imagefile,\n None,\n 'foo.%s' % file_ext,\n mime_type,\n imagefile.tell(),\n None\n )\n file_to_save.seek(0)\n self.storage.save(save_path, file_to_save)", "response": "Save an image to the local storage at save_path."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ppoi_as_str(self):\n return \"%s__%s\" % (\n str(self.ppoi[0]).replace('.', '-'),\n str(self.ppoi[1]).replace('.', '-')\n )", "response": "Return PPOI value as a string."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a resized image.", "response": "def create_resized_image(self, path_to_image, save_path_on_storage,\n width, height):\n \"\"\"\n Create a resized image.\n\n `path_to_image`: The path to the image with the media directory to\n resize. 
If `None`, the\n VERSATILEIMAGEFIELD_PLACEHOLDER_IMAGE will be used.\n `save_path_on_storage`: Where on self.storage to save the resized image\n `width`: Width of resized image (int)\n `height`: Desired height of resized image (int)\n `filename_key`: A string that will be used in the sized image filename\n to signify what operation was done to it.\n Examples: 'crop' or 'scale'\n \"\"\"\n image, file_ext, image_format, mime_type = self.retrieve_image(\n path_to_image\n )\n\n image, save_kwargs = self.preprocess(image, image_format)\n\n imagefile = self.process_image(\n image=image,\n image_format=image_format,\n save_kwargs=save_kwargs,\n width=width,\n height=height\n )\n self.save_image(imagefile, save_path_on_storage, file_ext, mime_type)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrender the widget as an HTML string.", "response": "def render(self, name, value, attrs=None, renderer=None):\n \"\"\"\n Render the widget as an HTML string.\n\n Overridden here to support Django < 1.11.\n \"\"\"\n if self.has_template_widget_rendering:\n return super(ClearableFileInputWithImagePreview, self).render(\n name, value, attrs=attrs, renderer=renderer\n )\n else:\n context = self.get_context(name, value, attrs)\n return render_to_string(self.template_name, context)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the context to render this widget with.", "response": "def get_context(self, name, value, attrs):\n \"\"\"Get the context to render this widget with.\"\"\"\n if self.has_template_widget_rendering:\n context = super(ClearableFileInputWithImagePreview, self).get_context(name, value, attrs)\n else:\n # Build the context manually.\n context = {}\n context['widget'] = {\n 'name': name,\n 'is_hidden': self.is_hidden,\n 'required': self.is_required,\n 'value': self._format_value(value),\n 'attrs': self.build_attrs(self.attrs, attrs),\n 'template_name': self.template_name,\n 'type': 
self.input_type,\n }\n\n # It seems Django 1.11's ClearableFileInput doesn't add everything to the 'widget' key, so we can't use it\n # in MultiWidget. Add it manually here.\n checkbox_name = self.clear_checkbox_name(name)\n checkbox_id = self.clear_checkbox_id(checkbox_name)\n\n context['widget'].update({\n 'checkbox_name': checkbox_name,\n 'checkbox_id': checkbox_id,\n 'is_initial': self.is_initial(value),\n 'input_text': self.input_text,\n 'initial_text': self.initial_text,\n 'clear_checkbox_label': self.clear_checkbox_label,\n })\n\n if value and hasattr(value, \"url\"):\n context['widget'].update({\n 'hidden_field_id': self.get_hidden_field_id(name),\n 'point_stage_id': self.get_point_stage_id(name),\n 'ppoi_id': self.get_ppoi_id(name),\n 'sized_url': self.get_sized_url(value),\n 'image_preview_id': self.image_preview_id(name),\n })\n\n return context"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nbuilds an attribute dictionary.", "response": "def build_attrs(self, base_attrs, extra_attrs=None):\n \"\"\"Build an attribute dictionary.\"\"\"\n attrs = base_attrs.copy()\n if extra_attrs is not None:\n attrs.update(extra_attrs)\n return attrs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the filename that is resized according to width height and filename_key.", "response": "def get_resized_filename(filename, width, height, filename_key):\n \"\"\"\n Return the 'resized filename' (according to `width`, `height` and\n `filename_key`) in the following format:\n `filename`-`filename_key`-`width`x`height`.ext\n \"\"\"\n try:\n image_name, ext = filename.rsplit('.', 1)\n except ValueError:\n image_name = filename\n ext = 'jpg'\n\n resized_template = \"%(filename_key)s-%(width)dx%(height)d\"\n if ext.lower() in ['jpg', 'jpeg']:\n resized_template = resized_template + \"-%(quality)d\"\n\n resized_key = resized_template % ({\n 'filename_key': filename_key,\n 'width': width,\n 'height': 
height,\n 'quality': QUAL\n })\n\n return \"%(image_name)s-%(image_key)s.%(ext)s\" % ({\n 'image_name': image_name,\n 'image_key': post_process_image_key(resized_key),\n 'ext': ext\n })"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a path_to_image location on storage as dictated by width height and filename_key", "response": "def get_resized_path(path_to_image, width, height,\n filename_key, storage):\n \"\"\"\n Return a `path_to_image` location on `storage` as dictated by `width`, `height`\n and `filename_key`\n \"\"\"\n containing_folder, filename = os.path.split(path_to_image)\n\n resized_filename = get_resized_filename(\n filename,\n width,\n height,\n filename_key\n )\n\n joined_path = os.path.join(*[\n VERSATILEIMAGEFIELD_SIZED_DIRNAME,\n containing_folder,\n resized_filename\n ]).replace(' ', '') # Removing spaces so this path is memcached friendly\n\n return joined_path"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the filtered filename according to filename_key", "response": "def get_filtered_filename(filename, filename_key):\n \"\"\"\n Return the 'filtered filename' (according to `filename_key`)\n in the following format:\n `filename`__`filename_key`__.ext\n \"\"\"\n try:\n image_name, ext = filename.rsplit('.', 1)\n except ValueError:\n image_name = filename\n ext = 'jpg'\n return \"%(image_name)s__%(filename_key)s__.%(ext)s\" % ({\n 'image_name': image_name,\n 'filename_key': filename_key,\n 'ext': ext\n })"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_filtered_path(path_to_image, filename_key, storage):\n containing_folder, filename = os.path.split(path_to_image)\n\n filtered_filename = get_filtered_filename(filename, filename_key)\n path_to_return = os.path.join(*[\n containing_folder,\n VERSATILEIMAGEFIELD_FILTERED_DIRNAME,\n filtered_filename\n ])\n # Removing spaces so this path is 
memcached key friendly\n path_to_return = path_to_return.replace(' ', '')\n return path_to_return", "response": "Return the path to the filtered file"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nvalidates a list of size keys.", "response": "def validate_versatileimagefield_sizekey_list(sizes):\n \"\"\"\n Validate a list of size keys.\n\n `sizes`: An iterable of 2-tuples, both strings. Example:\n [\n ('large', 'url'),\n ('medium', 'crop__400x400'),\n ('small', 'thumbnail__100x100')\n ]\n \"\"\"\n try:\n for key, size_key in sizes:\n size_key_split = size_key.split('__')\n if size_key_split[-1] != 'url' and (\n 'x' not in size_key_split[-1]\n ):\n raise InvalidSizeKey(\n \"{0} is an invalid size. All sizes must be either \"\n \"'url' or made up of at least two segments separated \"\n \"by double underscores. Examples: 'crop__400x400', \"\n \"filters__invert__url\".format(size_key)\n )\n except ValueError:\n raise InvalidSizeKeySet(\n '{} is an invalid size key set. 
Size key sets must be an '\n 'iterable of 2-tuples'.format(str(sizes))\n )\n return list(set(sizes))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nbuild a URL from image_key.", "response": "def get_url_from_image_key(image_instance, image_key):\n \"\"\"Build a URL from `image_key`.\"\"\"\n img_key_split = image_key.split('__')\n if 'x' in img_key_split[-1]:\n size_key = img_key_split.pop(-1)\n else:\n size_key = None\n img_url = reduce(getattr, img_key_split, image_instance)\n if size_key:\n img_url = img_url[size_key].url\n return img_url"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef build_versatileimagefield_url_set(image_instance, size_set, request=None):\n size_set = validate_versatileimagefield_sizekey_list(size_set)\n to_return = {}\n if image_instance or image_instance.field.placeholder_image:\n for key, image_key in size_set:\n img_url = get_url_from_image_key(image_instance, image_key)\n if request is not None:\n img_url = request.build_absolute_uri(img_url)\n to_return[key] = img_url\n return to_return", "response": "Builds a dictionary of urls corresponding to size_set."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_rendition_key_set(key):\n try:\n rendition_key_set = IMAGE_SETS[key]\n except KeyError:\n raise ImproperlyConfigured(\n \"No Rendition Key Set exists at \"\n \"settings.VERSATILEIMAGEFIELD_RENDITION_KEY_SETS['{}']\".format(key)\n )\n else:\n return validate_versatileimagefield_sizekey_list(rendition_key_set)", "response": "Returns a validated and prepped Rendition Key Set for the given key."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntakes a raw Instruction and translates it into a human readable text representation.", "response": "def format_instruction(insn):\n \"\"\"\n Takes a raw `Instruction` and translates it into a human 
readable text\n representation. As of writing, the text representation for WASM is not yet\n standardized, so we just emit some generic format.\n \"\"\"\n text = insn.op.mnemonic\n\n if not insn.imm:\n return text\n\n return text + ' ' + ', '.join([\n getattr(insn.op.imm_struct, x.name).to_string(\n getattr(insn.imm, x.name)\n )\n for x in insn.op.imm_struct._meta.fields\n ])"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef format_function(\n func_body,\n func_type=None,\n indent=2,\n format_locals=True,\n):\n \"\"\"\n Takes a `FunctionBody` and optionally a `FunctionType`, yielding the string \n representation of the function line by line. The function type is required\n for formatting function parameter and return value information.\n \"\"\"\n if func_type is None:\n yield 'func'\n else:\n param_section = ' (param {})'.format(' '.join(\n map(format_lang_type, func_type.param_types)\n )) if func_type.param_types else ''\n result_section = ' (result {})'.format(\n format_lang_type(func_type.return_type)\n ) if func_type.return_type else ''\n yield 'func' + param_section + result_section\n\n if format_locals and func_body.locals:\n yield '(locals {})'.format(' '.join(itertools.chain.from_iterable(\n itertools.repeat(format_lang_type(x.type), x.count)\n for x in func_body.locals\n )))\n\n level = 1\n for cur_insn in decode_bytecode(func_body.code):\n if cur_insn.op.flags & INSN_LEAVE_BLOCK:\n level -= 1\n yield ' ' * (level * indent) + format_instruction(cur_insn)\n if cur_insn.op.flags & INSN_ENTER_BLOCK:\n level += 1", "response": "Yields a string \n representation of a function."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndecode raw bytecode yielding Instructions.", "response": "def decode_bytecode(bytecode):\n \"\"\"Decodes raw bytecode, yielding `Instruction`s.\"\"\"\n bytecode_wnd = memoryview(bytecode)\n while bytecode_wnd:\n opcode_id = byte2int(bytecode_wnd[0])\n opcode = 
OPCODE_MAP[opcode_id]\n\n if opcode.imm_struct is not None:\n offs, imm, _ = opcode.imm_struct.from_raw(None, bytecode_wnd[1:])\n else:\n imm = None\n offs = 0\n\n insn_len = 1 + offs\n yield Instruction(opcode, imm, insn_len)\n bytecode_wnd = bytecode_wnd[insn_len:]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef decode_module(module, decode_name_subsections=False):\n module_wnd = memoryview(module)\n\n # Read & yield module header.\n hdr = ModuleHeader()\n hdr_len, hdr_data, _ = hdr.from_raw(None, module_wnd)\n yield ModuleFragment(hdr, hdr_data)\n module_wnd = module_wnd[hdr_len:]\n\n # Read & yield sections.\n while module_wnd:\n sec = Section()\n sec_len, sec_data, _ = sec.from_raw(None, module_wnd)\n\n # If requested, decode name subsections when encountered.\n if (\n decode_name_subsections and\n sec_data.id == SEC_UNK and\n sec_data.name == SEC_NAME\n ):\n sec_wnd = sec_data.payload\n while sec_wnd:\n subsec = NameSubSection()\n subsec_len, subsec_data, _ = subsec.from_raw(None, sec_wnd)\n yield ModuleFragment(subsec, subsec_data)\n sec_wnd = sec_wnd[subsec_len:]\n else:\n yield ModuleFragment(sec, sec_data)\n\n module_wnd = module_wnd[sec_len:]", "response": "Decodes raw WASM modules yielding ModuleFragments."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef deprecated_func(func):\n\n # We use a mutable container here to work around Py2's lack of\n # the `nonlocal` keyword.\n first_usage = [True]\n\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n if first_usage[0]:\n warnings.warn(\n \"Call to deprecated function {}.\".format(func.__name__),\n DeprecationWarning,\n )\n first_usage[0] = False\n return func(*args, **kwargs)\n\n return wrapper", "response": "Deprecates a function printing a warning on the first usage."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 
to\nsend an action to the server.", "response": "def send_action(self, action, as_list=None, **kwargs):\n \"\"\"Send an :class:`~panoramisk.actions.Action` to the server:\n\n :param action: an Action or dict with action name and parameters to\n send\n :type action: Action or dict or Command\n :param as_list: If True, the action Future will retrieve all responses\n :type as_list: boolean\n :return: a Future that will receive the response\n :rtype: asyncio.Future\n\n :Example:\n\n To retrieve answer in a coroutine::\n\n manager = Manager()\n resp = yield from manager.send_action({'Action': 'Status'})\n\n With a callback::\n\n manager = Manager()\n future = manager.send_action({'Action': 'Status'})\n future.add_done_callback(handle_status_response)\n\n See https://wiki.asterisk.org/wiki/display/AST/AMI+Actions for\n more information on actions\n \"\"\"\n action.update(kwargs)\n return self.protocol.send(action, as_list=as_list)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsending a command to the server.", "response": "def send_command(self, command, as_list=False):\n \"\"\"Send a :class:`~panoramisk.actions.Command` to the server::\n\n manager = Manager()\n resp = manager.send_command('http show status')\n\n Return a response :class:`~panoramisk.message.Message`.\n See https://wiki.asterisk.org/wiki/display/AST/ManagerAction_Command\n \"\"\"\n action = actions.Action({'Command': command, 'Action': 'Command'},\n as_list=as_list)\n return self.send_action(action)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsends a command to the AGI server.", "response": "def send_agi_command(self, channel, command, as_list=False):\n \"\"\"Send a :class:`~panoramisk.actions.Command` to the server:\n\n :param channel: Channel name where to launch command.\n Ex: 'SIP/000000-00000a53'\n :type channel: String\n :param command: command to launch. 
Ex: 'GET VARIABLE async_agi_server'\n :type command: String\n :param as_list: If True, the action Future will retrieve all responses\n :type as_list: boolean\n :return: a Future that will receive the response\n :rtype: asyncio.Future\n\n :Example:\n\n ::\n\n manager = Manager()\n resp = manager.send_agi_command('SIP/000000-00000a53',\n 'GET VARIABLE async_agi_server')\n\n\n Return a response :class:`~panoramisk.message.Message`.\n See https://wiki.asterisk.org/wiki/display/AST/Asterisk+11+ManagerAction_AGI\n \"\"\"\n action = actions.Command({'Action': 'AGI',\n 'Channel': channel,\n 'Command': command},\n as_list=as_list)\n return self.send_action(action)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef connect(self):\n if self.loop is None: # pragma: no cover\n self.loop = asyncio.get_event_loop()\n t = asyncio.Task(\n self.loop.create_connection(\n self.config['protocol_factory'],\n self.config['host'], self.config['port'],\n ssl=self.config['ssl']),\n loop=self.loop)\n t.add_done_callback(self.connection_made)\n return t", "response": "connect to the server"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef register_event(self, pattern, callback=None):\n def _register_event(callback):\n if not self.callbacks[pattern]:\n self.patterns.append((pattern,\n re.compile(fnmatch.translate(pattern))))\n self.callbacks[pattern].append(callback)\n return callback\n if callback is not None:\n return _register_event(callback)\n else:\n return _register_event", "response": "register an event pattern with a callback"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncloses the connection to the database.", "response": "def close(self):\n \"\"\"Close the connection\"\"\"\n if self.pinger:\n self.pinger.cancel()\n self.pinger = None\n if getattr(self, 'protocol', None):\n self.protocol.close()"} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 function for\nsending a command to FastAGI.", "response": "def send_command(self, command):\n \"\"\"Send a command for FastAGI request:\n\n :param command: Command to launch on FastAGI request. Ex: 'EXEC StartMusicOnHolds'\n :type command: String\n\n :Example:\n\n ::\n\n @asyncio.coroutine\n def call_waiting(request):\n print(['AGI variables:', request.headers])\n yield from request.send_command('ANSWER')\n yield from request.send_command('EXEC StartMusicOnHold')\n yield from request.send_command('EXEC Wait 10')\n\n \"\"\"\n command += '\\n'\n self.writer.write(command.encode(self.encoding))\n yield from self.writer.drain()\n\n agi_result = yield from self._read_result()\n # If Asterisk returns `100 Trying...`, wait for next the response.\n while agi_result.get('status_code') == 100:\n agi_result = yield from self._read_result()\n\n # when we got AGIUsageError the following line contains some indication\n if 'error' in agi_result and agi_result['error'] == 'AGIUsageError':\n buff_usage_error = yield from self.reader.readline()\n agi_result['msg'] += buff_usage_error.decode(self.encoding)\n\n return agi_result"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _read_result(self):\n response = yield from self.reader.readline()\n return parse_agi_result(response.decode(self.encoding)[:-1])", "response": "Parse the response from the AGI and parse it."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_route(self, path, endpoint):\n assert callable(endpoint), endpoint\n if path in self._route:\n raise ValueError('A route already exists.')\n if not asyncio.iscoroutinefunction(endpoint):\n endpoint = asyncio.coroutine(endpoint)\n self._route[path] = endpoint", "response": "Add a route for FastAGI requests."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 
code\ndef del_route(self, path):\n if path not in self._route:\n raise ValueError('This route doesn\\'t exist.')\n del(self._route[path])", "response": "Delete a route for FastAGI requests."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef handler(self, reader, writer):\n buffer = b''\n while b'\\n\\n' not in buffer:\n buffer += yield from reader.read(self.buf_size)\n lines = buffer[:-2].decode(self.default_encoding).split('\\n')\n headers = OrderedDict([\n line.split(': ', 1) for line in lines if ': ' in line\n ])\n\n agi_network_script = headers.get('agi_network_script')\n log.info('Received FastAGI request from %r for \"%s\" route',\n writer.get_extra_info('peername'), agi_network_script)\n log.debug(\"Asterisk Headers: %r\", headers)\n\n if agi_network_script is not None:\n route = self._route.get(agi_network_script)\n if route is not None:\n request = Request(app=self,\n headers=headers,\n reader=reader, writer=writer,\n encoding=self.default_encoding)\n try:\n yield from route(request)\n except BaseException:\n log.exception(\n 'An exception has been raised for the request \"%s\"',\n agi_network_script\n )\n else:\n log.error('No route for the request \"%s\"', agi_network_script)\n else:\n log.error('No agi_network_script header for the request')\n log.debug(\"Closing client socket\")\n writer.close()", "response": "A coroutine handler to handle a FastAGI request."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nparse AGI results using Regular expression.", "response": "def parse_agi_result(line):\n \"\"\"Parse AGI results using Regular expression.\n\n AGI Result examples::\n\n 100 result=0 Trying...\n\n 200 result=0\n\n 200 result=-1\n\n 200 result=132456\n\n 200 result= (timeout)\n\n 510 Invalid or unknown command\n\n 520-Invalid command syntax. 
Proper usage follows:\n int() argument must be a string, a bytes-like object or a number, not\n 'NoneType'\n\n HANGUP\n\n \"\"\"\n # print(\"--------------\\n\", line)\n if line == 'HANGUP':\n return {'error': 'AGIResultHangup',\n 'msg': 'User hungup during execution'}\n\n kwargs = dict(code=0, response=\"\", line=line)\n m = re_code.search(line)\n try:\n kwargs.update(m.groupdict())\n except AttributeError:\n # None has no attribute groupdict\n pass\n return agi_code_check(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks the AGI code and return a dict to help on error handling.", "response": "def agi_code_check(code=None, response=None, line=None):\n \"\"\"\n Check the AGI code and return a dict to help on error handling.\n \"\"\"\n code = int(code)\n response = response or \"\"\n result = {'status_code': code, 'result': ('', ''), 'msg': ''}\n if code == 100:\n result['msg'] = line\n elif code == 200:\n for key, value, data in re_kv.findall(response):\n result[key] = (value, data)\n # If user hangs up... we get 'hangup' in the data\n if data == 'hangup':\n return {\n 'error': 'AGIResultHangup',\n 'msg': 'User hungup during execution'}\n elif key == 'result' and value == '-1':\n return {\n 'error': 'AGIAppError',\n 'msg': 'Error executing application, or hangup'}\n elif code == 510:\n result['error'] = 'AGIInvalidCommand'\n elif code == 520:\n # AGI Usage error\n result['error'] = 'AGIUsageError'\n result['msg'] = line\n else:\n # Unhandled code or undefined response\n result['error'] = 'AGIUnknownError'\n result['msg'] = line\n return result"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreset all the related objects to the given uid.", "response": "def reset(cls, uid=None):\n \"\"\"Mostly used for unit testing. 
Allow to use a static uuid and reset\n all counter\"\"\"\n for instance in cls.instances:\n if uid:\n instance.uid = uid\n instance.generator = instance.get_generator()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_instances(self):\n return [\"<%s prefix:%s (uid:%s)>\" % (self.__class__.__name__,\n i.prefix, self.uid)\n for i in self.instances]", "response": "Return a list of all the instances in this database."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn True if a response is Success or Follows", "response": "def success(self):\n \"\"\"return True if a response status is Success or Follows:\n\n .. code-block:: python\n\n >>> resp = Message({'Response': 'Success'})\n >>> print(resp.success)\n True\n >>> resp['Response'] = 'Failed'\n >>> resp.success\n False\n \"\"\"\n if 'event' in self:\n return True\n if self.response in self.success_responses:\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getdict(self, key):\n values = self.get(key, None)\n if not isinstance(values, list):\n raise TypeError(\"{0} must be a list. 
got {1}".format(key, values))\n result = utils.CaseInsensitiveDict()\n for item in values:\n k, v = item.split('=', 1)\n result[k] = v\n return result", "response": "Convert a multi-value header to a case-insensitive dict"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run_setup(script_name, script_args=None, stop_after=\"run\"):\n if stop_after not in ('init', 'config', 'commandline', 'run'):\n raise ValueError(\"invalid value for 'stop_after': %r\" % stop_after)\n\n core._setup_stop_after = stop_after\n\n save_argv = sys.argv\n glocals = copy(globals())\n glocals['__file__'] = script_name\n glocals['__name__'] = \"__main__\"\n try:\n try:\n sys.argv[0] = script_name\n if script_args is not None:\n sys.argv[1:] = script_args\n f = open(script_name)\n try:\n exec(f.read(), glocals, glocals)\n finally:\n f.close()\n finally:\n sys.argv = save_argv\n core._setup_stop_after = None\n except Exception:\n logging.warn(\"Exception when running setup.\", exc_info=True)\n\n if core._setup_distribution is None:\n raise RuntimeError(\n \"'distutils.core.setup()' was never called -- \"\n \"perhaps '%s' is not a Distutils setup script?\" %\n script_name)\n\n # I wonder if the setup script's namespace -- g and l -- would be of\n # any interest to callers?\n return core._setup_distribution", "response": "Run a setup script in a somewhat controlled environment and return the Distribution instance that drives things."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning data from a package directory.", "response": "def get_data(path):\n \"\"\"\n Returns data from a package directory.\n 'path' should be an absolute path.\n \"\"\"\n # Run the imported setup to get the metadata.\n with FakeContext(path):\n with SetupMonkey() as sm:\n try:\n distro = run_setup('setup.py', stop_after='config')\n\n metadata = {'_setuptools': sm.used_setuptools}\n\n for k, v in 
distro.metadata.__dict__.items():\n if k[0] == '_' or not v:\n continue\n if all(not x for x in v):\n continue\n metadata[k] = v\n\n if sm.used_setuptools:\n for extras in ['cmdclass', 'zip_safe', 'test_suite']:\n v = getattr(distro, extras, None)\n if v is not None and v not in ([], {}):\n metadata[extras] = v\n\n except ImportError as e:\n # Either there is no setup py, or it's broken.\n logging.exception(e)\n metadata = {}\n\n return metadata"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_primary_keys(model):\n mapper = model.__mapper__\n return [mapper.get_property_by_column(column) for column in mapper.primary_key]", "response": "Get primary key properties for a SQLAlchemy model."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _deserialize(self, value, *args, **kwargs):\n if not isinstance(value, dict):\n if len(self.related_keys) != 1:\n self.fail(\n \"invalid\",\n value=value,\n keys=[prop.key for prop in self.related_keys],\n )\n value = {self.related_keys[0].key: value}\n if self.transient:\n return self.related_model(**value)\n try:\n result = self._get_existing_instance(\n self.session.query(self.related_model), value\n )\n except NoResultFound:\n # The related-object DNE in the DB, but we still want to deserialize it\n # ...perhaps we want to add it to the DB later\n return self.related_model(**value)\n return result", "response": "Deserialize a serialized value to a model instance."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_existing_instance(self, query, value):\n if self.columns:\n result = query.filter_by(\n **{prop.key: value.get(prop.key) for prop in self.related_keys}\n ).one()\n else:\n # Use a faster path if the related key is the primary key.\n result = query.get([value.get(prop.key) for prop in self.related_keys])\n if result is 
None:\n raise NoResultFound\n return result", "response": "Retrieve the related object from an existing instance in the DB."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding keyword arguments to kwargs based on the passed in SQLAlchemy column.", "response": "def _add_column_kwargs(self, kwargs, column):\n \"\"\"Add keyword arguments to kwargs (in-place) based on the passed in\n `Column `.\n \"\"\"\n if column.nullable:\n kwargs[\"allow_none\"] = True\n kwargs[\"required\"] = not column.nullable and not _has_default(column)\n\n if hasattr(column.type, \"enums\"):\n kwargs[\"validate\"].append(validate.OneOf(choices=column.type.enums))\n\n # Add a length validator if a max length is set on the column\n # Skip UUID columns\n # (see https://github.com/marshmallow-code/marshmallow-sqlalchemy/issues/54)\n if hasattr(column.type, \"length\"):\n try:\n python_type = column.type.python_type\n except (AttributeError, NotImplementedError):\n python_type = None\n if not python_type or not issubclass(python_type, uuid.UUID):\n kwargs[\"validate\"].append(validate.Length(max=column.type.length))\n\n if hasattr(column.type, \"scale\"):\n kwargs[\"places\"] = getattr(column.type, \"scale\", None)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding keyword arguments to kwargs based on the passed in relationship Property.", "response": "def _add_relationship_kwargs(self, kwargs, prop):\n \"\"\"Add keyword arguments to kwargs (in-place) based on the passed in\n relationship `Property`.\n \"\"\"\n nullable = True\n for pair in prop.local_remote_pairs:\n if not pair[0].nullable:\n if prop.uselist is True:\n nullable = False\n break\n kwargs.update({\"allow_none\": nullable, \"required\": not nullable})"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_declared_fields(mcs, klass, cls_fields, inherited_fields, dict_cls):\n opts = klass.opts\n 
Converter = opts.model_converter\n converter = Converter(schema_cls=klass)\n declared_fields = super(SchemaMeta, mcs).get_declared_fields(\n klass, cls_fields, inherited_fields, dict_cls\n )\n fields = mcs.get_fields(converter, opts, declared_fields, dict_cls)\n fields.update(declared_fields)\n return fields", "response": "Updates declared fields with fields converted from the SQLAlchemy model\n passed as the model class Meta option."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nretrieves an existing record by primary key.", "response": "def get_instance(self, data):\n \"\"\"Retrieve an existing record by primary key(s). If the schema instance\n is transient, return None.\n\n :param data: Serialized data to inform lookup.\n \"\"\"\n if self.transient:\n return None\n props = get_primary_keys(self.opts.model)\n filters = {prop.key: data.get(prop.key) for prop in props}\n if None not in filters.values():\n return self.session.query(self.opts.model).filter_by(**filters).first()\n return None"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsplit serialized attrs to ensure association proxies are passed separately.", "response": "def _split_model_kwargs_association(self, data):\n \"\"\"Split serialized attrs to ensure association proxies are passed separately.\n\n This is necessary for Python < 3.6.0, as the order in which kwargs are passed\n is non-deterministic, and associations must be parsed by sqlalchemy after their\n intermediate relationship, unless their `creator` has been set.\n\n Ignore invalid keys at this point - behaviour for unknowns should be\n handled elsewhere.\n\n :param data: serialized dictionary of attrs to split on association_proxy.\n \"\"\"\n association_attrs = {\n key: value\n for key, value in iteritems(data)\n # association proxy\n if hasattr(getattr(self.opts.model, key, None), \"remote_attr\")\n }\n kwargs = {\n key: value\n for key, value in iteritems(data)\n 
if (hasattr(self.opts.model, key) and key not in association_attrs)\n }\n return kwargs, association_attrs"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndeleting old stellar tables that are not used anymore", "response": "def gc():\n \"\"\"Deletes old stellar tables that are not used anymore\"\"\"\n def after_delete(database):\n click.echo(\"Deleted table %s\" % database)\n\n app = get_app()\n upgrade_from_old_version(app)\n app.delete_orphan_snapshots(after_delete)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntaking a snapshot of the database", "response": "def snapshot(name):\n \"\"\"Takes a snapshot of the database\"\"\"\n app = get_app()\n upgrade_from_old_version(app)\n name = name or app.default_snapshot_name\n\n if app.get_snapshot(name):\n click.echo(\"Snapshot with name %s already exists\" % name)\n sys.exit(1)\n else:\n def before_copy(table_name):\n click.echo(\"Snapshotting database %s\" % table_name)\n app.create_snapshot(name, before_copy=before_copy)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a list of snapshots", "response": "def list():\n \"\"\"Returns a list of snapshots\"\"\"\n snapshots = get_app().get_snapshots()\n\n click.echo('\\n'.join(\n '%s: %s' % (\n s.snapshot_name,\n humanize.naturaltime(datetime.utcnow() - s.created_at)\n )\n for s in snapshots\n ))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef restore(name):\n app = get_app()\n\n if not name:\n snapshot = app.get_latest_snapshot()\n if not snapshot:\n click.echo(\n \"Couldn't find any snapshots for project %s\" %\n load_config()['project_name']\n )\n sys.exit(1)\n else:\n snapshot = app.get_snapshot(name)\n if not snapshot:\n click.echo(\n \"Couldn't find snapshot with name %s.\\n\"\n \"You can list snapshots with 'stellar list'\" % name\n )\n sys.exit(1)\n\n # Check if slaves are ready\n if not 
snapshot.slaves_ready:\n if app.is_copy_process_running(snapshot):\n sys.stdout.write(\n 'Waiting for background process(%s) to finish' %\n snapshot.worker_pid\n )\n sys.stdout.flush()\n while not snapshot.slaves_ready:\n sys.stdout.write('.')\n sys.stdout.flush()\n sleep(1)\n app.db.session.refresh(snapshot)\n click.echo('')\n else:\n click.echo('Background process missing, doing slow restore.')\n app.inline_slave_copy(snapshot)\n\n app.restore(snapshot)\n click.echo('Restore complete.')", "response": "Restores the database from a snapshot"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninitializes Stellar configuration.", "response": "def init():\n \"\"\"Initializes Stellar configuration.\"\"\"\n while True:\n url = click.prompt(\n \"Please enter the url for your database.\\n\\n\"\n \"For example:\\n\"\n \"PostgreSQL: postgresql://localhost:5432/\\n\"\n \"MySQL: mysql+pymysql://root@localhost/\"\n )\n if url.count('/') == 2 and not url.endswith('/'):\n url = url + '/'\n\n if (\n url.count('/') == 3 and\n url.endswith('/') and\n url.startswith('postgresql://')\n ):\n connection_url = url + 'template1'\n else:\n connection_url = url\n\n engine = create_engine(connection_url, echo=False)\n try:\n conn = engine.connect()\n except OperationalError as err:\n click.echo(\"Could not connect to database: %s\" % url)\n click.echo(\"Error message: %s\" % err.message)\n click.echo('')\n else:\n break\n\n if engine.dialect.name not in SUPPORTED_DIALECTS:\n click.echo(\"Your engine dialect %s is not supported.\" % (\n engine.dialect.name\n ))\n click.echo(\"Supported dialects: %s\" % (\n ', '.join(SUPPORTED_DIALECTS)\n ))\n\n if url.count('/') == 3 and url.endswith('/'):\n while True:\n click.echo(\"You have the following databases: %s\" % ', '.join([\n db for db in list_of_databases(conn)\n if not db.startswith('stellar_')\n ]))\n\n db_name = click.prompt(\n \"Please enter the name of the database (eg. 
projectdb)\"\n )\n if database_exists(conn, db_name):\n break\n else:\n click.echo(\"Could not find database %s\" % db_name)\n click.echo('')\n else:\n db_name = url.rsplit('/', 1)[-1]\n url = url.rsplit('/', 1)[0] + '/'\n\n name = click.prompt(\n 'Please enter your project name (used internally, eg. %s)' % db_name,\n default=db_name\n )\n\n raw_url = url\n\n if engine.dialect.name == 'postgresql':\n raw_url = raw_url + 'template1'\n\n with open('stellar.yaml', 'w') as project_file:\n project_file.write(\n \"\"\"\nproject_name: '%(name)s'\ntracked_databases: ['%(db_name)s']\nurl: '%(raw_url)s'\nstellar_url: '%(url)sstellar_data'\n \"\"\".strip() %\n {\n 'name': name,\n 'raw_url': raw_url,\n 'url': url,\n 'db_name': db_name\n }\n )\n\n click.echo(\"Wrote stellar.yaml\")\n click.echo('')\n if engine.dialect.name == 'mysql':\n click.echo(\"Warning: MySQL support is still in beta.\")\n click.echo(\"Tip: You probably want to take a snapshot: stellar snapshot\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating indexes after each epoch for shuffling", "response": "def on_epoch_end(self) -> None:\n 'Updates indexes after each epoch for shuffling'\n self.indexes = np.arange(self.nrows)\n if self.shuffle:\n np.random.shuffle(self.indexes)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndefines the default function for cleaning text. 
This function operates over a list.", "response": "def textacy_cleaner(text: str) -> str:\n \"\"\"\n Defines the default function for cleaning text.\n\n This function operates over a list.\n \"\"\"\n return preprocess_text(text,\n fix_unicode=True,\n lowercase=True,\n transliterate=True,\n no_urls=True,\n no_emails=True,\n no_phone_numbers=True,\n no_numbers=True,\n no_currency_symbols=True,\n no_punct=True,\n no_contractions=False,\n no_accents=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef apply_parallel(func: Callable,\n data: List[Any],\n cpu_cores: int = None) -> List[Any]:\n \"\"\"\n Apply function to list of elements.\n\n Automatically determines the chunk size.\n \"\"\"\n if not cpu_cores:\n cpu_cores = cpu_count()\n\n try:\n chunk_size = ceil(len(data) / cpu_cores)\n pool = Pool(cpu_cores)\n transformed_data = pool.map(func, chunked(data, chunk_size), chunksize=1)\n finally:\n pool.close()\n pool.join()\n return transformed_data", "response": "Apply function to list of elements."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate a function that will clean and tokenize text.", "response": "def process_text_constructor(cleaner: Callable,\n tokenizer: Callable,\n append_indicators: bool,\n start_tok: str,\n end_tok: str):\n \"\"\"Generate a function that will clean and tokenize text.\"\"\"\n def process_text(text):\n if append_indicators:\n return [[start_tok] + tokenizer(cleaner(doc)) + [end_tok] for doc in text]\n return [tokenizer(cleaner(doc)) for doc in text]\n\n return process_text"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncombine the cleaner and tokenizer.", "response": "def process_text(self, text: List[str]) -> List[List[str]]:\n \"\"\"Combine the cleaner and tokenizer.\"\"\"\n process_text = process_text_constructor(cleaner=self.cleaner,\n tokenizer=self.tokenizer,\n 
append_indicators=self.append_indicators,\n start_tok=self.start_tok,\n end_tok=self.end_tok)\n return process_text(text)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef parallel_process_text(self, data: List[str]) -> List[List[str]]:\n process_text = process_text_constructor(cleaner=self.cleaner,\n tokenizer=self.tokenizer,\n append_indicators=self.append_indicators,\n start_tok=self.start_tok,\n end_tok=self.end_tok)\n n_cores = self.num_cores\n return flattenlist(apply_parallel(process_text, data, n_cores))", "response": "Apply cleaner - > tokenizer."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef generate_doc_length_stats(self):\n heuristic = self.heuristic_pct\n histdf = (pd.DataFrame([(a, b) for a, b in self.document_length_histogram.items()],\n columns=['bin', 'doc_count'])\n .sort_values(by='bin'))\n histdf['cumsum_pct'] = histdf.doc_count.cumsum() / histdf.doc_count.sum()\n\n self.document_length_stats = histdf\n self.doc_length_huerestic = histdf.query(f'cumsum_pct >= {heuristic}').bin.head(1).values[0]\n logging.warning(' '.join([\"Setting maximum document length to\",\n f'{self.doc_length_huerestic} based upon',\n f'heuristic of {heuristic} percentile.\\n',\n 'See full histogram by insepecting the',\n \"`document_length_stats` attribute.\"]))\n self.padding_maxlen = self.doc_length_huerestic", "response": "Analyze the document length statistics for padding strategy"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef fit(self,\n data: List[str],\n return_tokenized_data: bool = False) -> Union[None, List[List[str]]]:\n \"\"\"\n TODO: update docs\n\n Apply cleaner and tokenzier to raw data and build vocabulary.\n\n Parameters\n ----------\n data : List[str]\n These are raw documents, which are a list of strings. 
ex:\n [[\"The quick brown fox\"], [\"jumps over the lazy dog\"]]\n return_tokenized_data : bool\n Return the tokenized strings. This is primarly used for debugging\n purposes.\n\n Returns\n -------\n None or List[List[str]]\n if return_tokenized_data=True then will return tokenized documents,\n otherwise will not return anything.\n \"\"\"\n self.__clear_data()\n now = get_time()\n logging.warning(f'....tokenizing data')\n tokenized_data = self.parallel_process_text(data)\n\n if not self.padding_maxlen:\n # its not worth the overhead to parallelize document length counts\n length_counts = map(count_len, tokenized_data)\n self.document_length_histogram = Counter(length_counts)\n self.generate_doc_length_stats()\n\n # Learn corpus on single thread\n logging.warning(f'(1/2) done. {time_diff(now)} sec')\n logging.warning(f'....building corpus')\n now = get_time()\n self.indexer = custom_Indexer(num_words=self.keep_n)\n self.indexer.fit_on_tokenized_texts(tokenized_data)\n\n # Build Dictionary accounting For 0 padding, and reserve 1 for unknown and rare Words\n self.token2id = self.indexer.word_index\n self.id2token = {v: k for k, v in self.token2id.items()}\n self.n_tokens = max(self.indexer.word_index.values())\n\n # logging\n logging.warning(f'(2/2) done. 
{time_diff(now)} sec')\n logging.warning(f'Finished parsing {self.indexer.document_count:,} documents.')\n\n if return_tokenized_data:\n return tokenized_data", "response": "Fit the corpus on a list of strings."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef token_count_pandas(self):\n freq_df = pd.DataFrame.from_dict(self.indexer.word_counts, orient='index')\n freq_df.columns = ['count']\n return freq_df.sort_values('count', ascending=False)", "response": "See token counts as pandas dataframe"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fit_transform(self,\n data: List[str]) -> List[List[int]]:\n \"\"\"\n Apply cleaner and tokenzier to raw data, build vocabulary and return\n transfomred dataset that is a List[List[int]]. This will use\n process-based-threading on all available cores.\n\n ex:\n >>> data = [[\"The quick brown fox\"], [\"jumps over the lazy dog\"]]\n >>> pp = preprocess(maxlen=5, no_below=0)\n >>> pp.fit_transform(data)\n # 0 padding is applied\n [[0, 2, 3, 4, 5], [6, 7, 2, 8, 9]]\n\n Parameters\n ----------\n data : List[str]\n These are raw documents, which are a list of strings. ex:\n [[\"The quick brown fox\"], [\"jumps over the lazy dog\"]]\n\n Returns\n -------\n numpy.array with shape (number of documents, max_len)\n\n \"\"\"\n tokenized_data = self.fit(data, return_tokenized_data=True)\n\n logging.warning(f'...fit is finished, beginning transform')\n now = get_time()\n indexed_data = self.indexer.tokenized_texts_to_sequences(tokenized_data)\n logging.warning(f'...padding data')\n final_data = self.pad(indexed_data)\n logging.warning(f'done. 
{time_diff(now)} sec')\n return final_data", "response": "Fit the cleaner and tokenizer to raw data, build the vocabulary and return the transformed dataset that is a List[List[int]]."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef transform(self, data: List[str]) -> List[List[int]]:\n tokenized_data = self.process_text(data)\n indexed_data = self.indexer.tokenized_texts_to_sequences(tokenized_data)\n return self.pad(indexed_data)", "response": "Transform a list of documents into a list of lists of integers."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef transform_parallel(self, data: List[str]) -> List[List[int]]:\n logging.warning(f'...tokenizing data')\n tokenized_data = self.parallel_process_text(data)\n logging.warning(f'...indexing data')\n indexed_data = self.indexer.tokenized_texts_to_sequences(tokenized_data)\n logging.warning(f'...padding data')\n return self.pad(indexed_data)", "response": "Transform list of documents into list of sequence ids."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef pad(self, docs: List[List[int]]) -> List[List[int]]:\n # First apply indexing on all the rows then pad_sequnces (i found this\n # faster than trying to do these steps on each row\n return pad_sequences(docs,\n maxlen=self.padding_maxlen,\n dtype=self.padding_dtype,\n padding=self.padding,\n truncating=self.truncating,\n value=self.padding_value)", "response": "Vectorize and apply padding on a set of tokenized documents that are not already in the cache."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef tokenized_texts_to_sequences(self, tok_texts):\n res = []\n for vect in self.tokenized_texts_to_sequences_generator(tok_texts):\n res.append(vect)\n return res", "response": "Transforms tokenized text to a sequence of integers."} 
{"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntransforms tokenized text to a sequence of integers.", "response": "def tokenized_texts_to_sequences_generator(self, tok_texts):\n \"\"\"Transforms tokenized text to a sequence of integers.\n Only top \"num_words\" most frequent words will be taken into account.\n Only words known by the tokenizer will be taken into account.\n # Arguments\n tokenized texts: List[List[str]]\n # Yields\n Yields individual sequences.\n \"\"\"\n for seq in tok_texts:\n vect = []\n for w in seq:\n # if the word is missing you get oov_index\n i = self.word_index.get(w, 1)\n vect.append(i)\n yield vect"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nperforming param type mapping", "response": "def map_param_type(param_type):\n \"\"\"\n Perform param type mapping\n This requires a bit of logic since this isn't standardized.\n If a type doesn't map, assume str\n \"\"\"\n main_type, sub_type = TYPE_INFO_RE.match(param_type).groups()\n\n if main_type in ('list', 'array'):\n # Handle no sub-type: \"required list\"\n if sub_type is not None:\n sub_type = sub_type.strip()\n\n if not sub_type:\n sub_type = 'str'\n\n # Handle list of pairs: \"optional list>\"\n sub_match = TYPE_INFO_RE.match(sub_type)\n if sub_match:\n sub_type = sub_match.group(1).lower()\n\n return [PARAM_TYPE_MAP.setdefault(sub_type, string_types)]\n\n return PARAM_TYPE_MAP.setdefault(main_type, string_types)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing the conduit. 
query json dict response", "response": "def parse_interfaces(interfaces):\n \"\"\"\n Parse the conduit.query json dict response\n This performs the logic of parsing the non-standard params dict\n and then returning a dict Resource can understand\n \"\"\"\n parsed_interfaces = collections.defaultdict(dict)\n\n for m, d in iteritems(interfaces):\n app, func = m.split('.', 1)\n\n method = parsed_interfaces[app][func] = {}\n\n # Make default assumptions since these aren't provided by Phab\n method['formats'] = ['json', 'human']\n method['method'] = 'POST'\n\n method['optional'] = {}\n method['required'] = {}\n\n for name, type_info in iteritems(dict(d['params'])):\n # Set the defaults\n optionality = 'required'\n param_type = 'string'\n\n # Usually in the format: \n type_info = TYPE_INFO_COMMENT_RE.sub('', type_info)\n info_pieces = TYPE_INFO_SPLITTER_RE.findall(type_info)\n for info_piece in info_pieces:\n if info_piece in ('optional', 'required'):\n optionality = info_piece\n elif info_piece == 'ignored':\n optionality = 'optional'\n param_type = 'string'\n elif info_piece == 'nonempty':\n optionality = 'required'\n elif info_piece == 'deprecated':\n optionality = 'optional'\n else:\n param_type = info_piece\n\n method[optionality][name] = map_param_type(param_type)\n\n return dict(parsed_interfaces)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _inv_cls(cls):\n if cls._fwdm_cls is cls._invm_cls:\n return cls\n if not getattr(cls, '_inv_cls_', None):\n class _Inv(cls):\n _fwdm_cls = cls._invm_cls\n _invm_cls = cls._fwdm_cls\n _inv_cls_ = cls\n _Inv.__name__ = cls.__name__ + 'Inv'\n cls._inv_cls_ = _Inv\n return cls._inv_cls_", "response": "The inverse of this bidict type i. e. one with * _fwdm_cls and * _invm_cls* swapped."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks *key* and *val* for any duplication in self. 
Handle any duplication as per the duplication policies given in *on_dup*. (key, val) already present is construed as a no-op, not a duplication. If duplication is found and the corresponding duplication policy is :attr:`~bidict.RAISE`, raise the appropriate error. If duplication is found and the corresponding duplication policy is :attr:`~bidict.IGNORE`, return *None*. If duplication is found and the corresponding duplication policy is :attr:`~bidict.OVERWRITE`, or if no duplication is found, return the _DedupResult *(isdupkey, isdupval, oldkey, oldval)*.", "response": "def _dedup_item(self, key, val, on_dup):\n \"\"\"\n Check *key* and *val* for any duplication in self.\n\n Handle any duplication as per the duplication policies given in *on_dup*.\n\n (key, val) already present is construed as a no-op, not a duplication.\n\n If duplication is found and the corresponding duplication policy is\n :attr:`~bidict.RAISE`, raise the appropriate error.\n\n If duplication is found and the corresponding duplication policy is\n :attr:`~bidict.IGNORE`, return *None*.\n\n If duplication is found and the corresponding duplication policy is\n :attr:`~bidict.OVERWRITE`,\n or if no duplication is found,\n return the _DedupResult *(isdupkey, isdupval, oldkey, oldval)*.\n \"\"\"\n fwdm = self._fwdm\n invm = self._invm\n oldval = fwdm.get(key, _MISS)\n oldkey = invm.get(val, _MISS)\n isdupkey = oldval is not _MISS\n isdupval = oldkey is not _MISS\n dedup_result = _DedupResult(isdupkey, isdupval, oldkey, oldval)\n if isdupkey and isdupval:\n if self._isdupitem(key, val, dedup_result):\n # (key, val) duplicates an existing item -> no-op.\n return _NOOP\n # key and val each duplicate a different existing item.\n if on_dup.kv is RAISE:\n raise KeyAndValueDuplicationError(key, val)\n elif on_dup.kv is IGNORE:\n return _NOOP\n assert on_dup.kv is OVERWRITE, 'invalid on_dup_kv: %r' % on_dup.kv\n # Fall through to the return statement on the last line.\n elif isdupkey:\n if on_dup.key is 
RAISE:\n raise KeyDuplicationError(key)\n elif on_dup.key is IGNORE:\n return _NOOP\n assert on_dup.key is OVERWRITE, 'invalid on_dup.key: %r' % on_dup.key\n # Fall through to the return statement on the last line.\n elif isdupval:\n if on_dup.val is RAISE:\n raise ValueDuplicationError(val)\n elif on_dup.val is IGNORE:\n return _NOOP\n assert on_dup.val is OVERWRITE, 'invalid on_dup.val: %r' % on_dup.val\n # Fall through to the return statement on the last line.\n # else neither isdupkey nor isdupval.\n return dedup_result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _update_with_rollback(self, on_dup, *args, **kw):\n writelog = []\n appendlog = writelog.append\n dedup_item = self._dedup_item\n write_item = self._write_item\n for (key, val) in _iteritems_args_kw(*args, **kw):\n try:\n dedup_result = dedup_item(key, val, on_dup)\n except DuplicationError:\n undo_write = self._undo_write\n for dedup_result, write_result in reversed(writelog):\n undo_write(dedup_result, write_result)\n raise\n if dedup_result is not _NOOP:\n write_result = write_item(key, val, dedup_result)\n appendlog((dedup_result, write_result))", "response": "Update with rolling back on failure."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef copy(self):\n # Could just ``return self.__class__(self)`` here instead, but the below is faster. It uses\n # __new__ to create a copy instance while bypassing its __init__, which would result\n # in copying this bidict's items into the copy instance one at a time. 
Instead, make whole\n # copies of each of the backing mappings, and make them the backing mappings of the copy,\n # avoiding copying items one at a time.\n copy = self.__class__.__new__(self.__class__)\n copy._fwdm = self._fwdm.copy() # pylint: disable=protected-access\n copy._invm = self._invm.copy() # pylint: disable=protected-access\n copy._init_inv() # pylint: disable=protected-access\n return copy", "response": "A shallow copy of the bidict."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef copy(self):\n # Fast copy implementation bypassing __init__. See comments in :meth:`BidictBase.copy`.\n copy = self.__class__.__new__(self.__class__)\n sntl = _Sentinel()\n fwdm = self._fwdm.copy()\n invm = self._invm.copy()\n cur = sntl\n nxt = sntl.nxt\n for (key, val) in iteritems(self):\n nxt = _Node(cur, sntl)\n cur.nxt = fwdm[key] = invm[val] = nxt\n cur = nxt\n sntl.prv = nxt\n copy._sntl = sntl # pylint: disable=protected-access\n copy._fwdm = fwdm # pylint: disable=protected-access\n copy._invm = invm # pylint: disable=protected-access\n copy._init_inv() # pylint: disable=protected-access\n return copy", "response": "A shallow copy of this ordered bidict."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn whether ( key val ) duplicates an existing item.", "response": "def _isdupitem(self, key, val, dedup_result):\n \"\"\"Return whether (key, val) duplicates an existing item.\"\"\"\n isdupkey, isdupval, nodeinv, nodefwd = dedup_result\n isdupitem = nodeinv is nodefwd\n if isdupitem:\n assert isdupkey\n assert isdupval\n return isdupitem"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef equals_order_sensitive(self, other):\n # Same short-circuit as BidictBase.__eq__. 
Factoring out not worth function call overhead.\n if not isinstance(other, Mapping) or len(self) != len(other):\n return False\n return all(i == j for (i, j) in izip(iteritems(self), iteritems(other)))", "response": "Returns True if self and other are equal."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating an empty named bidict with the indicated arguments and return an empty instance.", "response": "def _make_empty(typename, keyname, valname, base_type):\n \"\"\"Create a named bidict with the indicated arguments and return an empty instance.\n Used to make :func:`bidict.namedbidict` instances picklable.\n \"\"\"\n cls = namedbidict(typename, keyname, valname, base_type=base_type)\n return cls()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _iteritems_args_kw(*args, **kw):\n args_len = len(args)\n if args_len > 1:\n raise TypeError('Expected at most 1 positional argument, got %d' % args_len)\n itemchain = None\n if args:\n arg = args[0]\n if arg:\n itemchain = _iteritems_mapping_or_iterable(arg)\n if kw:\n iterkw = iteritems(kw)\n itemchain = chain(itemchain, iterkw) if itemchain else iterkw\n return itemchain or _NULL_IT", "response": "Yield the items from the positional argument and then any from *kw."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef inverted(arg):\n inv = getattr(arg, '__inverted__', None)\n if callable(inv):\n return inv()\n return ((val, key) for (key, val) in _iteritems_mapping_or_iterable(arg))", "response": "Yield the inverse items of the provided object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef clear(self):\n self._fwdm.clear()\n self._invm.clear()\n self._sntl.nxt = self._sntl.prv = self._sntl", "response": "Remove all items from the cache."} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 script for\nremoving and returns the most recently added item as a tuple.", "response": "def popitem(self, last=True): # pylint: disable=arguments-differ\n u\"\"\"*x.popitem() \u2192 (k, v)*\n\n Remove and return the most recently added item as a (key, value) pair\n if *last* is True, else the least recently added item.\n\n :raises KeyError: if *x* is empty.\n \"\"\"\n if not self:\n raise KeyError('mapping is empty')\n key = next((reversed if last else iter)(self))\n val = self._pop(key)\n return key, val"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef move_to_end(self, key, last=True):\n node = self._fwdm[key]\n node.prv.nxt = node.nxt\n node.nxt.prv = node.prv\n sntl = self._sntl\n if last:\n last = sntl.prv\n node.prv = last\n node.nxt = sntl\n sntl.prv = last.nxt = node\n else:\n first = sntl.nxt\n node.prv = sntl\n node.nxt = first\n sntl.nxt = first.prv = node", "response": "Moves an existing key to the beginning or end of this ordered bidict."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nassociating key with val with the specified duplication policies.", "response": "def put(self, key, val, on_dup_key=RAISE, on_dup_val=RAISE, on_dup_kv=None):\n \"\"\"\n Associate *key* with *val* with the specified duplication policies.\n\n If *on_dup_kv* is ``None``, the *on_dup_val* policy will be used for it.\n\n For example, if all given duplication policies are :attr:`~bidict.RAISE`,\n then *key* will be associated with *val* if and only if\n *key* is not already associated with an existing value and\n *val* is not already associated with an existing key,\n otherwise an exception will be raised.\n\n If *key* is already associated with *val*, this is a no-op.\n\n :raises bidict.KeyDuplicationError: if attempting to insert an item\n whose key only duplicates an existing item's, and *on_dup_key* is\n :attr:`~bidict.RAISE`.\n\n :raises bidict.ValueDuplicationError: if 
attempting to insert an item\n whose value only duplicates an existing item's, and *on_dup_val* is\n :attr:`~bidict.RAISE`.\n\n :raises bidict.KeyAndValueDuplicationError: if attempting to insert an\n item whose key duplicates one existing item's, and whose value\n duplicates another existing item's, and *on_dup_kv* is\n :attr:`~bidict.RAISE`.\n \"\"\"\n on_dup = self._get_on_dup((on_dup_key, on_dup_val, on_dup_kv))\n self._put(key, val, on_dup)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef forceput(self, key, val):\n self._put(key, val, self._ON_DUP_OVERWRITE)", "response": "Force put a key-value pair in the cache."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef pop(self, key, default=_MISS):\n try:\n return self._pop(key)\n except KeyError:\n if default is _MISS:\n raise\n return default", "response": "Pop specified key from the cache."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nremoves and returns some item as a key value pair.", "response": "def popitem(self):\n u\"\"\"*x.popitem() \u2192 (k, v)*\n\n Remove and return some item as a (key, value) pair.\n\n :raises KeyError: if *x* is empty.\n \"\"\"\n if not self:\n raise KeyError('mapping is empty')\n key, val = self._fwdm.popitem()\n del self._invm[val]\n return key, val"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update(self, *args, **kw):\n if args or kw:\n self._update(False, None, *args, **kw)", "response": "Like putall with default duplication policies."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlike a bulk :meth:`forceput`.", "response": "def forceupdate(self, *args, **kw):\n \"\"\"Like a bulk :meth:`forceput`.\"\"\"\n self._update(False, self._ON_DUP_OVERWRITE, *args, **kw)"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 function that can\nlike a bulk put.", "response": "def putall(self, items, on_dup_key=RAISE, on_dup_val=RAISE, on_dup_kv=None):\n \"\"\"\n Like a bulk :meth:`put`.\n\n If one of the given items causes an exception to be raised,\n none of the items is inserted.\n \"\"\"\n if items:\n on_dup = self._get_on_dup((on_dup_key, on_dup_val, on_dup_kv))\n self._update(False, on_dup, items)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef write_temp_file(text=\"\"):\n with NamedTemporaryFile(mode='w+t', suffix='.yml', delete=False) \\\n as tempfile:\n tempfile.write(text)\n return tempfile.name", "response": "Create a temporary file and write some initial text to it."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_contact_list_by_user_selection(address_books, search, strict_search):\n return get_contacts(\n address_books, search, \"name\" if strict_search else \"all\",\n config.reverse(), config.group_by_addressbook(), config.sort)", "response": "returns a list of contacts by user selection"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_contacts(address_books, query, method=\"all\", reverse=False,\n group=False, sort=\"first_name\"):\n \"\"\"Get a list of contacts from one or more address books.\n\n :param address_books: the address books to search\n :type address_books: list(address_book.AddressBook)\n :param query: a search query to select contacts\n :type quer: str\n :param method: the search method, one of \"all\", \"name\" or \"uid\"\n :type method: str\n :param reverse: reverse the order of the returned contacts\n :type reverse: bool\n :param group: group results by address book\n :type group: bool\n :param sort: the field to use for sorting, one of \"first_name\", \"last_name\"\n :type sort: str\n :returns: contacts from the address_books that 
match the query\n :rtype: list(CarddavObject)\n\n \"\"\"\n # Search for the contacts in all address books.\n contacts = []\n for address_book in address_books:\n contacts.extend(address_book.search(query, method=method))\n # Sort the contacts.\n if group:\n if sort == \"first_name\":\n return sorted(contacts, reverse=reverse, key=lambda x: (\n unidecode(x.address_book.name).lower(),\n unidecode(x.get_first_name_last_name()).lower()))\n elif sort == \"last_name\":\n return sorted(contacts, reverse=reverse, key=lambda x: (\n unidecode(x.address_book.name).lower(),\n unidecode(x.get_last_name_first_name()).lower()))\n else:\n raise ValueError('sort must be \"first_name\" or \"last_name\" not '\n '{}.'.format(sort))\n else:\n if sort == \"first_name\":\n return sorted(contacts, reverse=reverse, key=lambda x:\n unidecode(x.get_first_name_last_name()).lower())\n elif sort == \"last_name\":\n return sorted(contacts, reverse=reverse, key=lambda x:\n unidecode(x.get_last_name_first_name()).lower())\n else:\n raise ValueError('sort must be \"first_name\" or \"last_name\" not '\n '{}.'.format(sort))", "response": "Search for contacts in one or more address books."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmerging the parsed arguments from argparse into the config object.", "response": "def merge_args_into_config(args, config):\n \"\"\"Merge the parsed arguments from argparse into the config object.\n\n :param args: the parsed command line arguments\n :type args: argparse.Namespace\n :param config: the parsed config file\n :type config: config.Config\n :returns: the merged config object\n :rtype: config.Config\n\n \"\"\"\n # display by name: first or last name\n if \"display\" in args and args.display:\n config.set_display_by_name(args.display)\n # group by address book\n if \"group_by_addressbook\" in args and args.group_by_addressbook:\n config.set_group_by_addressbook(True)\n # reverse contact list\n if \"reverse\" in args and args.reverse:\n 
config.set_reverse(True)\n # sort criteria: first or last name\n if \"sort\" in args and args.sort:\n config.sort = args.sort\n # preferred vcard version\n if \"vcard_version\" in args and args.vcard_version:\n config.set_preferred_vcard_version(args.vcard_version)\n # search in source files\n if \"search_in_source_files\" in args and args.search_in_source_files:\n config.set_search_in_source_files(True)\n # skip unparsable vcards\n if \"skip_unparsable\" in args and args.skip_unparsable:\n config.set_skip_unparsable(True)\n # If the user could but did not specify address books on the command line\n # it means they want to use all address books in that place.\n if \"addressbook\" in args and not args.addressbook:\n args.addressbook = [abook.name for abook in config.abooks]\n if \"target_addressbook\" in args and not args.target_addressbook:\n args.target_addressbook = [abook.name for abook in config.abooks]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nload all address books with the given names from the config.", "response": "def load_address_books(names, config, search_queries):\n \"\"\"Load all address books with the given names from the config.\n\n :param names: the address books to load\n :type names: list(str)\n :param config: the config instance to use when looking up address books\n :type config: config.Config\n :param search_queries: a mapping of address book names to search queries\n :type search_queries: dict\n :yields: the loaded address books\n :ytype: addressbook.AddressBook\n\n \"\"\"\n all_names = {str(book) for book in config.abooks}\n if not names:\n names = all_names\n elif not all_names.issuperset(names):\n sys.exit('Error: The entered address books \"{}\" do not exist.\\n'\n 'Possible values are: {}'.format(\n '\", \"'.join(set(names) - all_names),\n ', '.join(all_names)))\n # load address books which are defined in the configuration file\n for name in names:\n address_book = config.abook.get_abook(name)\n 
address_book.load(search_queries[address_book.name],\n search_in_source_files=config.search_in_source_files())\n yield address_book"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\npreparing the search query string from the given command line args.", "response": "def prepare_search_queries(args):\n \"\"\"Prepare the search query string from the given command line args.\n\n Each address book can get a search query string to filter vcards befor\n loading them. Depending on the question if the address book is used for\n source or target searches different regexes have to be combined into one\n search string.\n\n :param args: the parsed command line\n :type args: argparse.Namespace\n :returns: a dict mapping abook names to their loading queries, if the query\n is None it means that all cards should be loaded\n :rtype: dict(str:str or None)\n\n \"\"\"\n # get all possible search queries for address book parsing\n source_queries = []\n target_queries = []\n if \"source_search_terms\" in args and args.source_search_terms:\n escaped_term = \".*\".join(re.escape(x)\n for x in args.source_search_terms)\n source_queries.append(escaped_term)\n args.source_search_terms = escaped_term\n if \"search_terms\" in args and args.search_terms:\n escaped_term = \".*\".join(re.escape(x) for x in args.search_terms)\n source_queries.append(escaped_term)\n args.search_terms = escaped_term\n if \"target_contact\" in args and args.target_contact:\n escaped_term = re.escape(args.target_contact)\n target_queries.append(escaped_term)\n args.target_contact = escaped_term\n if \"uid\" in args and args.uid:\n source_queries.append(args.uid)\n if \"target_uid\" in args and args.target_uid:\n target_queries.append(args.target_uid)\n # create and return regexp, None means that no query is given and hence all\n # contacts should be searched.\n source_queries = \"^.*(%s).*$\" % ')|('.join(source_queries) \\\n if source_queries else None\n target_queries = \"^.*(%s).*$\" % 
')|('.join(target_queries) \\\n if target_queries else None\n logging.debug('Created source query regex: %s', source_queries)\n logging.debug('Created target query regex: %s', target_queries)\n # Get all possible search queries for address book parsing, always\n # depending on the fact if the address book is used to find source or\n # target contacts or both.\n queries = {abook.name: [] for abook in config.abook._abooks}\n for name in queries:\n if \"addressbook\" in args and name in args.addressbook:\n queries[name].append(source_queries)\n if \"target_addressbook\" in args and name in args.target_addressbook:\n queries[name].append(target_queries)\n # If None is included in the search queries of an address book it means\n # that either no source or target query was given and this address book\n # is affected by this. All contacts should be loaded from that address\n # book.\n if None in queries[name]:\n queries[name] = None\n else:\n queries[name] = \"({})\".format(')|('.join(queries[name]))\n logging.debug('Created query regex: %s', queries)\n return queries"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef generate_contact_list(config, args):\n # fill contact list\n vcard_list = []\n if \"uid\" in args and args.uid:\n # If an uid was given we use it to find the contact.\n logging.debug(\"args.uid=%s\", args.uid)\n # set search terms to the empty query to prevent errors in\n # phone and email actions\n args.search_terms = \".*\"\n vcard_list = get_contacts(args.addressbook, args.uid, method=\"uid\")\n # We require that the uid given can uniquely identify a contact.\n if not vcard_list:\n sys.exit(\"Found no contact for {}uid {}\".format(\n \"source \" if args.action == \"merge\" else \"\", args.uid))\n elif len(vcard_list) != 1:\n print(\"Found multiple contacts for {}uid {}\".format(\n \"source \" if args.action == \"merge\" else \"\", args.uid))\n for vcard in vcard_list:\n print(\" {}: 
{}\".format(vcard, vcard.get_uid()))\n sys.exit(1)\n else:\n # No uid was given so we try to use the search terms to select a\n # contact.\n if \"source_search_terms\" in args:\n # exception for merge command\n if args.source_search_terms:\n args.search_terms = args.source_search_terms\n else:\n args.search_terms = \".*\"\n elif \"search_terms\" in args:\n if args.search_terms:\n args.search_terms = args.search_terms\n else:\n args.search_terms = \".*\"\n else:\n # If no search terms where given on the command line we match\n # everything with the empty search pattern.\n args.search_terms = \".*\"\n logging.debug(\"args.search_terms=%s\", args.search_terms)\n vcard_list = get_contact_list_by_user_selection(\n args.addressbook, args.search_terms,\n args.strict_search if \"strict_search\" in args else False)\n return vcard_list", "response": "This function generates a list of contacts for the current user."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef new_subcommand(selected_address_books, input_from_stdin_or_file,\n open_editor):\n \"\"\"Create a new contact.\n\n :param selected_address_books: a list of addressbooks that were selected on\n the command line\n :type selected_address_books: list of address_book.AddressBook\n :param input_from_stdin_or_file: the data for the new contact as a yaml\n formatted string\n :type input_from_stdin_or_file: str\n :param open_editor: whether to open the new contact in the edior after\n creation\n :type open_editor: bool\n :returns: None\n :rtype: None\n\n \"\"\"\n # ask for address book, in which to create the new contact\n selected_address_book = choose_address_book_from_list(\n \"Select address book for new contact\", selected_address_books)\n if selected_address_book is None:\n print(\"Error: address book list is empty\")\n sys.exit(1)\n # if there is some data in stdin\n if input_from_stdin_or_file:\n # create new contact from stdin\n try:\n new_contact = 
CarddavObject.from_user_input(\n selected_address_book, input_from_stdin_or_file,\n config.get_supported_private_objects(),\n config.get_preferred_vcard_version(),\n config.localize_dates())\n except ValueError as err:\n print(err)\n sys.exit(1)\n else:\n new_contact.write_to_file()\n if open_editor:\n modify_existing_contact(new_contact)\n else:\n print(\"Creation successful\\n\\n%s\" % new_contact.print_vcard())\n else:\n create_new_contact(selected_address_book)", "response": "Create a new contact."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a new email address to contacts creating new contacts if necessary.", "response": "def add_email_subcommand(input_from_stdin_or_file, selected_address_books):\n \"\"\"Add a new email address to contacts, creating new contacts if necessary.\n\n :param input_from_stdin_or_file: the input text to search for the new email\n :type input_from_stdin_or_file: str\n :param selected_address_books: the addressbooks that were selected on the\n command line\n :type selected_address_books: list of address_book.AddressBook\n :returns: None\n :rtype: None\n\n \"\"\"\n # get name and email address\n message = message_from_string(input_from_stdin_or_file, policy=SMTP_POLICY)\n\n print(\"Khard: Add email address to contact\")\n if not message['From'] \\\n or not message['From'].addresses:\n print(\"Found no email address\")\n sys.exit(1)\n\n email_address = message['From'].addresses[0].addr_spec\n name = message['From'].addresses[0].display_name\n\n print(\"Email address: %s\" % email_address)\n if not name:\n name = input(\"Contact's name: \")\n\n # search for an existing contact\n selected_vcard = choose_vcard_from_list(\n \"Select contact for the found e-mail address\",\n get_contact_list_by_user_selection(selected_address_books, name, True))\n if selected_vcard is None:\n # create new contact\n while True:\n input_string = input(\"Contact %s does not exist. Do you want \"\n \"to create it (y/n)? 
\" % name)\n if input_string.lower() in [\"\", \"n\", \"q\"]:\n print(\"Canceled\")\n sys.exit(0)\n if input_string.lower() == \"y\":\n break\n # ask for address book, in which to create the new contact\n selected_address_book = choose_address_book_from_list(\n \"Select address book for new contact\", config.abooks)\n if selected_address_book is None:\n print(\"Error: address book list is empty\")\n sys.exit(1)\n # ask for name and organisation of new contact\n while True:\n first_name = input(\"First name: \")\n last_name = input(\"Last name: \")\n organisation = input(\"Organisation: \")\n if not first_name and not last_name and not organisation:\n print(\"Error: All fields are empty.\")\n else:\n break\n selected_vcard = CarddavObject.from_user_input(\n selected_address_book,\n \"First name : %s\\nLast name : %s\\nOrganisation : %s\" % (\n first_name, last_name, organisation),\n config.get_supported_private_objects(),\n config.get_preferred_vcard_version(),\n config.localize_dates())\n\n # check if the contact already contains the email address\n for type, email_list in sorted(\n selected_vcard.get_email_addresses().items(),\n key=lambda k: k[0].lower()):\n for email in email_list:\n if email == email_address:\n print(\"The contact %s already contains the email address %s\" %\n (selected_vcard, email_address))\n sys.exit(0)\n\n # ask for confirmation again\n while True:\n input_string = input(\n \"Do you want to add the email address %s to the contact %s (y/n)? 
\"\n % (email_address, selected_vcard))\n if input_string.lower() in [\"\", \"n\", \"q\"]:\n print(\"Canceled\")\n sys.exit(0)\n if input_string.lower() == \"y\":\n break\n\n # ask for the email label\n print(\"\\nAdding email address %s to contact %s\\n\"\n \"Enter email label\\n\"\n \" vcard 3.0: At least one of home, internet, pref, work, x400\\n\"\n \" vcard 4.0: At least one of home, internet, pref, work\\n\"\n \" Or a custom label (only letters\" %\n (email_address, selected_vcard))\n while True:\n label = input(\"email label [internet]: \") or \"internet\"\n try:\n selected_vcard.add_email_address(label, email_address)\n except ValueError as err:\n print(err)\n else:\n break\n # save to disk\n selected_vcard.write_to_file(overwrite=True)\n print(\"Done.\\n\\n%s\" % selected_vcard.print_vcard())"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprint the birthday contact table.", "response": "def birthdays_subcommand(vcard_list, parsable):\n \"\"\"Print birthday contact table.\n\n :param vcard_list: the vcards to search for matching entries which should\n be printed\n :type vcard_list: list of carddav_object.CarddavObject\n :param parsable: machine readable output: columns devided by tabulator (\\t)\n :type parsable: bool\n :returns: None\n :rtype: None\n\n \"\"\"\n # filter out contacts without a birthday date\n vcard_list = [\n vcard for vcard in vcard_list if vcard.get_birthday() is not None]\n # sort by date (month and day)\n # The sort function should work for strings and datetime objects. 
All\n # strings will be sorted before any datetime objects.\n vcard_list.sort(\n key=lambda x: (x.get_birthday().month, x.get_birthday().day)\n if isinstance(x.get_birthday(), datetime.datetime)\n else (0, 0, x.get_birthday()))\n # add to string list\n birthday_list = []\n for vcard in vcard_list:\n date = vcard.get_birthday()\n if parsable:\n if config.display_by_name() == \"first_name\":\n birthday_list.append(\"%04d.%02d.%02d\\t%s\"\n % (date.year, date.month, date.day,\n vcard.get_first_name_last_name()))\n else:\n birthday_list.append(\"%04d.%02d.%02d\\t%s\"\n % (date.year, date.month, date.day,\n vcard.get_last_name_first_name()))\n else:\n if config.display_by_name() == \"first_name\":\n birthday_list.append(\"%s\\t%s\"\n % (vcard.get_first_name_last_name(),\n vcard.get_formatted_birthday()))\n else:\n birthday_list.append(\"%s\\t%s\"\n % (vcard.get_last_name_first_name(),\n vcard.get_formatted_birthday()))\n if birthday_list:\n if parsable:\n print('\\n'.join(birthday_list))\n else:\n list_birthdays(birthday_list)\n else:\n if not parsable:\n print(\"Found no birthdays\")\n sys.exit(1)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef phone_subcommand(search_terms, vcard_list, parsable):\n all_phone_numbers_list = []\n matching_phone_number_list = []\n for vcard in vcard_list:\n for type, number_list in sorted(vcard.get_phone_numbers().items(),\n key=lambda k: k[0].lower()):\n for number in sorted(number_list):\n if config.display_by_name() == \"first_name\":\n name = vcard.get_first_name_last_name()\n else:\n name = vcard.get_last_name_first_name()\n # create output lines\n line_formatted = \"\\t\".join([name, type, number])\n line_parsable = \"\\t\".join([number, name, type])\n if parsable:\n # parsable option: start with phone number\n phone_number_line = line_parsable\n else:\n # else: start with name\n phone_number_line = line_formatted\n if re.search(search_terms,\n \"%s\\n%s\" % (line_formatted, 
line_parsable),\n re.IGNORECASE | re.DOTALL):\n matching_phone_number_list.append(phone_number_line)\n elif len(re.sub(\"\\D\", \"\", search_terms)) >= 3:\n # The user likely searches for a phone number cause the\n # search string contains at least three digits. So we\n # remove all non-digit chars from the phone number field\n # and match against that.\n if re.search(re.sub(\"\\D\", \"\", search_terms),\n re.sub(\"\\D\", \"\", number), re.IGNORECASE):\n matching_phone_number_list.append(phone_number_line)\n # collect all phone numbers in a different list as fallback\n all_phone_numbers_list.append(phone_number_line)\n if matching_phone_number_list:\n if parsable:\n print('\\n'.join(matching_phone_number_list))\n else:\n list_phone_numbers(matching_phone_number_list)\n elif all_phone_numbers_list:\n if parsable:\n print('\\n'.join(all_phone_numbers_list))\n else:\n list_phone_numbers(all_phone_numbers_list)\n else:\n if not parsable:\n print(\"Found no phone numbers\")\n sys.exit(1)", "response": "Print a phone application friendly contact table."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprinting a contact table with all postal addresses and matching entries.", "response": "def post_address_subcommand(search_terms, vcard_list, parsable):\n \"\"\"Print a contact table. 
with all postal / mailing addresses\n\n :param search_terms: used as search term to filter the contacts before\n printing\n :type search_terms: str\n :param vcard_list: the vcards to search for matching entries which should\n be printed\n :type vcard_list: list of carddav_object.CarddavObject\n :param parsable: machine readable output: columns devided by tabulator (\\t)\n :type parsable: bool\n :returns: None\n :rtype: None\n\n \"\"\"\n all_post_address_list = []\n matching_post_address_list = []\n for vcard in vcard_list:\n # vcard name\n if config.display_by_name() == \"first_name\":\n name = vcard.get_first_name_last_name()\n else:\n name = vcard.get_last_name_first_name()\n # create post address line list\n post_address_line_list = []\n if parsable:\n for type, post_address_list in sorted(vcard.get_post_addresses().items(),\n key=lambda k: k[0].lower()):\n for post_address in post_address_list:\n post_address_line_list.append(\n \"\\t\".join([str(post_address), name, type]))\n else:\n for type, post_address_list in sorted(vcard.get_formatted_post_addresses().items(),\n key=lambda k: k[0].lower()):\n for post_address in sorted(post_address_list):\n post_address_line_list.append(\n \"\\t\".join([name, type, post_address]))\n # add to matching and all post address lists\n for post_address_line in post_address_line_list:\n if re.search(search_terms,\n \"%s\\n%s\" % (post_address_line, post_address_line),\n re.IGNORECASE | re.DOTALL):\n matching_post_address_list.append(post_address_line)\n # collect all post addresses in a different list as fallback\n all_post_address_list.append(post_address_line)\n if matching_post_address_list:\n if parsable:\n print('\\n'.join(matching_post_address_list))\n else:\n list_post_addresses(matching_post_address_list)\n elif all_post_address_list:\n if parsable:\n print('\\n'.join(all_post_address_list))\n else:\n list_post_addresses(all_post_address_list)\n else:\n if not parsable:\n print(\"Found no post adresses\")\n sys.exit(1)"} 
{"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef email_subcommand(search_terms, vcard_list, parsable, remove_first_line):\n matching_email_address_list = []\n all_email_address_list = []\n for vcard in vcard_list:\n for type, email_list in sorted(vcard.get_email_addresses().items(),\n key=lambda k: k[0].lower()):\n for email in sorted(email_list):\n if config.display_by_name() == \"first_name\":\n name = vcard.get_first_name_last_name()\n else:\n name = vcard.get_last_name_first_name()\n # create output lines\n line_formatted = \"\\t\".join([name, type, email])\n line_parsable = \"\\t\".join([email, name, type])\n if parsable:\n # parsable option: start with email address\n email_address_line = line_parsable\n else:\n # else: start with name\n email_address_line = line_formatted\n if re.search(search_terms,\n \"%s\\n%s\" % (line_formatted, line_parsable),\n re.IGNORECASE | re.DOTALL):\n matching_email_address_list.append(email_address_line)\n # collect all email addresses in a different list as fallback\n all_email_address_list.append(email_address_line)\n if matching_email_address_list:\n if parsable:\n if not remove_first_line:\n # at least mutt requires that line\n print(\"searching for '%s' ...\" % search_terms)\n print('\\n'.join(matching_email_address_list))\n else:\n list_email_addresses(matching_email_address_list)\n elif all_email_address_list:\n if parsable:\n if not remove_first_line:\n # at least mutt requires that line\n print(\"searching for '%s' ...\" % search_terms)\n print('\\n'.join(all_email_address_list))\n else:\n list_email_addresses(all_email_address_list)\n else:\n if not parsable:\n print(\"Found no email addresses\")\n elif not remove_first_line:\n print(\"searching for '%s' ...\" % search_terms)\n sys.exit(1)", "response": "Print a mail client friendly contacts table that is compatible with the given search terms."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script 
for\nprinting a user friendly contacts table.", "response": "def list_subcommand(vcard_list, parsable):\n \"\"\"Print a user friendly contacts table.\n\n :param vcard_list: the vcards to print\n :type vcard_list: list of carddav_object.CarddavObject\n :param parsable: machine readable output: columns devided by tabulator (\\t)\n :type parsable: bool\n :returns: None\n :rtype: None\n\n \"\"\"\n if not vcard_list:\n if not parsable:\n print(\"Found no contacts\")\n sys.exit(1)\n elif parsable:\n contact_line_list = []\n for vcard in vcard_list:\n if config.display_by_name() == \"first_name\":\n name = vcard.get_first_name_last_name()\n else:\n name = vcard.get_last_name_first_name()\n contact_line_list.append('\\t'.join([vcard.get_uid(), name,\n vcard.address_book.name]))\n print('\\n'.join(contact_line_list))\n else:\n list_contacts(vcard_list)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef modify_subcommand(selected_vcard, input_from_stdin_or_file, open_editor):\n # show warning, if vcard version of selected contact is not 3.0 or 4.0\n if selected_vcard.get_version() not in config.supported_vcard_versions:\n print(\"Warning:\\nThe selected contact is based on vcard version %s \"\n \"but khard only supports the creation and modification of vcards\"\n \" with version 3.0 and 4.0.\\nIf you proceed, the contact will be\"\n \" converted to vcard version %s but beware: This could corrupt \"\n \"the contact file or cause data loss.\"\n % (selected_vcard.get_version(),\n config.get_preferred_vcard_version()))\n while True:\n input_string = input(\"Do you want to proceed anyway (y/n)? 
\")\n if input_string.lower() in [\"\", \"n\", \"q\"]:\n print(\"Canceled\")\n sys.exit(0)\n if input_string.lower() == \"y\":\n break\n # if there is some data in stdin\n if input_from_stdin_or_file:\n # create new contact from stdin\n try:\n new_contact = \\\n CarddavObject.from_existing_contact_with_new_user_input(\n selected_vcard, input_from_stdin_or_file,\n config.localize_dates())\n except ValueError as err:\n print(err)\n sys.exit(1)\n if selected_vcard == new_contact:\n print(\"Nothing changed\\n\\n%s\" % new_contact.print_vcard())\n else:\n print(\"Modification\\n\\n%s\\n\" % new_contact.print_vcard())\n while True:\n input_string = input(\"Do you want to proceed (y/n)? \")\n if input_string.lower() in [\"\", \"n\", \"q\"]:\n print(\"Canceled\")\n break\n if input_string.lower() == \"y\":\n new_contact.write_to_file(overwrite=True)\n if open_editor:\n modify_existing_contact(new_contact)\n else:\n print(\"Done\")\n break\n else:\n modify_existing_contact(selected_vcard)", "response": "Modify a single contact in an external editor."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef remove_subcommand(selected_vcard, force):\n if not force:\n while True:\n input_string = input(\n \"Deleting contact %s from address book %s. Are you sure? 
\"\n \"(y/n): \" % (selected_vcard, selected_vcard.address_book))\n if input_string.lower() in [\"\", \"n\", \"q\"]:\n print(\"Canceled\")\n sys.exit(0)\n if input_string.lower() == \"y\":\n break\n selected_vcard.delete_vcard_file()\n print(\"Contact %s deleted successfully\" % selected_vcard.get_full_name())", "response": "Remove a contact from the addressbook."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef source_subcommand(selected_vcard, editor):\n child = subprocess.Popen([editor, selected_vcard.filename])\n child.communicate()", "response": "Open the vcard file for a contact in an external editor."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmerges two contacts into one. :param vcard_list: the vcards from which to choose contacts for mergeing :type vcard_list: list of carddav_object.CarddavObject :param selected_address_books: the addressbooks to use to find the target contact :type selected_address_books: list(addressbook.AddressBook) :param search_terms: the search terms to find the target contact :type search_terms: str :param target_uid: the uid of the target contact or empty :type target_uid: str :returns: None :rtype: None", "response": "def merge_subcommand(vcard_list, selected_address_books, search_terms,\n target_uid):\n \"\"\"Merge two contacts into one.\n\n :param vcard_list: the vcards from which to choose contacts for mergeing\n :type vcard_list: list of carddav_object.CarddavObject\n :param selected_address_books: the addressbooks to use to find the target\n contact\n :type selected_address_books: list(addressbook.AddressBook)\n :param search_terms: the search terms to find the target contact\n :type search_terms: str\n :param target_uid: the uid of the target contact or empty\n :type target_uid: str\n :returns: None\n :rtype: None\n\n \"\"\"\n # Check arguments.\n if target_uid != \"\" and search_terms != \"\":\n 
print(\"You can not specify a target uid and target search terms for a \"\n \"merge.\")\n sys.exit(1)\n # Find possible target contacts.\n if target_uid != \"\":\n target_vcards = get_contacts(selected_address_books, target_uid,\n method=\"uid\")\n # We require that the uid given can uniquely identify a contact.\n if len(target_vcards) != 1:\n if not target_vcards:\n print(\"Found no contact for target uid %s\" % target_uid)\n else:\n print(\"Found multiple contacts for target uid %s\" % target_uid)\n for vcard in target_vcards:\n print(\" %s: %s\" % (vcard, vcard.get_uid()))\n sys.exit(1)\n else:\n target_vcards = get_contact_list_by_user_selection(\n selected_address_books, search_terms, False)\n # get the source vcard, from which to merge\n source_vcard = choose_vcard_from_list(\"Select contact from which to merge\",\n vcard_list)\n if source_vcard is None:\n print(\"Found no source contact for merging\")\n sys.exit(1)\n else:\n print(\"Merge from %s from address book %s\\n\\n\"\n % (source_vcard, source_vcard.address_book))\n # get the target vcard, into which to merge\n target_vcard = choose_vcard_from_list(\"Select contact into which to merge\",\n target_vcards)\n if target_vcard is None:\n print(\"Found no target contact for merging\")\n sys.exit(1)\n else:\n print(\"Merge into %s from address book %s\\n\\n\"\n % (target_vcard, target_vcard.address_book))\n # merging\n if source_vcard == target_vcard:\n print(\"The selected contacts are already identical\")\n else:\n merge_existing_contacts(source_vcard, target_vcard, True)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncopies or move a contact to a different address book. 
:action: the string \"copy\" or \"move\" to indicate what to do :type action: str :param vcard_list: the contact list from which to select one for the action :type vcard_list: list of carddav_object.CarddavObject :param target_address_book_list: the list of target address books :type target_address_book_list: list(addressbook.AddressBook) :returns: None :rtype: None", "response": "def copy_or_move_subcommand(action, vcard_list, target_address_book_list):\n \"\"\"Copy or move a contact to a different address book.\n\n :action: the string \"copy\" or \"move\" to indicate what to do\n :type action: str\n :param vcard_list: the contact list from which to select one for the action\n :type vcard_list: list of carddav_object.CarddavObject\n :param target_address_book_list: the list of target address books\n :type target_address_book_list: list(addressbook.AddressBook)\n :returns: None\n :rtype: None\n\n \"\"\"\n # get the source vcard, which to copy or move\n source_vcard = choose_vcard_from_list(\n \"Select contact to %s\" % action.title(), vcard_list)\n if source_vcard is None:\n print(\"Found no contact\")\n sys.exit(1)\n else:\n print(\"%s contact %s from address book %s\"\n % (action.title(), source_vcard, source_vcard.address_book))\n\n # get target address book\n if len(target_address_book_list) == 1 \\\n and target_address_book_list[0] == source_vcard.address_book:\n print(\"The address book %s already contains the contact %s\"\n % (target_address_book_list[0], source_vcard))\n sys.exit(1)\n else:\n available_address_books = [abook for abook in target_address_book_list\n if abook != source_vcard.address_book]\n selected_target_address_book = choose_address_book_from_list(\n \"Select target address book\", available_address_books)\n if selected_target_address_book is None:\n print(\"Error: address book list is empty\")\n sys.exit(1)\n\n # check if a contact already exists in the target address book\n target_vcard = choose_vcard_from_list(\n \"Select target contact 
which to overwrite\",\n get_contact_list_by_user_selection([selected_target_address_book],\n source_vcard.get_full_name(), True))\n # If the target contact doesn't exist, move or copy the source contact into\n # the target address book without further questions.\n if target_vcard is None:\n copy_contact(source_vcard, selected_target_address_book,\n action == \"move\")\n else:\n if source_vcard == target_vcard:\n # source and target contact are identical\n print(\"Target contact: %s\" % target_vcard)\n if action == \"move\":\n copy_contact(source_vcard, selected_target_address_book, True)\n else:\n print(\"The selected contacts are already identical\")\n else:\n # source and target contacts are different\n # either overwrite the target one or merge into target contact\n print(\"The address book %s already contains the contact %s\\n\\n\"\n \"Source\\n\\n%s\\n\\nTarget\\n\\n%s\\n\\n\"\n \"Possible actions:\\n\"\n \" a: %s anyway\\n\"\n \" m: Merge from source into target contact\\n\"\n \" o: Overwrite target contact\\n\"\n \" q: Quit\" % (\n target_vcard.address_book, source_vcard,\n source_vcard.print_vcard(), target_vcard.print_vcard(),\n \"Move\" if action == \"move\" else \"Copy\"))\n while True:\n input_string = input(\"Your choice: \")\n if input_string.lower() == \"a\":\n copy_contact(source_vcard, selected_target_address_book,\n action == \"move\")\n break\n if input_string.lower() == \"o\":\n copy_contact(source_vcard, selected_target_address_book,\n action == \"move\")\n target_vcard.delete_vcard_file()\n break\n if input_string.lower() == \"m\":\n merge_existing_contacts(source_vcard, target_vcard,\n action == \"move\")\n break\n if input_string.lower() in [\"\", \"q\"]:\n print(\"Canceled\")\n break"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse the command line arguments and return the parsed namespace that was created by argparse.ArgumentParser.
parse_args.", "response": "def parse_args(argv):\n \"\"\"Parse the command line arguments and return the namespace that was\n created by argparse.ArgumentParser.parse_args().\n\n :returns: the namespace parsed from the command line\n :rtype: argparse.Namespace\n\n \"\"\"\n # Create the base argument parser. It will be reused for the first and\n # second round of argument parsing.\n base = argparse.ArgumentParser(\n description=\"Khard is a carddav address book for the console\",\n formatter_class=argparse.RawTextHelpFormatter, add_help=False)\n base.add_argument(\"-c\", \"--config\", default=\"\", help=\"config file to use\")\n base.add_argument(\"--debug\", action=\"store_true\",\n help=\"enable debug output\")\n base.add_argument(\"--skip-unparsable\", action=\"store_true\",\n help=\"skip unparsable vcard files\")\n base.add_argument(\"-v\", \"--version\", action=\"version\",\n version=\"Khard version %s\" % khard_version)\n\n # Create the first argument parser. Its main job is to set the correct\n # config file. The config file is needed to get the default command if no\n # subcommand is given on the command line. This parser will ignore most\n # arguments, as they will be parsed by the second parser.\n first_parser = argparse.ArgumentParser(parents=[base])\n first_parser.add_argument('remainder', nargs=argparse.REMAINDER)\n\n # Create the main argument parser.
It will handle the complete command\n # line only ignoring the config and debug options as these have already\n # been set.\n parser = argparse.ArgumentParser(parents=[base])\n\n # create address book subparsers with different help texts\n default_addressbook_parser = argparse.ArgumentParser(add_help=False)\n default_addressbook_parser.add_argument(\n \"-a\", \"--addressbook\", default=[],\n type=lambda x: [y.strip() for y in x.split(\",\")],\n help=\"Specify one or several comma separated address book names to \"\n \"narrow the list of contacts\")\n new_addressbook_parser = argparse.ArgumentParser(add_help=False)\n new_addressbook_parser.add_argument(\n \"-a\", \"--addressbook\", default=[],\n type=lambda x: [y.strip() for y in x.split(\",\")],\n help=\"Specify address book in which to create the new contact\")\n copy_move_addressbook_parser = argparse.ArgumentParser(add_help=False)\n copy_move_addressbook_parser.add_argument(\n \"-a\", \"--addressbook\", default=[],\n type=lambda x: [y.strip() for y in x.split(\",\")],\n help=\"Specify one or several comma separated address book names to \"\n \"narrow the list of contacts\")\n copy_move_addressbook_parser.add_argument(\n \"-A\", \"--target-addressbook\", default=[],\n type=lambda x: [y.strip() for y in x.split(\",\")],\n help=\"Specify target address book in which to copy / move the \"\n \"selected contact\")\n merge_addressbook_parser = argparse.ArgumentParser(add_help=False)\n merge_addressbook_parser.add_argument(\n \"-a\", \"--addressbook\", default=[],\n type=lambda x: [y.strip() for y in x.split(\",\")],\n help=\"Specify one or several comma separated address book names to \"\n \"narrow the list of source contacts\")\n merge_addressbook_parser.add_argument(\n \"-A\", \"--target-addressbook\", default=[],\n type=lambda x: [y.strip() for y in x.split(\",\")],\n help=\"Specify one or several comma separated address book names to \"\n \"narrow the list of target contacts\")\n\n # create input file subparsers 
with different help texts\n email_header_input_file_parser = argparse.ArgumentParser(add_help=False)\n email_header_input_file_parser.add_argument(\n \"-i\", \"--input-file\", default=\"-\",\n help=\"Specify input email header file name or use stdin by default\")\n template_input_file_parser = argparse.ArgumentParser(add_help=False)\n template_input_file_parser.add_argument(\n \"-i\", \"--input-file\", default=\"-\",\n help=\"Specify input template file name or use stdin by default\")\n template_input_file_parser.add_argument(\n \"--open-editor\", action=\"store_true\", help=\"Open the default text \"\n \"editor after successful creation of new contact\")\n\n # create sort subparser\n sort_parser = argparse.ArgumentParser(add_help=False)\n sort_parser.add_argument(\n \"-d\", \"--display\", choices=(\"first_name\", \"last_name\"),\n help=\"Display names in contact table by first or last name\")\n sort_parser.add_argument(\n \"-g\", \"--group-by-addressbook\", action=\"store_true\",\n help=\"Group contact table by address book\")\n sort_parser.add_argument(\n \"-r\", \"--reverse\", action=\"store_true\",\n help=\"Reverse order of contact table\")\n sort_parser.add_argument(\n \"-s\", \"--sort\", choices=(\"first_name\", \"last_name\"),\n help=\"Sort contact table by first or last name\")\n\n # create search subparsers\n default_search_parser = argparse.ArgumentParser(add_help=False)\n default_search_parser.add_argument(\n \"-f\", \"--search-in-source-files\", action=\"store_true\",\n help=\"Look into source vcf files to speed up search queries in \"\n \"large address books. 
Beware that this option could lead \"\n \"to incomplete results.\")\n default_search_parser.add_argument(\n \"-e\", \"--strict-search\", action=\"store_true\",\n help=\"narrow contact search to name field\")\n default_search_parser.add_argument(\n \"-u\", \"--uid\", default=\"\", help=\"select contact by uid\")\n default_search_parser.add_argument(\n \"search_terms\", nargs=\"*\", metavar=\"search terms\",\n help=\"search in all fields to find matching contact\")\n merge_search_parser = argparse.ArgumentParser(add_help=False)\n merge_search_parser.add_argument(\n \"-f\", \"--search-in-source-files\", action=\"store_true\",\n help=\"Look into source vcf files to speed up search queries in \"\n \"large address books. Beware that this option could lead \"\n \"to incomplete results.\")\n merge_search_parser.add_argument(\n \"-e\", \"--strict-search\", action=\"store_true\",\n help=\"narrow contact search to name fields\")\n merge_search_parser.add_argument(\n \"-t\", \"--target-contact\", \"--target\", default=\"\",\n help=\"search in all fields to find matching target contact\")\n merge_search_parser.add_argument(\n \"-u\", \"--uid\", default=\"\", help=\"select source contact by uid\")\n merge_search_parser.add_argument(\n \"-U\", \"--target-uid\", default=\"\", help=\"select target contact by uid\")\n merge_search_parser.add_argument(\n \"source_search_terms\", nargs=\"*\", metavar=\"source\",\n help=\"search in all fields to find matching source contact\")\n\n # create subparsers for actions\n subparsers = parser.add_subparsers(dest=\"action\")\n list_parser = subparsers.add_parser(\n \"list\",\n aliases=Actions.get_aliases(\"list\"),\n parents=[default_addressbook_parser, default_search_parser,\n sort_parser],\n description=\"list all (selected) contacts\",\n help=\"list all (selected) contacts\")\n list_parser.add_argument(\n \"-p\", \"--parsable\", action=\"store_true\",\n help=\"Machine readable format: uid\\\\tcontact_name\\\\taddress_book_name\")\n 
subparsers.add_parser(\n \"details\",\n aliases=Actions.get_aliases(\"details\"),\n parents=[default_addressbook_parser, default_search_parser,\n sort_parser],\n description=\"display detailed information about one contact\",\n help=\"display detailed information about one contact\")\n export_parser = subparsers.add_parser(\n \"export\",\n aliases=Actions.get_aliases(\"export\"),\n parents=[default_addressbook_parser, default_search_parser,\n sort_parser],\n description=\"export a contact to the custom yaml format that is \"\n \"also used for editing and creating contacts\",\n help=\"export a contact to the custom yaml format that is also \"\n \"used for editing and creating contacts\")\n export_parser.add_argument(\n \"--empty-contact-template\", action=\"store_true\",\n help=\"Export an empty contact template\")\n export_parser.add_argument(\n \"-o\", \"--output-file\", default=sys.stdout,\n type=argparse.FileType(\"w\"),\n help=\"Specify output template file name or use stdout by default\")\n birthdays_parser = subparsers.add_parser(\n \"birthdays\",\n aliases=Actions.get_aliases(\"birthdays\"),\n parents=[default_addressbook_parser, default_search_parser],\n description=\"list birthdays (sorted by month and day)\",\n help=\"list birthdays (sorted by month and day)\")\n birthdays_parser.add_argument(\n \"-d\", \"--display\", choices=(\"first_name\", \"last_name\"),\n help=\"Display names in birthdays table by first or last name\")\n birthdays_parser.add_argument(\n \"-p\", \"--parsable\", action=\"store_true\",\n help=\"Machine readable format: name\\\\tdate\")\n email_parser = subparsers.add_parser(\n \"email\",\n aliases=Actions.get_aliases(\"email\"),\n parents=[default_addressbook_parser, default_search_parser,\n sort_parser],\n description=\"list email addresses\",\n help=\"list email addresses\")\n email_parser.add_argument(\n \"-p\", \"--parsable\", action=\"store_true\",\n help=\"Machine readable format: address\\\\tname\\\\ttype\")\n 
email_parser.add_argument(\n \"--remove-first-line\", action=\"store_true\",\n help=\"remove \\\"searching for '' ...\\\" line from parsable output \"\n \"(that line is required by mutt)\")\n phone_parser = subparsers.add_parser(\n \"phone\",\n aliases=Actions.get_aliases(\"phone\"),\n parents=[default_addressbook_parser, default_search_parser,\n sort_parser],\n description=\"list phone numbers\",\n help=\"list phone numbers\")\n phone_parser.add_argument(\n \"-p\", \"--parsable\", action=\"store_true\",\n help=\"Machine readable format: number\\\\tname\\\\ttype\")\n post_address_parser = subparsers.add_parser(\n \"postaddress\",\n aliases=Actions.get_aliases(\"postaddress\"),\n parents=[default_addressbook_parser, default_search_parser,\n sort_parser],\n description=\"list postal addresses\",\n help=\"list postal addresses\")\n post_address_parser.add_argument(\n \"-p\", \"--parsable\", action=\"store_true\",\n help=\"Machine readable format: address\\\\tname\\\\ttype\")\n subparsers.add_parser(\n \"source\",\n aliases=Actions.get_aliases(\"source\"),\n parents=[default_addressbook_parser, default_search_parser,\n sort_parser],\n description=\"edit the vcard file of a contact directly\",\n help=\"edit the vcard file of a contact directly\")\n new_parser = subparsers.add_parser(\n \"new\",\n aliases=Actions.get_aliases(\"new\"),\n parents=[new_addressbook_parser, template_input_file_parser],\n description=\"create a new contact\",\n help=\"create a new contact\")\n new_parser.add_argument(\n \"--vcard-version\", choices=(\"3.0\", \"4.0\"),\n help=\"Select preferred vcard version for new contact\")\n add_email_parser = subparsers.add_parser(\n \"add-email\",\n aliases=Actions.get_aliases(\"add-email\"),\n parents=[default_addressbook_parser, email_header_input_file_parser,\n default_search_parser, sort_parser],\n description=\"Extract email address from the \\\"From:\\\" field of an \"\n \"email header and add to an existing contact or create a new one\",\n 
help=\"Extract email address from the \\\"From:\\\" field of an email \"\n \"header and add to an existing contact or create a new one\")\n add_email_parser.add_argument(\n \"--vcard-version\", choices=(\"3.0\", \"4.0\"),\n help=\"Select preferred vcard version for new contact\")\n subparsers.add_parser(\n \"merge\",\n aliases=Actions.get_aliases(\"merge\"),\n parents=[merge_addressbook_parser, merge_search_parser, sort_parser],\n description=\"merge two contacts\",\n help=\"merge two contacts\")\n subparsers.add_parser(\n \"modify\",\n aliases=Actions.get_aliases(\"modify\"),\n parents=[default_addressbook_parser, template_input_file_parser,\n default_search_parser, sort_parser],\n description=\"edit the data of a contact\",\n help=\"edit the data of a contact\")\n subparsers.add_parser(\n \"copy\",\n aliases=Actions.get_aliases(\"copy\"),\n parents=[copy_move_addressbook_parser, default_search_parser,\n sort_parser],\n description=\"copy a contact to a different addressbook\",\n help=\"copy a contact to a different addressbook\")\n subparsers.add_parser(\n \"move\",\n aliases=Actions.get_aliases(\"move\"),\n parents=[copy_move_addressbook_parser, default_search_parser,\n sort_parser],\n description=\"move a contact to a different addressbook\",\n help=\"move a contact to a different addressbook\")\n remove_parser = subparsers.add_parser(\n \"remove\",\n aliases=Actions.get_aliases(\"remove\"),\n parents=[default_addressbook_parser, default_search_parser,\n sort_parser],\n description=\"remove a contact\",\n help=\"remove a contact\")\n remove_parser.add_argument(\n \"--force\", action=\"store_true\",\n help=\"Remove contact without confirmation\")\n subparsers.add_parser(\n \"addressbooks\",\n aliases=Actions.get_aliases(\"addressbooks\"),\n description=\"list addressbooks\",\n help=\"list addressbooks\")\n subparsers.add_parser(\n \"filename\",\n aliases=Actions.get_aliases(\"filename\"),\n parents=[default_addressbook_parser, default_search_parser,\n 
sort_parser],\n description=\"list filenames of all matching contacts\",\n help=\"list filenames of all matching contacts\")\n\n # Replace the print_help method of the first parser with the print_help\n # method of the main parser. This makes it possible to have the first\n # parser handle the help option so that command line help can be printed\n # without parsing the config file first (which is a problem if there are\n # errors in the config file). The config file will still be parsed before\n # the full command line is parsed so errors in the config file might be\n # reported before command line syntax errors.\n first_parser.print_help = parser.print_help\n\n # Parse the command line with the first argument parser. It will handle\n # the config option (its main job) and also the help, version and debug\n # options as these do not depend on anything else.\n args = first_parser.parse_args(argv)\n remainder = args.remainder\n\n # Set the loglevel to debug if given on the command line. This is done\n # before parsing the config file to make it possible to debug the parsing\n # of the config file.\n if \"debug\" in args and args.debug:\n logging.basicConfig(level=logging.DEBUG)\n\n # Create the global config instance.\n global config\n config = Config(args.config)\n\n # Check the log level again and merge the value from the command line with\n # the config file.\n if (\"debug\" in args and args.debug) or config.debug:\n logging.basicConfig(level=logging.DEBUG)\n logging.debug(\"first args=%s\", args)\n logging.debug(\"remainder=%s\", remainder)\n\n # Set the default command from the config file if none was given on the\n # command line.\n if not remainder or remainder[0] not in Actions.get_all():\n remainder.insert(0, config.default_action)\n logging.debug(\"updated remainder=%s\", remainder)\n\n # Save the last option that needs to be carried from the first parser run\n # to the second.\n skip = args.skip_unparsable\n\n # Parse the remainder of the command line.
All options from the previous\n # run have already been processed and are not needed any more.\n args = parser.parse_args(remainder)\n\n # Restore settings that are left from the first parser run.\n args.skip_unparsable = skip\n logging.debug(\"second args=%s\", args)\n\n # An integrity check for some options.\n if \"uid\" in args and args.uid and (\n (\"search_terms\" in args and args.search_terms) or\n (\"source_search_terms\" in args and args.source_search_terms)):\n # If a uid was given we require that no search terms were given.\n parser.error(\"You can not give arbitrary search terms and --uid at the\"\n \" same time.\")\n return args"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfinding the name of the action for the supplied alias.", "response": "def get_action(cls, alias):\n \"\"\"Find the name of the action for the supplied alias. If no action is\n associated with the given alias, None is returned.\n\n :param alias: the alias to look up\n :type alias: str\n :returns: the name of the corresponding action or None\n :rtype: str or NoneType\n\n \"\"\"\n for action, alias_list in cls.action_map.items():\n if alias in alias_list:\n return action\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _convert_boolean_config_value(config, name, default=True):\n if name not in config:\n config[name] = default\n elif config[name] == \"yes\":\n config[name] = True\n elif config[name] == \"no\":\n config[name] = False\n else:\n raise ValueError(\"Error in config file\\nInvalid value for %s \"\n \"parameter\\nPossible values: yes, no\" % name)", "response": "Convert the named field to boolean."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nuse this to create a new empty contact.", "response": "def new_contact(cls, address_book, supported_private_objects, version,\n localize_dates):\n \"\"\"Use this to create a new and empty
contact.\"\"\"\n return cls(address_book, None, supported_private_objects, version,\n localize_dates)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new contact object from an existing .vcf file.", "response": "def from_file(cls, address_book, filename, supported_private_objects,\n localize_dates):\n \"\"\"\n Use this if you want to create a new contact from an existing .vcf\n file.\n \"\"\"\n return cls(address_book, filename, supported_private_objects, None,\n localize_dates)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new contact from user input.", "response": "def from_user_input(cls, address_book, user_input,\n supported_private_objects, version, localize_dates):\n \"\"\"Use this if you want to create a new contact from user input.\"\"\"\n contact = cls(address_book, None, supported_private_objects, version,\n localize_dates)\n contact._process_user_input(user_input)\n return contact"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nclone an existing contact and replace its data with new user input.", "response": "def from_existing_contact_with_new_user_input(cls, contact, user_input,\n localize_dates):\n \"\"\"\n Use this if you want to clone an existing contact and replace its data\n with new user input in one step.\n \"\"\"\n contact = cls(contact.address_book, contact.filename,\n contact.supported_private_objects, None, localize_dates)\n contact._process_user_input(user_input)\n return contact"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_names_part(self, part):\n try:\n the_list = getattr(self.vcard.n.value, part)\n except AttributeError:\n return []\n else:\n # check if list only contains empty strings\n if not ''.join(the_list):\n return []\n return the_list if isinstance(the_list, list) else [the_list]", "response": "Get some
part of the N entry in the vCard as a list"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_first_name_last_name(self):\n names = []\n if self._get_first_names():\n names += self._get_first_names()\n if self._get_additional_names():\n names += self._get_additional_names()\n if self._get_last_names():\n names += self._get_last_names()\n if names:\n return helpers.list_to_string(names, \" \")\n else:\n return self.get_full_name()", "response": "Returns the contact's name in first name, last name order."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_last_name_first_name(self):\n last_names = []\n if self._get_last_names():\n last_names += self._get_last_names()\n first_and_additional_names = []\n if self._get_first_names():\n first_and_additional_names += self._get_first_names()\n if self._get_additional_names():\n first_and_additional_names += self._get_additional_names()\n if last_names and first_and_additional_names:\n return \"{}, {}\".format(\n helpers.list_to_string(last_names, \" \"),\n helpers.list_to_string(first_and_additional_names, \" \"))\n elif last_names:\n return helpers.list_to_string(last_names, \" \")\n elif first_and_additional_names:\n return helpers.list_to_string(first_and_additional_names, \" \")\n else:\n return self.get_full_name()", "response": "Returns the contact's name in last name, first name order."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_organisations(self):\n organisations = []\n for child in self.vcard.getChildren():\n if child.name == \"ORG\":\n organisations.append(child.value)\n return sorted(organisations)", "response": "Returns a sorted list of the organisations in the vCard."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_titles(self):\n titles = []\n for child in
self.vcard.getChildren():\n if child.name == \"TITLE\":\n titles.append(child.value)\n return sorted(titles)", "response": "Returns a list of all the titles in the vCard."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_roles(self):\n roles = []\n for child in self.vcard.getChildren():\n if child.name == \"ROLE\":\n roles.append(child.value)\n return sorted(roles)", "response": "Returns a list of all the roles in the vCard."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_phone_numbers(self):\n phone_dict = {}\n for child in self.vcard.getChildren():\n if child.name == \"TEL\":\n # phone types\n type = helpers.list_to_string(\n self._get_types_for_vcard_object(child, \"voice\"), \", \")\n if type not in phone_dict:\n phone_dict[type] = []\n # phone value\n #\n # vcard version 4.0 allows URI scheme \"tel\" in phone attribute value\n # Doc: https://tools.ietf.org/html/rfc6350#section-6.4.1\n # example: TEL;VALUE=uri;PREF=1;TYPE=\"voice,home\":tel:+1-555-555-5555;ext=5555\n if child.value.lower().startswith(\"tel:\"):\n # cut off the \"tel:\" uri prefix\n phone_dict[type].append(child.value[4:])\n else:\n # free text field\n phone_dict[type].append(child.value)\n # sort phone number lists\n for number_list in phone_dict.values():\n number_list.sort()\n return phone_dict", "response": "Returns a dictionary mapping types to sorted phone number lists for the contact."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_email_addresses(self):\n email_dict = {}\n for child in self.vcard.getChildren():\n if child.name == \"EMAIL\":\n type = helpers.list_to_string(\n self._get_types_for_vcard_object(child, \"internet\"), \", \")\n if type not in email_dict:\n email_dict[type] = []\n email_dict[type].append(child.value)\n # sort email address lists\n
for email_list in email_dict.values():\n email_list.sort()\n return email_dict", "response": "returns a dictionary of type and email address list for each of the children of the vCard"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_post_addresses(self):\n post_adr_dict = {}\n for child in self.vcard.getChildren():\n if child.name == \"ADR\":\n type = helpers.list_to_string(\n self._get_types_for_vcard_object(child, \"home\"), \", \")\n if type not in post_adr_dict:\n post_adr_dict[type] = []\n post_adr_dict[type].append(\n {\n \"box\": child.value.box,\n \"extended\": child.value.extended,\n \"street\": child.value.street,\n \"code\": child.value.code,\n \"city\": child.value.city,\n \"region\": child.value.region,\n \"country\": child.value.country\n })\n # sort post address lists\n for post_adr_list in post_adr_dict.values():\n post_adr_list.sort(key=lambda x: (\n helpers.list_to_string(x['city'], \" \").lower(),\n helpers.list_to_string(x['street'], \" \").lower()))\n return post_adr_dict", "response": "returns a dict of type and post address list for the current ADR"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_categories(self):\n category_list = []\n for child in self.vcard.getChildren():\n if child.name == \"CATEGORIES\":\n value = child.value\n category_list.append(\n value if isinstance(value, list) else [value])\n if len(category_list) == 1:\n return category_list[0]\n return sorted(category_list)", "response": "returns a list of all the categories in the vCard"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding a category to the vCard.", "response": "def _add_category(self, categories):\n \"\"\" categories variable must be a list \"\"\"\n categories_obj = self.vcard.add('categories')\n categories_obj.value = helpers.convert_to_vcard(\n \"category\", categories, 
ObjectType.list_with_strings)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_nicknames(self):\n nicknames = []\n for child in self.vcard.getChildren():\n if child.name == \"NICKNAME\":\n nicknames.append(child.value)\n return sorted(nicknames)", "response": "returns a list of all the nicknames in the vCard"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_notes(self):\n notes = []\n for child in self.vcard.getChildren():\n if child.name == \"NOTE\":\n notes.append(child.value)\n return sorted(notes)", "response": "Returns a list of all the notes in the vCard."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a dict of all private objects in the vCard", "response": "def _get_private_objects(self):\n \"\"\"\n :rtype: dict(str, list(str))\n \"\"\"\n private_objects = {}\n for child in self.vcard.getChildren():\n if child.name.lower().startswith(\"x-\"):\n try:\n key_index = [\n x.lower() for x in self.supported_private_objects\n ].index(child.name[2:].lower())\n except ValueError:\n pass\n else:\n key = self.supported_private_objects[key_index]\n if key not in private_objects:\n private_objects[key] = []\n private_objects[key].append(child.value)\n # sort private object lists\n for value in private_objects.values():\n value.sort()\n return private_objects"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_webpages(self):\n urls = []\n for child in self.vcard.getChildren():\n if child.name == \"URL\":\n urls.append(child.value)\n return sorted(urls)", "response": "Returns a list of all webpages in the vCard."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning anniversary value or None if not available", "response": "def get_anniversary(self):\n \"\"\":returns: contacts anniversary or None if not available\n :rtype: 
datetime.datetime or str\n \"\"\"\n # vcard 4.0 could contain a single text value\n try:\n if self.vcard.anniversary.params.get(\"VALUE\")[0] == \"text\":\n return self.vcard.anniversary.value\n except (AttributeError, IndexError, TypeError):\n pass\n # else try to convert to a datetime object\n try:\n return helpers.string_to_date(self.vcard.anniversary.value)\n except (AttributeError, ValueError):\n # vcard 3.0: x-anniversary (private object)\n try:\n return helpers.string_to_date(self.vcard.x_anniversary.value)\n except (AttributeError, ValueError):\n pass\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the contacts birthday if available otherwise None", "response": "def get_birthday(self):\n \"\"\":returns: contacts birthday or None if not available\n :rtype: datetime.datetime or str\n \"\"\"\n # vcard 4.0 could contain a single text value\n try:\n if self.vcard.bday.params.get(\"VALUE\")[0] == \"text\":\n return self.vcard.bday.value\n except (AttributeError, IndexError, TypeError):\n pass\n # else try to convert to a datetime object\n try:\n return helpers.string_to_date(self.vcard.bday.value)\n except (AttributeError, ValueError):\n pass\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets list of types for a given object", "response": "def _get_types_for_vcard_object(self, object, default_type):\n \"\"\"\n get list of types for phone number, email or post address\n :param object: vcard class object\n :type object: vobject.vCard\n :param default_type: use if the object contains no type\n :type default_type: str\n :returns: list of type labels\n :rtype: list(str)\n \"\"\"\n type_list = []\n # try to find label group for custom value type\n if object.group:\n for label in self.vcard.getChildren():\n if label.name == \"X-ABLABEL\" and label.group == object.group:\n custom_type = label.value.strip()\n if custom_type:\n 
type_list.append(custom_type)\n # then load type from params dict\n standard_types = object.params.get(\"TYPE\")\n if standard_types is not None:\n if not isinstance(standard_types, list):\n standard_types = [standard_types]\n for type in standard_types:\n type = type.strip()\n if type and type.lower() != \"pref\":\n if not type.lower().startswith(\"x-\"):\n type_list.append(type)\n elif type[2:].lower() not in [x.lower() for x in type_list]:\n # add x-custom type in case it's not already added by\n # custom label for loop above but strip x- before\n type_list.append(type[2:])\n # try to get pref parameter from vcard version 4.0\n try:\n type_list.append(\"pref=%d\" % int(object.params.get(\"PREF\")[0]))\n except (IndexError, TypeError, ValueError):\n # else try to determine, if type params contain pref attribute\n try:\n for x in object.params.get(\"TYPE\"):\n if x.lower() == \"pref\" and \"pref\" not in type_list:\n type_list.append(\"pref\")\n except TypeError:\n pass\n # return type_list or default type\n if type_list:\n return type_list\n return [default_type]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nparses the type value of phone numbers email and post addresses.", "response": "def _parse_type_value(types, value, supported_types):\n \"\"\"Parse type value of phone numbers, email and post addresses.\n\n :param types: list of type values\n :type types: list(str)\n :param value: the corresponding label, required for more verbose\n exceptions\n :type value: str\n :param supported_types: all allowed standard types\n :type supported_types: list(str)\n :returns: tuple of standard and custom types and pref integer\n :rtype: tuple(list(str), list(str), int)\n\n \"\"\"\n custom_types = []\n standard_types = []\n pref = 0\n for type in types:\n type = type.strip()\n if type:\n if type.lower() in supported_types:\n standard_types.append(type)\n elif type.lower() == \"pref\":\n pref += 1\n elif 
re.match(r\"^pref=\\d{1,2}$\", type.lower()):\n pref += int(type.split(\"=\")[1])\n else:\n if type.lower().startswith(\"x-\"):\n custom_types.append(type[2:])\n standard_types.append(type)\n else:\n custom_types.append(type)\n standard_types.append(\"X-{}\".format(type))\n return (standard_types, custom_types, pref)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef list_to_string(input, delimiter):\n if isinstance(input, list):\n return delimiter.join(\n list_to_string(item, delimiter) for item in input)\n return input", "response": "converts list to string recursively so that nested lists are supported by the user"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting a string to a date object.", "response": "def string_to_date(input):\n \"\"\"Convert string to date object.\n\n :param input: the date string to parse\n :type input: str\n :returns: the parsed datetime object\n :rtype: datetime.datetime\n \"\"\"\n # try date formats --mmdd, --mm-dd, yyyymmdd, yyyy-mm-dd and datetime\n # formats yyyymmddThhmmss, yyyy-mm-ddThh:mm:ss, yyyymmddThhmmssZ,\n # yyyy-mm-ddThh:mm:ssZ.\n for format_string in (\"--%m%d\", \"--%m-%d\", \"%Y%m%d\", \"%Y-%m-%d\",\n \"%Y%m%dT%H%M%S\", \"%Y-%m-%dT%H:%M:%S\",\n \"%Y%m%dT%H%M%SZ\", \"%Y-%m-%dT%H:%M:%SZ\"):\n try:\n return datetime.strptime(input, format_string)\n except ValueError:\n pass\n # try datetime formats yyyymmddThhmmsstz and yyyy-mm-ddThh:mm:sstz where tz\n # may look like -06:00.\n for format_string in (\"%Y%m%dT%H%M%S%z\", \"%Y-%m-%dT%H:%M:%S%z\"):\n try:\n return datetime.strptime(''.join(input.rsplit(\":\", 1)),\n format_string)\n except ValueError:\n pass\n raise ValueError"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef convert_to_yaml(\n name, value, indentation, indexOfColon, show_multi_line_character):\n \"\"\"converts a value list into yaml syntax\n :param 
name: name of object (example: phone)\n :type name: str\n :param value: object contents\n :type value: str, list(str), list(list(str))\n :param indentation: indent all by number of spaces\n :type indentation: int\n :param indexOfColon: use to position : at the name string (-1 for no space)\n :type indexOfColon: int\n :param show_multi_line_character: option to hide \"|\"\n :type show_multi_line_character: boolean\n :returns: yaml formatted string array of name, value pair\n :rtype: list(str)\n \"\"\"\n strings = []\n if isinstance(value, list):\n # special case for single item lists:\n if len(value) == 1 \\\n and isinstance(value[0], str):\n # value = [\"string\"] should not be converted to\n # name:\n # - string\n # but to \"name: string\" instead\n value = value[0]\n elif len(value) == 1 \\\n and isinstance(value[0], list) \\\n and len(value[0]) == 1 \\\n and isinstance(value[0][0], str):\n # same applies to value = [[\"string\"]]\n value = value[0][0]\n if isinstance(value, str):\n strings.append(\"%s%s%s: %s\" % (\n ' ' * indentation, name, ' ' * (indexOfColon-len(name)),\n indent_multiline_string(value, indentation+4,\n show_multi_line_character)))\n elif isinstance(value, list):\n strings.append(\"%s%s%s: \" % (\n ' ' * indentation, name, ' ' * (indexOfColon-len(name))))\n for outer in value:\n # special case for single item sublists\n if isinstance(outer, list) \\\n and len(outer) == 1 \\\n and isinstance(outer[0], str):\n # outer = [\"string\"] should not be converted to\n # -\n # - string\n # but to \"- string\" instead\n outer = outer[0]\n if isinstance(outer, str):\n strings.append(\"%s- %s\" % (\n ' ' * (indentation+4), indent_multiline_string(\n outer, indentation+8, show_multi_line_character)))\n elif isinstance(outer, list):\n strings.append(\"%s- \" % (' ' * (indentation+4)))\n for inner in outer:\n if isinstance(inner, str):\n strings.append(\"%s- %s\" % (\n ' ' * (indentation+8), indent_multiline_string(\n inner, indentation+12,\n 
show_multi_line_character)))\n return strings", "response": "converts a value list into a yaml formatted string array of name-value pairs\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting user input into vcard compatible data structures", "response": "def convert_to_vcard(name, value, allowed_object_type):\n \"\"\"converts user input into vcard compatible data structures\n :param name: object name, only required for error messages\n :type name: str\n :param value: user input\n :type value: str or list(str)\n :param allowed_object_type: set the accepted return type for vcard\n attribute\n :type allowed_object_type: enum of type ObjectType\n :returns: cleaned user input, ready for vcard or a ValueError\n :rtype: str or list(str)\n \"\"\"\n if isinstance(value, str):\n if allowed_object_type == ObjectType.list_with_strings:\n raise ValueError(\n \"Error: \" + name + \" must not contain a single string.\")\n else:\n return value.strip()\n elif isinstance(value, list):\n if allowed_object_type == ObjectType.string:\n raise ValueError(\n \"Error: \" + name + \" must not contain a list.\")\n else:\n for entry in value:\n if not isinstance(entry, str):\n raise ValueError(\n \"Error: \" + name + \" must not contain a nested list\")\n # filter out empty list items and strip leading and trailing space\n return [x.strip() for x in value if x]\n else:\n if allowed_object_type == ObjectType.string:\n raise ValueError(\n \"Error: \" + name + \" must be a string.\")\n elif allowed_object_type == ObjectType.list_with_strings:\n raise ValueError(\n \"Error: \" + name + \" must be a list with strings.\")\n else:\n raise ValueError(\n \"Error: \" + name + \" must be a string or a list with strings.\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncalculating the minimum length of initial substrings of uid1 and uid2 for them to be different.", "response": "def _compare_uids(uid1, uid2):\n \"\"\"Calculate the 
minimum length of initial substrings of uid1 and uid2\n for them to be different.\n\n :param uid1: first uid to compare\n :type uid1: str\n :param uid2: second uid to compare\n :type uid2: str\n :returns: the length of the shortest unequal initial substrings\n :rtype: int\n \"\"\"\n sum = 0\n for char1, char2 in zip(uid1, uid2):\n if char1 == char2:\n sum += 1\n else:\n break\n return sum"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsearches in all fields for contacts matching query.", "response": "def _search_all(self, query):\n \"\"\"Search in all fields for contacts matching query.\n\n :param query: the query to search for\n :type query: str\n :yields: all found contacts\n :rtype: generator(carddav_object.CarddavObject)\n\n \"\"\"\n regexp = re.compile(query, re.IGNORECASE | re.DOTALL)\n for contact in self.contacts.values():\n # search in all contact fields\n contact_details = contact.print_vcard()\n if regexp.search(contact_details) is not None:\n yield contact\n else:\n # find phone numbers with special chars like /\n clean_contact_details = re.sub(\"[^a-zA-Z0-9\\n]\", \"\",\n contact_details)\n if regexp.search(clean_contact_details) is not None \\\n and len(re.sub(\"\\D\", \"\", query)) >= 3:\n yield contact"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsearches in the name field for contacts matching query.", "response": "def _search_names(self, query):\n \"\"\"Search in the name field for contacts matching query.\n\n :param query: the query to search for\n :type query: str\n :yields: all found contacts\n :rtype: generator(carddav_object.CarddavObject)\n\n \"\"\"\n regexp = re.compile(query, re.IGNORECASE | re.DOTALL)\n for contact in self.contacts.values():\n # only search in contact name\n if regexp.search(contact.get_full_name()) is not None:\n yield contact"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 
that\nsearches for contacts with a matching uid.", "response": "def _search_uid(self, query):\n \"\"\"Search for contacts with a matching uid.\n\n :param query: the query to search for\n :type query: str\n :yields: all found contacts\n :rtype: generator(carddav_object.CarddavObject)\n\n \"\"\"\n try:\n # First we treat the argument as a full UID and try to match it\n # exactly.\n yield self.contacts[query]\n except KeyError:\n # If that failed we look for all contacts whose UID starts with the\n # given query.\n for uid in self.contacts:\n if uid.startswith(query):\n yield self.contacts[uid]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsearch this address book for contacts matching the query.", "response": "def search(self, query, method=\"all\"):\n \"\"\"Search this address book for contacts matching the query.\n\n The method can be one of \"all\", \"name\" and \"uid\". The backend for this\n address book might be load()ed if needed.\n\n :param query: the query to search for\n :type query: str\n :param method: the type of fields to use when searching\n :type method: str\n :returns: all found contacts\n :rtype: list(carddav_object.CarddavObject)\n\n \"\"\"\n logging.debug('address book %s, searching with %s', self.name, query)\n if not self._loaded:\n self.load(query)\n if method == \"all\":\n search_function = self._search_all\n elif method == \"name\":\n search_function = self._search_names\n elif method == \"uid\":\n search_function = self._search_uid\n else:\n raise ValueError('Only the search methods \"all\", \"name\" and \"uid\" '\n 'are supported.')\n return list(search_function(query))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_short_uid_dict(self, query=None):\n if self._short_uids is None:\n if not self._loaded:\n self.load(query)\n if not self.contacts:\n self._short_uids = {}\n elif len(self.contacts) == 1:\n self._short_uids = {uid[0:1]: 
contact\n for uid, contact in self.contacts.items()}\n else:\n self._short_uids = {}\n sorted_uids = sorted(self.contacts)\n # Prepare for the loop; the first and last items are handled\n # separately.\n item0, item1 = sorted_uids[:2]\n same1 = self._compare_uids(item0, item1)\n self._short_uids[item0[:same1 + 1]] = self.contacts[item0]\n for item_new in sorted_uids[2:]:\n # shift the items and the common prefix length one further\n item0, item1 = item1, item_new\n same0, same1 = same1, self._compare_uids(item0, item1)\n # compute the final prefix length for item1\n same = max(same0, same1)\n self._short_uids[item0[:same + 1]] = self.contacts[item0]\n # Save the last item.\n self._short_uids[item1[:same1 + 1]] = self.contacts[item1]\n return self._short_uids", "response": "Create a dictionary of shortened UIDs for all contacts."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_short_uid(self, uid):\n if uid:\n short_uids = self.get_short_uid_dict()\n for length_of_uid in range(len(uid), 0, -1):\n if short_uids.get(uid[:length_of_uid]) is not None:\n return uid[:length_of_uid]\n return \"\"", "response": "Get the shortened UID for the given UID."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfind all vcard files inside this address book.", "response": "def _find_vcard_files(self, search=None, search_in_source_files=False):\n \"\"\"Find all vcard files inside this address book.\n\n If a search string is given only files which contents match that will\n be returned.\n\n :param search: a regular expression to limit the results\n :type search: str\n :param search_in_source_files: apply search regexp directly on the .vcf files to speed up parsing (less accurate)\n :type search_in_source_files: bool\n :returns: the paths of the vcard files\n :rtype: generator\n\n \"\"\"\n files = glob.glob(os.path.join(self.path, \"*.vcf\"))\n if search and search_in_source_files:\n for 
filename in files:\n with open(filename, \"r\") as filehandle:\n if re.search(search, filehandle.read(),\n re.IGNORECASE | re.DOTALL):\n yield filename\n else:\n yield from files"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef load(self, query=None, search_in_source_files=False):\n if self._loaded:\n return\n logging.debug('Loading Vdir %s with query %s', self.name, query)\n errors = 0\n for filename in self._find_vcard_files(\n search=query, search_in_source_files=search_in_source_files):\n try:\n card = CarddavObject.from_file(self, filename,\n self._private_objects,\n self._localize_dates)\n except (IOError, vobject.base.ParseError) as err:\n verb = \"open\" if isinstance(err, IOError) else \"parse\"\n logging.debug(\"Error: Could not %s file %s\\n%s\", verb,\n filename, err)\n if self._skip:\n errors += 1\n else:\n # FIXME: This should throw an appropriate exception and the\n # sys.exit should be called somewhere closer to the command\n # line parsing.\n logging.error(\n \"The vcard file %s of address book %s could not be \"\n \"parsed\\nUse --debug for more information or \"\n \"--skip-unparsable to proceed\", filename, self.name)\n sys.exit(2)\n else:\n uid = card.get_uid()\n if not uid:\n logging.warning(\"Card %s from address book %s has no UID \"\n \"and will not be available.\", card,\n self.name)\n elif uid in self.contacts:\n logging.warning(\n \"Card %s and %s from address book %s have the same \"\n \"UID. 
The former will not be available.\", card,\n self.contacts[uid], self.name)\n else:\n self.contacts[uid] = card\n self._loaded = True\n if errors:\n logging.warning(\n \"%d of %d vCard files of address book %s could not be parsed.\",\n errors, len(self.contacts) + errors, self)\n logging.debug('Loaded %s contacts from address book %s.',\n len(self.contacts), self.name)", "response": "Load all vcard files in this address book from disk."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_abook(self, name):\n for abook in self._abooks:\n if abook.name == name:\n return abook", "response": "Get one of the backing address books by its name"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ninitializes the dictionary of architectures for assembling via keystone", "response": "def avail_archs(self):\n ''' Initialize the dictionary of architectures for assembling via keystone'''\n\n return {\n ARM32: (KS_ARCH_ARM, KS_MODE_ARM),\n ARM64: (KS_ARCH_ARM64, KS_MODE_LITTLE_ENDIAN),\n ARM_TB: (KS_ARCH_ARM, KS_MODE_THUMB),\n HEXAGON: (KS_ARCH_HEXAGON, KS_MODE_BIG_ENDIAN),\n MIPS32: (KS_ARCH_MIPS, KS_MODE_MIPS32),\n MIPS64: (KS_ARCH_MIPS, KS_MODE_MIPS64),\n PPC32: (KS_ARCH_PPC, KS_MODE_PPC32),\n PPC64: (KS_ARCH_PPC, KS_MODE_PPC64),\n SPARC32: (KS_ARCH_SPARC, KS_MODE_SPARC32),\n SPARC64: (KS_ARCH_SPARC, KS_MODE_SPARC64),\n SYSTEMZ: (KS_ARCH_SYSTEMZ, KS_MODE_BIG_ENDIAN),\n X86_16: (KS_ARCH_X86, KS_MODE_16),\n X86_32: (KS_ARCH_X86, KS_MODE_32),\n X86_64: (KS_ARCH_X86, KS_MODE_64),\n }"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef avail_archs(self):\n ''' Initialize the dictionary of architectures for disassembling via capstone'''\n\n return {\n ARM32: (CS_ARCH_ARM, CS_MODE_ARM),\n ARM64: (CS_ARCH_ARM64, CS_MODE_LITTLE_ENDIAN),\n ARM_TB: (CS_ARCH_ARM, CS_MODE_THUMB),\n MIPS32: (CS_ARCH_MIPS, CS_MODE_MIPS32),\n MIPS64: (CS_ARCH_MIPS, 
CS_MODE_MIPS64),\n SPARC32: (CS_ARCH_SPARC, CS_MODE_BIG_ENDIAN),\n SPARC64: (CS_ARCH_SPARC, CS_MODE_V9),\n SYSTEMZ: (CS_ARCH_SYSZ, CS_MODE_BIG_ENDIAN),\n X86_16: (CS_ARCH_X86, CS_MODE_16),\n X86_32: (CS_ARCH_X86, CS_MODE_32),\n X86_64: (CS_ARCH_X86, CS_MODE_64),\n }", "response": "Initialize the dictionary of architectures for disassembling via capstone"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getargspec_permissive(func):\n if inspect.ismethod(func):\n func = func.im_func\n\n # Py2 Stdlib uses isfunction(func) which is too strict for Cython-compiled\n # functions though such have perfectly usable func_code, func_defaults.\n if not (hasattr(func, \"func_code\") and hasattr(func, \"func_defaults\")):\n raise TypeError('{!r} missing func_code or func_defaults'.format(func))\n\n args, varargs, varkw = inspect.getargs(func.func_code)\n return inspect.ArgSpec(args, varargs, varkw, func.func_defaults)", "response": "A simple wrapper around inspect. getargspec that handles Cython - compiled functions."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef dispatch(parser, argv=None, add_help_command=True,\n completion=True, pre_call=None,\n output_file=sys.stdout, errors_file=sys.stderr,\n raw_output=False, namespace=None,\n skip_unknown_args=False):\n \"\"\"\n Parses given list of arguments using given parser, calls the relevant\n function and prints the result.\n\n The target function should expect one positional argument: the\n :class:`argparse.Namespace` object. However, if the function is decorated with\n :func:`~argh.decorators.plain_signature`, the positional and named\n arguments from the namespace object are passed to the function instead\n of the object itself.\n\n :param parser:\n\n the ArgumentParser instance.\n\n :param argv:\n\n a list of strings representing the arguments. If `None`, ``sys.argv``\n is used instead. 
Default is `None`.\n\n :param add_help_command:\n\n if `True`, converts first positional argument \"help\" to a keyword\n argument so that ``help foo`` becomes ``foo --help`` and displays usage\n information for \"foo\". Default is `True`.\n\n :param output_file:\n\n A file-like object for output. If `None`, the resulting lines are\n collected and returned as a string. Default is ``sys.stdout``.\n\n :param errors_file:\n\n Same as `output_file` but for ``sys.stderr``.\n\n :param raw_output:\n\n If `True`, results are written to the output file raw, without adding\n whitespaces or newlines between yielded strings. Default is `False`.\n\n :param completion:\n\n If `True`, shell tab completion is enabled. Default is `True`. (You\n will also need to install it.) See :mod:`argh.completion`.\n\n :param skip_unknown_args:\n\n If `True`, unknown arguments do not cause an error\n (`ArgumentParser.parse_known_args` is used).\n\n :param namespace:\n\n An `argparse.Namespace`-like object. By default an\n :class:`ArghNamespace` object is used. Please note that support for\n combined default and nested functions may be broken if a different\n type of object is forced.\n\n By default the exceptions are not wrapped and will propagate. 
The only\n exception that is always wrapped is :class:`~argh.exceptions.CommandError`\n which is interpreted as an expected event so the traceback is hidden.\n You can also mark arbitrary exceptions as \"wrappable\" by using the\n :func:`~argh.decorators.wrap_errors` decorator.\n \"\"\"\n if completion:\n autocomplete(parser)\n\n if argv is None:\n argv = sys.argv[1:]\n\n if add_help_command:\n if argv and argv[0] == 'help':\n argv.pop(0)\n argv.append('--help')\n\n if skip_unknown_args:\n parse_args = parser.parse_known_args\n else:\n parse_args = parser.parse_args\n\n if not namespace:\n namespace = ArghNamespace()\n\n # this will raise SystemExit if parsing fails\n namespace_obj = parse_args(argv, namespace=namespace)\n\n function = _get_function_from_namespace_obj(namespace_obj)\n\n if function:\n lines = _execute_command(function, namespace_obj, errors_file,\n pre_call=pre_call)\n else:\n # no commands declared, can't dispatch; display help message\n lines = [parser.format_usage()]\n\n if output_file is None:\n # user wants a string; we create an internal temporary file-like object\n # and will return its contents as a string\n if sys.version_info < (3,0):\n f = compat.BytesIO()\n else:\n f = compat.StringIO()\n else:\n # normally this is stdout; can be any file\n f = output_file\n\n for line in lines:\n # print the line as soon as it is generated to ensure that it is\n # displayed to the user before anything else happens, e.g.\n # raw_input() is called\n\n io.dump(line, f)\n if not raw_output:\n # in most cases user wants one message per line\n io.dump('\\n', f)\n\n if output_file is None:\n # user wanted a string; return contents of our temporary file-like obj\n f.seek(0)\n return f.read()", "response": "This function dispatches the given list of arguments using given parser and returns the result of the relevant function."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nexecute the given function with the given namespace_obj.", 
"response": "def _execute_command(function, namespace_obj, errors_file, pre_call=None):\n \"\"\"\n Assumes that `function` is a callable. Tries different approaches\n to call it (with `namespace_obj` or with ordinary signature).\n Yields the results line by line.\n\n If :class:`~argh.exceptions.CommandError` is raised, its message is\n appended to the results (i.e. yielded by the generator as a string).\n All other exceptions propagate unless marked as wrappable\n by :func:`wrap_errors`.\n \"\"\"\n if pre_call: # XXX undocumented because I'm unsure if it's OK\n # Actually used in real projects:\n # * https://google.com/search?q=argh+dispatch+pre_call\n # * https://github.com/neithere/argh/issues/63\n pre_call(namespace_obj)\n\n # the function is nested to catch certain exceptions (see below)\n def _call():\n # Actually call the function\n if getattr(function, ATTR_EXPECTS_NAMESPACE_OBJECT, False):\n result = function(namespace_obj)\n else:\n # namespace -> dictionary\n _flat_key = lambda key: key.replace('-', '_')\n all_input = dict((_flat_key(k), v)\n for k,v in vars(namespace_obj).items())\n\n # filter the namespace variables so that only those expected\n # by the actual function will pass\n\n spec = get_arg_spec(function)\n\n positional = [all_input[k] for k in spec.args]\n kwonly = getattr(spec, 'kwonlyargs', [])\n keywords = dict((k, all_input[k]) for k in kwonly)\n\n # *args\n if spec.varargs:\n positional += getattr(namespace_obj, spec.varargs)\n\n # **kwargs\n varkw = getattr(spec, 'varkw', getattr(spec, 'keywords', []))\n if varkw:\n not_kwargs = [DEST_FUNCTION] + spec.args + [spec.varargs] + kwonly\n for k in vars(namespace_obj):\n if k.startswith('_') or k in not_kwargs:\n continue\n keywords[k] = getattr(namespace_obj, k)\n\n result = function(*positional, **keywords)\n\n # Yield the results\n if isinstance(result, (GeneratorType, list, tuple)):\n # yield each line ASAP, convert CommandError message to a line\n for line in result:\n yield line\n else:\n 
# yield non-empty non-iterable result as a single line\n if result is not None:\n yield result\n\n wrappable_exceptions = [CommandError]\n wrappable_exceptions += getattr(function, ATTR_WRAPPED_EXCEPTIONS, [])\n\n try:\n result = _call()\n for line in result:\n yield line\n except tuple(wrappable_exceptions) as e:\n processor = getattr(function, ATTR_WRAPPED_EXCEPTIONS_PROCESSOR,\n lambda e: '{0.__class__.__name__}: {0}'.format(e))\n\n errors_file.write(compat.text_type(processor(e)))\n errors_file.write('\\n')"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef dispatch_command(function, *args, **kwargs):\n parser = argparse.ArgumentParser(formatter_class=PARSER_FORMATTER)\n set_default_command(parser, function)\n dispatch(parser, *args, **kwargs)", "response": "A wrapper for the dispatch function that creates a one - command parser."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef safe_input(prompt):\n\n if sys.version_info < (3,0):\n if isinstance(prompt, compat.text_type):\n # Python 2.x: unicode \u2192 bytes\n encoding = locale.getpreferredencoding() or 'utf-8'\n prompt = prompt.encode(encoding)\n else:\n if not isinstance(prompt, compat.text_type):\n # Python 3.x: bytes \u2192 unicode\n prompt = prompt.decode()\n\n return _input(prompt)", "response": "Prompts user for input. 
Correctly handles prompt message encoding."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nencode given value so it can be written to given file object.", "response": "def encode_output(value, output_file):\n \"\"\"\n Encodes given value so it can be written to given file object.\n\n Value may be Unicode, binary string or any other data type.\n\n The exact behaviour depends on the Python version:\n\n Python 3.x\n\n `sys.stdout` is a `_io.TextIOWrapper` instance that accepts `str`\n (unicode) and breaks on `bytes`.\n\n It is OK to simply assume that everything is Unicode unless special\n handling is introduced in the client code.\n\n Thus, no additional processing is performed.\n\n Python 2.x\n\n `sys.stdout` is a file-like object that accepts `str` (bytes)\n and breaks when `unicode` is passed to `sys.stdout.write()`.\n\n We can expect both Unicode and bytes. They need to be encoded so as\n to match the file object encoding.\n\n The output is binary if the object doesn't explicitly require Unicode.\n\n \"\"\"\n if sys.version_info > (3,0):\n # Python 3: whatever \u2192 unicode\n return compat.text_type(value)\n else:\n # Python 2: handle special cases\n stream_encoding = getattr(output_file, 'encoding', None)\n if stream_encoding:\n if stream_encoding.upper() == 'UTF-8':\n return compat.text_type(value)\n else:\n return value.encode(stream_encoding, 'ignore')\n else:\n # no explicit encoding requirements; force binary\n if isinstance(value, compat.text_type):\n # unicode \u2192 binary\n return value.encode('utf-8')\n else:\n return str(value)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nwriting given line to given output file.", "response": "def dump(raw_data, output_file):\n \"\"\"\n Writes given line to given output file.\n See :func:`encode_output` for details.\n \"\"\"\n data = encode_output(raw_data, output_file)\n output_file.write(data)"} {"SOURCE": "codesearchnet", "instruction": 
"Write a Python 3 script to\nadd support for shell completion via argcomplete_ by patching given `argparse.ArgumentParser` (sub)class. If completion is not enabled, logs a debug-level message.", "response": "def autocomplete(parser):\n \"\"\"\n Adds support for shell completion via argcomplete_ by patching given\n `argparse.ArgumentParser` (sub)class.\n\n If completion is not enabled, logs a debug-level message.\n \"\"\"\n if COMPLETION_ENABLED:\n argcomplete.autocomplete(parser)\n elif 'bash' in os.getenv('SHELL', ''):\n logger.debug('Bash completion not available. Install argcomplete.')"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nwrap for argparse. ArgumentParser. parse_args.", "response": "def parse_args(self, args=None, namespace=None):\n \"\"\"\n Wrapper for :meth:`argparse.ArgumentParser.parse_args`. If `namespace`\n is not defined, :class:`argh.dispatching.ArghNamespace` is used.\n This is required for functions to be properly used as commands.\n \"\"\"\n namespace = namespace or ArghNamespace()\n return super(ArghParser, self).parse_args(args, namespace)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _expand_help(self, action):\n params = dict(vars(action), prog=self._prog)\n for name in list(params):\n if params[name] is argparse.SUPPRESS:\n del params[name]\n for name in list(params):\n if hasattr(params[name], '__name__'):\n params[name] = params[name].__name__\n if params.get('choices') is not None:\n choices_str = ', '.join([str(c) for c in params['choices']])\n params['choices'] = choices_str\n\n # XXX this is added in Argh vs. 
argparse.ArgumentDefaultsHelpFormatter\n # (avoiding empty strings, otherwise Argparse would die with\n # an IndexError in _format_action)\n #\n if 'default' in params:\n if params['default'] is None:\n params['default'] = '-'\n else:\n params['default'] = repr(params['default'])\n #\n # /\n\n return self._get_help_string(action) % params", "response": "This method is copied verbatim from ArgumentDefaultsHelpFormatter. _expand_help"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _guess(kwargs):\n guessed = {}\n\n # Parser actions that accept argument 'type'\n TYPE_AWARE_ACTIONS = 'store', 'append'\n\n # guess type/action from default value\n value = kwargs.get('default')\n if value is not None:\n if isinstance(value, bool):\n if kwargs.get('action') is None:\n # infer action from default value\n guessed['action'] = 'store_false' if value else 'store_true'\n elif kwargs.get('type') is None:\n # infer type from default value\n # (make sure that action handler supports this keyword)\n if kwargs.get('action', 'store') in TYPE_AWARE_ACTIONS:\n guessed['type'] = type(value)\n\n # guess type from choices (first item)\n if kwargs.get('choices') and 'type' not in list(guessed) + list(kwargs):\n guessed['type'] = type(kwargs['choices'][0])\n\n return dict(kwargs, **guessed)", "response": "Guesses the type of the object in the given argument specification."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set_default_command(parser, function):\n if parser._subparsers:\n _require_support_for_default_command_with_subparsers()\n\n spec = get_arg_spec(function)\n\n declared_args = getattr(function, ATTR_ARGS, [])\n inferred_args = list(_get_args_from_signature(function))\n\n if inferred_args and declared_args:\n # We've got a mixture of declared and inferred arguments\n\n # a mapping of \"dest\" strings to argument declarations.\n #\n # * a \"dest\" string is a 
normalized form of argument name, i.e.:\n #\n # '-f', '--foo' \u2192 'foo'\n # 'foo-bar' \u2192 'foo_bar'\n #\n # * argument declaration is a dictionary representing an argument;\n # it is obtained either from _get_args_from_signature() or from\n # an @arg decorator (as is).\n #\n dests = OrderedDict()\n\n for argspec in inferred_args:\n dest = _get_parser_param_kwargs(parser, argspec)['dest']\n dests[dest] = argspec\n\n for declared_kw in declared_args:\n # an argument is declared via decorator\n dest = _get_dest(parser, declared_kw)\n if dest in dests:\n # the argument is already known from function signature\n #\n # now make sure that this declared arg conforms to the function\n # signature and therefore only refines an inferred arg:\n #\n # @arg('my-foo') maps to func(my_foo)\n # @arg('--my-bar') maps to func(my_bar=...)\n\n # either both arguments are positional or both are optional\n decl_positional = _is_positional(declared_kw['option_strings'])\n infr_positional = _is_positional(dests[dest]['option_strings'])\n if decl_positional != infr_positional:\n kinds = {True: 'positional', False: 'optional'}\n raise AssemblingError(\n '{func}: argument \"{dest}\" declared as {kind_i} '\n '(in function signature) and {kind_d} (via decorator)'\n .format(\n func=function.__name__,\n dest=dest,\n kind_i=kinds[infr_positional],\n kind_d=kinds[decl_positional],\n ))\n\n # merge explicit argument declaration into the inferred one\n # (e.g. 
`help=...`)\n dests[dest].update(**declared_kw)\n else:\n # the argument is not in function signature\n varkw = getattr(spec, 'varkw', getattr(spec, 'keywords', []))\n if varkw:\n # function accepts **kwargs; the argument goes into it\n dests[dest] = declared_kw\n else:\n # there's no way we can map the argument declaration\n # to function signature\n xs = (dests[x]['option_strings'] for x in dests)\n raise AssemblingError(\n '{func}: argument {flags} does not fit '\n 'function signature: {sig}'.format(\n flags=', '.join(declared_kw['option_strings']),\n func=function.__name__,\n sig=', '.join('/'.join(x) for x in xs)))\n\n # pack the modified data back into a list\n inferred_args = dests.values()\n\n command_args = inferred_args or declared_args\n\n # add types, actions, etc. (e.g. default=3 implies type=int)\n command_args = [_guess(x) for x in command_args]\n\n for draft in command_args:\n draft = draft.copy()\n if 'help' not in draft:\n draft.update(help=DEFAULT_ARGUMENT_TEMPLATE)\n dest_or_opt_strings = draft.pop('option_strings')\n if parser.add_help and '-h' in dest_or_opt_strings:\n dest_or_opt_strings = [x for x in dest_or_opt_strings if x != '-h']\n completer = draft.pop('completer', None)\n try:\n action = parser.add_argument(*dest_or_opt_strings, **draft)\n if COMPLETION_ENABLED and completer:\n action.completer = completer\n except Exception as e:\n raise type(e)('{func}: cannot add arg {args}: {msg}'.format(\n args='/'.join(dest_or_opt_strings), func=function.__name__, msg=e))\n\n if function.__doc__ and not parser.description:\n parser.description = function.__doc__\n parser.set_defaults(**{\n DEST_FUNCTION: function,\n })", "response": "Sets the default command for given parser."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd given functions as commands to given parser.", "response": "def add_commands(parser, functions, namespace=None, namespace_kwargs=None,\n func_kwargs=None,\n # deprecated args:\n 
title=None, description=None, help=None):\n \"\"\"\n Adds given functions as commands to given parser.\n\n :param parser:\n\n an :class:`argparse.ArgumentParser` instance.\n\n :param functions:\n\n a list of functions. A subparser is created for each of them.\n If the function is decorated with :func:`~argh.decorators.arg`, the\n arguments are passed to :class:`argparse.ArgumentParser.add_argument`.\n See also :func:`~argh.dispatching.dispatch` for requirements\n concerning function signatures. The command name is inferred from the\n function name. Note that the underscores in the name are replaced with\n hyphens, i.e. function name \"foo_bar\" becomes command name \"foo-bar\".\n\n :param namespace:\n\n an optional string representing the group of commands. For example, if\n a command named \"hello\" is added without the namespace, it will be\n available as \"prog.py hello\"; if the namespace is specified as \"greet\",\n then the command will be accessible as \"prog.py greet hello\". The\n namespace itself is not callable, so \"prog.py greet\" will fail and only\n display a help message.\n\n :param func_kwargs:\n\n a `dict` of keyword arguments to be passed to each nested ArgumentParser\n instance created per command (i.e. per function). Members of this\n dictionary have the highest priority, so a function's docstring is\n overridden by a `help` in `func_kwargs` (if present).\n\n :param namespace_kwargs:\n\n a `dict` of keyword arguments to be passed to the nested ArgumentParser\n instance under given `namespace`.\n\n Deprecated params that should be moved into `namespace_kwargs`:\n\n :param title:\n\n passed to :meth:`argparse.ArgumentParser.add_subparsers` as `title`.\n\n .. deprecated:: 0.26.0\n\n Please use `namespace_kwargs` instead.\n\n :param description:\n\n passed to :meth:`argparse.ArgumentParser.add_subparsers` as\n `description`.\n\n .. 
deprecated:: 0.26.0\n\n Please use `namespace_kwargs` instead.\n\n :param help:\n\n passed to :meth:`argparse.ArgumentParser.add_subparsers` as `help`.\n\n .. deprecated:: 0.26.0\n\n Please use `namespace_kwargs` instead.\n\n .. note::\n\n This function modifies the parser object. Generally side effects are\n bad practice but we don't seem to have any choice as ArgumentParser is\n pretty opaque.\n You may prefer :class:`~argh.helpers.ArghParser.add_commands` for a bit\n more predictable API.\n\n .. note::\n\n An attempt to add commands to a parser which already has a default\n function (e.g. added with :func:`~argh.assembling.set_default_command`)\n results in `AssemblingError`.\n\n \"\"\"\n # FIXME \"namespace\" is a correct name but it clashes with the \"namespace\"\n # that represents arguments (argparse.Namespace and our ArghNamespace).\n # We should rename the argument here.\n\n if DEST_FUNCTION in parser._defaults:\n _require_support_for_default_command_with_subparsers()\n\n namespace_kwargs = namespace_kwargs or {}\n\n # FIXME remove this by 1.0\n #\n if title:\n warnings.warn('argument `title` is deprecated in add_commands(),'\n ' use `namespace_kwargs` instead', DeprecationWarning)\n namespace_kwargs['description'] = title\n if help:\n warnings.warn('argument `help` is deprecated in add_commands(),'\n ' use `namespace_kwargs` instead', DeprecationWarning)\n namespace_kwargs['help'] = help\n if description:\n warnings.warn('argument `description` is deprecated in add_commands(),'\n ' use `namespace_kwargs` instead', DeprecationWarning)\n namespace_kwargs['description'] = description\n #\n # /\n\n subparsers_action = get_subparsers(parser, create=True)\n\n if namespace:\n # Make a nested parser and init a deeper _SubParsersAction under it.\n\n # Create a named group of commands. 
It will be listed along with\n # root-level commands in ``app.py --help``; in that context its `title`\n # can be used as a short description on the right side of its name.\n # Normally `title` is shown above the list of commands\n # in ``app.py my-namespace --help``.\n subsubparser_kw = {\n 'help': namespace_kwargs.get('title'),\n }\n subsubparser = subparsers_action.add_parser(namespace, **subsubparser_kw)\n subparsers_action = subsubparser.add_subparsers(**namespace_kwargs)\n else:\n assert not namespace_kwargs, ('`namespace_kwargs` only makes sense '\n 'with `namespace`.')\n\n for func in functions:\n cmd_name, func_parser_kwargs = _extract_command_meta_from_func(func)\n\n # override any computed kwargs by manually supplied ones\n if func_kwargs:\n func_parser_kwargs.update(func_kwargs)\n\n # create and set up the parser for this command\n command_parser = subparsers_action.add_parser(cmd_name, **func_parser_kwargs)\n set_default_command(command_parser, func)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_subcommands(parser, namespace, functions, **namespace_kwargs):\n add_commands(parser, functions, namespace=namespace,\n namespace_kwargs=namespace_kwargs)", "response": "A wrapper for add_commands."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the ArgumentParser's subparsers.", "response": "def get_subparsers(parser, create=False):\n \"\"\"\n Returns the :class:`argparse._SubParsersAction` instance for given\n :class:`ArgumentParser` instance as would have been returned by\n :meth:`ArgumentParser.add_subparsers`. The problem with the latter is that\n it only works once and raises an exception on the second attempt, and the\n public API seems to lack a method to get *existing* subparsers.\n\n :param create:\n If `True`, creates the subparser if it does not exist. 
Default is\n `False`.\n\n \"\"\"\n # note that ArgumentParser._subparsers is *not* what is returned by\n # ArgumentParser.add_subparsers().\n if parser._subparsers:\n actions = [a for a in parser._actions\n if isinstance(a, argparse._SubParsersAction)]\n assert len(actions) == 1\n return actions[0]\n else:\n if create:\n return parser.add_subparsers()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_arg_spec(function):\n while hasattr(function, '__wrapped__'):\n function = function.__wrapped__\n spec = compat.getargspec(function)\n if inspect.ismethod(function):\n spec = spec._replace(args=spec.args[1:])\n return spec", "response": "Returns argument specification for given function."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset given string as command name instead of the function name. The string is used verbatim without further processing. Usage:: @named('load') def do_load_some_stuff_and_keep_the_original_function_name(args): ... The resulting command will be available only as ``load``. To add aliases without renaming the command, check :func:`aliases`. .. versionadded:: 0.19", "response": "def named(new_name):\n \"\"\"\n Sets given string as command name instead of the function name.\n The string is used verbatim without further processing.\n\n Usage::\n\n @named('load')\n def do_load_some_stuff_and_keep_the_original_function_name(args):\n ...\n\n The resulting command will be available only as ``load``. To add aliases\n without renaming the command, check :func:`aliases`.\n\n .. 
versionadded:: 0.19\n \"\"\"\n def wrapper(func):\n setattr(func, ATTR_NAME, new_name)\n return func\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef aliases(*names):\n def wrapper(func):\n setattr(func, ATTR_ALIASES, names)\n return func\n return wrapper", "response": "Decorator for aliases for a given function."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndeclare an argument for given function. Does not register the function anywhere, nor does it modify the function in any way. The signature of the decorator matches that of :meth:`argparse.ArgumentParser.add_argument`, only some keywords are not required if they can be easily guessed (e.g. you don't have to specify type or action when an `int` or `bool` default value is supplied). Typical use cases: - In combination with :func:`expects_obj` (which is not recommended); - in combination with ordinary function signatures to add details that cannot be expressed with that syntax (e.g. help message). Usage:: from argh import arg @arg('path', help='path to the file to load') @arg('--format', choices=['yaml','json']) @arg('-v', '--verbosity', choices=range(0,3), default=2) def load(path, something=None, format='json', dry_run=False, verbosity=1): loaders = {'json': json.load, 'yaml': yaml.load} loader = loaders[args.format] data = loader(args.path) if not args.dry_run: if verbosity < 1: print('saving to the database') put_to_database(data) In this example: - `path` declaration is extended with `help`; - `format` declaration is extended with `choices`; - `dry_run` declaration is not duplicated; - `verbosity` is extended with `choices` and the default value is overridden. (If both function signature and `@arg` define a default value for an argument, `@arg` wins.) .. note:: It is recommended to avoid using this decorator unless there's no way to tune the argument's behaviour or presentation using ordinary function signatures. 
Readability counts, don't repeat yourself.", "response": "def arg(*args, **kwargs):\n \"\"\"\n Declares an argument for given function. Does not register the function\n anywhere, nor does it modify the function in any way.\n\n The signature of the decorator matches that of\n :meth:`argparse.ArgumentParser.add_argument`, only some keywords are not\n required if they can be easily guessed (e.g. you don't have to specify type\n or action when an `int` or `bool` default value is supplied).\n\n Typical use cases:\n\n - In combination with :func:`expects_obj` (which is not recommended);\n - in combination with ordinary function signatures to add details that\n cannot be expressed with that syntax (e.g. help message).\n\n Usage::\n\n from argh import arg\n\n @arg('path', help='path to the file to load')\n @arg('--format', choices=['yaml','json'])\n @arg('-v', '--verbosity', choices=range(0,3), default=2)\n def load(path, something=None, format='json', dry_run=False, verbosity=1):\n loaders = {'json': json.load, 'yaml': yaml.load}\n loader = loaders[args.format]\n data = loader(args.path)\n if not args.dry_run:\n if verbosity < 1:\n print('saving to the database')\n put_to_database(data)\n\n In this example:\n\n - `path` declaration is extended with `help`;\n - `format` declaration is extended with `choices`;\n - `dry_run` declaration is not duplicated;\n - `verbosity` is extended with `choices` and the default value is\n overridden. (If both function signature and `@arg` define a default\n value for an argument, `@arg` wins.)\n\n .. note::\n\n It is recommended to avoid using this decorator unless there's no way\n to tune the argument's behaviour or presentation using ordinary\n function signatures. 
Readability counts, don't repeat yourself.\n\n \"\"\"\n def wrapper(func):\n declared_args = getattr(func, ATTR_ARGS, [])\n # The innermost decorator is called first but appears last in the code.\n # We need to preserve the expected order of positional arguments, so\n # the outermost decorator inserts its value before the innermost's:\n declared_args.insert(0, dict(option_strings=args, **kwargs))\n setattr(func, ATTR_ARGS, declared_args)\n return func\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef wrap_errors(errors=None, processor=None, *args):\n\n def wrapper(func):\n if errors:\n setattr(func, ATTR_WRAPPED_EXCEPTIONS, errors)\n\n if processor:\n setattr(func, ATTR_WRAPPED_EXCEPTIONS_PROCESSOR, processor)\n\n return func\n return wrapper", "response": "Decorator that wraps given exceptions into\n ."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nselects the provided column names from the Redis session.", "response": "def select(self, *column_names, **kwargs):\n '''\n Select the provided column names from the model, do not return an entity,\n do not involve the rom session, just get the raw and/or processed column\n data from Redis.\n\n Keyword-only arguments:\n\n * *include_pk=False* - whether to include the primary key in the\n returned data (we need to get this in some cases, so we fetch\n it anyway; if you want it, we can return it to you - just be\n careful with the namedtuple option - see the warning below)\n * *decode=True* - whether to take a pass through normal data\n decoding in the model (will not return an entity/model)\n * *ff=_dict_data_factory* - the type of data to return from the\n select after all filters/limits/order_by are applied\n\n .. warning:: If ``include_pk = True`` and if you don't provide\n the primary key column, it will be appended to your list of columns.\n\n .. 
note:: if you want to provide a new factory function for the returned\n data, it must be of the form (below is the actual dict factory\n function)\n\n ::\n\n def _dict_data_factory(columns):\n _dict = dict\n _zip = zip\n def make(data):\n # do whatever you need to turn your tuple of columns plus\n # your list of data into whatever you want:\n return _dict(_zip(columns, data))\n return make\n\n Available factory functions:\n\n * *``rom.query._dict_data_factory``* - default\n * *``rom.query._list_data_factory``* - lowest overhead, as the\n ``data`` passed in above is a list that you can do anything to\n * *``rom.query._tuple_data_factory``* - when you want tuples instead\n * *``rom.query._namedtuple_data_factory``* - get namedtuples, see\n warning below\n\n .. warning:: If you use the ``_namedtuple_data_factory``, and your\n columns include underscore prefixes, they will be stripped. If this\n results in a name collision, you *will* get an exception. If you want\n different behavior, write your own 20 line factory function that\n does exactly what you want, and pass it; they are really easy!\n\n '''\n include_pk = kwargs.pop('include_pk', False)\n decode = kwargs.pop('decode', True)\n ff = kwargs.pop('ff', _dict_data_factory)\n\n if isinstance(column_names[0], (list, tuple)):\n column_names = column_names[0]\n\n if not column_names:\n raise QueryError(\"Must provide at least one column to query for raw data\")\n\n if len(set(column_names)) != len(column_names):\n raise QueryError(\"Column names must be unique\")\n\n missing = [c for c in column_names if c not in self._model._columns]\n if missing:\n raise QueryError(\"No such columns known: %r\"%(missing,))\n\n remove_last = False\n if self._model._pkey not in column_names:\n column_names += (self._model._pkey,)\n remove_last = not include_pk\n\n return self.replace(select=(column_names, decode, remove_last, ff))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn 
a new Query object with the same attributes and values replaced.", "response": "def replace(self, **kwargs):\n '''\n Copy the Query object, optionally replacing the filters, order_by, or\n limit information on the copy. This is mostly an internal detail that\n you can ignore.\n '''\n data = {\n 'model': self._model,\n 'filters': self._filters,\n 'order_by': self._order_by,\n 'limit': self._limit,\n 'select': self._select,\n }\n data.update(**kwargs)\n return Query(**data)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a new QuerySet with only the entries that start with the specified prefix.", "response": "def startswith(self, **kwargs):\n '''\n When provided with keyword arguments of the form ``col=prefix``, this\n will limit the entities returned to those that have a word with the\n provided prefix in the specified column(s). This requires that the\n ``prefix=True`` option was provided during column definition.\n\n Usage::\n\n User.query.startswith(email='user@').execute()\n\n '''\n new = []\n for k, v in kwargs.items():\n v = self._check(k, v, 'startswith')\n new.append(Prefix(k, v))\n return self.replace(filters=self._filters+tuple(new))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef endswith(self, **kwargs):\n '''\n When provided with keyword arguments of the form ``col=suffix``, this\n will limit the entities returned to those that have a word with the\n provided suffix in the specified column(s). 
This requires that the\n ``suffix=True`` option was provided during column definition.\n\n Usage::\n\n User.query.endswith(email='@gmail.com').execute()\n\n '''\n new = []\n for k, v in kwargs.items():\n v = self._check(k, v, 'endswith')\n new.append(Suffix(k, v[::-1]))\n return self.replace(filters=self._filters+tuple(new))", "response": "Returns a new QuerySet with only the entries with the specified suffix."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nliking the given patterns and return a new string containing only the entries that match the provided patterns.", "response": "def like(self, **kwargs):\n '''\n When provided with keyword arguments of the form ``col=pattern``, this\n will limit the entities returned to those that include the provided\n pattern. Note that 'like' queries require that the ``prefix=True``\n option must have been provided as part of the column definition.\n\n Patterns allow for 4 wildcard characters, whose semantics are as\n follows:\n\n * *?* - will match 0 or 1 of any character\n * *\\** - will match 0 or more of any character\n * *+* - will match 1 or more of any character\n * *!* - will match exactly 1 of any character\n\n As an example, imagine that you have enabled the required prefix\n matching on your ``User.email`` column. And lets say that you want to\n find everyone with an email address that contains the name 'frank'\n before the ``@`` sign. You can use either of the following patterns\n to discover those users.\n\n * *\\*frank\\*@*\n * *\\*frank\\*@*\n\n .. 
note:: Like queries implicitly start at the beginning of strings\n checked, so if you want to match a pattern that doesn't start at\n the beginning of a string, you should prefix it with one of the\n wildcard characters (like ``*`` as we did with the 'frank' pattern).\n '''\n new = []\n for k, v in kwargs.items():\n v = self._check(k, v, 'like')\n new.append(Pattern(k, v))\n return self.replace(filters=self._filters+tuple(new))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a new QuerySet with the order_by clause applied to the current QuerySet.", "response": "def order_by(self, column):\n '''\n When provided with a column name, will sort the results of your query::\n\n # returns all users, ordered by the created_at column in\n # descending order\n User.query.order_by('-created_at').execute()\n '''\n cname = column.lstrip('-')\n col = self._check(cname)\n if type(col).__name__ in ('String', 'Text', 'Json') and col._keygen.__name__ not in _STRING_SORT_KEYGENS:\n warnings.warn(\"You are trying to order by a non-numeric column %r. 
\"\n \"Unless you have provided your own keygen or are using \"\n \"one of the sortable keygens: (%s), this probably won't \"\n \"work the way you expect it to.\"%(cname, STRING_SORT_KEYGENS_STR),\n stacklevel=2)\n\n return self.replace(order_by=column)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the total count of the objects that match the specified filters.", "response": "def count(self):\n '''\n Will return the total count of the objects that match the specified\n filters.::\n\n # counts the number of users created in the last 24 hours\n User.query.filter(created_at=(time.time()-86400, time.time())).count()\n '''\n filters = self._filters\n if self._order_by:\n filters += (self._order_by.lstrip('-'),)\n if not filters:\n # We can actually count entities here...\n size = _connect(self._model).hlen(self._model._namespace + '::')\n limit = self._limit or (0, 2**64)\n size = max(size - max(limit[0], 0), 0)\n return min(size, limit[1])\n\n return self._model._gindex.count(_connect(self._model), filters)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\niterating over the results of the query.", "response": "def iter_result(self, timeout=30, pagesize=100, no_hscan=False):\n '''\n Iterate over the results of your query instead of getting them all with\n `.all()`. Will only perform a single query. 
If you expect that your\n processing will take more than 30 seconds to process 100 items, you\n should pass `timeout` and `pagesize` to reflect an appropriate timeout\n and page size to fetch at once.\n\n Usage::\n\n for user in User.query.endswith(email='@gmail.com').iter_result():\n # do something with user\n ...\n\n '''\n if not self._filters and not self._order_by:\n if self._model._columns[self._model._pkey]._index:\n return self._iter_all_pkey()\n conn = _connect(self._model)\n version = list(map(int, conn.info()['redis_version'].split('.')[:2]))\n if version >= [2,8] and not no_hscan:\n return self._iter_all_hscan()\n return self._iter_all()\n return self._iter_results(timeout, pagesize)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cached_result(self, timeout):\n '''\n This will execute the query, returning the key where a ZSET of your\n results will be stored for pagination, further operations, etc.\n\n The timeout must be a positive integer number of seconds for which to\n set the expiration time on the key (this is to ensure that any cached\n query results are eventually deleted, unless you make the explicit\n step to use the PERSIST command).\n\n .. 
note:: Limit clauses are ignored and not passed.\n\n Usage::\n\n ukey = User.query.endswith(email='@gmail.com').cached_result(30)\n for i in xrange(0, conn.zcard(ukey), 100):\n # refresh the expiration\n conn.expire(ukey, 30)\n users = User.get(conn.zrange(ukey, i, i+99))\n ...\n '''\n if not (self._filters or self._order_by):\n raise QueryError(\"You are missing filter or order criteria\")\n timeout = int(timeout)\n if timeout < 1:\n raise QueryError(\"You must specify a timeout >= 1, you gave %r\"%timeout)\n return self._model._gindex.search(\n _connect(self._model), self._filters, self._order_by, timeout=timeout)", "response": "This will execute the query and return the key where a ZSET of your results is stored for pagination, further operations, etc."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef first(self):\n '''\n Returns only the first result from the query, if any.\n '''\n lim = [0, 1]\n if self._limit:\n lim[0] = self._limit[0]\n if not self._filters and not self._order_by:\n for ent in self:\n return ent\n return None\n ids = self.limit(*lim)._search()\n if ids:\n return self._model.get(ids[0])\n return None", "response": "Returns only the first result from the query if any."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete(self, blocksize=100):\n '''\n Will delete the entities that match at the time the query is executed.\n\n Used like::\n\n MyModel.query.filter(email=...).delete()\n MyModel.query.endswith(email='@host.com').delete()\n\n .. 
warning:: can't be used on models on either side of a ``OneToMany``,\n ``ManyToOne``, or ``OneToOne`` relationship.\n '''\n\n from .columns import MODELS_REFERENCED\n if not self._model._no_fk or self._model._namespace in MODELS_REFERENCED:\n raise QueryError(\"Can't delete entities of models with foreign key relationships\")\n\n de = []\n i = 0\n for result in self.iter_result(pagesize=blocksize):\n de.append(result)\n i += 1\n if i >= blocksize:\n session.delete(de) # one round-trip to delete \"chunk\" items\n del de[:]\n i = 0\n\n if de:\n session.delete(de)", "response": "Deletes the items in the database."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _on_delete(ent):\n '''\n This function handles all on_delete semantics defined on OneToMany columns.\n\n This function only exists because 'cascade' is *very* hard to get right.\n '''\n seen_d = set([ent._pk])\n to_delete = [ent]\n seen_s = set()\n to_save = []\n\n def _set_default(ent, attr, de=NULL):\n pk = ent._pk\n if pk in seen_d:\n # going to be deleted, don't need to modify\n return\n\n col = ent.__class__._columns[attr]\n de = de if de is not NULL else col._default\n if de in (None, NULL):\n setattr(ent, attr, None)\n elif callable(col._default):\n setattr(ent, attr, col._default())\n else:\n setattr(ent, attr, col._default)\n\n if pk not in seen_s:\n seen_s.add(pk)\n to_save.append(ent)\n\n for self in to_delete:\n for tbl, attr, action in MODELS_REFERENCED.get(self._namespace, ()):\n if action == 'no action':\n continue\n\n refs = MODELS[tbl].get_by(**{attr: self.id})\n if not refs:\n continue\n\n if action == 'restrict':\n # raise the exception here for a better traceback\n raise _restrict(self, attr, refs)\n elif action == 'set null':\n for ref in refs:\n _set_default(ref, attr, None)\n continue\n elif action == 'set default':\n for ref in refs:\n _set_default(ref, attr)\n continue\n\n # otherwise col._on_delete == 'cascade'\n for ent 
in (refs if isinstance(refs, list) else [refs]):\n if ent._pk not in seen_d:\n seen_d.add(ent._pk)\n to_delete.append(ent)\n\n # If we got here, then to_delete includes all items to delete. Let's delete\n # them!\n for self in to_delete:\n self.delete(skip_on_delete_i_really_mean_it=SKIP_ON_DELETE)\n for self in to_save:\n # Careful not to resurrect deleted entities\n if self._pk not in seen_d:\n self.save()", "response": "This function handles all on_delete semantics defined on OneToMany columns."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nperforms the actual prefix suffix and pattern match operations.", "response": "def redis_prefix_lua(conn, dest, index, prefix, is_first, pattern=None):\n '''\n Performs the actual prefix, suffix, and pattern match operations. \n '''\n tkey = '%s:%s'%(index.partition(':')[0], uuid.uuid4())\n start, end = _start_end(prefix)\n return _redis_prefix_lua(conn,\n [dest, tkey, index],\n [start, end, pattern or prefix, int(pattern is not None), int(bool(is_first))]\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nestimating the total work needed to calculate the prefix match over the given index with the provided prefix.", "response": "def estimate_work_lua(conn, index, prefix):\n '''\n Estimates the total work necessary to calculate the prefix match over the\n given index with the provided prefix.\n '''\n if index.endswith(':idx'):\n args = [] if not prefix else list(prefix)\n if args:\n args[0] = '-inf' if args[0] is None else repr(float(args[0]))\n args[1] = 'inf' if args[1] is None else repr(float(args[1]))\n return _estimate_work_lua(conn, [index], args, force_eval=True)\n elif index.endswith(':geo'):\n return _estimate_work_lua(conn, [index], filter(None, [prefix]), force_eval=True)\n\n start, end = _start_end(prefix)\n return _estimate_work_lua(conn, [index], [start, '(' + end], force_eval=True)"} {"SOURCE": "codesearchnet", "instruction": "Implement a 
Python 3 function for\nsearching for model ids that match the provided filters.", "response": "def search(self, conn, filters, order_by, offset=None, count=None, timeout=None):\n '''\n Search for model ids that match the provided filters.\n\n Arguments:\n\n * *filters* - A list of filters that apply to the search of one of\n the following two forms:\n\n 1. ``'column:string'`` - a plain string will match a word in a\n text search on the column\n\n .. note:: Read the documentation about the ``Query`` object\n for what is actually passed during text search\n\n 2. ``('column', min, max)`` - a numeric column range search,\n between min and max (inclusive by default)\n\n .. note:: Read the documentation about the ``Query`` object\n for information about open-ended ranges\n\n 3. ``['column:string1', 'column:string2']`` - will match any\n of the provided words in a text search on the column\n\n 4. ``Prefix('column', 'prefix')`` - will match prefixes of\n words in a text search on the column\n\n 5. ``Suffix('column', 'suffix')`` - will match suffixes of\n words in a text search on the column\n\n 6. ``Pattern('column', 'pattern')`` - will match patterns over\n words in a text search on the column\n\n * *order_by* - A string that names the numeric column by which to\n sort the results by. Prefixing with '-' will return results in\n descending order\n\n .. note:: While you can technically pass a non-numeric index as an\n *order_by* clause, the results will basically be to order the\n results by string comparison of the ids (10 will come before 2).\n\n .. note:: If you omit the ``order_by`` argument, results will be\n ordered by the last filter. If the last filter was a text\n filter, see the previous note. 
If the last filter was numeric,\n then results will be ordered by that result.\n\n * *offset* - A numeric starting offset for results\n * *count* - The maximum number of results to return from the query\n '''\n # prepare the filters\n pipe, intersect, temp_id = self._prepare(conn, filters)\n\n # handle ordering\n if order_by:\n reverse = order_by and order_by.startswith('-')\n order_clause = '%s:%s:idx'%(self.namespace, order_by.lstrip('-'))\n intersect(temp_id, {temp_id:0, order_clause: -1 if reverse else 1})\n\n # handle returning the temporary result key\n if timeout is not None:\n pipe.expire(temp_id, timeout)\n pipe.execute()\n return temp_id\n\n offset = offset if offset is not None else 0\n end = (offset + count - 1) if count and count > 0 else -1\n pipe.zrange(temp_id, offset, end)\n pipe.delete(temp_id)\n return pipe.execute()[-2]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef count(self, conn, filters):\n '''\n Returns the count of the items that match the provided filters.\n\n For the meaning of what the ``filters`` argument means, see the\n ``.search()`` method docs.\n '''\n pipe, intersect, temp_id = self._prepare(conn, filters)\n pipe.zcard(temp_id)\n pipe.delete(temp_id)\n return pipe.execute()[-2]", "response": "Returns the number of items that match the provided filters."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _connect(obj):\n '''\n Tries to get the _conn attribute from a model. 
Barring that, gets the\n global default connection using other methods.\n '''\n from .columns import MODELS\n if isinstance(obj, MODELS['Model']):\n obj = obj.__class__\n if hasattr(obj, '_conn'):\n return obj._conn\n if hasattr(obj, 'CONN'):\n return obj.CONN\n return get_connection()", "response": "Get the connection to the\n model."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a Lock object for the given entity.", "response": "def EntityLock(entity, acquire_timeout, lock_timeout):\n '''\n Useful when you want exclusive access to an entity across all writers.::\n\n # example\n import rom\n\n class Document(rom.Model):\n owner = rom.ManyToOne('User', on_delete='restrict')\n ...\n\n def change_owner(document, new_owner):\n with rom.util.EntityLock(document, 5, 90):\n document.owner = new_owner\n document.save()\n\n '''\n return Lock(entity._connection, entity._pk, acquire_timeout, lock_timeout)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add(self, obj):\n '''\n Adds an entity to the session.\n '''\n if self.null_session:\n return\n self._init()\n pk = obj._pk\n if not pk.endswith(':None'):\n self.known[pk] = obj\n self.wknown[pk] = obj", "response": "Adds an entity to the session."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef forget(self, obj):\n '''\n Forgets about an entity (automatically called when an entity is\n deleted). 
Call this to ensure that an entity that you've modified is\n not automatically saved on ``session.commit()`` .\n '''\n self._init()\n self.known.pop(obj._pk, None)\n self.wknown.pop(obj._pk, None)", "response": "Forgets about an entity."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget an entity from the session based on the primary key.", "response": "def get(self, pk):\n '''\n Fetches an entity from the session based on primary key.\n '''\n self._init()\n return self.known.get(pk) or self.wknown.get(pk)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef flush(self, full=False, all=False, force=False):\n '''\n Call ``.save()`` on all modified entities in the session. Use when you\n want to flush changes to Redis, but don't want to lose your local\n session cache.\n\n See the ``.commit()`` method for arguments and their meanings.\n '''\n self._init()\n\n return self.save(*self.known.values(), full=full, all=all, force=force)", "response": "Flush all modified entities in the cache."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef commit(self, full=False, all=False, force=False):\n '''\n Call ``.save()`` on all modified entities in the session. Also forgets\n all known entities in the session, so this should only be called at\n the end of a request.\n\n Arguments:\n\n * *full* - pass ``True`` to force save full entities, not only\n changes\n * *all* - pass ``True`` to save all entities known, not only those\n entities that have been modified.\n * *force* - pass ``True`` to force-save all entities known, ignoring\n DataRaceError and EntityDeletedError exceptions\n '''\n changes = self.flush(full, all, force)\n self.known = {}\n return changes", "response": "Call ``.save()`` on all modified entities in the session."}
{"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef save(self, *objects, **kwargs):\n '''\n This method is an alternate API for saving many entities (possibly not\n tracked by the session). You can call::\n\n session.save(obj)\n session.save(obj1, obj2, ...)\n session.save([obj1, obj2, ...])\n\n And the entities will be flushed to Redis.\n\n You can pass the keyword arguments ``full``, ``all``, and ``force`` with\n the same meaning and semantics as the ``.commit()`` method.\n '''\n from rom import Model\n full = kwargs.get('full')\n all = kwargs.get('all')\n force = kwargs.get('force')\n changes = 0\n items = deque()\n items.extend(objects)\n while items:\n o = items.popleft()\n if isinstance(o, (list, tuple)):\n items.extendleft(reversed(o))\n elif isinstance(o, Model):\n if not o._deleted and (all or o._modified):\n changes += o.save(full, force)\n\n else:\n raise ORMError(\n \"Cannot save an object that is not an instance of a Model (you provided %r)\"%(\n o,))\n\n return changes", "response": "Save many objects in the database."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete(self, *objects, **kwargs):\n '''\n This method offers the ability to delete multiple entities in a single\n round trip to Redis (assuming your models are all stored on the same\n server). You can call::\n\n\n session.delete(obj)\n session.delete(obj1, obj2, ...)\n session.delete([obj1, obj2, ...])\n\n The keyword argument ``force=True`` can be provided, which can force\n the deletion of an entity again, even if we believe it to already be\n deleted.\n\n If ``force=True``, we won't re-call the object's ``_before_delete()``\n method, but we will re-call ``_after_delete()``.\n\n .. 
note:: Objects are automatically dropped from the session after\n delete for the sake of cache coherency.\n '''\n force = kwargs.get('force')\n from .model import Model, SKIP_ON_DELETE\n\n flat = []\n items = deque()\n items.extend(objects)\n types = set()\n # flatten what was passed in, more or less arbitrarily deep\n while items:\n o = items.popleft()\n if isinstance(o, (list, tuple)):\n items.extendleft(reversed(o))\n elif isinstance(o, Model):\n if force or not o._deleted:\n flat.append(o)\n types.add(type(o))\n\n # make sure we can bulk delete everything we've been requested to\n from .columns import MODELS_REFERENCED\n for t in types:\n if not t._no_fk or t._namespace in MODELS_REFERENCED:\n raise ORMError(\"Can't bulk delete entities of models with foreign key relationships\")\n\n c2p = {}\n for o in flat:\n # prepare delete\n if not o._deleted:\n o._before_delete()\n\n # make sure we've got connections\n c = o._connection\n if c not in c2p:\n c2p[c] = c.pipeline()\n\n # use our existing delete, and pass through a pipeline :P\n o.delete(_conn=c2p[c],\n skip_on_delete_i_really_mean_it=SKIP_ON_DELETE)\n\n # actually delete the data in Redis\n for p in c2p.values():\n p.execute()\n\n # remove the objects from the session\n forget = self.forget\n for o in flat:\n if o._deleted == 1:\n o._after_delete()\n o._deleted = 2\n forget(o)", "response": "Delete one or more objects from the cache."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef refresh(self, *objects, **kwargs):\n '''\n This method is an alternate API for refreshing many entities (possibly\n not tracked by the session). 
You can call::\n\n session.refresh(obj)\n session.refresh(obj1, obj2, ...)\n session.refresh([obj1, obj2, ...])\n\n And all provided entities will be reloaded from Redis.\n\n To force reloading for modified entities, you can pass ``force=True``.\n '''\n self._init()\n from rom import Model\n force = kwargs.get('force')\n for o in objects:\n if isinstance(o, (list, tuple)):\n self.refresh(*o, force=force)\n elif isinstance(o, Model):\n if not o._new:\n o.refresh(force=force)\n else:\n # all objects are re-added to the session after refresh,\n # except for deleted entities...\n self.add(o)\n else:\n raise ORMError(\n \"Cannot refresh an object that is not an instance of a Model (you provided %r)\"%(\n o,))", "response": "This method is used to refresh the related objects in the session."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef refresh_all(self, *objects, **kwargs):\n '''\n This method is an alternate API for refreshing all entities tracked\n by the session. You can call::\n\n session.refresh_all()\n session.refresh_all(force=True)\n\n And all entities known by the session will be reloaded from Redis.\n\n To force reloading for modified entities, you can pass ``force=True``.\n '''\n self.refresh(*self.known.values(), force=kwargs.get('force'))", "response": "This method is an alternate API for refreshing all entities in the session."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef redis_writer_lua(conn, pkey, namespace, id, unique, udelete, delete,\n data, keys, scored, prefix, suffix, geo, old_data, is_delete):\n '''\n ... Actually write data to Redis. This is an internal detail. 
Please don't\n call me directly.\n '''\n ldata = []\n for pair in data.items():\n ldata.extend(pair)\n\n for item in prefix:\n item.append(_prefix_score(item[-1]))\n for item in suffix:\n item.append(_prefix_score(item[-1]))\n\n data = [json.dumps(x, default=_fix_bytes) for x in\n (unique, udelete, delete, ldata, keys, scored, prefix, suffix, geo, is_delete, old_data)]\n result = _redis_writer_lua(conn, [], [namespace, id] + data)\n\n if isinstance(conn, _Pipeline):\n # we're in a pipelined write situation, don't parse the pipeline :P\n return\n\n if six.PY3:\n result = result.decode()\n\n result = json.loads(result)\n if 'unique' in result:\n result = result['unique']\n raise UniqueKeyViolation(\n \"Value %r for %s:%s:uidx not distinct (failed for pk=%s)\"%(\n unique[result], namespace, result, id),\n namespace, id)\n\n if 'race' in result:\n result = result['race']\n if pkey in result:\n raise EntityDeletedError(\n \"Entity %s:%s deleted by another writer; use .save(force=True) to re-save\"%(\n namespace, id),\n namespace, id)\n\n raise DataRaceError(\n \"%s:%s Column(s) %r updated by another writer, write aborted!\"%(\n namespace, id, result),\n namespace, id)", "response": "This function writes data to Redis."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsaves the current entity to Redis.", "response": "def save(self, full=False, force=False):\n '''\n Saves the current entity to Redis. 
Will only save changed data by\n default, but you can force a full save by passing ``full=True``.\n\n If the underlying entity was deleted and you want to re-save the entity,\n you can pass ``force=True`` to force a full re-save of the entity.\n '''\n # handle the pre-commit hooks\n was_new = self._new\n if was_new:\n self._before_insert()\n else:\n self._before_update()\n\n new = self.to_dict()\n ret, data = self._apply_changes(\n self._last, new, full or self._new or force, is_new=self._new or force)\n self._last = data\n self._new = False\n self._modified = False\n self._deleted = False\n # handle the post-commit hooks\n if was_new:\n self._after_insert()\n else:\n self._after_update()\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete(self, **kwargs):\n '''\n Deletes the entity immediately. Also performs any on_delete operations\n specified as part of column definitions.\n '''\n if kwargs.get('skip_on_delete_i_really_mean_it') is not SKIP_ON_DELETE:\n # handle the pre-commit hook\n self._before_delete()\n # handle any foreign key references + cascade options\n _on_delete(self)\n\n session.forget(self)\n self._apply_changes(self._last, {}, delete=True, _conn=kwargs.get('_conn'))\n self._modified = True\n self._deleted = True\n # handle the post-commit hooks\n if kwargs.get('skip_on_delete_i_really_mean_it') is not SKIP_ON_DELETE:\n self._after_delete()", "response": "Deletes the entity immediately. 
Also performs any on_delete operations."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a shallow copy of the given entity.", "response": "def copy(self):\n '''\n Creates a shallow copy of the given entity (any entities that can be\n retrieved from a OneToMany relationship will not be copied).\n '''\n x = self.to_dict()\n x.pop(self._pkey)\n return self.__class__(**x)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get(cls, ids):\n '''\n Will fetch one or more entities of this type from the session or\n Redis.\n\n Used like::\n\n MyModel.get(5)\n MyModel.get([1, 6, 2, 4])\n\n Passing a list or a tuple will return multiple entities, in the same\n order that the ids were passed.\n '''\n conn = _connect(cls)\n # prepare the ids\n single = not isinstance(ids, (list, tuple, set, frozenset))\n if single:\n ids = [ids]\n pks = ['%s:%s'%(cls._namespace, id) for id in map(int, ids)]\n # get from the session, if possible\n out = list(map(session.get, pks))\n # if we couldn't get an instance from the session, load from Redis\n if None in out:\n pipe = conn.pipeline(True)\n idxs = []\n # Fetch missing data\n for i, data in enumerate(out):\n if data is None:\n idxs.append(i)\n pipe.hgetall(pks[i])\n # Update output list\n for i, data in zip(idxs, pipe.execute()):\n if data:\n if six.PY3:\n data = dict((k.decode(), v.decode()) for k, v in data.items())\n out[i] = cls(_loading=True, **data)\n # Get rid of missing models\n out = [x for x in out if x]\n if single:\n return out[0] if out else None\n return out", "response": "Get one or more entities of this type from the session or Redis."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_by(cls, **kwargs):\n '''\n This method offers a simple query method for fetching entities of this\n type via attribute numeric ranges (such columns must be ``indexed``),\n or via 
``unique`` columns.\n\n Some examples::\n\n user = User.get_by(email_address='user@domain.com')\n # gets up to 25 users created in the last 24 hours\n users = User.get_by(\n created_at=(time.time()-86400, time.time()),\n _limit=(0, 25))\n\n Optional keyword-only arguments:\n\n * *_limit* - A 2-tuple of (offset, count) that can be used to\n paginate or otherwise limit results returned by a numeric range\n query\n * *_numeric* - An optional boolean defaulting to False that forces\n the use of a numeric index for ``.get_by(col=val)`` queries even\n when ``col`` has an existing unique index\n\n If you would like to make queries against multiple columns or with\n multiple criteria, look into the Model.query class property.\n\n .. note:: rom will attempt to use a unique index first, then a numeric\n index if there was no unique index. You can explicitly tell rom to\n only use the numeric index by using ``.get_by(..., _numeric=True)``.\n .. note:: Ranged queries with `get_by(col=(start, end))` will only work\n with columns that use a numeric index.\n '''\n conn = _connect(cls)\n model = cls._namespace\n # handle limits and query requirements\n _limit = kwargs.pop('_limit', ())\n if _limit and len(_limit) != 2:\n raise QueryError(\"Limit must include both 'offset' and 'count' parameters\")\n elif _limit and not all(isinstance(x, six.integer_types) for x in _limit):\n raise QueryError(\"Limit arguments must both be integers\")\n if len(kwargs) != 1:\n raise QueryError(\"We can only fetch object(s) by exactly one attribute, you provided %s\"%(len(kwargs),))\n\n _numeric = bool(kwargs.pop('_numeric', None))\n\n for attr, value in kwargs.items():\n plain_attr = attr.partition(':')[0]\n if isinstance(value, tuple) and len(value) != 2:\n raise QueryError(\"Range queries must include exactly two endpoints\")\n\n # handle unique index lookups\n if attr in cls._unique and (plain_attr not in cls._index or not _numeric):\n if isinstance(value, tuple):\n raise QueryError(\"Cannot 
query a unique index with a range of values\")\n single = not isinstance(value, list)\n if single:\n value = [value]\n qvalues = list(map(cls._columns[attr]._to_redis, value))\n ids = [x for x in conn.hmget('%s:%s:uidx'%(model, attr), qvalues) if x]\n if not ids:\n return None if single else []\n return cls.get(ids[0] if single else ids)\n\n if plain_attr not in cls._index:\n raise QueryError(\"Cannot query on a column without an index\")\n\n if isinstance(value, NUMERIC_TYPES) and not isinstance(value, bool):\n value = (value, value)\n\n if isinstance(value, tuple):\n # this is a numeric range query, we'll just pull it directly\n args = list(value)\n for i, a in enumerate(args):\n # Handle the ranges where None is -inf on the left and inf\n # on the right when used in the context of a range tuple.\n args[i] = ('-inf', 'inf')[i] if a is None else cls._columns[attr]._to_redis(a)\n if _limit:\n args.extend(_limit)\n ids = conn.zrangebyscore('%s:%s:idx'%(model, attr), *args)\n if not ids:\n return []\n return cls.get(ids)\n\n # defer other index lookups to the query object\n query = cls.query.filter(**{attr: value})\n if _limit:\n query = query.limit(*_limit)\n return query.all()", "response": "This method returns a list of all the objects of the specified class that are in the database."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nupdate multiple attributes in a model.", "response": "def update(self, *args, **kwargs):\n '''\n Updates multiple attributes in a model. 
If ``args`` are provided, this\n method will assign attributes in the order returned by\n ``list(self._columns)`` until one or both are exhausted.\n\n If ``kwargs`` are provided, this method will assign attributes to the\n names provided, after ``args`` have been processed.\n '''\n sa = setattr\n for a, v in zip(self._columns, args):\n sa(self, a, v)\n for a, v in kwargs.items():\n sa(self, a, v)\n return self"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef dump(obj, file, reducers=None, protocol=None):\n '''Replacement for pickle.dump() using _LokyPickler.'''\n global _LokyPickler\n _LokyPickler(file, reducers=reducers, protocol=protocol).dump(obj)", "response": "Replacement for pickle.dump() using _LokyPickler."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register(cls, type, reduce_func):\n if sys.version_info < (3,):\n # Python 2 pickler dispatching is not explicitly customizable.\n # Let us use a closure to workaround this limitation.\n def dispatcher(cls, obj):\n reduced = reduce_func(obj)\n cls.save_reduce(obj=obj, *reduced)\n cls.dispatch_table[type] = dispatcher\n else:\n cls.dispatch_table[type] = reduce_func", "response": "Attach a reducer function to a given type in the dispatch table."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconstructing or retrieving a semaphore with the given name and optional value.", "response": "def _sem_open(name, value=None):\n \"\"\" Construct or retrieve a semaphore with the given name\n\n If value is None, try to retrieve an existing named semaphore.\n Else create a new semaphore with the given value\n \"\"\"\n if value is None:\n handle = pthread.sem_open(ctypes.c_char_p(name), 0)\n else:\n handle = pthread.sem_open(ctypes.c_char_p(name), SEM_OFLAG, SEM_PERM,\n ctypes.c_int(value))\n\n if handle == SEM_FAILURE:\n e = ctypes.get_errno()\n if e == errno.EEXIST:\n raise 
FileExistsError(\"a semaphore named %s already exists\" % name)\n elif e == errno.ENOENT:\n raise FileNotFoundError('cannot find semaphore named %s' % name)\n elif e == errno.ENOSYS:\n raise NotImplementedError('No semaphore implementation on this '\n 'system')\n else:\n raiseFromErrno()\n\n return handle"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the number of CPUs the current process can use.", "response": "def cpu_count():\n \"\"\"Return the number of CPUs the current process can use.\n\n The returned number of CPUs accounts for:\n * the number of CPUs in the system, as given by\n ``multiprocessing.cpu_count``;\n * the CPU affinity settings of the current process\n (available with Python 3.4+ on some Unix systems);\n * CFS scheduler CPU bandwidth limit (available on Linux only, typically\n set by docker and similar container orchestration systems);\n * the value of the LOKY_MAX_CPU_COUNT environment variable if defined.\n and is given as the minimum of these constraints.\n It is also always larger or equal to 1.\n \"\"\"\n import math\n\n try:\n cpu_count_mp = mp.cpu_count()\n except NotImplementedError:\n cpu_count_mp = 1\n\n # Number of available CPUs given affinity settings\n cpu_count_affinity = cpu_count_mp\n if hasattr(os, 'sched_getaffinity'):\n try:\n cpu_count_affinity = len(os.sched_getaffinity(0))\n except NotImplementedError:\n pass\n\n # CFS scheduler CPU bandwidth limit\n # available in Linux since 2.6 kernel\n cpu_count_cfs = cpu_count_mp\n cfs_quota_fname = \"/sys/fs/cgroup/cpu/cpu.cfs_quota_us\"\n cfs_period_fname = \"/sys/fs/cgroup/cpu/cpu.cfs_period_us\"\n if os.path.exists(cfs_quota_fname) and os.path.exists(cfs_period_fname):\n with open(cfs_quota_fname, 'r') as fh:\n cfs_quota_us = int(fh.read())\n with open(cfs_period_fname, 'r') as fh:\n cfs_period_us = int(fh.read())\n\n if cfs_quota_us > 0 and cfs_period_us > 0:\n # Make sure this quantity is an int as math.ceil returns a\n # float in python2.7. 
(See issue #165)\n cpu_count_cfs = int(math.ceil(cfs_quota_us / cfs_period_us))\n\n # User defined soft-limit passed as an loky specific environment variable.\n cpu_count_loky = int(os.environ.get('LOKY_MAX_CPU_COUNT', cpu_count_mp))\n aggregate_cpu_count = min(cpu_count_mp, cpu_count_affinity, cpu_count_cfs,\n cpu_count_loky)\n return max(aggregate_cpu_count, 1)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a queue object", "response": "def Queue(self, maxsize=0, reducers=None):\n '''Returns a queue object'''\n from .queues import Queue\n return Queue(maxsize, reducers=reducers,\n ctx=self.get_context())"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a queue object", "response": "def SimpleQueue(self, reducers=None):\n '''Returns a queue object'''\n from .queues import SimpleQueue\n return SimpleQueue(reducers=reducers, ctx=self.get_context())"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\niterates over iterables in chunks.", "response": "def _get_chunks(chunksize, *iterables):\n \"\"\"Iterates over zip()ed iterables in chunks. 
\"\"\"\n if sys.version_info < (3, 3):\n it = itertools.izip(*iterables)\n else:\n it = zip(*iterables)\n while True:\n chunk = tuple(itertools.islice(it, chunksize))\n if not chunk:\n return\n yield chunk"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _process_worker(call_queue, result_queue, initializer, initargs,\n processes_management_lock, timeout, worker_exit_lock,\n current_depth):\n \"\"\"Evaluates calls from call_queue and places the results in result_queue.\n\n This worker is run in a separate process.\n\n Args:\n call_queue: A ctx.Queue of _CallItems that will be read and\n evaluated by the worker.\n result_queue: A ctx.Queue of _ResultItems that will written\n to by the worker.\n initializer: A callable initializer, or None\n initargs: A tuple of args for the initializer\n process_management_lock: A ctx.Lock avoiding worker timeout while some\n workers are being spawned.\n timeout: maximum time to wait for a new item in the call_queue. 
If that\n time is expired, the worker will shutdown.\n worker_exit_lock: Lock to avoid flagging the executor as broken on\n workers timeout.\n current_depth: Nested parallelism level, to avoid infinite spawning.\n \"\"\"\n if initializer is not None:\n try:\n initializer(*initargs)\n except BaseException:\n _base.LOGGER.critical('Exception in initializer:', exc_info=True)\n # The parent will notice that the process stopped and\n # mark the pool broken\n return\n\n # set the global _CURRENT_DEPTH mechanism to limit recursive call\n global _CURRENT_DEPTH\n _CURRENT_DEPTH = current_depth\n _process_reference_size = None\n _last_memory_leak_check = None\n pid = os.getpid()\n\n mp.util.debug('Worker started with timeout=%s' % timeout)\n while True:\n try:\n call_item = call_queue.get(block=True, timeout=timeout)\n if call_item is None:\n mp.util.info(\"Shutting down worker on sentinel\")\n except queue.Empty:\n mp.util.info(\"Shutting down worker after timeout %0.3fs\"\n % timeout)\n if processes_management_lock.acquire(block=False):\n processes_management_lock.release()\n call_item = None\n else:\n mp.util.info(\"Could not acquire processes_management_lock\")\n continue\n except BaseException as e:\n previous_tb = traceback.format_exc()\n try:\n result_queue.put(_RemoteTraceback(previous_tb))\n except BaseException:\n # If we cannot format correctly the exception, at least print\n # the traceback.\n print(previous_tb)\n sys.exit(1)\n if call_item is None:\n # Notify queue management thread about clean worker shutdown\n result_queue.put(pid)\n with worker_exit_lock:\n return\n try:\n r = call_item()\n except BaseException as e:\n exc = _ExceptionWithTraceback(e)\n result_queue.put(_ResultItem(call_item.work_id, exception=exc))\n else:\n _sendback_result(result_queue, call_item.work_id, result=r)\n del r\n\n # Free the resource as soon as possible, to avoid holding onto\n # open files or shared memory that is not needed anymore\n del call_item\n\n if _USE_PSUTIL:\n if 
_process_reference_size is None:\n # Make reference measurement after the first call\n _process_reference_size = _get_memory_usage(pid, force_gc=True)\n _last_memory_leak_check = time()\n continue\n if time() - _last_memory_leak_check > _MEMORY_LEAK_CHECK_DELAY:\n mem_usage = _get_memory_usage(pid)\n _last_memory_leak_check = time()\n if mem_usage - _process_reference_size < _MAX_MEMORY_LEAK_SIZE:\n # Memory usage stays within bounds: everything is fine.\n continue\n\n # Check again memory usage; this time take the measurement\n # after a forced garbage collection to break any reference\n # cycles.\n mem_usage = _get_memory_usage(pid, force_gc=True)\n _last_memory_leak_check = time()\n if mem_usage - _process_reference_size < _MAX_MEMORY_LEAK_SIZE:\n # The GC managed to free the memory: everything is fine.\n continue\n\n # The process is leaking memory: let the master process\n # know that we need to start a new worker.\n mp.util.info(\"Memory leak detected: shutting down worker\")\n result_queue.put(pid)\n with worker_exit_lock:\n return\n else:\n # if psutil is not installed, trigger gc.collect events\n # regularly to limit potential memory leaks due to reference cycles\n if ((_last_memory_leak_check is None) or\n (time() - _last_memory_leak_check >\n _MEMORY_LEAK_CHECK_DELAY)):\n gc.collect()\n _last_memory_leak_check = time()", "response": "This function processes the call_queue and places the results in result_queue."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _add_call_item_to_queue(pending_work_items,\n running_work_items,\n work_ids,\n call_queue):\n \"\"\"Fills call_queue with _WorkItems from pending_work_items.\n\n This function never blocks.\n\n Args:\n pending_work_items: A dict mapping work ids to _WorkItems e.g.\n {5: <_WorkItem...>, 6: <_WorkItem...>, ...}\n work_ids: A queue.Queue of work ids e.g. Queue([5, 6, ...]). 
Work ids\n are consumed and the corresponding _WorkItems from\n pending_work_items are transformed into _CallItems and put in\n call_queue.\n call_queue: A ctx.Queue that will be filled with _CallItems\n derived from _WorkItems.\n \"\"\"\n while True:\n if call_queue.full():\n return\n try:\n work_id = work_ids.get(block=False)\n except queue.Empty:\n return\n else:\n work_item = pending_work_items[work_id]\n\n if work_item.future.set_running_or_notify_cancel():\n running_work_items += [work_id]\n call_queue.put(_CallItem(work_id,\n work_item.fn,\n work_item.args,\n work_item.kwargs),\n block=True)\n else:\n del pending_work_items[work_id]\n continue", "response": "Adds a call item to the queue that is used to execute the pending_work_items."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmanage the communication between this process and the worker processes. This function is run in a local thread. Args: executor_reference: A weakref.ref to the ProcessPoolExecutor that owns this thread. Used to determine if the ProcessPoolExecutor has been garbage collected and that this function can exit. executor_flags: A ExecutorFlags holding internal states of the ProcessPoolExecutor. It permits to know if the executor is broken even the object has been gc. process: A list of the ctx.Process instances used as workers. pending_work_items: A dict mapping work ids to _WorkItems e.g. {5: <_WorkItem...>, 6: <_WorkItem...>, ...} work_ids_queue: A queue.Queue of work ids e.g. Queue([5, 6, ...]). call_queue: A ctx.Queue that will be filled with _CallItems derived from _WorkItems for processing by the process workers. result_queue: A ctx.SimpleQueue of _ResultItems generated by the process workers. 
thread_wakeup: A _ThreadWakeup to allow waking up the queue_manager_thread from the main Thread and avoid deadlocks caused by permanently locked queues.", "response": "def _queue_management_worker(executor_reference,\n executor_flags,\n processes,\n pending_work_items,\n running_work_items,\n work_ids_queue,\n call_queue,\n result_queue,\n thread_wakeup,\n processes_management_lock):\n \"\"\"Manages the communication between this process and the worker processes.\n\n This function is run in a local thread.\n\n Args:\n executor_reference: A weakref.ref to the ProcessPoolExecutor that owns\n this thread. Used to determine if the ProcessPoolExecutor has been\n garbage collected and that this function can exit.\n executor_flags: A ExecutorFlags holding internal states of the\n ProcessPoolExecutor. It permits to know if the executor is broken\n even the object has been gc.\n process: A list of the ctx.Process instances used as\n workers.\n pending_work_items: A dict mapping work ids to _WorkItems e.g.\n {5: <_WorkItem...>, 6: <_WorkItem...>, ...}\n work_ids_queue: A queue.Queue of work ids e.g. 
Queue([5, 6, ...]).\n call_queue: A ctx.Queue that will be filled with _CallItems\n derived from _WorkItems for processing by the process workers.\n result_queue: A ctx.SimpleQueue of _ResultItems generated by the\n process workers.\n thread_wakeup: A _ThreadWakeup to allow waking up the\n queue_manager_thread from the main Thread and avoid deadlocks\n caused by permanently locked queues.\n \"\"\"\n executor = None\n\n def is_shutting_down():\n # No more work items can be added if:\n # - The interpreter is shutting down OR\n # - The executor that own this worker is not broken AND\n # * The executor that owns this worker has been collected OR\n # * The executor that owns this worker has been shutdown.\n # If the executor is broken, it should be detected in the next loop.\n return (_global_shutdown or\n ((executor is None or executor_flags.shutdown)\n and not executor_flags.broken))\n\n def shutdown_all_workers():\n mp.util.debug(\"queue management thread shutting down\")\n executor_flags.flag_as_shutting_down()\n # Create a list to avoid RuntimeError due to concurrent modification of\n # processes. nb_children_alive is thus an upper bound. Also release the\n # processes' _worker_exit_lock to accelerate the shutdown procedure, as\n # there is no need for hand-shake here.\n with processes_management_lock:\n n_children_alive = 0\n for p in list(processes.values()):\n p._worker_exit_lock.release()\n n_children_alive += 1\n n_children_to_stop = n_children_alive\n n_sentinels_sent = 0\n # Send the right number of sentinels, to make sure all children are\n # properly terminated.\n while n_sentinels_sent < n_children_to_stop and n_children_alive > 0:\n for i in range(n_children_to_stop - n_sentinels_sent):\n try:\n call_queue.put_nowait(None)\n n_sentinels_sent += 1\n except Full:\n break\n with processes_management_lock:\n n_children_alive = sum(\n p.is_alive() for p in list(processes.values())\n )\n\n # Release the queue's resources as soon as possible. 
Flag the feeder\n # thread for clean exit to avoid having the crash detection thread flag\n # the Executor as broken during the shutdown. This is safe as either:\n # * We don't need to communicate with the workers anymore\n # * There is nothing left in the Queue buffer except None sentinels\n mp.util.debug(\"closing call_queue\")\n call_queue.close()\n\n mp.util.debug(\"joining processes\")\n # If .join() is not called on the created processes then\n # some ctx.Queue methods may deadlock on Mac OS X.\n while processes:\n _, p = processes.popitem()\n p.join()\n mp.util.debug(\"queue management thread clean shutdown of worker \"\n \"processes: {}\".format(list(processes)))\n\n result_reader = result_queue._reader\n wakeup_reader = thread_wakeup._reader\n readers = [result_reader, wakeup_reader]\n\n while True:\n _add_call_item_to_queue(pending_work_items,\n running_work_items,\n work_ids_queue,\n call_queue)\n # Wait for a result to be ready in the result_queue while checking\n # that all worker processes are still running, or for a wake up\n # signal send. The wake up signals come either from new tasks being\n # submitted, from the executor being shutdown/gc-ed, or from the\n # shutdown of the python interpreter.\n worker_sentinels = [p.sentinel for p in list(processes.values())]\n ready = wait(readers + worker_sentinels)\n\n broken = (\"A worker process managed by the executor was unexpectedly \"\n \"terminated. This could be caused by a segmentation fault \"\n \"while calling the function or by an excessive memory usage \"\n \"causing the Operating System to kill the worker.\", None,\n TerminatedWorkerError)\n if result_reader in ready:\n try:\n result_item = result_reader.recv()\n broken = None\n if isinstance(result_item, _RemoteTraceback):\n broken = (\"A task has failed to un-serialize. 
Please \"\n \"ensure that the arguments of the function are \"\n \"all picklable.\", result_item.tb,\n BrokenProcessPool)\n except BaseException as e:\n tb = getattr(e, \"__traceback__\", None)\n if tb is None:\n _, _, tb = sys.exc_info()\n broken = (\"A result has failed to un-serialize. Please \"\n \"ensure that the objects returned by the function \"\n \"are always picklable.\",\n traceback.format_exception(type(e), e, tb),\n BrokenProcessPool)\n elif wakeup_reader in ready:\n broken = None\n result_item = None\n thread_wakeup.clear()\n if broken is not None:\n msg, cause_tb, exc_type = broken\n if (issubclass(exc_type, TerminatedWorkerError) and\n (sys.platform != \"win32\")):\n # In Windows, introspecting terminated workers exitcodes seems\n # unstable, therefore they are not appended in the exception\n # message.\n msg += \" The exit codes of the workers are {}\".format(\n get_exitcodes_terminated_worker(processes))\n\n bpe = exc_type(msg)\n if cause_tb is not None:\n bpe = set_cause(bpe, _RemoteTraceback(\n \"\\n'''\\n{}'''\".format(''.join(cause_tb))))\n # Mark the process pool broken so that submits fail right now.\n executor_flags.flag_as_broken(bpe)\n\n # All futures in flight must be marked failed\n for work_id, work_item in pending_work_items.items():\n work_item.future.set_exception(bpe)\n # Delete references to object. 
See issue16284\n del work_item\n pending_work_items.clear()\n\n # Terminate remaining workers forcibly: the queues or their\n # locks may be in a dirty state and block forever.\n while processes:\n _, p = processes.popitem()\n mp.util.debug('terminate process {}'.format(p.name))\n try:\n recursive_terminate(p)\n except ProcessLookupError: # pragma: no cover\n pass\n\n shutdown_all_workers()\n return\n if isinstance(result_item, int):\n # Clean shutdown of a worker using its PID, either on request\n # by the executor.shutdown method or by the timeout of the worker\n # itself: we should not mark the executor as broken.\n with processes_management_lock:\n p = processes.pop(result_item, None)\n\n # p can be None if the executor is concurrently shutting down.\n if p is not None:\n p._worker_exit_lock.release()\n p.join()\n del p\n\n # Make sure the executor has the right number of workers, even if a\n # worker timed out while some jobs were submitted. If some work is\n # pending or there are fewer processes than running items, we need to\n # start a new Process and raise a warning.\n n_pending = len(pending_work_items)\n n_running = len(running_work_items)\n if (n_pending - n_running > 0 or n_running > len(processes)):\n executor = executor_reference()\n if (executor is not None\n and len(processes) < executor._max_workers):\n warnings.warn(\n \"A worker stopped while some jobs were given to the \"\n \"executor. This can be caused by a too short worker \"\n \"timeout or by a memory leak.\", UserWarning\n )\n executor._adjust_process_count()\n executor = None\n\n elif result_item is not None:\n work_item = pending_work_items.pop(result_item.work_id, None)\n # work_item can be None if another process terminated\n if work_item is not None:\n if result_item.exception:\n work_item.future.set_exception(result_item.exception)\n else:\n work_item.future.set_result(result_item.result)\n # Delete references to object. 
See issue16284\n del work_item\n running_work_items.remove(result_item.work_id)\n # Delete reference to result_item\n del result_item\n\n # Check whether we should start shutting down.\n executor = executor_reference()\n # No more work items can be added if:\n # - The interpreter is shutting down OR\n # - The executor that owns this worker has been collected OR\n # - The executor that owns this worker has been shutdown.\n if is_shutting_down():\n # bpo-33097: Make sure that the executor is flagged as shutting\n # down even if it is shutdown by the interpreter exiting.\n with executor_flags.shutdown_lock:\n executor_flags.shutdown = True\n if executor_flags.kill_workers:\n while pending_work_items:\n _, work_item = pending_work_items.popitem()\n work_item.future.set_exception(ShutdownExecutorError(\n \"The Executor was shutdown before this job could \"\n \"complete.\"))\n del work_item\n # Terminate remaining workers forcibly: the queues or their\n # locks may be in a dirty state and block forever.\n while processes:\n _, p = processes.popitem()\n recursive_terminate(p)\n shutdown_all_workers()\n return\n # Since no new work items can be added, it is safe to shutdown\n # this thread if there are no pending work items.\n if not pending_work_items:\n shutdown_all_workers()\n return\n elif executor_flags.broken:\n return\n executor = None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nensuring all workers and management threads are running", "response": "def _ensure_executor_running(self):\n \"\"\"ensures all workers and management thread are running\n \"\"\"\n with self._processes_management_lock:\n if len(self._processes) != self._max_workers:\n self._adjust_process_count()\n self._start_queue_management_thread()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef map(self, fn, *iterables, **kwargs):\n timeout = kwargs.get('timeout', None)\n chunksize = kwargs.get('chunksize', 1)\n 
if chunksize < 1:\n raise ValueError(\"chunksize must be >= 1.\")\n\n results = super(ProcessPoolExecutor, self).map(\n partial(_process_chunk, fn), _get_chunks(chunksize, *iterables),\n timeout=timeout)\n return _chain_from_iterable_of_lists(results)", "response": "Returns an iterator equivalent to map."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwraps non-picklable objects in a CloudpickledObjectWrapper.", "response": "def wrap_non_picklable_objects(obj, keep_wrapper=True):\n \"\"\"Wrapper for non-picklable objects to use cloudpickle to serialize them.\n\n Note that this wrapper tends to slow down the serialization process as it\n is done with cloudpickle which is typically slower compared to pickle. The\n proper way to solve serialization issues is to avoid defining functions and\n objects in the main scripts and to implement __reduce__ functions for\n complex classes.\n \"\"\"\n if not cloudpickle:\n raise ImportError(\"could not import cloudpickle. Please install \"\n \"cloudpickle to allow extended serialization. 
\"\n \"(`pip install cloudpickle`).\")\n\n # If obj is a class, create a CloudpickledClassWrapper which instantiates\n # the object internally and wrap it directly in a CloudpickledObjectWrapper\n if inspect.isclass(obj):\n class CloudpickledClassWrapper(CloudpickledObjectWrapper):\n def __init__(self, *args, **kwargs):\n self._obj = obj(*args, **kwargs)\n self._keep_wrapper = keep_wrapper\n\n CloudpickledClassWrapper.__name__ = obj.__name__\n return CloudpickledClassWrapper\n\n # If obj is an instance of a class, just wrap it in a regular\n # CloudpickledObjectWrapper\n return _wrap_non_picklable_objects(obj, keep_wrapper=keep_wrapper)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nspawn a server process for this manager object.", "response": "def start(self, initializer=None, initargs=()):\n '''Spawn a server process for this manager object'''\n assert self._state.value == State.INITIAL\n\n if (initializer is not None\n and not hasattr(initializer, '__call__')):\n raise TypeError('initializer must be a callable')\n\n # pipe over which we will retrieve address of server\n reader, writer = mp.Pipe(duplex=False)\n\n # spawn process which runs a server\n self._process = Process(\n target=type(self)._run_server,\n args=(self._registry, self._address, bytes(self._authkey),\n self._serializer, writer, initializer, initargs),\n )\n ident = ':'.join(str(i) for i in self._process._identity)\n self._process.name = type(self).__name__ + '-' + ident\n self._process.start()\n\n # get address of server\n writer.close()\n self._address = reader.recv()\n reader.close()\n\n # register a finalizer\n self._state.value = State.STARTED\n self.shutdown = mp.util.Finalize(\n self, type(self)._finalize_manager,\n args=(self._process, self._address, self._authkey,\n self._state, self._Client),\n exitpriority=0\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef DupFd(fd):\n '''Return a wrapper 
for an fd.'''\n popen_obj = get_spawning_popen()\n if popen_obj is not None:\n return popen_obj.DupFd(popen_obj.duplicate_for_child(fd))\n elif HAVE_SEND_HANDLE and sys.version_info[:2] > (3, 3):\n from multiprocessing import resource_sharer\n return resource_sharer.DupFd(fd)\n else:\n raise TypeError(\n 'Cannot pickle connection object. This object can only be '\n 'passed when spawning a new process'\n )", "response": "Return a wrapper for an fd."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a ReusableExecutor instance.", "response": "def get_reusable_executor(max_workers=None, context=None, timeout=10,\n kill_workers=False, reuse=\"auto\",\n job_reducers=None, result_reducers=None,\n initializer=None, initargs=()):\n \"\"\"Return the current ReusableExecutor instance.\n\n Start a new instance if it has not been started already or if the previous\n instance was left in a broken state.\n\n If the previous instance does not have the requested number of workers, the\n executor is dynamically resized to adjust the number of workers prior to\n returning.\n\n Reusing a singleton instance spares the overhead of starting new worker\n processes and importing common python packages each time.\n\n ``max_workers`` controls the maximum number of tasks that can be running in\n parallel in worker processes. By default this is set to the number of\n CPUs on the host.\n\n Setting ``timeout`` (in seconds) makes idle workers automatically shut down\n so as to release system resources. New workers are respawned upon submission\n of new tasks so that ``max_workers`` are available to accept the newly\n submitted tasks. 
Setting ``timeout`` to around 100 times the time required\n to spawn new processes and import packages in them (on the order of 100ms)\n ensures that the overhead of spawning workers is negligible.\n\n Setting ``kill_workers=True`` makes it possible to forcibly interrupt\n previously spawned jobs to get a new instance of the reusable executor\n with new constructor argument values.\n\n The ``job_reducers`` and ``result_reducers`` are used to customize the\n pickling of tasks and results sent to the executor.\n\n When provided, the ``initializer`` is run first in newly spawned\n processes with argument ``initargs``.\n \"\"\"\n with _executor_lock:\n global _executor, _executor_kwargs\n executor = _executor\n\n if max_workers is None:\n if reuse is True and executor is not None:\n max_workers = executor._max_workers\n else:\n max_workers = cpu_count()\n elif max_workers <= 0:\n raise ValueError(\n \"max_workers must be greater than 0, got {}.\"\n .format(max_workers))\n\n if isinstance(context, STRING_TYPE):\n context = get_context(context)\n if context is not None and context.get_start_method() == \"fork\":\n raise ValueError(\"Cannot use reusable executor with the 'fork' \"\n \"context\")\n\n kwargs = dict(context=context, timeout=timeout,\n job_reducers=job_reducers,\n result_reducers=result_reducers,\n initializer=initializer, initargs=initargs)\n if executor is None:\n mp.util.debug(\"Create an executor with max_workers={}.\"\n .format(max_workers))\n executor_id = _get_next_executor_id()\n _executor_kwargs = kwargs\n _executor = executor = _ReusablePoolExecutor(\n _executor_lock, max_workers=max_workers,\n executor_id=executor_id, **kwargs)\n else:\n if reuse == 'auto':\n reuse = kwargs == _executor_kwargs\n if (executor._flags.broken or executor._flags.shutdown\n or not reuse):\n if executor._flags.broken:\n reason = \"broken\"\n elif executor._flags.shutdown:\n reason = \"shutdown\"\n else:\n reason = \"arguments have changed\"\n mp.util.debug(\n \"Creating a 
new executor with max_workers={} as the \"\n \"previous instance cannot be reused ({}).\"\n .format(max_workers, reason))\n executor.shutdown(wait=True, kill_workers=kill_workers)\n _executor = executor = _executor_kwargs = None\n # Recursive call to build a new instance\n return get_reusable_executor(max_workers=max_workers,\n **kwargs)\n else:\n mp.util.debug(\"Reusing existing executor with max_workers={}.\"\n .format(executor._max_workers))\n executor._resize(max_workers)\n\n return executor"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nwait for the jobs to be completed before resizing the pool.", "response": "def _wait_job_completion(self):\n \"\"\"Wait for the cache to be empty before resizing the pool.\"\"\"\n # Issue a warning to the user about the bad effect of this usage.\n if len(self._pending_work_items) > 0:\n warnings.warn(\"Trying to resize an executor with running jobs: \"\n \"waiting for jobs completion before resizing.\",\n UserWarning)\n mp.util.debug(\"Executor {} waiting for jobs completion before\"\n \" resizing\".format(self.executor_id))\n # Wait for the completion of the jobs\n while len(self._pending_work_items) > 0:\n time.sleep(1e-3)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the basic data needed by child to unpickle process object", "response": "def get_preparation_data(name, init_main_module=True):\n '''\n Return info about parent needed by child to unpickle process object\n '''\n _check_not_importing_main()\n d = dict(\n log_to_stderr=util._log_to_stderr,\n authkey=bytes(process.current_process().authkey),\n )\n\n if util._logger is not None:\n d['log_level'] = util._logger.getEffectiveLevel()\n if len(util._logger.handlers) > 0:\n h = util._logger.handlers[0]\n d['log_fmt'] = h.formatter._fmt\n\n sys_path = [p for p in sys.path]\n try:\n i = sys_path.index('')\n except ValueError:\n pass\n else:\n sys_path[i] = process.ORIGINAL_DIR\n\n d.update(\n 
name=name,\n sys_path=sys_path,\n sys_argv=sys.argv,\n orig_dir=process.ORIGINAL_DIR,\n dir=os.getcwd()\n )\n\n if sys.platform != \"win32\":\n # Pass the semaphore_tracker pid to avoid re-spawning it in every child\n from . import semaphore_tracker\n semaphore_tracker.ensure_running()\n d['tracker_pid'] = semaphore_tracker._semaphore_tracker._pid\n\n # Figure out whether to initialise main in the subprocess as a module\n # or through direct execution (or to leave it alone entirely)\n if init_main_module:\n main_module = sys.modules['__main__']\n try:\n main_mod_name = getattr(main_module.__spec__, \"name\", None)\n except BaseException:\n main_mod_name = None\n if main_mod_name is not None:\n d['init_main_from_name'] = main_mod_name\n elif sys.platform != 'win32' or (not WINEXE and not WINSERVICE):\n main_path = getattr(main_module, '__file__', None)\n if main_path is not None:\n if (not os.path.isabs(main_path) and\n process.ORIGINAL_DIR is not None):\n main_path = os.path.join(process.ORIGINAL_DIR, main_path)\n d['init_main_from_path'] = os.path.normpath(main_path)\n # Compat for python2.7\n d['main_path'] = d['init_main_from_path']\n\n return d"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef prepare(data):\n '''\n Try to get current process ready to unpickle process object\n '''\n if 'name' in data:\n process.current_process().name = data['name']\n\n if 'authkey' in data:\n process.current_process().authkey = data['authkey']\n\n if 'log_to_stderr' in data and data['log_to_stderr']:\n util.log_to_stderr()\n\n if 'log_level' in data:\n util.get_logger().setLevel(data['log_level'])\n\n if 'log_fmt' in data:\n import logging\n util.get_logger().handlers[0].setFormatter(\n logging.Formatter(data['log_fmt'])\n )\n\n if 'sys_path' in data:\n sys.path = data['sys_path']\n\n if 'sys_argv' in data:\n sys.argv = data['sys_argv']\n\n if 'dir' in data:\n os.chdir(data['dir'])\n\n if 'orig_dir' in 
data:\n process.ORIGINAL_DIR = data['orig_dir']\n\n if 'tracker_pid' in data:\n from . import semaphore_tracker\n semaphore_tracker._semaphore_tracker._pid = data[\"tracker_pid\"]\n\n if 'init_main_from_name' in data:\n _fixup_main_from_name(data['init_main_from_name'])\n elif 'init_main_from_path' in data:\n _fixup_main_from_path(data['init_main_from_path'])", "response": "Prepare the object for processing"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef wait(object_list, timeout=None):\n '''\n Wait till an object in object_list is ready/readable.\n Returns list of those objects which are ready/readable.\n '''\n if timeout is not None:\n if timeout <= 0:\n return _poll(object_list, 0)\n else:\n deadline = monotonic() + timeout\n while True:\n try:\n return _poll(object_list, timeout)\n except (OSError, IOError, socket.error) as e: # pragma: no cover\n if e.errno != errno.EINTR:\n raise\n if timeout is not None:\n timeout = deadline - monotonic()", "response": "Wait till an object in object_list is ready or readable."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncloses all the file descriptors except those in keep_fds.", "response": "def close_fds(keep_fds): # pragma: no cover\n \"\"\"Close all the file descriptors except those in keep_fds.\"\"\"\n\n # Make sure to keep stdout and stderr open for logging purpose\n keep_fds = set(keep_fds).union([1, 2])\n\n # We try to retrieve all the open fds\n try:\n open_fds = set(int(fd) for fd in os.listdir('/proc/self/fd'))\n except FileNotFoundError:\n import resource\n max_nfds = resource.getrlimit(resource.RLIMIT_NOFILE)[0]\n open_fds = set(fd for fd in range(3, max_nfds))\n open_fds.add(0)\n\n for i in open_fds - keep_fds:\n try:\n os.close(i)\n except OSError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef 
_recursive_terminate_without_psutil(process):\n try:\n _recursive_terminate(process.pid)\n except OSError as e:\n warnings.warn(\"Failed to kill subprocesses on this platform. Please \"\n \"install psutil: https://github.com/giampaolo/psutil\")\n # In case we cannot introspect the children, we fall back to the\n # classic Process.terminate.\n process.terminate()\n process.join()", "response": "Terminate a process and its descendants."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a formatted string with the exitcodes of terminated workers.", "response": "def get_exitcodes_terminated_worker(processes):\n \"\"\"Return a formatted string with the exitcodes of terminated workers.\n\n If necessary, wait (up to .25s) for the system to correctly set the\n exitcode of one terminated worker.\n \"\"\"\n patience = 5\n\n # Catch the exitcode of the terminated workers. There should at least be\n # one. If not, wait a bit for the system to correctly set the exitcode of\n # the terminated worker.\n exitcodes = [p.exitcode for p in list(processes.values())\n if p.exitcode is not None]\n while len(exitcodes) == 0 and patience > 0:\n patience -= 1\n exitcodes = [p.exitcode for p in list(processes.values())\n if p.exitcode is not None]\n time.sleep(.05)\n\n return _format_exitcodes(exitcodes)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nformatting a list of exit codes with names of the signals if possible", "response": "def _format_exitcodes(exitcodes):\n \"\"\"Format a list of exit codes with names of the signals if possible\"\"\"\n str_exitcodes = [\"{}({})\".format(_get_exitcode_name(e), e)\n for e in exitcodes if e is not None]\n return \"{\" + \", \".join(str_exitcodes) + \"}\""} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ensure_running(self):\n '''Make sure that the semaphore tracker process is running.\n\n This can be run from any process. 
Usually a child process will use\n the semaphore created by its parent.'''\n with self._lock:\n if self._fd is not None:\n # semaphore tracker was launched before, is it still running?\n if self._check_alive():\n # => still alive\n return\n # => dead, launch it again\n os.close(self._fd)\n try:\n # Clean-up to avoid dangling processes.\n os.waitpid(self._pid, 0)\n except OSError:\n # The process was terminated or is a child from an ancestor\n # of the current process.\n pass\n self._fd = None\n self._pid = None\n\n warnings.warn('semaphore_tracker: process died unexpectedly, '\n 'relaunching. Some semaphores might leak.')\n\n fds_to_pass = []\n try:\n fds_to_pass.append(sys.stderr.fileno())\n except Exception:\n pass\n\n r, w = os.pipe()\n cmd = 'from {} import main; main({}, {})'.format(\n main.__module__, r, VERBOSE)\n try:\n fds_to_pass.append(r)\n # process will out live us, so no need to wait on pid\n exe = spawn.get_executable()\n args = [exe] + util._args_from_interpreter_flags()\n # In python 3.3, there is a bug which put `-RRRRR..` instead of\n # `-R` in args. Replace it to get the correct flags.\n # See https://github.com/python/cpython/blob/3.3/Lib/subprocess.py#L488\n if sys.version_info[:2] <= (3, 3):\n import re\n for i in range(1, len(args)):\n args[i] = re.sub(\"-R+\", \"-R\", args[i])\n args += ['-c', cmd]\n util.debug(\"launching Semaphore tracker: {}\".format(args))\n # bpo-33613: Register a signal mask that will block the\n # signals. This signal mask will be inherited by the child\n # that is going to be spawned and will protect the child from a\n # race condition that can make the child die before it\n # registers signal handlers for SIGINT and SIGTERM. 
The mask is\n # unregistered after spawning the child.\n try:\n if _HAVE_SIGMASK:\n signal.pthread_sigmask(signal.SIG_BLOCK,\n _IGNORED_SIGNALS)\n pid = spawnv_passfds(exe, args, fds_to_pass)\n finally:\n if _HAVE_SIGMASK:\n signal.pthread_sigmask(signal.SIG_UNBLOCK,\n _IGNORED_SIGNALS)\n except BaseException:\n os.close(w)\n raise\n else:\n self._fd = w\n self._pid = pid\n finally:\n os.close(r)", "response": "Make sure that the semaphore tracker process is running."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef event_processor(self, frame, event, arg):\n 'A simple event processor that prints out events.'\n out = self.debugger.intf[-1].output\n lineno = frame.f_lineno\n filename = self.core.canonic_filename(frame)\n filename = self.core.filename(filename)\n if not out:\n print(\"%s - %s:%d\" % (event, filename, lineno))\n else:\n out.write(\"%s - %s:%d\" % (event, filename, lineno))\n if arg is not None:\n out.writeline(', %s ' % repr(arg))\n else:\n out.writeline('')\n pass\n pass\n return self.event_processor", "response": "A simple event processor that prints out events."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef runcode(obj, code_obj):\n try:\n exec(code_obj, obj.locals, obj.globals)\n except SystemExit:\n raise\n except:\n obj.showtraceback()\n else:\n if code.softspace(sys.stdout, 0):\n print()\n pass\n pass\n return", "response": "Execute a code object."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadjust stack frame by pos positions.", "response": "def adjust_frame(proc_obj, name, pos, absolute_pos):\n \"\"\"Adjust stack frame by pos positions. If absolute_pos then\n pos is an absolute number. Otherwise it is a relative number.\n\n A negative number indexes from the other end.\"\"\"\n if not proc_obj.curframe:\n proc_obj.errmsg(\"No stack.\")\n return\n\n # Below we remove any negativity. 
At the end, pos will be\n # the new value of proc_obj.curindex.\n if absolute_pos:\n if pos >= 0:\n pos = frame_num(proc_obj, pos)\n else:\n pos = -pos - 1\n pass\n else:\n pos += proc_obj.curindex\n pass\n\n if pos < 0:\n proc_obj.errmsg(\"Adjusting would put us beyond the oldest frame.\")\n return\n elif pos >= len(proc_obj.stack):\n proc_obj.errmsg(\"Adjusting would put us beyond the newest frame.\")\n return\n\n proc_obj.curindex = pos\n proc_obj.curframe = proc_obj.stack[proc_obj.curindex][0]\n proc_obj.location()\n proc_obj.list_lineno = None\n proc_obj.list_offset = proc_obj.curframe.f_lasti\n proc_obj.list_object = proc_obj.curframe\n proc_obj.list_filename = proc_obj.curframe.f_code.co_filename\n\n return"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse_list_cmd(proc, args, listsize=10):\n\n text = proc.current_command[len(args[0])+1:].strip()\n\n if text in frozenset(('', '.', '+', '-')):\n if text == '.':\n location = resolve_location(proc, '.')\n return location.path, location.line_number, listsize\n else:\n if proc.list_lineno is None:\n proc.errmsg(\"Don't have previous list location\")\n return INVALID_PARSE_LIST\n filename = proc.list_filename\n if text == '+':\n first = max(1, proc.list_lineno + listsize)\n elif text == '-':\n if proc.list_lineno == 1 + listsize:\n proc.errmsg(\"Already at start of %s.\" % proc.list_filename)\n return INVALID_PARSE_LIST\n first = max(1, proc.list_lineno - (2*listsize) - 1)\n elif text == '':\n # Continue from where we last left off\n first = proc.list_lineno + 1\n last = first + listsize - 1\n return filename, first, last\n else:\n try:\n list_range = build_range(text)\n except LocationError as e:\n proc.errmsg(\"Error in parsing list range at or around:\")\n proc.errmsg(e.text)\n proc.errmsg(e.text_cursor)\n return INVALID_PARSE_LIST\n except ScannerError as e:\n proc.errmsg(\"Lexical error in parsing list range at or around:\")\n proc.errmsg(e.text)\n 
proc.errmsg(e.text_cursor)\n return INVALID_PARSE_LIST\n\n if list_range.first is None:\n # Last must have been given\n assert isinstance(list_range.last, Location)\n location = resolve_location(proc, list_range.last)\n if not location:\n return INVALID_PARSE_LIST\n last = location.line_number\n first = max(1, last - listsize)\n return location.path, first, last\n elif isinstance(list_range.first, int):\n first = list_range.first\n location = resolve_location(proc, list_range.last)\n if not location:\n return INVALID_PARSE_LIST\n filename = location.path\n last = location.line_number\n if last < first:\n # Treat as a count rather than an absolute location\n last = first + last\n return location.path, first, last\n else:\n # First is location. Last may be empty or a number\n assert isinstance(list_range.first, Location)\n location = resolve_location(proc, list_range.first)\n if not location:\n return INVALID_PARSE_LIST\n first = location.line_number\n last = list_range.last\n if location.method:\n first -= listsize // 2\n if isinstance(last, str):\n # Is an offset +number\n assert last[0] == '+'\n last = first + int(last[1:])\n elif not last:\n last = first + listsize\n elif last < first:\n # Treat as a count rather than an absolute location\n last = first + last\n\n return location.path, first, last\n pass\n return", "response": "Parses the arguments for the list command and returns the tuple that is returned."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef open(self, inp, opts={}):\n get_option = lambda key: Mmisc.option_set(opts, key,\n self.DEFAULT_OPEN_READ_OPTS)\n if (isinstance(inp, io.TextIOWrapper) or\n isinstance(inp, io.StringIO) or\n hasattr(inp, 'isatty') and inp.isatty()):\n self.use_raw = get_option('use_raw')\n elif isinstance(inp, 'string'.__class__): # FIXME\n if opts is None:\n self.use_raw = False\n else:\n self.use_raw = get_option('use_raw')\n pass\n inp = open(inp, 'r')\n 
else:\n raise IOError(\"Invalid input type (%s) for %s\" % (type(inp),\n inp))\n self.input = inp\n self.line_edit = get_option('try_readline') and readline_importable()\n self.closed = False\n return", "response": "Open a new file in the current directory."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef readline(self, use_raw=None, prompt=''):\n # FIXME we don't do command completion.\n if use_raw is None:\n use_raw = self.use_raw\n pass\n if use_raw:\n try:\n inp = input(prompt)\n # import pdb; pdb.set_trace()\n return inp\n except ValueError:\n raise EOFError\n pass\n\n else:\n line = self.input.readline()\n if not line: raise EOFError\n return line.rstrip(\"\\n\")\n pass", "response": "Read a line of input."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nrunning a command and return the result.", "response": "def run(self, cmd, start_opts=None, globals_=None, locals_=None):\n \"\"\" Run debugger on string `cmd' using builtin function eval\n and if that builtin exec. Arguments `globals_' and `locals_'\n are the dictionaries to use for local and global variables. By\n default, the value of globals is globals(), the current global\n variables. If `locals_' is not given, it becomes a copy of\n `globals_'.\n\n Debugger.core.start settings are passed via optional\n dictionary `start_opts'. Overall debugger settings are in\n Debugger.settings which changed after an instance is created\n . 
Also see `run_eval' if what you want to run is an\n run_eval'able expression have that result returned and\n `run_call' if you want to debug function run_call.\n \"\"\"\n if globals_ is None:\n globals_ = globals()\n if locals_ is None:\n locals_ = globals_\n if not isinstance(cmd, types.CodeType):\n self.eval_string = cmd\n cmd = cmd+'\\n'\n pass\n retval = None\n self.core.start(start_opts)\n try:\n retval = eval(cmd, globals_, locals_)\n except SyntaxError:\n try:\n exec(cmd, globals_, locals_)\n except DebuggerQuit:\n pass\n except DebuggerQuit:\n pass\n pass\n except DebuggerQuit:\n pass\n finally:\n self.core.stop()\n return retval"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef run_exec(self, cmd, start_opts=None, globals_=None, locals_=None):\n if globals_ is None:\n globals_ = globals()\n if locals_ is None:\n locals_ = globals_\n if not isinstance(cmd, types.CodeType):\n cmd = cmd+'\\n'\n pass\n self.core.start(start_opts)\n try:\n exec(cmd, globals_, locals_)\n except DebuggerQuit:\n pass\n finally:\n self.core.stop()\n return", "response": "Run the debugger on string cmd."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef run_call(self, func, start_opts=None, *args, **kwds):\n res = None\n self.core.start(opts=start_opts)\n try:\n res = func(*args, **kwds)\n except DebuggerQuit:\n pass\n finally:\n self.core.stop()\n return res", "response": "Run debugger on function call."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef run_eval(self, expr, start_opts=None, globals_=None, locals_=None):\n if globals_ is None:\n globals_ = globals()\n if locals_ is None:\n locals_ = globals_\n if not isinstance(expr, types.CodeType):\n self.eval_string = expr\n expr = expr+'\\n'\n pass\n retval = None\n self.core.start(start_opts)\n try:\n retval = eval(expr, globals_, locals_)\n except DebuggerQuit:\n 
pass\n finally:\n pyficache.remove_remap_file('')\n self.core.stop()\n return retval", "response": "Run the debugger on string expr and return the result."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrun a Python script.", "response": "def run_script(self, filename, start_opts=None, globals_=None,\n locals_=None):\n \"\"\" Run debugger on Python script `filename'. The script may\n inspect sys.argv for command arguments. `globals_' and\n `locals_' are the dictionaries to use for local and global\n variables. If `globals' is not given, globals() (the current\n global variables) is used. If `locals_' is not given, it\n becomes a copy of `globals_'.\n\n True is returned if the program terminated normally and False\n if the debugger initiated a quit or the program did not normally\n terminate.\n\n See also `run_call' if what you to debug a function call,\n `run_eval' if you want to debug an expression, and `run' if you\n want to debug general Python statements not inside a file.\n \"\"\"\n self.mainpyfile = self.core.canonic(filename)\n\n # Start with fresh empty copy of globals and locals and tell the script\n # that it's being run as __main__ to avoid scripts being able to access\n # the debugger namespace.\n if globals_ is None:\n import __main__ # NOQA\n globals_ = {\"__name__\" : \"__main__\",\n \"__file__\" : self.mainpyfile,\n \"__builtins__\" : __builtins__\n } # NOQA\n if locals_ is None:\n locals_ = globals_\n retval = False\n self.core.execution_status = 'Running'\n try:\n compiled = compile(open(self.mainpyfile).read(),\n self.mainpyfile, 'exec')\n self.core.start(start_opts)\n exec(compiled, globals_, locals_)\n retval = True\n except SyntaxError:\n print(sys.exc_info()[1])\n retval = False\n pass\n except IOError:\n print(sys.exc_info()[1])\n except DebuggerQuit:\n retval = False\n pass\n except DebuggerRestart:\n self.core.execution_status = 'Restart requested'\n raise DebuggerRestart\n finally:\n 
self.core.stop(options={'remove': True})\n return retval"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsplits a command line's arguments in a shell-like manner, returned as a list of lists. Use ';;' with white space to indicate separate commands.", "response": "def arg_split(s, posix=False):\n \"\"\"Split a command line's arguments in a shell-like manner returned\n as a list of lists. Use ';;' with white space to indicate separate\n commands.\n\n This is a modified version of the standard library's shlex.split()\n function, but with a default of posix=False for splitting, so that quotes\n in inputs are respected.\n \"\"\"\n\n args_list = [[]]\n if isinstance(s, bytes):\n s = s.decode(\"utf-8\")\n lex = shlex.shlex(s, posix=posix)\n\n lex.whitespace_split = True\n args = list(lex)\n for arg in args:\n if ';;' == arg:\n args_list.append([])\n else:\n args_list[-1].append(arg)\n pass\n pass\n return args_list"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a stack of frames which the debugger will use for showing backtraces and in frame switching.", "response": "def get_stack(f, t, botframe, proc_obj=None):\n \"\"\"Return a stack of frames which the debugger will use for\n showing backtraces and in frame switching. As such various frames\n that are really around may be excluded unless we are debugging the\n debugger. 
Also we will add traceback frame on top if that\n exists.\"\"\"\n exclude_frame = lambda f: False\n if proc_obj:\n settings = proc_obj.debugger.settings\n if not settings['dbg_trepan']:\n exclude_frame = lambda f: \\\n proc_obj.core.ignore_filter.is_included(f)\n pass\n pass\n stack = []\n if t and t.tb_frame is f:\n t = t.tb_next\n while f is not None:\n if exclude_frame(f): break # See commented alternative below\n stack.append((f, f.f_lineno))\n # bdb has:\n # if f is botframe: break\n f = f.f_back\n pass\n stack.reverse()\n i = max(0, len(stack) - 1)\n while t is not None:\n stack.append((t.tb_frame, t.tb_lineno))\n t = t.tb_next\n pass\n return stack, i"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nruns each function in hooks with args", "response": "def run_hooks(obj, hooks, *args):\n \"\"\"Run each function in `hooks' with args\"\"\"\n for hook in hooks:\n if hook(obj, *args): return True\n pass\n return False"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nprints out a source location information.", "response": "def print_source_location_info(print_fn, filename, lineno, fn_name=None,\n f_lasti=None, remapped_file=None):\n \"\"\"Print out a source location , e.g. the first line in\n line in:\n (/tmp.py:2 @21): \n L -- 2 import sys,os\n (trepan3k)\n \"\"\"\n if remapped_file:\n mess = '(%s:%s remapped %s' % (remapped_file, lineno, filename)\n else:\n mess = '(%s:%s' % (filename, lineno)\n if f_lasti and f_lasti != -1:\n mess += ' @%d' % f_lasti\n pass\n mess += '):'\n if fn_name and fn_name != '?':\n mess += \" %s\" % fn_name\n pass\n print_fn(mess)\n return"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nprinting the location of the currently selected item in the system.", "response": "def print_location(proc_obj):\n \"\"\"Show where we are. GUI's and front-end interfaces often\n use this to update displays. 
So it is helpful to make sure\n we give at least some place that's located in a file.\n \"\"\"\n i_stack = proc_obj.curindex\n if i_stack is None or proc_obj.stack is None:\n return False\n core_obj = proc_obj.core\n dbgr_obj = proc_obj.debugger\n intf_obj = dbgr_obj.intf[-1]\n\n # Evaluation routines like \"exec\" don't show useful location\n # info. In these cases, we will use the position before that in\n # the stack. Hence the looping below which in practices loops\n # once and sometimes twice.\n remapped_file = None\n source_text = None\n while i_stack >= 0:\n frame_lineno = proc_obj.stack[i_stack]\n i_stack -= 1\n frame, lineno = frame_lineno\n\n# # Next check to see that local variable breadcrumb exists and\n# # has the magic dynamic value.\n# # If so, it's us and we don't normally show this.a\n# if 'breadcrumb' in frame.f_locals:\n# if self.run == frame.f_locals['breadcrumb']:\n# break\n\n filename = Mstack.frame2file(core_obj, frame, canonic=False)\n if '' == filename and dbgr_obj.eval_string:\n remapped_file = filename\n filename = pyficache.unmap_file(filename)\n if '' == filename:\n remapped = cmdfns.source_tempfile_remap('eval_string',\n dbgr_obj.eval_string)\n pyficache.remap_file(filename, remapped)\n filename = remapped\n lineno = pyficache.unmap_file_line(filename, lineno)\n pass\n pass\n elif '' == filename:\n source_text = deparse_fn(frame.f_code)\n filename = \"\" % source_text\n pass\n else:\n m = re.search('^', filename)\n if m and m.group(1) in pyficache.file2file_remap:\n remapped_file = pyficache.file2file_remap[m.group(1)]\n pass\n elif filename in pyficache.file2file_remap:\n remapped_file = pyficache.unmap_file(filename)\n # FIXME: a remapped_file shouldn't be the same as its unmapped version\n if remapped_file == filename:\n remapped_file = None\n pass\n pass\n elif m and m.group(1) in sys.modules:\n remapped_file = m.group(1)\n pyficache.remap_file(filename, remapped_file)\n pass\n\n opts = {\n 'reload_on_change' : 
proc_obj.settings('reload'),\n 'output' : proc_obj.settings('highlight')\n }\n\n if 'style' in proc_obj.debugger.settings:\n opts['style'] = proc_obj.settings('style')\n\n pyficache.update_cache(filename)\n line = pyficache.getline(filename, lineno, opts)\n if not line:\n if (not source_text and\n filename.startswith(\"', filename)\n if m and m.group(1):\n remapped_file = m.group(1)\n try_module = sys.modules.get(remapped_file)\n if (try_module and inspect.ismodule(try_module) and\n hasattr(try_module, '__file__')):\n remapped_file = sys.modules[remapped_file].__file__\n pyficache.remap_file(filename, remapped_file)\n line = linecache.getline(remapped_file, lineno,\n proc_obj.curframe.f_globals)\n else:\n remapped_file = m.group(1)\n code = proc_obj.curframe.f_code\n filename, line = cmdfns.deparse_getline(code, remapped_file,\n lineno, opts)\n pass\n pass\n\n try:\n match, reason = Mstack.check_path_with_frame(frame, filename)\n if not match:\n if filename not in warned_file_mismatches:\n proc_obj.errmsg(reason)\n warned_file_mismatches.add(filename)\n except:\n pass\n\n fn_name = frame.f_code.co_name\n last_i = frame.f_lasti\n print_source_location_info(intf_obj.msg, filename, lineno, fn_name,\n remapped_file = remapped_file,\n f_lasti = last_i)\n if line and len(line.strip()) != 0:\n if proc_obj.event:\n print_source_line(intf_obj.msg, lineno, line,\n proc_obj.event2short[proc_obj.event])\n pass\n if '' != filename: break\n pass\n\n if proc_obj.event in ['return', 'exception']:\n val = proc_obj.event_arg\n intf_obj.msg('R=> %s' % proc_obj._saferepr(val))\n pass\n return True"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncommand event processor : reading a commands do something with them.", "response": "def event_processor(self, frame, event, event_arg, prompt='trepan3k'):\n 'command event processor: reading a commands do something with them.'\n self.frame = frame\n self.event = event\n self.event_arg = event_arg\n\n 
filename = frame.f_code.co_filename\n lineno = frame.f_lineno\n line = linecache.getline(filename, lineno, frame.f_globals)\n if not line:\n opts = {'output': 'plain',\n 'reload_on_change': self.settings('reload'),\n 'strip_nl': False}\n m = re.search('^', filename)\n if m and m.group(1):\n filename = pyficache.unmap_file(m.group(1))\n line = pyficache.getline(filename, lineno, opts)\n self.current_source_text = line\n if self.settings('skip') is not None:\n if Mbytecode.is_def_stmt(line, frame):\n return True\n if Mbytecode.is_class_def(line, frame):\n return True\n pass\n self.thread_name = Mthread.current_thread_name()\n self.frame_thread_name = self.thread_name\n self.set_prompt(prompt)\n self.process_commands()\n if filename == '': pyficache.remove_remap_file('')\n return True"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nremoving memory of state variables set in the command processor", "response": "def forget(self):\n \"\"\" Remove memory of state variables set in the command processor \"\"\"\n self.stack = []\n self.curindex = 0\n self.curframe = None\n self.thread_name = None\n self.frame_thread_name = None\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_an_int(self, arg, msg_on_error, min_value=None, max_value=None):\n ret_value = self.get_int_noerr(arg)\n if ret_value is None:\n if msg_on_error:\n self.errmsg(msg_on_error)\n else:\n self.errmsg('Expecting an integer, got: %s.' % str(arg))\n pass\n return None\n if min_value and ret_value < min_value:\n self.errmsg('Expecting integer value to be at least %d, got: %d.' %\n (min_value, ret_value))\n return None\n elif max_value and ret_value > max_value:\n self.errmsg('Expecting integer value to be at most %d, got: %d.' 
%\n (max_value, ret_value))\n return None\n return ret_value", "response": "Like get_int_noerr but if there's a stack frame use that\n in evaluation."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_int_noerr(self, arg):\n if self.curframe:\n g = self.curframe.f_globals\n l = self.curframe.f_locals\n else:\n g = globals()\n l = locals()\n pass\n try:\n val = int(eval(arg, g, l))\n except (SyntaxError, NameError, ValueError, TypeError):\n return None\n return val", "response": "Eval arg and if it is an integer, return the value. Otherwise return None."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_int(self, arg, min_value=0, default=1, cmdname=None,\n at_most=None):\n \"\"\"If no argument use the default. If arg is an integer between at\n least min_value and at_most, use that. Otherwise report an error.\n If there's a stack frame use that in evaluation.\"\"\"\n\n if arg is None: return default\n default = self.get_int_noerr(arg)\n if default is None:\n if cmdname:\n self.errmsg((\"Command '%s' expects an integer; \"\n + \"got: %s.\") % (cmdname, str(arg)))\n else:\n self.errmsg('Expecting a positive integer, got: %s'\n % str(arg))\n pass\n return None\n pass\n if default < min_value:\n if cmdname:\n self.errmsg((\"Command '%s' expects an integer at least\" +\n ' %d; got: %d.')\n % (cmdname, min_value, default))\n else:\n self.errmsg((\"Expecting a positive integer at least\" +\n ' %d; got: %d')\n % (min_value, default))\n pass\n return None\n elif at_most and default > at_most:\n if cmdname:\n self.errmsg((\"Command '%s' expects an integer at most\" +\n ' %d; got: %d.')\n % (cmdname, at_most, default))\n else:\n self.errmsg((\"Expecting an integer at most %d; got: %d\")\n % (at_most, default))\n pass\n pass\n return default", "response": "Get the integer value from the command line."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 
3 that\nchecks whether the command is ok for running.", "response": "def ok_for_running(self, cmd_obj, name, nargs):\n \"\"\"We separate some of the common debugger command checks here:\n whether it makes sense to run the command in this execution state,\n if the command has the right number of arguments and so on.\n \"\"\"\n if hasattr(cmd_obj, 'execution_set'):\n if not (self.core.execution_status in cmd_obj.execution_set):\n part1 = (\"Command '%s' is not available for execution status:\"\n % name)\n mess = Mmisc.wrapped_lines(part1,\n self.core.execution_status,\n self.debugger.settings['width'])\n self.errmsg(mess)\n return False\n pass\n if self.frame is None and cmd_obj.need_stack:\n self.intf[-1].errmsg(\"Command '%s' needs an execution stack.\"\n % name)\n return False\n if nargs < cmd_obj.min_args:\n self.errmsg((\"Command '%s' needs at least %d argument(s); \" +\n \"got %d.\") %\n (name, cmd_obj.min_args, nargs))\n return False\n elif cmd_obj.max_args is not None and nargs > cmd_obj.max_args:\n self.errmsg((\"Command '%s' can take at most %d argument(s);\" +\n \" got %d.\") %\n (name, cmd_obj.max_args, nargs))\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef setup(self):\n self.forget()\n if self.settings('dbg_trepan'):\n self.frame = inspect.currentframe()\n pass\n if self.event in ['exception', 'c_exception']:\n exc_type, exc_value, exc_traceback = self.event_arg\n else:\n _, _, exc_traceback = (None, None, None,) # NOQA\n pass\n if self.frame or exc_traceback:\n self.stack, self.curindex = \\\n get_stack(self.frame, exc_traceback, None, self)\n self.curframe = self.stack[self.curindex][0]\n self.thread_name = Mthread.current_thread_name()\n if exc_traceback:\n self.list_lineno = traceback.extract_tb(exc_traceback, 1)[0][1]\n self.list_offset = self.curframe.f_lasti\n self.list_object = self.curframe\n else:\n self.stack = self.curframe = \\\n self.botframe = None\n pass\n if 
self.curframe:\n self.list_lineno = \\\n max(1, inspect.getlineno(self.curframe)\n - int(self.settings('listsize') / 2)) - 1\n self.list_offset = self.curframe.f_lasti\n self.list_filename = self.curframe.f_code.co_filename\n self.list_object = self.curframe\n else:\n if not exc_traceback: self.list_lineno = None\n pass\n # if self.execRcLines()==1: return True\n\n # FIXME: do we want to save self.list_lineno a second place\n # so that we can do 'list .' and go back to the first place we listed?\n return False", "response": "This method is called before entering the debugger - command\n loop."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef queue_startfile(self, cmdfile):\n expanded_cmdfile = os.path.expanduser(cmdfile)\n is_readable = Mfile.readable(expanded_cmdfile)\n if is_readable:\n self.cmd_queue.append('source ' + expanded_cmdfile)\n elif is_readable is None:\n self.errmsg(\"source file '%s' doesn't exist\" % expanded_cmdfile)\n else:\n self.errmsg(\"source file '%s' is not readable\" %\n expanded_cmdfile)\n pass\n return", "response": "Arrange for file of debugger commands to get read in the\n process - command loop."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef read_history_file(self):\n histfile = self.debugger.intf[-1].histfile\n try:\n import readline\n readline.read_history_file(histfile)\n except IOError:\n pass\n except ImportError:\n pass\n return", "response": "Read the command history file -- possibly."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write_history_file(self):\n settings = self.debugger.settings\n histfile = self.debugger.intf[-1].histfile\n if settings['hist_save']:\n try:\n import readline\n try:\n readline.write_history_file(histfile)\n except IOError:\n pass\n except ImportError:\n pass\n pass\n return", "response": "Write the 
command history file -- possibly."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _populate_commands(self):\n from trepan.processor import command as Mcommand\n if hasattr(Mcommand, '__modules__'):\n return self.populate_commands_easy_install(Mcommand)\n else:\n return self.populate_commands_pip(Mcommand)", "response": "Create an instance of each of the debugger\n command."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npopulating self. lists and hashes with the commands aliases and categories.", "response": "def _populate_cmd_lists(self):\n \"\"\" Populate self.lists and hashes:\n self.commands, and self.aliases, self.category \"\"\"\n self.commands = {}\n self.aliases = {}\n self.category = {}\n# self.short_help = {}\n for cmd_instance in self.cmd_instances:\n if not hasattr(cmd_instance, 'aliases'): continue\n alias_names = cmd_instance.aliases\n cmd_name = cmd_instance.name\n self.commands[cmd_name] = cmd_instance\n for alias_name in alias_names:\n self.aliases[alias_name] = cmd_name\n pass\n cat = getattr(cmd_instance, 'category')\n if cat and self.category.get(cat):\n self.category[cat].append(cmd_name)\n else:\n self.category[cat] = [cmd_name]\n pass\n# sh = getattr(cmd_instance, 'short_help')\n# if sh:\n# self.short_help[cmd_name] = getattr(c, 'short_help')\n# pass\n pass\n for k in list(self.category.keys()):\n self.category[k].sort()\n pass\n\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfinds all starting matches in dictionary aliases that start with prefix but filter out any matches already in the expanded dictionary.", "response": "def complete_token_filtered_with_next(aliases, prefix, expanded, commands):\n \"\"\"Find all starting matches in dictionary *aliases* that start\n with *prefix*, but filter out any matches already in\n *expanded*.\"\"\"\n\n complete_ary = list(aliases.keys())\n 
expanded_ary = list(expanded.keys())\n # result = [cmd for cmd in\n # complete_ary if cmd.startswith(prefix) and not (\n # cmd in aliases and\n # 0 == len(set(expanded_ary) - set([aliases[cmd]])))]\n result = []\n for cmd in complete_ary:\n if cmd.startswith(prefix):\n if cmd in aliases and (\n 0 == len(set(expanded_ary) - set([aliases[cmd]]))):\n result.append([cmd, aliases[cmd]])\n pass\n pass\n pass\n return sorted(result, key=lambda pair: pair[0])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nfinding all starting matches in aliases that start with prefix but filter out any already in expanded", "response": "def complete_token_filtered(aliases, prefix, expanded):\n \"\"\"Find all starting matches in dictionary *aliases* that start\n with *prefix*, but filter out any matches already in *expanded*\"\"\"\n\n complete_ary = aliases.keys()\n return [cmd for cmd in complete_ary if cmd.startswith(prefix)]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfind the next token in str from start_pos we return the token and the next blank position after the token.", "response": "def next_token(str, start_pos):\n \"\"\"Find the next token in str string from start_pos, we return\n the token and the next blank position after the token or\n str.size if this is the last token. 
Tokens are delimited by\n white space.\"\"\"\n look_at = str[start_pos:]\n match = re.search('\\S', look_at)\n if match:\n pos = match.start()\n else:\n pos = 0\n pass\n next_nonblank_pos = start_pos + pos\n next_match = re.search('\\s', str[next_nonblank_pos:])\n if next_match:\n next_blank_pos = next_nonblank_pos + next_match.start()\n else:\n next_blank_pos = len(str)\n pass\n return [next_blank_pos, str[next_nonblank_pos:next_blank_pos+1].rstrip()]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef errmsg(self, msg, prefix=\"** \"):\n # self.verbose shows lines so we don't have to duplicate info\n # here. Perhaps there should be a 'terse' mode to never show\n # position info.\n if not self.verbose:\n location = (\"%s:%s: Error in source command file\"\n % (self.script_name, self.input_lineno))\n msg = \"%s%s:\\n%s%s\" %(prefix, location, prefix, msg)\n else:\n msg = \"%s%s\" %(prefix, msg)\n pass\n self.msg(msg)\n if self.abort_on_error:\n raise EOFError\n return", "response": "Common routine for reporting debugger error messages."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef read_command(self, prompt=''):\n '''Script interface to read a command. 
`prompt' is a parameter for\n compatibility and is ignored.'''\n self.input_lineno += 1\n line = self.readline()\n if self.verbose:\n location = \"%s line %s\" % (self.script_name, self.input_lineno)\n self.msg('+ %s: %s' % (location, line))\n pass\n # Do something with history?\n return line", "response": "Script interface to read a command."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nclosing both input and output.", "response": "def close(self):\n \"\"\" Closes both input and output \"\"\"\n self.state = 'closing'\n if self.input:\n self.input.close()\n pass\n if self.output:\n self.output.close()\n pass\n self.state = 'disconnected'\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreads a line of input.", "response": "def read_msg(self):\n \"\"\"Read a line of input. EOFError will be raised on EOF.\n\n Note that we don't support prompting\"\"\"\n # FIXME: do we have to create and check a buffer for\n # lines?\n if self.state == 'active':\n if not self.input:\n self.input = open(self.in_name, 'r')\n pass\n line = self.input.readline()\n if not line:\n self.state = 'disconnected'\n raise EOFError\n return line.encode(\"utf-8\")\n else:\n raise IOError(\"readline called in state: %s.\" % self.state)\n return"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef write(self, msg):\n if self.state == 'active':\n if not self.output:\n self.output = open(self.out_name, 'w')\n pass\n pass\n else:\n raise EOFError\n self.output.write(msg)\n if self.flush_after_write: self.flush()\n return", "response": "This method writes a message to the output file."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nhandle debugger options. Set `option_list' if you are writing another main program and want to extend the existing set of debugger options. The options dictionary from opt_parser is returned. sys_argv is also updated.", "response": "def process_options(pkg_version, sys_argv, option_list=None):\n \"\"\"Handle debugger options. Set `option_list' if you are writing\n another main program and want to extend the existing set of debugger\n options.\n\n The options dictionary from opt_parser is returned. sys_argv is\n also updated.\"\"\"\n usage_str=\"\"\"%prog [debugger-options]\n\n Client connection to an out-of-process trepan3k debugger session\"\"\"\n\n # serverChoices = ('TCP','FIFO', None) # we use PID for now.\n\n optparser = OptionParser(usage=usage_str, option_list=option_list,\n version=\"%%prog version %s\" % pkg_version)\n\n optparser.add_option(\"-H\", \"--host\", dest=\"host\", default='127.0.0.1',\n action=\"store\", type='string', metavar='IP-OR-HOST',\n help=\"connect IP or host name.\")\n optparser.add_option(\"-P\", \"--port\", dest=\"port\", default=1027,\n action=\"store\", type='int', metavar='NUMBER',\n help=\"Use TCP port number NUMBER for \"\n \"out-of-process connections.\")\n optparser.add_option(\"--pid\", dest=\"pid\", default=0,\n action=\"store\", type='int', metavar='NUMBER',\n help=\"Use PID to get FIFO names for \"\n \"out-of-process connections.\")\n\n optparser.disable_interspersed_args()\n\n sys.argv = list(sys_argv)\n (opts, sys.argv) = optparser.parse_args()\n return opts, sys.argv"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndisassemble classes, methods, functions, or code; with no argument, disassemble the last traceback.", "response": "def dis(msg, msg_nocr, section, errmsg, x=None, start_line=-1, end_line=None,\n relative_pos = False, highlight='light', start_offset=0, end_offset=None,\n include_header=False):\n \"\"\"Disassemble classes, methods, functions, or code.\n\n With no argument, disassemble the last traceback.\n\n \"\"\"\n lasti = -1\n if x is None:\n distb()\n return None, None\n if start_offset is None:\n start_offset = 0\n mess = ''\n if start_line > 1:\n mess += \"from line %d \" % start_line\n elif 
start_offset > 1:\n mess = \"from offset %d \" % start_offset\n if end_line:\n mess += \"to line %d\" % end_line\n elif end_offset:\n mess += \"to offset %d\" % end_offset\n\n sectioned = False\n\n\n # Try to dogpaddle to the code object for the type setting x\n if hasattr(types, 'InstanceType') and isinstance(x, types.InstanceType):\n x = x.__class__\n if inspect.ismethod(x):\n section(\"Disassembly of %s: %s\" % (x, mess))\n sectioned = True\n x = x.im_func\n elif inspect.isfunction(x) or inspect.isgeneratorfunction(x):\n section(\"Disassembly of %s: %s\" % (x, mess))\n x = x.func_code\n sectioned = True\n elif inspect.isgenerator(x):\n section(\"Disassembly of %s: %s\" % (x, mess))\n frame = x.gi_frame\n lasti = frame.f_last_i\n x = x.gi_code\n sectioned = True\n elif inspect.isframe(x):\n section(\"Disassembly of %s: %s\" % (x, mess))\n sectioned = True\n if hasattr(x, 'f_lasti'):\n lasti = x.f_lasti\n if lasti == -1: lasti = 0\n pass\n opc = get_opcode(PYTHON_VERSION, IS_PYPY)\n x = x.f_code\n if include_header:\n header_lines = Bytecode(x, opc).info().split(\"\\n\")\n header = '\\n'.join([format_token(Mformat.Comment, h) for h in header_lines])\n msg(header)\n pass\n elif inspect.iscode(x):\n pass\n\n if hasattr(x, '__dict__'): # Class or module\n items = sorted(x.__dict__.items())\n for name, x1 in items:\n if isinstance(x1, _have_code):\n if not sectioned:\n section(\"Disassembly of %s: \" % x)\n try:\n dis(msg, msg_nocr, section, errmsg, x1,\n start_line=start_line, end_line=end_line,\n relative_pos = relative_pos)\n msg(\"\")\n except TypeError:\n _, msg, _ = sys.exc_info()\n errmsg(\"Sorry:\", msg)\n pass\n pass\n pass\n pass\n elif hasattr(x, 'co_code'): # Code object\n if not sectioned:\n section(\"Disassembly of %s: \" % x)\n return disassemble(msg, msg_nocr, section, x, lasti=lasti,\n start_line=start_line, end_line=end_line,\n relative_pos = relative_pos,\n highlight = highlight,\n start_offset = start_offset,\n end_offset = end_offset)\n elif 
isinstance(x, str): # Source code\n return disassemble_string(msg, msg_nocr, x,)\n else:\n errmsg(\"Don't know how to disassemble %s objects.\" %\n type(x).__name__)\n return None, None"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef disassemble(msg, msg_nocr, section, co, lasti=-1, start_line=-1,\n end_line=None, relative_pos=False, highlight='light',\n start_offset=0, end_offset=None):\n \"\"\"Disassemble a code object.\"\"\"\n return disassemble_bytes(msg, msg_nocr, co.co_code, lasti, co.co_firstlineno,\n start_line, end_line, relative_pos,\n co.co_varnames, co.co_names, co.co_consts,\n co.co_cellvars, co.co_freevars,\n dict(findlinestarts(co)), highlight,\n start_offset=start_offset, end_offset=end_offset)", "response": "Disassemble a message object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef disassemble_bytes(orig_msg, orig_msg_nocr, code, lasti=-1, cur_line=0,\n start_line=-1, end_line=None, relative_pos=False,\n varnames=(), names=(), constants=(), cells=(),\n freevars=(), linestarts={}, highlight='light',\n start_offset=0, end_offset=None):\n \"\"\"Disassemble byte string of code. 
If end_line is negative\n it counts the number of statement linestarts to use.\"\"\"\n statement_count = 10000\n if end_line is None:\n end_line = 10000\n elif relative_pos:\n end_line += start_line -1\n pass\n\n labels = findlabels(code)\n\n null_print = lambda x: None\n if start_line > cur_line:\n msg_nocr = null_print\n msg = null_print\n else:\n msg_nocr = orig_msg_nocr\n msg = orig_msg\n\n for instr in get_instructions_bytes(code, opc, varnames, names,\n constants, cells, linestarts):\n offset = instr.offset\n if end_offset and offset > end_offset:\n break\n\n if instr.starts_line:\n if offset:\n msg(\"\")\n\n cur_line = instr.starts_line\n if (start_line and ((start_line > cur_line) or\n start_offset and start_offset > offset)) :\n msg_nocr = null_print\n msg = null_print\n else:\n statement_count -= 1\n msg_nocr = orig_msg_nocr\n msg = orig_msg\n pass\n if ((cur_line > end_line) or\n (end_offset and offset > end_offset)):\n break\n msg_nocr(format_token(Mformat.LineNumber,\n \"%4d\" % cur_line,\n highlight=highlight))\n else:\n if start_offset and offset and start_offset <= offset:\n msg_nocr = orig_msg_nocr\n msg = orig_msg\n pass\n msg_nocr(' ')\n\n if offset == lasti: msg_nocr(format_token(Mformat.Arrow, '-->',\n highlight=highlight))\n else: msg_nocr(' ')\n if offset in labels: msg_nocr(format_token(Mformat.Arrow, '>>',\n highlight=highlight))\n else: msg_nocr(' ')\n msg_nocr(repr(offset).rjust(4))\n msg_nocr(' ')\n msg_nocr(format_token(Mformat.Opcode,\n instr.opname.ljust(20),\n highlight=highlight))\n msg_nocr(repr(instr.arg).ljust(10))\n msg_nocr(' ')\n # Show argval?\n msg(format_token(Mformat.Name,\n instr.argrepr.ljust(20),\n highlight=highlight))\n pass\n\n return code, offset", "response": "Disassemble byte string of code."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a count of the number of frames", "response": "def count_frames(frame, count_start=0):\n \"Return a count of the number of 
frames\"\n count = -count_start\n while frame:\n count += 1\n frame = frame.f_back\n return count"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nformatting and return a stack entry in gdb - style.", "response": "def format_stack_entry(dbg_obj, frame_lineno, lprefix=': ',\n include_location=True, color='plain'):\n \"\"\"Format and return a stack entry gdb-style.\n Note: lprefix is not used. It is kept for compatibility.\n \"\"\"\n frame, lineno = frame_lineno\n filename = frame2file(dbg_obj.core, frame)\n\n s = ''\n if frame.f_code.co_name:\n funcname = frame.f_code.co_name\n else:\n funcname = \"\"\n pass\n s = format_token(Mformat.Function, funcname, highlight=color)\n\n args, varargs, varkw, local_vars = inspect.getargvalues(frame)\n if '' == funcname and ([], None, None,) == (args, varargs, varkw,):\n is_module = True\n if is_exec_stmt(frame):\n fn_name = format_token(Mformat.Function, 'exec', highlight=color)\n source_text = deparse_source_from_code(frame.f_code)\n s += ' %s(%s)' % (format_token(Mformat.Function, fn_name,\n highlight=color), source_text)\n else:\n fn_name = get_call_function_name(frame)\n if fn_name:\n source_text = deparse_source_from_code(frame.f_code)\n if fn_name: s += ' %s(%s)' % (format_token(Mformat.Function, fn_name,\n highlight=color), source_text)\n pass\n else:\n is_module = False\n parms=inspect.formatargvalues(args, varargs, varkw, local_vars)\n maxargstrsize = dbg_obj.settings['maxargstrsize']\n if len(parms) >= maxargstrsize:\n parms = \"%s...)\" % parms[0:maxargstrsize]\n pass\n s += parms\n pass\n\n # Note: ddd can't handle wrapped stack entries (yet).\n # The 35 is hoaky though. 
FIXME.\n if len(s) >= 35: s += \"\\n \"\n\n if '__return__' in frame.f_locals:\n rv = frame.f_locals['__return__']\n s += '->'\n s += format_token(Mformat.Return, Mrepr.repr(rv),\n highlight=color)\n pass\n\n if include_location:\n is_pseudo_file = _re_pseudo_file.match(filename)\n add_quotes_around_file = not is_pseudo_file\n if is_module:\n if filename == '':\n s += ' in exec'\n elif not is_exec_stmt(frame) and not is_pseudo_file:\n s += ' file'\n elif s == '?()':\n if is_exec_stmt(frame):\n s = 'in exec'\n # exec_str = get_exec_string(frame.f_back)\n # if exec_str != None:\n # filename = exec_str\n # add_quotes_around_file = False\n # pass\n # pass\n elif not is_pseudo_file:\n s = 'in file'\n pass\n pass\n elif not is_pseudo_file:\n s += ' called from file'\n pass\n\n if add_quotes_around_file: filename = \"'%s'\" % filename\n s += \" %s at line %s\" % (\n format_token(Mformat.Filename, filename,\n highlight=color),\n format_token(Mformat.LineNumber, \"%r\" % lineno,\n highlight=color)\n )\n return s"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the name of the call function that is currently in the frame.", "response": "def get_call_function_name(frame):\n \"\"\"If f_back is looking at a call function, return\n the name for it. 
Otherwise return None\"\"\"\n f_back = frame.f_back\n if not f_back: return None\n if 'CALL_FUNCTION' != Mbytecode.op_at_frame(f_back): return None\n\n co = f_back.f_code\n code = co.co_code\n # labels = dis.findlabels(code)\n linestarts = dict(dis.findlinestarts(co))\n offset = f_back.f_lasti\n while offset >= 0:\n if offset in linestarts:\n op = code[offset]\n offset += 1\n arg = code[offset]\n # FIXME: put this code in xdis\n extended_arg = 0\n while True:\n if PYTHON_VERSION >= 3.6:\n if op == opc.EXTENDED_ARG:\n extended_arg += (arg << 8)\n continue\n arg = code[offset] + extended_arg\n # FIXME: Python 3.6.0a1 is 2, for 3.6.a3 we have 1\n else:\n if op == opc.EXTENDED_ARG:\n extended_arg += (arg << 256)\n continue\n arg = code[offset] + code[offset+1]*256 + extended_arg\n break\n\n return co.co_names[arg]\n offset -= 1\n pass\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nprints count entries of the stack trace", "response": "def print_stack_trace(proc_obj, count=None, color='plain', opts={}):\n \"Print count entries of the stack trace\"\n if count is None:\n n=len(proc_obj.stack)\n else:\n n=min(len(proc_obj.stack), count)\n try:\n for i in range(n):\n print_stack_entry(proc_obj, i, color=color, opts=opts)\n except KeyboardInterrupt:\n pass\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nevaluate an object in a current context and print it.", "response": "def eval_print_obj(arg, frame, format=None, short=False):\n \"\"\"Return a string representation of an object \"\"\"\n try:\n if not frame:\n # ?? 
Should we have set up a dummy globals\n # to have persistence?\n val = eval(arg, None, None)\n else:\n val = eval(arg, frame.f_globals, frame.f_locals)\n pass\n except:\n return 'No symbol \"' + arg + '\" in current context.'\n\n return print_obj(arg, val, format, short)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef print_obj(arg, val, format=None, short=False):\n what = arg\n if format:\n what = format + ' ' + arg\n val = Mprint.printf(val, format)\n pass\n s = '%s = %s' % (what, val)\n if not short:\n s += '\\n type = %s' % type(val)\n # Try to list the members of a class.\n # Not sure if this is correct or the\n # best way to do it.\n s = print_dict(s, val, \"object variables\")\n if hasattr(val, \"__class__\"):\n s = print_dict(s, val.__class__, \"class variables\")\n pass\n pass\n return s", "response": "Return a string representation of an object: its value, its type, and (unless short is given) its object and class variables."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfinding a subcmd with the given prefix.", "response": "def lookup(self, subcmd_prefix):\n \"\"\"Find subcmd in self.subcmds\"\"\"\n for subcmd_name in list(self.subcmds.keys()):\n if subcmd_name.startswith(subcmd_prefix) \\\n and len(subcmd_prefix) >= \\\n self.subcmds[subcmd_name].__class__.min_abbrev:\n return self.subcmds[subcmd_name]\n pass\n return None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nshowing short help for a subcommand.", "response": "def short_help(self, subcmd_cb, subcmd_name, label=False):\n \"\"\"Show short help for a subcommand.\"\"\"\n entry = self.lookup(subcmd_name)\n if entry:\n if label:\n prefix = entry.name\n else:\n prefix = ''\n pass\n if hasattr(entry, 'short_help'):\n if prefix: prefix += ' -- '\n self.cmd_obj.msg(prefix + entry.short_help)\n pass\n pass\n else:\n self.undefined_subcmd(\"help\", subcmd_name)\n pass\n return"} {"SOURCE": "codesearchnet", "instruction": "Make a summary 
of the following Python 3 code\ndef add(self, subcmd_cb):\n subcmd_name = subcmd_cb.name\n self.subcmds[subcmd_name] = subcmd_cb\n\n # We keep a list of subcommands to assist command completion\n self.cmdlist.append(subcmd_name)", "response": "Add a subcmd to the available subcommands for this object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nrun subcmd_name with args using obj for the environent", "response": "def run(self, subcmd_name, arg):\n \"\"\"Run subcmd_name with args using obj for the environent\"\"\"\n entry=self.lookup(subcmd_name)\n if entry:\n entry['callback'](arg)\n else:\n self.cmdproc.undefined_cmd(entry.__class__.name, subcmd_name)\n pass\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nyielding the Sphinx - only markup for the current language.", "response": "def yield_sphinx_only_markup(lines):\n \"\"\"\n :param file_inp: a `filename` or ``sys.stdin``?\n :param file_out: a `filename` or ``sys.stdout`?`\n\n \"\"\"\n substs = [\n ## Selected Sphinx-only Roles.\n #\n (r':abbr:`([^`]+)`', r'\\1'),\n (r':ref:`([^`]+)`', r'`\\1`_'),\n (r':term:`([^`]+)`', r'**\\1**'),\n (r':dfn:`([^`]+)`', r'**\\1**'),\n (r':(samp|guilabel|menuselection):`([^`]+)`', r'``\\2``'),\n\n\n ## Sphinx-only roles:\n # :foo:`bar` --> foo(``bar``)\n # :a:foo:`bar` XXX afoo(``bar``)\n #\n #(r'(:(\\w+))?:(\\w+):`([^`]*)`', r'\\2\\3(``\\4``)'),\n (r':(\\w+):`([^`]*)`', r'\\1(``\\2``)'),\n\n\n ## Sphinx-only Directives.\n #\n (r'\\.\\. doctest', r'code-block'),\n (r'\\.\\. plot::', r'.. '),\n (r'\\.\\. seealso', r'info'),\n (r'\\.\\. glossary', r'rubric'),\n (r'\\.\\. figure::', r'.. 
'),\n\n\n ## Other\n #\n (r'\\|version\\|', r'x.x.x'),\n ]\n\n regex_subs = [ (re.compile(regex, re.IGNORECASE), sub) for (regex, sub) in substs ]\n\n def clean_line(line):\n try:\n for (regex, sub) in regex_subs:\n line = regex.sub(sub, line)\n except Exception as ex:\n print(\"ERROR: %s, (line(%s)\"%(regex, sub))\n raise ex\n\n return line\n\n for line in lines:\n yield clean_line(line)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nevaluates the expression and return the result.", "response": "def run_eval(expression, debug_opts=None, start_opts=None, globals_=None,\n locals_=None, tb_fn = None):\n\n \"\"\"Evaluate the expression (given as a string) under debugger\n control starting with the statement subsequent to the place that\n this appears in your program.\n\n This is a wrapper to Debugger.run_eval(), so see that.\n\n When run_eval() returns, it returns the value of the expression.\n Otherwise this function is similar to run().\n \"\"\"\n\n dbg = Mdebugger.Trepan(opts=debug_opts)\n try:\n return dbg.run_eval(expression, start_opts=start_opts,\n globals_=globals_, locals_=locals_)\n except:\n dbg.core.trace_hook_suspend = True\n if start_opts and 'tb_fn' in start_opts: tb_fn = start_opts['tb_fn']\n Mpost_mortem.uncaught_exception(dbg, tb_fn)\n finally:\n dbg.core.trace_hook_suspend = False\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run_call(func, debug_opts=None, start_opts=None, *args, **kwds):\n\n \"\"\"Call the function (a function or method object, not a string)\n with the given arguments starting with the statement subsequent to\n the place that this appears in your program.\n\n When run_call() returns, it returns whatever the function call\n returned. 
The debugger prompt appears as soon as the function is\n entered.\"\"\"\n\n dbg = Mdebugger.Trepan(opts=debug_opts)\n try:\n return dbg.run_call(func, start_opts, *args, **kwds)\n except:\n Mpost_mortem.uncaught_exception(dbg)\n pass\n return", "response": "Call the function with the given arguments starting with the statement subsequent to\n The debugger prompt appears as soon as the function is called."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nexecuting a statement in the debugger and return the result.", "response": "def run_exec(statement, debug_opts=None, start_opts=None, globals_=None,\n locals_=None):\n\n \"\"\"Execute the statement (given as a string) under debugger\n control starting with the statement subsequent to the place that\n this run_call appears in your program.\n\n This is a wrapper to Debugger.run_exec(), so see that.\n\n The debugger prompt appears before any code is executed;\n you can set breakpoints and type 'continue', or you can step\n through the statement using 'step' or 'next'\n\n The optional globals_ and locals_ arguments specify the environment\n in which the code is executed; by default the dictionary of the\n module __main__ is used.\"\"\"\n\n dbg = Mdebugger.Trepan(opts=debug_opts)\n try:\n return dbg.run_exec(statement, start_opts=start_opts,\n globals_=globals_, locals_=locals_)\n except:\n Mpost_mortem.uncaught_exception(dbg)\n pass\n return"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a new node that will be used to debug the current node.", "response": "def debug(dbg_opts=None, start_opts=None, post_mortem=True,\n step_ignore=1, level=0):\n \"\"\"\nEnter the debugger.\n\nParameters\n----------\n\nlevel : how many stack frames go back. Usually it will be\nthe default 0. But sometimes though there may be calls in setup to the debugger\nthat you may want to skip.\n\nstep_ignore : how many line events to ignore after the\ndebug() call. 
0 means don't even wait for the debug() call to finish.\n\nparam dbg_opts : is an optional \"options\" dictionary that gets fed\ntrepan.Debugger(); `start_opts' are the optional \"options\"\ndictionary that gets fed to trepan.Debugger.core.start().\n\nUse like this:\n\n.. code-block:: python\n\n ... # Possibly some Python code\n import trepan.api # Needed only once\n ... # Possibly some more Python code\n trepan.api.debug() # You can wrap inside conditional logic too\n pass # Stop will be here.\n # Below is code you want to use the debugger to do things.\n .... # more Python code\n # If you get to a place in the program where you aren't going\n # want to debug any more, but want to remove debugger trace overhead:\n trepan.api.stop()\n\nParameter \"level\" specifies how many stack frames go back. Usually it will be\nthe default 0. But sometimes though there may be calls in setup to the debugger\nthat you may want to skip.\n\nParameter \"step_ignore\" specifies how many line events to ignore after the\ndebug() call. 0 means don't even wait for the debug() call to finish.\n\nIn situations where you want an immediate stop in the \"debug\" call\nrather than the statement following it (\"pass\" above), add parameter\nstep_ignore=0 to debug() like this::\n\n import trepan.api # Needed only once\n # ... as before\n trepan.api.debug(step_ignore=0)\n # ... as before\n\nModule variable _debugger_obj_ from module trepan.debugger is used as\nthe debugger instance variable; it can be subsequently used to change\nsettings or alter behavior. It should be of type Debugger (found in\nmodule trepan). 
If not, it will get changed to that type::\n\n $ python\n >>> from trepan.debugger import debugger_obj\n >>> type(debugger_obj)\n \n >>> import trepan.api\n >>> trepan.api.debug()\n ...\n (Trepan) c\n >>> from trepan.debugger import debugger_obj\n >>> debugger_obj\n \n >>>\n\nIf however you want your own separate debugger instance, you can\ncreate it from the debugger _class Debugger()_ from module\ntrepan.debugger::\n\n $ python\n >>> from trepan.debugger import Debugger\n >>> dbgr = Debugger() # Add options as desired\n >>> dbgr\n \n\n`dbg_opts' is an optional \"options\" dictionary that gets fed\ntrepan.Debugger(); `start_opts' are the optional \"options\"\ndictionary that gets fed to trepan.Debugger.core.start().\n\"\"\"\n if not isinstance(Mdebugger.debugger_obj, Mdebugger.Trepan):\n Mdebugger.debugger_obj = Mdebugger.Trepan(dbg_opts)\n Mdebugger.debugger_obj.core.add_ignore(debug, stop)\n pass\n core = Mdebugger.debugger_obj.core\n frame = sys._getframe(0+level)\n core.set_next(frame)\n if start_opts and 'startup-profile' in start_opts and start_opts['startup-profile']:\n dbg_initfiles = start_opts['startup-profile']\n from trepan import options\n options.add_startup_file(dbg_initfiles)\n for init_cmdfile in dbg_initfiles:\n core.processor.queue_startfile(init_cmdfile)\n\n if not core.is_started():\n core.start(start_opts)\n pass\n if post_mortem:\n debugger_on_post_mortem()\n pass\n if 0 == step_ignore:\n frame = sys._getframe(1+level)\n core.stop_reason = 'at a debug() call'\n old_trace_hook_suspend = core.trace_hook_suspend\n core.trace_hook_suspend = True\n core.processor.event_processor(frame, 'line', None)\n core.trace_hook_suspend = old_trace_hook_suspend\n else:\n core.step_ignore = step_ignore-1\n pass\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlists the command categories and a short description of each.", "response": "def list_categories(self):\n \"\"\"List the command categories and a 
short description of each.\"\"\"\n self.section(\"Classes of commands:\")\n cats = list(categories.keys())\n cats.sort()\n for cat in cats: # Foo! iteritems() doesn't do sorting\n self.msg(\" %-13s -- %s\" % (cat, categories[cat]))\n pass\n final_msg = \"\"\"\nType `help` followed by a class name for a list of commands in that class.\nType `help aliases` for a list of current aliases.\nType `help macros` for a list of current macros.\nType `help syntax *item*` for help on syntax *item*\nType `help *` for the list of all commands.\nType `help` *regexp* for the list of commands matching /^#{*regexp*}/\nType `help` followed by command name for full documentation.\n\"\"\"\n for line in re.compile('\\n').split(final_msg.rstrip('\\n')):\n self.rst_msg(line)\n pass\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef show_category(self, category, args):\n n2cmd = self.proc.commands\n names = list(n2cmd.keys())\n if len(args) == 1 and args[0] == '*':\n self.section(\"Commands in class %s:\" % category)\n cmds = [cmd for cmd in names if category == n2cmd[cmd].category]\n cmds.sort()\n self.msg_nocr(self.columnize_commands(cmds))\n return\n\n self.msg(\"%s.\\n\" % categories[category])\n self.section(\"List of commands:\")\n names.sort()\n for name in names: # Foo! 
iteritems() doesn't do sorting\n if category != n2cmd[name].category: continue\n self.msg(\"%-13s -- %s\" % (name, n2cmd[name].short_help,))\n pass\n return", "response": "Show short help for all commands in category."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef readable(path):\n try:\n st = os.stat(path)\n return 0 != st.st_mode & READABLE_MASK\n except os.error:\n return None\n return True", "response": "Test whether a path exists and is readable."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nparses the argument as filename module : lineno", "response": "def parse_position(errmsg, arg):\n \"\"\"parse_position(errmsg, arg)->(fn, name, lineno)\n\n Parse arg as [filename|module:]lineno\n Make sure it works for C:\\foo\\bar.py:12\n \"\"\"\n colon = arg.rfind(':')\n if colon >= 0:\n filename = arg[:colon].rstrip()\n m, f = lookupmodule(filename)\n if not f:\n errmsg(\"'%s' not found using sys.path\" % filename)\n return (None, None, None)\n else:\n filename = pyficache.pyc2py(f)\n arg = arg[colon+1:].lstrip()\n pass\n try:\n lineno = int(arg)\n except TypeError:\n errmsg(\"Bad line number: %s\", str(arg))\n return (None, filename, None)\n return (None, filename, lineno)\n return (None, None, None)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfinding the first frame that is a debugged frame.", "response": "def find_debugged_frame(frame):\n \"\"\"Find the first frame that is a debugged frame. We do this\n Generally we want traceback information without polluting it with\n debugger frames. We can tell these because those are frames on the\n top which don't have f_trace set. 
So we'll look back from the top\n to find the first frame where f_trace is set.\n \"\"\"\n f_prev = f = frame\n while f is not None and f.f_trace is None:\n f_prev = f\n f = f.f_back\n pass\n if f_prev:\n val = f_prev.f_locals.get('tracer_func_frame')\n if val == f_prev:\n if f_prev.f_back:\n f_prev = f_prev.f_back\n pass\n pass\n pass\n else:\n return frame\n return f_prev"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef map_thread_names():\n '''Invert threading._active'''\n name2id = {}\n for thread_id in list(threading._active.keys()):\n thread = threading._active[thread_id]\n name = thread.getName()\n if name not in list(name2id.keys()):\n name2id[name] = thread_id\n pass\n pass\n return name2id", "response": "Invert threading._active and return a dictionary mapping thread names to thread ids."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets where input will be read from.", "response": "def open(self, inp, opts=None):\n \"\"\"Use this to set where to read from.\n \"\"\"\n if isinstance(inp, list):\n self.input = inp\n else:\n raise IOError(\"Invalid input type (%s) for %s\" % (type(inp), inp))\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreading a line of input.", "response": "def readline(self, use_raw=None, prompt=''):\n \"\"\"Read a line of input. EOFError will be raised on EOF.\n\n Note that we don't support prompting\"\"\"\n if self.closed: raise ValueError\n if 0 == len(self.input):\n self.closed = True\n raise EOFError\n line = self.input[0]\n del self.input[0]\n return line"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nuse this to set where to write to. output can be a string or a file object. 
This code raises IOError on error.\n\n If another file was previously open upon calling this open,\n that will be stacked and will come back into use after\n a close_write().\n \"\"\"\n if isinstance(output, list):\n self.output = output\n else:\n raise IOError(\"Invalid output type (%s) for %s\" % (type(output),\n output))\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreads debugger startup file(s) and adds them to dbg_initfiles.", "response": "def add_startup_file(dbg_initfiles):\n \"\"\" Read debugger startup file(s): both python code and\n debugger profile to dbg_initfiles.\"\"\"\n\n startup_python_file = default_configfile('profile.py')\n\n if Mfile.readable(startup_python_file):\n with codecs.open(startup_python_file, 'r', encoding='utf8') as fp:\n exec(fp.read())\n\n startup_trepan_file = default_configfile('profile')\n if Mfile.readable(startup_trepan_file):\n if startup_trepan_file not in dbg_initfiles:\n dbg_initfiles.append(startup_trepan_file)\n pass\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef process_options(debugger_name, pkg_version, sys_argv, option_list=None):\n usage_str=\"\"\"%prog [debugger-options] [python-script [script-options...]]\n\n Runs the extended python debugger\"\"\"\n\n # serverChoices = ('TCP','FIFO', None)\n\n optparser = OptionParser(usage=usage_str, option_list=option_list,\n version=\"%%prog version %s\" % pkg_version)\n\n optparser.add_option(\"-X\", \"--trace\", dest=\"linetrace\",\n action=\"store_true\", default=False,\n help=\"Show lines before executing them. \"\n \"This option also sets --batch\")\n optparser.add_option(\"-F\", \"--fntrace\", dest=\"fntrace\",\n action=\"store_true\", default=False,\n help=\"Show functions before executing them. 
\"\n \"This option also sets --batch\")\n optparser.add_option(\"--basename\", dest=\"basename\",\n action=\"store_true\", default=False,\n help=\"Filenames strip off basename, \"\n \"(e.g. for regression tests)\"\n )\n # optparser.add_option(\"--batch\", dest=\"noninteractive\",\n # action=\"store_true\", default=False,\n # help=\"Don't run interactive commands shell on \"+\n # \"stops.\")\n optparser.add_option(\"--client\", dest=\"client\",\n action='store_true',\n help=\"Connect to an existing debugger process \"\n \"started with the --server option. \"\n \"See options for client.\")\n optparser.add_option(\"-x\", \"--command\", dest=\"command\",\n action=\"store\", type='string', metavar='FILE',\n help=\"Execute commands from FILE.\")\n optparser.add_option(\"--cd\", dest=\"cd\",\n action=\"store\", type='string', metavar='DIR',\n help=\"Change current directory to DIR.\")\n optparser.add_option(\"--confirm\", dest=\"confirm\",\n action=\"store_true\", default=True,\n help=\"Confirm potentially dangerous operations\")\n optparser.add_option(\"--dbg_trepan\", dest=\"dbg_trepan\",\n action=\"store_true\", default=False,\n help=\"Debug the debugger\")\n optparser.add_option(\"--different\", dest=\"different\",\n action=\"store_true\", default=True,\n help=\"Consecutive stops should have \"\n \"different positions\")\n # optparser.add_option(\"--error\", dest=\"errors\", metavar='FILE',\n # action=\"store\", type='string',\n # help=\"Write debugger's error output \"\n # + \"(stderr) to FILE\")\n optparser.add_option(\"-e\", \"--exec\", dest=\"execute\", type=\"string\",\n help=\"list of debugger commands to \" +\n \"execute. Separate the commands with ;;\")\n\n optparser.add_option(\"-H\", \"--host\", dest=\"host\", default='127.0.0.1',\n action=\"store\", type='string', metavar='IP-OR-HOST',\n help=\"connect IP or host name. 
\"\n \"Only valid if --client option given.\")\n\n optparser.add_option(\"--highlight\", dest=\"highlight\",\n action=\"store\", type='string',\n metavar='{light|dark|plain}',\n default='light',\n help=\"Use syntax and terminal highlight output. \"\n \"'plain' is no highlight\")\n\n optparser.add_option(\"--private\", dest=\"private\",\n action='store_true', default=False,\n help=\"Don't register this as a global debugger\")\n\n optparser.add_option(\"--main\", dest=\"main\",\n action=\"store_true\", default=True,\n help=\"First stop should be in __main__\"\n )\n optparser.add_option(\"--no-main\", dest=\"main\",\n action=\"store_false\", default=True,\n help=\"First stop should be in __main__\"\n )\n optparser.add_option(\"--post-mortem\", dest=\"post_mortem\",\n action='store_true', default=True,\n help=(\"Enter debugger on an uncaught (fatal) \"\n \"exception\"))\n\n optparser.add_option(\"--no-post-mortem\", dest=\"post_mortem\",\n action='store_false', default=True,\n help=(\"Don't enter debugger on an uncaught (fatal) \"\n \"exception\"))\n\n optparser.add_option(\"-n\", \"--nx\", dest=\"noexecute\",\n action=\"store_true\", default=False,\n help=(\"Don't execute commands found in any \"\n \"initialization files\"))\n\n optparser.add_option(\"-o\", \"--output\", dest=\"output\", metavar='FILE',\n action=\"store\", type='string',\n help=(\"Write debugger's output (stdout) \"\n \"to FILE\"))\n optparser.add_option(\"-P\", \"--port\", dest=\"port\", default=1027,\n action=\"store\", type='int',\n help=\"Use TCP port number NUMBER for \"\n \"out-of-process connections.\")\n\n optparser.add_option(\"--server\", dest=\"server\",\n action='store_true',\n help=\"Out-of-process server connection mode\")\n\n # optparser.add_option(\"--style\", dest=\"style\",\n # action=\"store\", type='string',\n # metavar='*pygments-style*',\n # default=None,\n # help=(\"Pygments style; 'none' \"\n # \"uses 8-color rather than 256-color terminal\"))\n\n 
optparser.add_option(\"--sigcheck\", dest=\"sigcheck\",\n action=\"store_true\", default=False,\n help=\"Set to watch for signal handler changes\")\n optparser.add_option(\"-t\", \"--target\", dest=\"target\",\n help=(\"Specify a target to connect to. Arguments\"\n \" should be of form, 'protocol address'.\")),\n optparser.add_option(\"--from_ipython\", dest='from_ipython', action='store_true',\n default=False, help=\"Called from inside ipython\")\n\n # annotate option produces annotations, used in trepan.el for a\n # better emacs integration. Annotations are similar in purpose to\n # those of GDB (see that manual for a description), although the\n # syntax is different. they have the following format:\n #\n # ^Z^Zannotation-name\n # \n # ^Z^Z\n #\n # where ^Z is the ctrl-Z character, and \"annotname\" is the name of the\n # annotation. A line with only two ^Z ends the annotation (no nesting\n # allowed). See trepan.el for the usage\n optparser.add_option(\"--annotate\", default=0, type=\"int\",\n help=\"Use annotations to work inside emacs\")\n\n # Set up to stop on the first non-option because that's the name\n # of the script to be debugged on arguments following that are\n # that scripts options that should be left untouched. We would\n # not want to interpret and option for the script, e.g. --help, as\n # one one of our own, e.g. 
--help.\n\n optparser.disable_interspersed_args()\n\n sys.argv = list(sys_argv)\n # FIXME: why does this mess up integration tests?\n # (opts, sys.argv) = optparser.parse_args(sys_argv)\n (opts, sys.argv) = optparser.parse_args()\n dbg_opts = {'from_ipython': opts.from_ipython}\n\n # Handle debugger startup command files: --nx (-n) and --command.\n dbg_initfiles = []\n if not opts.noexecute:\n add_startup_file(dbg_initfiles)\n\n # As per gdb, first we execute user initialization files and then\n # we execute any file specified via --command.\n if opts.command:\n dbg_initfiles.append(opts.command)\n pass\n\n dbg_opts['proc_opts'] = {'initfile_list': dbg_initfiles}\n\n if opts.cd:\n os.chdir(opts.cd)\n pass\n\n if opts.output:\n try:\n dbg_opts['output'] = Moutput.DebuggerUserOutput(opts.output)\n except IOError:\n _, exc, _ = sys.exc_info()\n (errno, strerror) = exc.args\n print(\"I/O error in opening debugger output file %s\" % opts.output)\n print(\"error(%s): %s\" % (errno, strerror))\n except:\n print(\"Unexpected error in opening debugger output file %s\" %\n opts.output)\n print(sys.exc_info()[0])\n sys.exit(2)\n pass\n pass\n\n return opts, dbg_opts, sys.argv", "response": "Handle debugger options. Set `option_list' if you are writing\n another main program and want to extend the existing set of debugger\n options.\n\n The options dictionary from optparser is returned. 
sys_argv is\n also updated."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nhandles the options that feed into the debugger and sets the appropriate settings.", "response": "def _postprocess_options(dbg, opts):\n ''' Handle options (`opts') that feed into the debugger (`dbg')'''\n # Set dbg.settings['printset']\n print_events = []\n if opts.fntrace: print_events = ['c_call', 'c_return', 'call', 'return']\n if opts.linetrace: print_events += ['line']\n if len(print_events):\n dbg.settings['printset'] = frozenset(print_events)\n pass\n\n for setting in ('annotate', 'basename', 'different'):\n dbg.settings[setting] = getattr(opts, setting)\n pass\n\n if getattr(opts, 'highlight'):\n dbg.settings['highlight'] = opts.highlight\n else:\n dbg.settings['highlight'] = 'plain'\n\n # if getattr(opts, 'style') and opts.style != 'none':\n # dbg.settings['style'] = opts.style\n # else:\n # dbg.settings['style'] = None\n dbg.settings['style'] = None\n\n # Normally we want to set Mdebugger.debugger_obj so that one can\n # put trepan.debugger breakpoints in a program and not have more\n # than one debugger running. More than one debugger may confuse\n # users, e.g. 
set different might stop at the same line once for\n # each debugger.\n if not opts.private:\n Mdebugger.debugger_obj = dbg\n pass\n\n # if opts.errors:\n # try:\n # dbg.stderr = open(opts.errors, 'w')\n # except IOError, (errno, strerror):\n # print \"I/O in opening debugger output file %s\" % opts.errors\n # print \"error(%s): %s\" % (errno, strerror)\n # except ValueError:\n # print \"Could not convert data to an integer.\"\n # except:\n # print \"Unexpected error in opening debugger output \"\n # \"file %s\" % opts.errors\n # print sys.exc_info()[0]\n # sys.exit(2)\n\n # if opts.execute:\n # dbg.cmdqueue = list(opts.execute.split(';;'))\n\n if opts.post_mortem:\n Mapi.debugger_on_post_mortem()\n pass\n return"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nread a message from the server.", "response": "def read_remote(self):\n '''Read a message from the server (in contrast to\n the local user output channel).'''\n coded_line = self.inout.read_msg()\n if isinstance(coded_line, bytes):\n coded_line = coded_line.decode(\"utf-8\")\n control = coded_line[0]\n remote_line = coded_line[1:]\n return (control, remote_line)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_int(errmsg, arg, default=1, cmdname=None):\n if arg:\n try:\n # eval() is used so we will allow arithmetic expressions,\n # variables etc.\n default = int(eval(arg))\n except (SyntaxError, NameError, ValueError):\n if cmdname:\n errmsg(\"Command '%s' expects an integer; got: %s.\" %\n (cmdname, str(arg)))\n else:\n errmsg('Expecting an integer, got: %s.' 
% str(arg))\n pass\n raise ValueError\n return default", "response": "Get an integer from the argument; eval() is used so arithmetic expressions and variables are allowed."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_onoff(errmsg, arg, default=None, print_error=True):\n if not arg:\n if default is None:\n if print_error:\n errmsg(\"Expecting 'on', 1, 'off', or 0. Got nothing.\")\n pass\n raise ValueError\n return default\n if arg == '1' or arg == 'on': return True\n if arg == '0' or arg == 'off': return False\n\n if print_error:\n errmsg(\"Expecting 'on', 1, 'off', or 0. Got: %s.\" % str(arg))\n raise ValueError", "response": "Return True if arg is on and False if arg is off."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset a Boolean-valued debugger setting. obj is generally a subcommand that has name and debugger.settings attributes; args is a list of argument strings, defaulting to ['on'].", "response": "def run_set_bool(obj, args):\n \"\"\"set a Boolean-valued debugger setting. 'obj' is generally a\n subcommand that has 'name' and 'debugger.settings' attributes\"\"\"\n try:\n if 0 == len(args): args = ['on']\n obj.debugger.settings[obj.name] = get_onoff(obj.errmsg, args[0])\n except ValueError:\n pass\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets an Integer-valued debugger setting. obj is generally a subcommand that has name and debugger.settings attributes; msg_on_error is reported when the argument is not an integer between min_value and max_value.", "response": "def run_set_int(obj, arg, msg_on_error, min_value=None, max_value=None):\n \"\"\"set an Integer-valued debugger setting. 
'obj' is generally a\n subcommand that has 'name' and 'debugger.settings' attributes\"\"\"\n if '' == arg.strip():\n obj.errmsg(\"You need to supply a number.\")\n return\n obj.debugger.settings[obj.name] = \\\n get_an_int(obj.errmsg, arg, msg_on_error, min_value, max_value)\n return obj.debugger.settings[obj.name]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef run_show_bool(obj, what=None):\n val = show_onoff(obj.debugger.settings[obj.name])\n if not what: what = obj.name\n return obj.msg(\"%s is %s.\" % (what, val))", "response": "Generic subcommand for showing a boolean-valued debugger setting."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run_show_int(obj, what=None):\n val = obj.debugger.settings[obj.name]\n if not what: what = obj.name\n return obj.msg(\"%s is %d.\" % (what, val))", "response": "Generic subcommand for showing an integer-valued debugger setting."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef next_opcode(code, offset):\n '''Return the next opcode and offset as a tuple. Tuple (-100,\n -1000) is returned when reaching the end.'''\n n = len(code)\n while offset < n:\n op = code[offset]\n offset += 1\n if op >= HAVE_ARGUMENT:\n offset += 2\n pass\n yield op, offset\n pass\n yield -100, -1000\n pass", "response": "Return the next opcode and offset as a tuple. The tuple (-100, -1000) is returned when reaching the end."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn True if we are looking at a def statement.", "response": "def is_def_stmt(line, frame):\n \"\"\"Return True if we are looking at a def statement\"\"\"\n # Should really also check that operand of 'LOAD_CONST' is a code object\n return (line and _re_def.match(line) and op_at_frame(frame)=='LOAD_CONST'\n and stmt_contains_opcode(frame.f_code, frame.f_lineno,\n 'MAKE_FUNCTION'))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns True if we are looking at a class definition statement.", "response": "def is_class_def(line, frame):\n \"\"\"Return True if we are looking at a class definition statement\"\"\"\n return (line and _re_class.match(line)\n and stmt_contains_opcode(frame.f_code, frame.f_lineno,\n 'BUILD_CLASS'))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef nothread_quit(self, arg):\n\n self.debugger.core.stop()\n self.debugger.core.execution_status = 'Quit command'\n raise Mexcept.DebuggerQuit", "response": "Quit command when there's just one thread."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nquit command when several threads are involved.", "response": "def threaded_quit(self, arg):\n \"\"\" quit command when several threads are involved. 
\"\"\"\n threading_list = threading.enumerate()\n mythread = threading.currentThread()\n for t in threading_list:\n if t != mythread:\n ctype_async_raise(t, Mexcept.DebuggerQuit)\n pass\n pass\n raise Mexcept.DebuggerQuit"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_default_bg():\n term = environ.get('TERM', None)\n if term:\n if (term.startswith('xterm',) or term.startswith('eterm')\n or term == 'dtterm'):\n return False\n return True", "response": "Set the default background color based on the TERM environment variable."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef is_dark_rgb(r, g, b):\n\n try:\n midpoint = int(environ.get('TERMINAL_COLOR_MIDPOINT', None))\n except:\n midpoint = None\n pass\n if not midpoint:\n term = environ.get('TERM', None)\n # 117963 = (* .6 (+ 65535 65535 65535))\n # 382.5 = (* .5 (+ 255 255 255))\n print(\"midpoint\", midpoint, 'vs', (16*r + 16*g + 16*b))\n midpoint = 383 if term and term == 'xterm-256color' else 117963\n\n if ( (16*r + 16*g + 16*b) < midpoint ):\n return True\n else:\n return False", "response": "Check if a color is dark."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef is_dark_color_fg_bg():\n dark_bg = environ.get('DARK_BG', None)\n if dark_bg is not None:\n return dark_bg != '0'\n color_fg_bg = environ.get('COLORFGBG', None)\n if color_fg_bg:\n if color_fg_bg in ('15;0', '15;default;0'):\n return True\n elif color_fg_bg in ('0;15', '0;default;15' ):\n return False\n else:\n return True\n return None", "response": "Consult the environment variables DARK_BG and COLORFGBG; return True for a dark background, False for a light one, or None if it cannot be determined."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nhandling debugger options. Set `option_list' if you are writing another main program and want to extend the existing set of debugger options. 
The options dictionary from opt_parser is returned. sys_argv is also updated.", "response": "def process_options(debugger_name, pkg_version, sys_argv, option_list=None):\n \"\"\"Handle debugger options. Set `option_list' if you are writing\n another main program and want to extend the existing set of debugger\n options.\n\n The options dictionary from opt_parser is returned. sys_argv is\n also updated.\"\"\"\n usage_str=\"\"\"%prog [debugger-options] [python-script [script-options...]]\n\n Runs the extended python debugger\"\"\"\n\n # serverChoices = ('TCP','FIFO', None)\n\n optparser = OptionParser(usage=usage_str, option_list=option_list,\n version=\"%%prog version %s\" % pkg_version)\n\n optparser.add_option(\"-F\", \"--fntrace\", dest=\"fntrace\",\n action=\"store_true\", default=False,\n help=\"Show functions before executing them. \" +\n \"This option also sets --batch\")\n optparser.add_option(\"--basename\", dest=\"basename\",\n action=\"store_true\", default=False,\n help=\"Filenames strip off basename, \"\n \"(e.g. 
for regression tests)\")\n optparser.add_option(\"--different\", dest=\"different\",\n action=\"store_true\", default=True,\n help=\"Consecutive stops should have different \"\n \"positions\")\n optparser.disable_interspersed_args()\n\n sys.argv = list(sys_argv)\n (opts, sys.argv) = optparser.parse_args()\n dbg_opts = {}\n\n return opts, dbg_opts, sys.argv"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _postprocess_options(dbg, opts):\n ''' Handle options (`opts') that feed into the debugger (`dbg')'''\n # Set dbg.settings['printset']\n print_events = []\n if opts.fntrace: print_events = ['c_call', 'c_return', 'call', 'return']\n # if opts.linetrace: print_events += ['line']\n if len(print_events):\n dbg.settings['printset'] = frozenset(print_events)\n pass\n\n for setting in ('basename', 'different',):\n dbg.settings[setting] = getattr(opts, setting)\n pass\n\n dbg.settings['highlight'] = 'plain'\n\n Mdebugger.debugger_obj = dbg\n return", "response": "Postprocess options that feed into the debugger."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the name of the caller's caller.", "response": "def get_name():\n \"\"\"Get the name of the caller's caller.\n NB: f_code.co_filename and thus this code is kind of broken for\n zip'ed eggs circa Jan 2009\n \"\"\"\n caller = sys._getframe(2)\n filename = caller.f_code.co_filename\n filename = os.path.normcase(os.path.basename(filename))\n return os.path.splitext(filename)[0]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef signature(frame):\n '''return suitable frame signature to key display expressions off of.'''\n if not frame: return None\n code = frame.f_code\n return (code.co_name, code.co_filename, code.co_firstlineno)", "response": "return suitable frame signature to key display expressions off of."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 
script for\nlisting all display items; return 0 if none", "response": "def all(self):\n \"\"\"List all display items; return 0 if none\"\"\"\n found = False\n s = []\n for display in self.list:\n if not found:\n s.append(\"\"\"Auto-display expressions now in effect:\nNum Enb Expression\"\"\")\n found = True\n pass\n s.append(display.format())\n return s"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef delete_index(self, display_number):\n old_size = len(self.list)\n self.list = [disp for disp in self.list\n if display_number != disp.number]\n return old_size != len(self.list)", "response": "Delete a display expression by its number; return True if something was deleted."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef display(self, frame):\n '''display any items that are active'''\n if not frame: return\n s = []\n sig = signature(frame)\n for display in self.list:\n if display.signature == sig and display.enabled:\n s.append(display.to_s(frame))\n pass\n pass\n return s", "response": "display any items that are active"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef read_msg(self):\n if self.state == 'connected':\n if 0 == len(self.buf):\n self.buf = self.inout.recv(Mtcpfns.TCP_MAX_PACKET)\n if 0 == len(self.buf):\n self.state = 'disconnected'\n raise EOFError\n pass\n self.buf, data = Mtcpfns.unpack_msg(self.buf)\n return data.decode('utf-8')\n else:\n raise IOError(\"read_msg called in state: %s.\" % self.state)", "response": "Read one message unit from the connection."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the current debugger instance if any.", "response": "def debugger():\n \"\"\"Return the current debugger instance (if any),\n or create a new one.\"\"\"\n dbg = _current[0]\n if dbg is None or not dbg.active:\n dbg = _current[0] = RemoteCeleryTrepan()\n return dbg"} 
{"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets breakpoint at current location or a specified frame", "response": "def debug(frame=None):\n \"\"\"Set breakpoint at current location, or a specified frame\"\"\"\n # ???\n if frame is None:\n frame = _frame().f_back\n\n dbg = RemoteCeleryTrepan()\n dbg.say(BANNER.format(self=dbg))\n # dbg.say(SESSION_STARTED.format(self=dbg))\n trepan.api.debug(dbg_opts=dbg.dbg_opts)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngiving the path to a .py file, return the path to its .pyc/.pyo file.", "response": "def cache_from_source(path, debug_override=None):\n \"\"\"Given the path to a .py file, return the path to its .pyc/.pyo file.\n\n The .py file does not need to exist; this simply returns the path to the\n .pyc/.pyo file calculated as if the .py file were imported. The extension\n will be .pyc unless sys.flags.optimize is non-zero, then it will be .pyo.\n\n If debug_override is not None, then it must be a boolean and is used in\n place of sys.flags.optimize.\n\n If sys.implementation.cache_tag is None then NotImplementedError is raised.\n\n \"\"\"\n debug = not sys.flags.optimize if debug_override is None else debug_override\n if debug:\n suffixes = DEBUG_BYTECODE_SUFFIXES\n else:\n suffixes = OPTIMIZED_BYTECODE_SUFFIXES\n pass\n head, tail = os.path.split(path)\n base_filename, sep, _ = tail.partition('.')\n if not hasattr(sys, 'implementation'):\n # Python <= 3.2\n raise NotImplementedError('No sys.implementation')\n tag = sys.implementation.cache_tag\n if tag is None:\n raise NotImplementedError('sys.implementation.cache_tag is None')\n filename = ''.join([base_filename, sep, tag, suffixes[0]])\n return os.path.join(head, _PYCACHE, filename)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload all the debugger commands for the given name.", "response": "def _load_debugger_subcommands(self, name):\n \"\"\" Create an instance 
of each of the debugger\n subcommands. Commands are found by importing files in the\n directory 'name' + 'sub'. Some files are excluded via an array set\n in __init__. For each of the remaining files, we import them\n and scan for class names inside those files and for each class\n name, we will create an instance of that class. The set of\n DebuggerCommand class instances form set of possible debugger\n commands.\"\"\"\n\n # Initialization\n cmd_instances = []\n class_prefix = capitalize(name) # e.g. Info, Set, or Show\n module_dir = 'trepan.processor.command.%s_subcmd' % name\n mod = __import__(module_dir, None, None, ['*'])\n eval_cmd_template = 'command_mod.%s(self)'\n\n # Import, instantiate, and add classes for each of the\n # modules found in module_dir imported above.\n for module_name in mod.__modules__:\n import_name = module_dir + '.' + module_name\n try:\n command_mod = importlib.import_module(import_name)\n except ImportError:\n print((\"Error importing name %s module %s: %s\" %\n (import_name, module_name, sys.exc_info()[0])))\n continue\n\n # Even though we tend not to do this, it is possible to\n # put more than one class into a module/file. So look for\n # all of them.\n classnames = [ classname for classname, classvalue in\n inspect.getmembers(command_mod, inspect.isclass)\n if ('DebuggerCommand' != classname and\n classname.startswith(class_prefix)) ]\n\n for classname in classnames:\n eval_cmd = eval_cmd_template % classname\n try:\n instance = eval(eval_cmd)\n self.cmds.add(instance)\n except:\n print(\"Error eval'ing class %s\" % classname)\n pass\n pass\n pass\n return cmd_instances"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef help(self, args):\n if len(args) <= 2:\n # \"help cmd\". Give the general help for the command part.\n doc = self.__doc__ or self.run.__doc__\n if doc:\n self.rst_msg(doc.rstrip('\\n'))\n else:\n self.proc.intf[-1].errmsg('Sorry - author mess up. 
' +\n 'No help registered for command' +\n self.name)\n pass\n return\n\n subcmd_name = args[2]\n\n if '*' == subcmd_name:\n self.section(\"List of subcommands for command '%s':\" % self.name)\n self.msg(self.columnize_commands(self.cmds.list()))\n return\n\n # \"help cmd subcmd\". Give help specific for that subcommand.\n cmd = self.cmds.lookup(subcmd_name)\n if cmd:\n doc = cmd.__doc__ or cmd.run.__doc__\n if doc:\n self.proc.rst_msg(doc.rstrip('\\n'))\n else:\n self.proc.intf[-1] \\\n .errmsg('Sorry - author mess up. No help registered for '\n 'subcommand %s of command %s' %\n (subcmd_name, self.name))\n pass\n else:\n cmds = [c for c in self.cmds.list()\n if re.match('^' + subcmd_name, c) ]\n if cmds == []:\n self.errmsg(\"No %s subcommands found matching /^%s/. \"\n \"Try \\\"help\\\".\" % (self.name, subcmd_name))\n else:\n self.section(\"Subcommand(s) of \\\"%s\\\" matching /^%s/:\" %\n (self.name, subcmd_name,))\n self.msg_nocr(self.columnize_commands(cmds))\n pass\n pass\n return", "response": "Gives help for a command which has subcommands."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef undefined_subcmd(self, cmd, subcmd):\n self.proc.intf[-1].errmsg(('Undefined \"%s\" subcommand: \"%s\". ' +\n 'Try \"help %s *\".') % (cmd, subcmd, cmd))\n return", "response": "Error message when a subcommand is not defined"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nexecute a code object.", "response": "def runcode(obj, code_obj):\n \"\"\"Execute a code object.\n\n When an exception occurs, self.showtraceback() is called to\n display a traceback. All exceptions are caught except\n SystemExit, which is reraised.\n\n A note about KeyboardInterrupt: this exception may occur\n elsewhere in this code, and may not always be caught. 
The\n caller should be prepared to deal with it.\n\n \"\"\"\n try:\n exec(code_obj, obj.locals, obj.globals)\n except SystemExit:\n raise\n except:\n info = sys.exc_info()\n print(\"%s; %s\" % (info[0], info[1]))\n else:\n pass\n return"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef run(self, args):\n\n Mframe.adjust_relative(self.proc, self.name, args, self.signum)\n return False", "response": "Move the current frame down in the stack trace."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nresolving a location namedtuple.", "response": "def resolve_location(proc, location):\n \"\"\"Expand fields in Location namedtuple. If:\n '.': get fields from stack\n function/module: get fields from evaluation/introspection\n location file and line number: use that\n \"\"\"\n curframe = proc.curframe\n if location == '.':\n if not curframe:\n proc.errmsg(\"Don't have a stack to get location from\")\n return INVALID_LOCATION\n filename = Mstack.frame2file(proc.core, curframe, canonic=False)\n lineno = inspect.getlineno(curframe)\n return Location(filename, lineno, False, None)\n\n assert isinstance(location, Location)\n is_address = False\n if proc.curframe:\n g = curframe.f_globals\n l = curframe.f_locals\n else:\n g = globals()\n l = locals()\n pass\n if location.method:\n # Validate arguments that can't be done in parsing\n filename = lineno = None\n msg = ('Object %s is not known yet as a function, ' % location.method)\n try:\n modfunc = eval(location.method, g, l)\n except:\n proc.errmsg(msg)\n return INVALID_LOCATION\n\n try:\n # Check if the converted string is a function or instance method\n if inspect.isfunction(modfunc) or hasattr(modfunc, 'im_func'):\n pass\n else:\n proc.errmsg(msg)\n return INVALID_LOCATION\n except:\n proc.errmsg(msg)\n return INVALID_LOCATION\n filename = proc.core.canonic(modfunc.__code__.co_filename)\n # FIXME: we may want to check lineno and\n # respect that in the 
future\n lineno = modfunc.__code__.co_firstlineno\n elif location.path:\n filename = proc.core.canonic(location.path)\n lineno = location.line_number\n modfunc = None\n msg = \"%s is not known as a file\" % location.path\n if not osp.isfile(filename):\n # See if argument is a module\n try:\n modfunc = eval(location.path, g, l)\n except:\n msg = (\"Don't see '%s' as a existing file or as an module\"\n % location.path)\n proc.errmsg(msg)\n return INVALID_LOCATION\n pass\n is_address = location.is_address\n if inspect.ismodule(modfunc):\n if hasattr(modfunc, '__file__'):\n filename = pyficache.pyc2py(modfunc.__file__)\n filename = proc.core.canonic(filename)\n if not lineno:\n # use first line of module file\n lineno = 1\n is_address = False\n return Location(filename, lineno, is_address, modfunc)\n else:\n msg = (\"module '%s' doesn't have a file associated with it\" %\n location.path)\n\n proc.errmsg(msg)\n return INVALID_LOCATION\n maxline = pyficache.maxline(filename)\n if maxline and lineno > maxline:\n # NB: we use the gdb wording here\n proc.errmsg(\"Line number %d out of range; %s has %d lines.\"\n % (lineno, filename, maxline))\n return INVALID_LOCATION\n elif location.line_number:\n filename = Mstack.frame2file(proc.core, curframe, canonic=False)\n lineno = location.line_number\n is_address = location.is_address\n modfunc = None\n return Location(filename, lineno, is_address, modfunc)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef resolve_address_location(proc, location):\n curframe = proc.curframe\n if location == '.':\n filename = Mstack.frame2file(proc.core, curframe, canonic=False)\n offset = curframe.f_lasti\n is_address = True\n return Location(filename, offset, False, None)\n\n assert isinstance(location, Location)\n is_address = True\n if proc.curframe:\n g = curframe.f_globals\n l = curframe.f_locals\n else:\n g = globals()\n l = locals()\n pass\n if location.method:\n # 
Validate arguments that can't be done in parsing\n filename = offset = None\n msg = ('Object %s is not known yet as a function, ' % location.method)\n try:\n modfunc = eval(location.method, g, l)\n except:\n proc.errmsg(msg)\n return INVALID_LOCATION\n\n try:\n # Check if the converted string is a function or instance method\n if inspect.isfunction(modfunc) or hasattr(modfunc, 'im_func'):\n pass\n else:\n proc.errmsg(msg)\n return INVALID_LOCATION\n except:\n proc.errmsg(msg)\n return INVALID_LOCATION\n filename = proc.core.canonic(modfunc.func_code.co_filename)\n # FIXME: we may want to check offset and\n # respect that in the future\n offset = 0\n elif location.path:\n filename = proc.core.canonic(location.path)\n offset = location.line_number\n is_address = location.is_address\n modfunc = None\n msg = \"%s is not known as a file\" % location.path\n if not osp.isfile(filename):\n # See if argument is a module\n try:\n modfunc = eval(location.path, g, l)\n except:\n msg = (\"Don't see '%s' as a existing file or as an module\"\n % location.path)\n proc.errmsg(msg)\n return INVALID_LOCATION\n pass\n is_address = location.is_address\n if inspect.ismodule(modfunc):\n if hasattr(modfunc, '__file__'):\n filename = pyficache.pyc2py(modfunc.__file__)\n filename = proc.core.canonic(filename)\n if not offset:\n # use first offset of module file\n offset = 0\n is_address = True\n return Location(filename, offset, is_address, modfunc)\n else:\n msg = (\"module '%s' doesn't have a file associated with it\" %\n location.path)\n\n proc.errmsg(msg)\n return INVALID_LOCATION\n maxline = pyficache.maxline(filename)\n if maxline and offset > maxline:\n # NB: we use the gdb wording here\n proc.errmsg(\"Line number %d out of range; %s has %d lines.\"\n % (offset, filename, maxline))\n return INVALID_LOCATION\n elif location.line_number is not None:\n filename = Mstack.frame2file(proc.core, curframe, canonic=False)\n offset = location.line_number\n is_address = location.is_address\n 
modfunc = proc.list_object\n return Location(filename, offset, is_address, modfunc)", "response": "Resolve the address location in the named tuple."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfind and set the debugger frame state to the frame which is in the current thread.", "response": "def find_and_set_debugged_frame(self, frame, thread_id):\n '''The dance we have to do to set debugger frame state to\n *frame*, which is in the thread with id *thread_id*. We may\n need to hide the initial debugger frames.\n '''\n thread = threading._active[thread_id]\n thread_name = thread.getName()\n if (not self.settings['dbg_trepan'] and\n thread_name == Mthread.current_thread_name()):\n # The frame we came in on ('current_thread_name') is\n # the same as the one we want to switch to. In this case\n # some debugger frames are in this stack, so\n # we need to remove them.\n newframe = Mthread.find_debugged_frame(frame)\n if newframe is not None: frame = newframe\n pass\n # FIXME: else: we might be blocked on other threads which are\n # about to go into the debugger if not for the fact this one got there\n # first. 
Possibly in the future we want\n # to hide the blocks into threading of that locking code as well.\n\n # Set stack to new frame\n self.stack, self.curindex = Mcmdproc.get_stack(frame, None,\n self.proc)\n self.proc.stack, self.proc.curindex = self.stack, self.curindex\n self.proc.frame_thread_name = thread_name\n return"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nseeing if *name_or_id* is either a thread name or a thread id.", "response": "def get_from_thread_name_or_id(self, name_or_id, report_error=True):\n '''See if *name_or_id* is either a thread name or a thread id.\n The frame of that id/name is returned, or None if name_or_id is\n invalid.'''\n thread_id = self.proc.get_int_noerr(name_or_id)\n if thread_id is None:\n # Must be a \"frame\" command with frame name, not a frame\n # number (or invalid command).\n name2id = Mthread.map_thread_names()\n if name_or_id == '.':\n name_or_id = Mthread.current_thread_name()\n pass\n thread_id = name2id.get(name_or_id)\n if thread_id is None:\n self.errmsg(\"I don't know about thread name %s.\" %\n name_or_id)\n return None, None\n pass\n # Above we should have set thread_id. Now see if we can\n # find it.\n threads = sys._current_frames()\n frame = threads.get(thread_id)\n if frame is None and report_error:\n self.errmsg(\"I don't know about thread number %s (%d).\" %\n (name_or_id, thread_id))\n # self.info_thread_terse()\n return None, None\n return frame, thread_id"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nruns a frame command. This routine is a little complex because we allow a number of parameter variations.", "response": "def run(self, args):\n '''Run a frame command. 
This routine is a little complex\n because we allow a number of parameter variations.'''\n\n if len(args) == 1:\n # Form is: \"frame\" which means \"frame 0\"\n position_str = '0'\n elif len(args) == 2:\n # Form is: \"frame {position | thread}\"\n name_or_id = args[1]\n frame, thread_id = self.get_from_thread_name_or_id(name_or_id,\n False)\n if frame is None:\n # Form should be: frame position\n position_str = name_or_id\n else:\n # Form should be: \"frame thread\" which means\n # \"frame thread 0\"\n position_str = '0'\n self.find_and_set_debugged_frame(frame, thread_id)\n pass\n elif len(args) == 3:\n # Form is: \"frame thread position\"\n name_or_id = args[1]\n position_str = args[2]\n frame, thread_id = self.get_from_thread_name_or_id(name_or_id)\n if frame is None:\n # Error message was given in routine\n return\n self.find_and_set_debugged_frame(frame, thread_id)\n pass\n self.one_arg_run(position_str)\n return False"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntry to pretty print a simple array.", "response": "def pprint_simple_array(val, displaywidth, msg_nocr, msg, lineprefix=''):\n '''Try to pretty print a simple case where a list is not nested.\n Return True if we can do it and False if not. 
'''\n\n if type(val) != list:\n return False\n\n numeric = True\n for i in range(len(val)):\n if not (type(val[i]) in [bool, float, int]):\n numeric = False\n if not (type(val[i]) in [bool, float, int, bytes]):\n return False\n pass\n pass\n mess = columnize([repr(v) for v in val],\n opts={\"arrange_array\": True,\n \"lineprefix\": lineprefix,\n \"displaywidth\": int(displaywidth)-3,\n 'ljust': not numeric})\n msg_nocr(mess)\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main(dbg=None, sys_argv=list(sys.argv)):\n global __title__\n\n # Save the original just for use in the restart that works via exec.\n orig_sys_argv = list(sys_argv)\n opts, dbg_opts, sys_argv = Moptions.process_options(__title__,\n VERSION,\n sys_argv)\n\n if opts.server is not None:\n if opts.server == 'tcp':\n connection_opts={'IO': 'TCP', 'PORT': opts.port}\n else:\n connection_opts={'IO': 'FIFO'}\n intf = Mserver.ServerInterface(connection_opts=connection_opts)\n dbg_opts['interface'] = intf\n if 'FIFO' == intf.server_type:\n print('Starting FIFO server for process %s.' % os.getpid())\n elif 'TCP' == intf.server_type:\n print('Starting TCP server listening on port %s.' %\n intf.inout.PORT)\n pass\n elif opts.client:\n Mclient.run(opts, sys_argv)\n return\n\n dbg_opts['orig_sys_argv'] = orig_sys_argv\n\n if dbg is None:\n dbg = Mdebugger.Trepan(dbg_opts)\n dbg.core.add_ignore(main)\n pass\n Moptions._postprocess_options(dbg, opts)\n\n # process_options has munged sys.argv to remove any options that\n # options that belong to this debugger. The original options to\n # invoke the debugger and script are in global sys_argv\n\n if len(sys_argv) == 0:\n # No program given to debug. 
Set to go into a command loop\n # anyway\n mainpyfile = None\n else:\n mainpyfile = sys_argv[0] # Get script filename.\n if not osp.isfile(mainpyfile):\n mainpyfile=Mclifns.whence_file(mainpyfile)\n is_readable = Mfile.readable(mainpyfile)\n if is_readable is None:\n print(\"%s: Python script file '%s' does not exist\"\n % (__title__, mainpyfile,))\n sys.exit(1)\n elif not is_readable:\n print(\"%s: Can't read Python script file '%s'\"\n % (__title__, mainpyfile, ))\n sys.exit(1)\n return\n\n if Mfile.is_compiled_py(mainpyfile):\n try:\n from xdis import load_module, PYTHON_VERSION, IS_PYPY\n (python_version, timestamp, magic_int, co, is_pypy,\n source_size) = load_module(mainpyfile, code_objects=None,\n fast_load=True)\n assert is_pypy == IS_PYPY\n assert python_version == PYTHON_VERSION, \\\n \"bytecode is for version %s but we are version %s\" % (\n python_version, PYTHON_VERSION)\n # We should also check version magic_int\n\n py_file = co.co_filename\n if osp.isabs(py_file):\n try_file = py_file\n else:\n mainpydir = osp.dirname(mainpyfile)\n tag = sys.implementation.cache_tag\n dirnames = [osp.join(mainpydir, tag),\n mainpydir] + os.environ['PATH'].split(osp.pathsep) + ['.']\n try_file = Mclifns.whence_file(py_file, dirnames)\n\n if osp.isfile(try_file):\n mainpyfile = try_file\n pass\n else:\n # Move onto the except branch\n raise IOError(\"Python file name embedded in code %s not found\" % try_file)\n except IOError:\n try:\n from uncompyle6 import decompile_file\n except ImportError:\n print(\"%s: Compiled python file '%s', but uncompyle6 not found\"\n % (__title__, mainpyfile), file=sys.stderr)\n sys.exit(1)\n return\n\n # Drop the .pyc/.pyo suffix; str.strip() would remove a *set* of characters\n short_name = osp.basename(mainpyfile)[:-4]\n fd = tempfile.NamedTemporaryFile(suffix='.py',\n prefix=short_name + \"_\",\n delete=False)\n old_write = fd.file.write\n\n def write_wrapper(*args, **kwargs):\n if isinstance(args[0], str):\n new_args = list(args)\n new_args[0] = args[0].encode('utf-8')\n old_write(*new_args, **kwargs)\n 
else:\n old_write(*args, **kwargs)\n fd.file.write = write_wrapper\n\n # from io import StringIO\n # linemap_io = StringIO()\n try:\n decompile_file(mainpyfile, fd.file, mapstream=fd)\n except:\n print(\"%s: error decompiling '%s'\"\n % (__title__, mainpyfile), file=sys.stderr)\n sys.exit(1)\n return\n\n # # Get the line associations between the original and\n # # decompiled program\n # mapline = linemap_io.getvalue()\n # fd.write(mapline + \"\\n\\n\")\n # linemap = eval(mapline[3:])\n mainpyfile = fd.name\n fd.close()\n\n # Since we are actually running the recreated source,\n # there is little to no need to remap line numbers.\n # The mapping is given at the end of the file.\n # However we should consider adding this information\n # and original file name.\n\n print(\"%s: couldn't find Python source so we recreated it at '%s'\"\n % (__title__, mainpyfile), file=sys.stderr)\n\n pass\n\n # If mainpyfile is an optimized Python script try to find and\n # use non-optimized alternative.\n mainpyfile_noopt = pyficache.pyc2py(mainpyfile)\n if mainpyfile != mainpyfile_noopt \\\n and Mfile.readable(mainpyfile_noopt):\n print(\"%s: Compiled Python script given and we can't use that.\"\n % __title__)\n print(\"%s: Substituting non-compiled name: %s\" % (\n __title__, mainpyfile_noopt,))\n mainpyfile = mainpyfile_noopt\n pass\n\n # Replace trepan's dir with script's dir in front of\n # module search path.\n sys.path[0] = dbg.main_dirname = osp.dirname(mainpyfile)\n\n # XXX If a signal has been received we continue in the loop, otherwise\n # the loop exits for some reason.\n dbg.sig_received = False\n\n # if not mainpyfile:\n # print('For now, you need to specify a Python script name!')\n # sys.exit(2)\n # pass\n\n while True:\n\n # Run the debugged script over and over again until we get it\n # right.\n\n try:\n if dbg.program_sys_argv and mainpyfile:\n normal_termination = dbg.run_script(mainpyfile)\n if not normal_termination: break\n else:\n dbg.core.execution_status = 'No 
program'\n dbg.core.processor.process_commands()\n pass\n\n dbg.core.execution_status = 'Terminated'\n dbg.intf[-1].msg(\"The program finished - quit or restart\")\n dbg.core.processor.process_commands()\n except Mexcept.DebuggerQuit:\n break\n except Mexcept.DebuggerRestart:\n dbg.core.execution_status = 'Restart requested'\n if dbg.program_sys_argv:\n sys.argv = list(dbg.program_sys_argv)\n part1 = ('Restarting %s with arguments:' %\n dbg.core.filename(mainpyfile))\n args = ' '.join(dbg.program_sys_argv[1:])\n dbg.intf[-1].msg(\n Mmisc.wrapped_lines(part1, args,\n dbg.settings['width']))\n else: break\n except SystemExit:\n # In most cases SystemExit does not warrant a post-mortem session.\n break\n pass\n\n # Restore old sys.argv\n sys.argv = orig_sys_argv\n return", "response": "Main entry point of the debugger."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nuse this to set where to write to. output can be a string or a TextIOWrapper object. This code raises IOError on error.", "response": "def open(self, output, opts=None):\n \"\"\"Use this to set where to write to. output can be a\n file object or a string. This code raises IOError on error.\"\"\"\n if isinstance(output, io.TextIOWrapper) or \\\n isinstance(output, io.StringIO) or \\\n output == sys.stdout:\n pass\n elif isinstance(output, 'string'.__class__): # FIXME\n output = open(output, 'w')\n else:\n raise IOError(\"Invalid output type (%s) for %s\" %\n (output.__class__.__name__, output))\n # raise IOError(\"Invalid output type (%s) for %s\" % (type(output),\n # output))\n self.output = output\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfind the corresponding signal name for num. Return None if num is invalid.", "response": "def lookup_signame(num):\n \"\"\"Find the corresponding signal name for 'num'. 
Return None\n    if 'num' is invalid.\"\"\"\n    signames = signal.__dict__\n    num = abs(num)\n    for signame in list(signames.keys()):\n        if signame.startswith('SIG') and signames[signame] == num:\n            return signame\n        pass\n    # Something went wrong. Should have returned above\n    return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfind the corresponding signal number for name. Return None if name is invalid.", "response": "def lookup_signum(name):\n    \"\"\"Find the corresponding signal number for 'name'. Return None\n    if 'name' is invalid.\"\"\"\n    uname = name.upper()\n    if (uname.startswith('SIG') and hasattr(signal, uname)):\n        return getattr(signal, uname)\n    else:\n        uname = \"SIG\"+uname\n        if hasattr(signal, uname):\n            return getattr(signal, uname)\n        return None\n    return"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a signal name for a signal name or signal number.", "response": "def canonic_signame(name_num):\n    \"\"\"Return a signal name for a signal name or signal\n    number. Return None if name_num is an int but not a valid signal\n    number and False if name_num is not a number. 
If name_num is a\n    signal name or signal number, the canonic name is returned.\"\"\"\n    signum = lookup_signum(name_num)\n    if signum is None:\n        # Maybe signame is a number?\n        try:\n            num = int(name_num)\n            signame = lookup_signame(num)\n            if signame is None:\n                return None\n        except:\n            return False\n        return signame\n\n    signame = name_num.upper()\n    if not signame.startswith('SIG'): return 'SIG'+signame\n    return signame"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef check_and_adjust_sighandler(self, signame, sigs):\n\n        signum = lookup_signum(signame)\n\n        try:\n            old_handler = signal.getsignal(signum)\n        except ValueError:\n            # On some OS's (Redhat 8), SIGNUM's are listed (like\n            # SIGRTMAX) that getsignal can't handle.\n            if signame in self.sigs:\n                sigs.pop(signame)\n                pass\n            return None\n        if old_handler != self.sigs[signame].handle:\n            if old_handler not in [signal.SIG_IGN, signal.SIG_DFL]:\n                # save the program's signal handler\n                sigs[signame].old_handler = old_handler\n                pass\n            # set/restore _our_ signal handler\n            try:\n                # signal.signal(signum, self.sigs[signame].handle)\n                self._orig_set_signal(signum, self.sigs[signame].handle)\n            except ValueError:\n                # Probably not in main thread\n                return False\n            except KeyError:\n                # May be weird keys from 3.3\n                return False\n            pass\n        return True", "response": "Check to see if a single signal handler that we are interested in has changed or has not been set initially."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef check_and_adjust_sighandlers(self):\n        for signame in list(self.sigs.keys()):\n            if not self.check_and_adjust_sighandler(signame, self.sigs):\n                break\n            pass\n        return", "response": "Check to see if any of the signal handlers we are interested in have\n changed or are not initially set."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nprint status for a single signal name.", 
"response": "def print_info_signal_entry(self, signame):\n \"\"\"Print status for a single signal name (signame)\"\"\"\n if signame in signal_description:\n description=signal_description[signame]\n else:\n description=\"\"\n pass\n if signame not in list(self.sigs.keys()):\n # Fake up an entry as though signame were in sigs.\n self.dbgr.intf[-1].msg(self.info_fmt\n % (signame, 'No', 'No', 'No', 'Yes',\n description))\n return\n\n sig_obj = self.sigs[signame]\n self.dbgr.intf[-1].msg(self.info_fmt %\n (signame,\n YN(sig_obj.b_stop),\n YN(sig_obj.print_method is not None),\n YN(sig_obj.print_stack),\n YN(sig_obj.pass_along),\n description))\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef info_signal(self, args):\n if len(args) == 0: return None\n signame = args[0]\n if signame in ['handle', 'signal']:\n # This has come from dbgr's info command\n if len(args) == 1:\n # Show all signal handlers\n self.dbgr.core.processor.section(self.header)\n for signame in self.siglist:\n self.print_info_signal_entry(signame)\n return True\n else:\n signame = args[1]\n pass\n pass\n\n signame = self.is_name_or_number(signame)\n self.dbgr.core.processor.section(self.header)\n self.print_info_signal_entry(signame)\n return True", "response": "Print information about a signal"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndelegates the actions specified in arg to another method.", "response": "def action(self, arg):\n \"\"\"Delegate the actions specified in 'arg' to another\n method.\n \"\"\"\n if not arg:\n self.info_signal(['handle'])\n return True\n args = arg.split()\n signame = args[0]\n signame = self.is_name_or_number(args[0])\n if not signame: return\n\n if len(args) == 1:\n self.info_signal([signame])\n return True\n # We can display information about 'fatal' signals, but not\n # change their actions.\n if signame in fatal_signals:\n return None\n\n if 
signame not in list(self.sigs.keys()):\n            if not self.initialize_handler(signame): return None\n            pass\n\n        # multiple commands might be specified, i.e. 'nopass nostop'\n        for attr in args[1:]:\n            if attr.startswith('no'):\n                on = False\n                attr = attr[2:]\n            else:\n                on = True\n            if 'stop'.startswith(attr):\n                self.handle_stop(signame, on)\n            elif 'print'.startswith(attr) and len(attr) >= 2:\n                self.handle_print(signame, on)\n            elif 'pass'.startswith(attr):\n                self.handle_pass(signame, on)\n            elif 'ignore'.startswith(attr):\n                self.handle_ignore(signame, on)\n            elif 'stack'.startswith(attr):\n                self.handle_print_stack(signame, on)\n            else:\n                self.dbgr.intf[-1].errmsg('Invalid arguments')\n                pass\n            pass\n        return self.check_and_adjust_sighandler(signame, self.sigs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef handle_print_stack(self, signame, print_stack):\n        self.sigs[signame].print_stack = print_stack\n        return print_stack", "response": "Set whether we print a stack trace when this signal is caught."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting whether we stop when this signal is caught.", "response": "def handle_stop(self, signame, set_stop):\n        \"\"\"Set whether we stop or not when this signal is caught.\n        If 'set_stop' is True your program will stop when this signal\n        happens.\"\"\"\n        if set_stop:\n            self.sigs[signame].b_stop = True\n            # stop keyword implies print AND nopass\n            self.sigs[signame].print_method = self.dbgr.intf[-1].msg\n            self.sigs[signame].pass_along = False\n        else:\n            self.sigs[signame].b_stop = False\n            pass\n        return set_stop"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef handle_pass(self, signame, set_pass):\n        self.sigs[signame].pass_along = set_pass\n        if set_pass:\n            # Pass implies nostop\n            self.sigs[signame].b_stop = False\n            pass\n        return set_pass", "response": "Set whether we should pass this signal to the program or not."} {"SOURCE": 
"codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets whether we print or not when this signal is caught.", "response": "def handle_print(self, signame, set_print):\n        \"\"\"Set whether we print or not when this signal is caught.\"\"\"\n        if set_print:\n            self.sigs[signame].print_method = self.dbgr.intf[-1].msg\n        else:\n            self.sigs[signame].print_method = None\n            pass\n        return set_print"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck whether the specified line is suitable for setting a breakpoint.", "response": "def is_ok_line_for_breakpoint(filename, lineno, errmsg_fn):\n    \"\"\"Check whether specified line seems to be executable.\n\n    Return `lineno` if it is, 0 if not (e.g. a docstring, comment, blank\n    line or EOF). Warning: testing is not comprehensive.\n    \"\"\"\n    line = linecache.getline(filename, lineno)\n    if not line:\n        errmsg_fn('End of file')\n        return False\n    line = line.strip()\n    # Don't allow setting breakpoint at a blank line\n    if (not line or (line[0] == '#') or\n        (line[:3] == '\"\"\"') or line[:3] == \"'''\"):\n        errmsg_fn('Blank or comment')\n        return False\n    return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef file2module(filename):\n    basename = osp.basename(filename)\n    if '.' in basename:\n        pos = basename.rfind('.')\n        return basename[:pos]\n    else:\n        return basename\n    return None", "response": "Given a filename, extract the most likely module name."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsearches for a file in a list of directories and returns a full pathname for filename if we can find one.", "response": "def search_file(filename, directories, cdir):\n    \"\"\"Return a full pathname for filename if we can find one. directories\n    is a list of directories to prepend to filename. 
If no file is\n found we'll return None\"\"\"\n\n for trydir in directories:\n\n # Handle $cwd and $cdir\n if trydir =='$cwd': trydir='.'\n elif trydir == '$cdir': trydir = cdir\n\n tryfile = osp.realpath(osp.join(trydir, filename))\n if osp.isfile(tryfile):\n return tryfile\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntry to find a file in a directory and return the results.", "response": "def whence_file(py_script, dirnames=None):\n \"\"\"Do a shell-like path lookup for py_script and return the results.\n If we can't find anything return py_script\"\"\"\n if py_script.find(os.sep) != -1:\n # Don't search since this name has path separator components\n return py_script\n if dirnames is None:\n dirnames = os.environ['PATH'].split(os.pathsep)\n for dirname in dirnames:\n py_script_try = osp.join(dirname, py_script)\n if osp.exists(py_script_try):\n return py_script_try\n # Failure\n return py_script"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef print_obj(arg, frame, format=None, short=False):\n try:\n if not frame:\n # ?? 
Should we have set up a dummy globals\n # to have persistence?\n obj = eval(arg, None, None)\n else:\n obj = eval(arg, frame.f_globals, frame.f_locals)\n pass\n except:\n return 'No symbol \"' + arg + '\" in current context.'\n # format and print\n what = arg\n if format:\n what = format + ' ' + arg\n obj = printf(obj, format)\n s = '%s = %s' % (what, obj)\n if not short:\n s += '\\ntype = %s' % type(obj)\n if callable(obj):\n argspec = print_argspec(obj, arg)\n if argspec:\n s += ':\\n\\t'\n if inspect.isclass(obj):\n s += 'Class constructor information:\\n\\t'\n obj = obj.__init__\n elif isinstance(obj, types.InstanceType):\n obj = obj.__call__\n pass\n s+= argspec\n pass\n\n # Try to list the members of a class.\n # Not sure if this is correct or the\n # best way to do.\n s = print_dict(s, obj, \"object variables\")\n if hasattr(obj, \"__class__\"):\n s = print_dict(s, obj.__class__, \"class variables\")\n pass\n return s", "response": "Return a string representation of an object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef adjust_frame(self, pos, absolute_pos):\n if not self.curframe:\n Mmsg.errmsg(self, \"No stack.\")\n return\n\n # Below we remove any negativity. 
At the end, pos will be\n # the new value of self.curindex.\n if absolute_pos:\n if pos >= 0:\n pos = len(self.stack)-pos-1\n else:\n pos = -pos-1\n else:\n pos += self.curindex\n\n if pos < 0:\n Mmsg.errmsg(self,\n \"Adjusting would put us beyond the oldest frame.\")\n return\n elif pos >= len(self.stack):\n Mmsg.errmsg(self,\n \"Adjusting would put us beyond the newest frame.\")\n return\n\n self.curindex = pos\n self.curframe = self.stack[self.curindex][0]\n self.print_location()\n self.list_lineno = None\n return", "response": "Adjust stack frame by pos positions."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ok_for_running(self, cmd_obj, name, cmd_hash):\n '''We separate some of the common debugger command checks here:\n whether it makes sense to run the command in this execution state,\n if the command has the right number of arguments and so on.\n '''\n if hasattr(cmd_obj, 'execution_set'):\n if not (self.core.execution_status in cmd_obj.execution_set):\n part1 = (\"Command '%s' is not available for execution \"\n \"status:\" % name)\n Mmsg.errmsg(self,\n Mmisc.\n wrapped_lines(part1,\n self.core.execution_status,\n self.debugger.settings['width']))\n return False\n pass\n if self.frame is None and cmd_obj.need_stack:\n self.intf[-1].errmsg(\"Command '%s' needs an execution stack.\"\n % name)\n return False\n return True", "response": "Check if the command is available for execution."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _populate_commands(self):\n cmd_instances = []\n from trepan.bwprocessor import command as Mcommand\n eval_cmd_template = 'command_mod.%s(self)'\n for mod_name in Mcommand.__modules__:\n import_name = \"command.\" + mod_name\n try:\n command_mod = getattr(__import__(import_name), mod_name)\n except:\n print('Error importing %s: %s' % (mod_name, sys.exc_info()[0]))\n continue\n\n classnames = [ tup[0] for tup 
in\n                       inspect.getmembers(command_mod, inspect.isclass)\n                       if ('DebuggerCommand' != tup[0] and\n                           tup[0].endswith('Command')) ]\n        for classname in classnames:\n            eval_cmd = eval_cmd_template % classname\n            try:\n                instance = eval(eval_cmd)\n                cmd_instances.append(instance)\n            except:\n                print('Error loading %s from %s: %s' %\n                      (classname, mod_name, sys.exc_info()[0]))\n                pass\n            pass\n        pass\n        return cmd_instances", "response": "Create a list of DebuggerCommand instances, one for each debugger command module."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\npopulate self.commands with the command list.", "response": "def _populate_cmd_lists(self):\n        \"\"\" Populate self.commands\"\"\"\n        self.commands = {}\n        for cmd_instance in self.cmd_instances:\n            cmd_name = cmd_instance.name\n            self.commands[cmd_name] = cmd_instance\n            pass\n        return"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nformatting the current location for display.", "response": "def format_location(proc_obj):\n    \"\"\"Show where we are. GUI's and front-end interfaces often\n    use this to update displays. So it is helpful to make sure\n    we give at least some place that's located in a file.\n    \"\"\"\n    i_stack = proc_obj.curindex\n    if i_stack is None or proc_obj.stack is None:\n        return False\n    location = {}\n    core_obj = proc_obj.core\n    dbgr_obj = proc_obj.debugger\n\n    # Evaluation routines like \"exec\" don't show useful location\n    # info. In these cases, we will use the position before that in\n    # the stack. 
Hence the looping below which in practices loops\n # once and sometimes twice.\n while i_stack >= 0:\n frame_lineno = proc_obj.stack[i_stack]\n i_stack -= 1\n frame, lineno = frame_lineno\n\n filename = Mstack.frame2file(core_obj, frame)\n\n location['filename'] = filename\n location['fn_name'] = frame.f_code.co_name\n location['lineno'] = lineno\n\n if '' == filename and dbgr_obj.eval_string:\n filename = pyficache.unmap_file(filename)\n if '' == filename:\n fd = tempfile.NamedTemporaryFile(suffix='.py',\n prefix='eval_string',\n delete=False)\n fd.write(bytes(dbgr_obj.eval_string, 'UTF-8'))\n fd.close()\n pyficache.remap_file(fd.name, '')\n filename = fd.name\n pass\n pass\n\n opts = {\n 'reload_on_change' : proc_obj.settings('reload'),\n 'output' : 'plain'\n }\n line = pyficache.getline(filename, lineno, opts)\n if not line:\n line = linecache.getline(filename, lineno,\n proc_obj.curframe.f_globals)\n pass\n\n if line and len(line.strip()) != 0:\n location['text'] = line\n pass\n if '' != filename: break\n pass\n\n return location"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nusing to write to a debugger that is connected to this server ; str written will have a newline added to it", "response": "def msg(self, msg):\n \"\"\" used to write to a debugger that is connected to this\n server; `str' written will have a newline added to it\n \"\"\"\n if hasattr(self.output, 'writeline'):\n self.output.writeline(msg)\n elif hasattr(self.output, 'writelines'):\n self.output.writelines(msg + \"\\n\")\n pass\n return"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef run(self, args):\n mainfile = self.core.filename(None)\n if self.core.is_running():\n if mainfile:\n part1 = \"Python program '%s' is stopped\" % mainfile\n else:\n part1 = 'Program is stopped'\n pass\n if self.proc.event:\n msg = 'via a %s event.' 
% self.proc.event\n            else:\n                msg = '.'\n            self.msg(Mmisc.wrapped_lines(part1, msg,\n                                         self.settings['width']))\n            if self.proc.curframe:\n                self.msg(\"PC offset is %d.\" % self.proc.curframe.f_lasti)\n\n            if self.proc.event == 'return':\n                val = self.proc.event_arg\n                part1 = 'Return value is'\n                self.msg(Mmisc.wrapped_lines(part1, self.proc._saferepr(val),\n                                             self.settings['width']))\n                pass\n            elif self.proc.event == 'exception':\n                exc_type, exc_value, exc_tb = self.proc.event_arg\n                self.msg('Exception type: %s' %\n                         self.proc._saferepr(exc_type))\n                if exc_value:\n                    self.msg('Exception value: %s' %\n                             self.proc._saferepr(exc_value))\n                    pass\n                pass\n            self.msg('It stopped %s.' % self.core.stop_reason)\n            if self.proc.event in ['signal', 'exception', 'c_exception']:\n                self.msg('Note: we are stopped *after* running the '\n                         'line shown.')\n                pass\n        else:\n            if mainfile:\n                part1 = \"Python program '%s'\" % mainfile\n                msg = \"is not currently running. \"\n                self.msg(Mmisc.wrapped_lines(part1, msg,\n                                             self.settings['width']))\n            else:\n                self.msg('No Python program is currently running.')\n                pass\n            self.msg(self.core.execution_status)\n            pass\n        return False", "response": "Show information about the status of the program being debugged."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef columnize_commands(self, commands):\n        commands.sort()\n        width = self.debugger.settings['width']\n        return columnize.columnize(commands, displaywidth=width,\n                                   lineprefix=' ')", "response": "Return the commands formatted as aligned columns."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef errmsg(self, msg, opts={}):\n        try:\n            return(self.debugger.intf[-1].errmsg(msg))\n        except EOFError:\n            # FIXME: what do we do here?\n            pass\n        return None", "response": "Convenience short-hand for self.debugger.intf[-1].errmsg."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef msg(self, msg, 
opts={}):\n        try:\n            return(self.debugger.intf[-1].msg(msg))\n        except EOFError:\n            # FIXME: what do we do here?\n            pass\n        return None", "response": "Convenience short-hand for self.debugger.intf[-1].msg."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nproviding a convenience short-hand for self.debugger.intf[-1].msg_nocr.", "response": "def msg_nocr(self, msg, opts={}):\n        \"\"\" Convenience short-hand for self.debugger.intf[-1].msg_nocr \"\"\"\n        try:\n            return(self.debugger.intf[-1].msg_nocr(msg))\n        except EOFError:\n            # FIXME: what do we do here?\n            pass\n        return None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting ReStructuredText and running it through msg.", "response": "def rst_msg(self, text, opts={}):\n        \"\"\"Convert ReStructuredText and run through msg()\"\"\"\n        text = Mformat.rst_text(text,\n                                'plain' == self.debugger.settings['highlight'],\n                                self.debugger.settings['width'])\n        return self.msg(text)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_last_or_frame_exception():\n\n    try:\n        if inspect.istraceback(sys.last_traceback):\n            # We do have a traceback so prefer that.\n            return sys.last_type, sys.last_value, sys.last_traceback\n    except AttributeError:\n        pass\n    return sys.exc_info()", "response": "Intended to be used in post mortem routines."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef post_mortem(exc=None, frameno=1, dbg=None):\n\n    if dbg is None:\n        # Check for a global debugger object\n        if Mdebugger.debugger_obj is None:\n            Mdebugger.debugger_obj = Mdebugger.Trepan()\n            pass\n        dbg = Mdebugger.debugger_obj\n        pass\n    re_bogus_file = re.compile(\"^<.+>$\")\n\n    if exc[0] is None:\n        # frameno+1 because we are about to add one more level of call\n        # in get_last_or_frame_exception\n        exc = get_last_or_frame_exception()\n        if exc[0] is None:\n            print(\"Can't find traceback for post_mortem \"\n                  \"in sys.last_traceback or 
sys.exec_info()\")\n return\n pass\n exc_type, exc_value, exc_tb = exc\n dbg.core.execution_status = ('Terminated with unhandled exception %s'\n % exc_type)\n\n # tb has least-recent traceback entry first. We want the most-recent\n # entry. Also we'll pick out a mainpyfile name if it hasn't previously\n # been set.\n if exc_tb is not None:\n while exc_tb.tb_next is not None:\n filename = exc_tb.tb_frame.f_code.co_filename\n if (dbg.mainpyfile and 0 == len(dbg.mainpyfile)\n and not re_bogus_file.match(filename)):\n dbg.mainpyfile = filename\n pass\n exc_tb = exc_tb.tb_next\n pass\n dbg.core.processor.curframe = exc_tb.tb_frame\n pass\n\n if 0 == len(dbg.program_sys_argv):\n # Fake program (run command) args since we weren't called with any\n dbg.program_sys_argv = list(sys.argv[1:])\n dbg.program_sys_argv[:0] = [dbg.mainpyfile]\n\n # if 0 == len(dbg._sys_argv):\n # # Fake script invocation (restart) args since we don't have any\n # dbg._sys_argv = list(dbg.program_sys_argv)\n # dbg._sys_argv[:0] = [__title__]\n\n try:\n # # FIXME: This can be called from except hook in which case we\n # # need this. Dunno why though.\n # try:\n # _pydb_trace.set_trace(t.tb_frame)\n # except:\n # pass\n\n # Possibly a bug in Python 2.5. 
Why f.f_lineno is\n # not always equal to t.tb_lineno, I don't know.\n f = exc_tb.tb_frame\n if f and f.f_lineno != exc_tb.tb_lineno : f = f.f_back\n dbg.core.processor.event_processor(f, 'exception', exc, 'Trepan3k:pm')\n except DebuggerRestart:\n while True:\n sys.argv = list(dbg._program_sys_argv)\n dbg.msg(\"Restarting %s with arguments:\\n\\t%s\"\n % (dbg.filename(dbg.mainpyfile),\n \" \".join(dbg._program_sys_argv[1:])))\n try:\n dbg.run_script(dbg.mainpyfile)\n except DebuggerRestart:\n pass\n pass\n except DebuggerQuit:\n pass\n return", "response": "Enter the debugger read loop after a program has crashed."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef close(self):\n self.state = 'closing'\n if self.inout:\n self.inout.close()\n pass\n self.state = 'closing connection'\n if self.conn:\n self.conn.close()\n self.state = 'disconnected'\n return", "response": "Closes both socket and server connection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef complete_token_filtered(aliases, prefix, expanded):\n\n \"\"\"Find all starting matches in dictionary *aliases* that start\n with *prefix*, but filter out any matches already in\n *expanded*.\"\"\"\n\n complete_ary = list(aliases.keys())\n results = [cmd for cmd in\n complete_ary if cmd.startswith(prefix)] and not (\n cmd in aliases and expanded not in aliases[cmd])\n return sorted(results, key=lambda pair: pair[0])", "response": "Find all starting matches in dictionary aliases that start with prefix but filter out any matches already in\n expanded."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef complete_identifier(cmd, prefix):\n '''Complete an arbitrary expression.'''\n if not cmd.proc.curframe: return [None]\n # Collect globals and locals. 
It is usually not really sensible to also\n # complete builtins, and they clutter the namespace quite heavily, so we\n # leave them out.\n ns = cmd.proc.curframe.f_globals.copy()\n ns.update(cmd.proc.curframe.f_locals)\n if '.' in prefix:\n # Walk an attribute chain up to the last part, similar to what\n # rlcompleter does. This will bail if any of the parts are not\n # simple attribute access, which is what we want.\n dotted = prefix.split('.')\n try:\n obj = ns[dotted[0]]\n for part in dotted[1:-1]:\n obj = getattr(obj, part)\n except (KeyError, AttributeError):\n return []\n pre_prefix = '.'.join(dotted[:-1]) + '.'\n return [pre_prefix + n for n in dir(obj) if\n n.startswith(dotted[-1])]\n else:\n # Complete a simple name.\n return Mcomplete.complete_token(ns.keys(), prefix)", "response": "Complete an arbitrary expression."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef errmsg(self, message, opts={}):\n if 'plain' != self.debugger.settings['highlight']:\n message = colorize('standout', message)\n pass\n return(self.intf[-1].errmsg(message))", "response": "Convenience short - hand for self. intf. 
errmsg"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef rst_msg(self, text, opts={}):\n from trepan.lib.format import rst_text\n text = rst_text(text,\n 'plain' == self.debugger.settings['highlight'],\n self.debugger.settings['width'])\n return self.msg(text)", "response": "Convert ReStructuredText and run through msg"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef dbgr(self, string):\n '''Invoke a debugger command from inside a python shell called inside\n the debugger.\n '''\n print('')\n self.proc.cmd_queue.append(string)\n self.proc.process_command()\n return", "response": "Invoke a debugger command from inside a python shell called inside\n the debugger."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nquits command when there s just one thread.", "response": "def nothread_quit(self, arg):\n \"\"\" quit command when there's just one thread. 
\"\"\"\n\n self.debugger.core.stop()\n self.debugger.core.execution_status = 'Quit command'\n self.proc.response['event'] = 'terminated'\n self.proc.response['name'] = 'status'\n self.proc.intf[-1].msg(self.proc.response)\n raise Mexcept.DebuggerQuit"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_addr_list_cmd(proc, args, listsize=40):\n\n text = proc.current_command[len(args[0])+1:].strip()\n\n if text in frozenset(('', '.', '+', '-')):\n if text == '.':\n location = resolve_address_location(proc, '.')\n return (location.path, location.line_number, True,\n location.line_number + listsize, True,\n location.method)\n\n if proc.list_offset is None:\n proc.errmsg(\"Don't have previous list location\")\n return INVALID_PARSE_LIST\n\n filename = proc.list_filename\n if text == '+':\n first = max(0, proc.list_offset-1)\n elif text == '-':\n # FIXME: not quite right for offsets\n if proc.list_lineno == 1 + listsize:\n proc.errmsg(\"Already at start of %s.\" % proc.list_filename)\n return INVALID_PARSE_LIST\n first = max(1, proc.list_lineno - (2*listsize) - 1)\n elif text == '':\n # Continue from where we last left off\n first = proc.list_offset + 1\n last = first + listsize - 1\n return filename, first, True, last, True, proc.list_object\n last = first + listsize - 1\n return filename, first, True, last, True, proc.list_object\n else:\n try:\n list_range = build_arange(text)\n except LocationError as e:\n proc.errmsg(\"Error in parsing list range at or around:\")\n proc.errmsg(e.text)\n proc.errmsg(e.text_cursor)\n return INVALID_PARSE_LIST\n except ScannerError as e:\n proc.errmsg(\"Lexical error in parsing list range at or around:\")\n proc.errmsg(e.text)\n proc.errmsg(e.text_cursor)\n return INVALID_PARSE_LIST\n\n if list_range.first is None:\n # Last must have been given\n assert isinstance(list_range.last, Location)\n location = resolve_address_location(proc, list_range.last)\n if not 
location:\n return INVALID_PARSE_LIST\n last = location.line_number\n if location.is_address:\n raise RuntimeError(\"We don't handle ending offsets\")\n else:\n first = max(1, last - listsize)\n return location.path, first, False, last, False, location.method\n elif isinstance(list_range.first, int):\n first = list_range.first\n location = resolve_address_location(proc, list_range.last)\n if not location:\n return INVALID_PARSE_LIST\n filename = location.path\n last = location.line_number\n\n if not location.is_address and last < first:\n # Treat as a count rather than an absolute location\n last = first + last\n return location.path, first, False, last, location.is_address, location.method\n else:\n # First is location. Last may be empty or a number/address\n assert isinstance(list_range.first, Location)\n location = resolve_address_location(proc, list_range.first)\n if not location:\n return INVALID_PARSE_LIST\n first = location.line_number\n first_is_addr = location.is_address\n last_is_addr = False\n last = list_range.last\n if isinstance(last, str):\n # Is an offset +number\n assert last[0] in ('+', '*')\n last_is_addr = last[0] == '*'\n if last_is_addr:\n last = int(last[1:])\n else:\n last = first + int(last[1:])\n elif not last:\n last_is_addr = True\n last = first + listsize\n elif last < first:\n # Treat as a count rather than an absolute location\n last = first + last\n\n return location.path, first, first_is_addr, last, last_is_addr, location.method\n pass\n return", "response": "Parses the arguments for the list command and returns the tuple that is returned."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef add_ignore(self, *frames_or_fns):\n for frame_or_fn in frames_or_fns:\n rc = self.ignore_filter.add_include(frame_or_fn)\n pass\n return rc", "response": "Add frame_or_fn to the list of functions that are not to\n be debugged"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 
3 function for\nturning a filename into its canonic representation and returns it.", "response": "def canonic(self, filename):\n \"\"\" Turns `filename' into its canonic representation and returns this\n string. This allows a user to refer to a given file in one of several\n equivalent ways.\n\n Relative filenames need to be fully resolved, since the current working\n directory might change over the course of execution.\n\n If filename is enclosed in < ... >, then we assume it is\n one of the bogus internal Python names like which is seen\n for example when executing \"exec cmd\".\n \"\"\"\n\n if filename == \"<\" + filename[1:-1] + \">\":\n return filename\n canonic = self.filename_cache.get(filename)\n if not canonic:\n lead_dir = filename.split(os.sep)[0]\n if lead_dir == os.curdir or lead_dir == os.pardir:\n # We may have invoked the program from a directory\n # other than where the program resides. filename is\n # relative to where the program resides. So make sure\n # to use that.\n canonic = os.path.abspath(os.path.join(self.main_dirname,\n filename))\n else:\n canonic = os.path.abspath(filename)\n pass\n if not os.path.isfile(canonic):\n canonic = Mclifns.search_file(filename, self.search_path,\n self.main_dirname)\n # FIXME: is this is right for utter failure?\n if not canonic: canonic = filename\n pass\n canonic = os.path.realpath(os.path.normcase(canonic))\n self.filename_cache[filename] = canonic\n return canonic"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning filename depending on the the basename setting", "response": "def filename(self, filename=None):\n \"\"\"Return filename or the basename of that depending on the\n basename setting\"\"\"\n if filename is None:\n if self.debugger.mainpyfile:\n filename = self.debugger.mainpyfile\n else:\n return None\n if self.debugger.settings['basename']:\n return(os.path.basename(filename))\n return filename"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 
function for\nreturning True if debugging is in progress.", "response": "def is_started(self):\n '''Return True if debugging is in progress.'''\n return (tracer.is_started() and\n not self.trace_hook_suspend\n and tracer.find_hook(self.trace_dispatch))"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef start(self, opts=None):\n\n # The below is our fancy equivalent of:\n # sys.settrace(self._trace_dispatch)\n try:\n self.trace_hook_suspend = True\n get_option = lambda key: Mmisc.option_set(opts, key,\n default.START_OPTS)\n\n add_hook_opts = get_option('add_hook_opts')\n\n # Has tracer been started?\n if not tracer.is_started() or get_option('force'):\n # FIXME: should filter out opts not for tracer\n\n tracer_start_opts = default.START_OPTS.copy()\n if opts:\n tracer_start_opts.update(opts.get('tracer_start', {}))\n tracer_start_opts['trace_fn'] = self.trace_dispatch\n tracer_start_opts['add_hook_opts'] = add_hook_opts\n tracer.start(tracer_start_opts)\n elif not tracer.find_hook(self.trace_dispatch):\n tracer.add_hook(self.trace_dispatch, add_hook_opts)\n pass\n self.execution_status = 'Running'\n finally:\n self.trace_hook_suspend = False\n return", "response": "We've already created a debugger object, but here we start\n debugging in earnest. We can also turn off debugging (but have\n the hooks suspended or not) using 'stop'.\n\n 'opts' is a hash of every known value you might want to set when\n starting the debugger. See START_OPTS of module default."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning True if we are going to stop here and set the self. stop_reason if so.", "response": "def is_stop_here(self, frame, event, arg):\n \"\"\" Does the magic to determine if we stop here and run a\n command processor or not. 
If so, return True and set\n self.stop_reason; if not, return False.\n\n Determining factors can be whether a breakpoint was\n encountered, whether we are stepping, next'ing, finish'ing,\n and, if so, whether there is an ignore counter.\n \"\"\"\n\n # Add an generic event filter here?\n # FIXME TODO: Check for\n # - thread switching (under set option)\n\n # Check for \"next\" and \"finish\" stopping via stop_level\n\n # Do we want a different line and if so,\n # do we have one?\n lineno = frame.f_lineno\n filename = frame.f_code.co_filename\n if self.different_line and event == 'line':\n if self.last_lineno == lineno and self.last_filename == filename:\n return False\n pass\n self.last_lineno = lineno\n self.last_filename = filename\n\n if self.stop_level is not None:\n if frame != self.last_frame:\n # Recompute stack_depth\n self.last_level = Mstack.count_frames(frame)\n self.last_frame = frame\n pass\n if self.last_level > self.stop_level:\n return False\n elif self.last_level == self.stop_level and \\\n self.stop_on_finish and event in ['return', 'c_return']:\n self.stop_level = None\n self.stop_reason = \"in return for 'finish' command\"\n return True\n pass\n\n # Check for stepping\n if self._is_step_next_stop(event):\n self.stop_reason = 'at a stepping statement'\n return True\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_next(self, frame, step_ignore=0, step_events=None):\n \"Sets to stop on the next event that happens in frame 'frame'.\"\n self.step_events = None # Consider all events\n self.stop_level = Mstack.count_frames(frame)\n self.last_level = self.stop_level\n self.last_frame = frame\n self.stop_on_finish = False\n self.step_ignore = step_ignore\n return", "response": "Sets to stop on the next event that happens in frame frame."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking whether we should break here because of b. 
funcname.", "response": "def checkfuncname(b, frame):\n \"\"\"Check whether we should break here because of `b.funcname`.\"\"\"\n if not b.funcname:\n # Breakpoint was set via line number.\n if b.line != frame.f_lineno:\n # Breakpoint was set at a line with a def statement and the function\n # defined is called: don't break.\n return False\n return True\n\n # Breakpoint set via function name.\n\n if frame.f_code.co_name != b.funcname:\n # It's not a function call, but rather execution of def statement.\n return False\n\n # We are in the right frame.\n if not b.func_first_executable_line:\n # The function is entered for the 1st time.\n b.func_first_executable_line = frame.f_lineno\n\n if b.func_first_executable_line != frame.f_lineno:\n # But we are not at the first line number: don't break.\n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove a breakpoint given its breakpoint number.", "response": "def delete_breakpoint_by_number(self, bpnum):\n \"Remove a breakpoint given its breakpoint number.\"\n success, msg, bp = self.get_breakpoint(bpnum)\n if not success:\n return False, msg\n self.delete_breakpoint(bp)\n return (True, '')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nenable or disable all breakpoints.", "response": "def en_disable_all_breakpoints(self, do_enable=True):\n \"Enable or disable all breakpoints.\"\n bp_list = [bp for bp in self.bpbynumber if bp]\n bp_nums = []\n if do_enable:\n endis = 'en'\n else:\n endis = 'dis'\n pass\n if not bp_list:\n return \"No breakpoints to %sable\" % endis\n for bp in bp_list:\n bp.enabled = do_enable\n bp_nums.append(str(bp.number))\n pass\n return (\"Breakpoints %sabled: %s\" % (endis, \", \".join(bp_nums)))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nenables or disable a breakpoint given its breakpoint number.", "response": "def en_disable_breakpoint_by_number(self, bpnum, 
do_enable=True):\n \"Enable or disable a breakpoint given its breakpoint number.\"\n success, msg, bp = self.get_breakpoint(bpnum)\n if not success:\n return success, msg\n if do_enable:\n endis = 'en'\n else:\n endis = 'dis'\n pass\n if bp.enabled == do_enable:\n return (False, ('Breakpoint (%r) previously %sabled' %\n (str(bpnum), endis,)))\n bp.enabled = do_enable\n return (True, '')"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nremoving all breakpoints at a give filename and line number. Returns a list of breakpoints numbers deleted.", "response": "def delete_breakpoints_by_lineno(self, filename, lineno):\n \"\"\"Removes all breakpoints at a give filename and line number.\n Returns a list of breakpoints numbers deleted.\n \"\"\"\n if (filename, lineno) not in self.bplist:\n return []\n breakpoints = self.bplist[(filename, lineno)]\n bpnums = [bp.number for bp in breakpoints]\n for bp in list(breakpoints):\n self.delete_breakpoint(bp)\n return bpnums"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfinding a breakpoint at the given filename line number.", "response": "def find_bp(self, filename, lineno, frame):\n \"\"\"Determine which breakpoint for this file:line is to be acted upon.\n\n Called only if we know there is a bpt at this\n location. 
Returns breakpoint that was triggered and a flag\n that indicates if it is ok to delete a temporary breakpoint.\n\n \"\"\"\n possibles = self.bplist[filename, lineno]\n for i in range(0, len(possibles)):\n b = possibles[i]\n if not b.enabled:\n continue\n if not checkfuncname(b, frame):\n continue\n # Count every hit when bp is enabled\n b.hits += 1\n if not b.condition:\n # If unconditional, and ignoring, go on to next, else\n # break\n if b.ignore > 0:\n b.ignore = b.ignore -1\n continue\n else:\n # breakpoint and marker that's ok to delete if\n # temporary\n return (b, True)\n else:\n # Conditional bp.\n # Ignore count applies only to those bpt hits where the\n # condition evaluates to true.\n try:\n val = eval(b.condition, frame.f_globals, frame.f_locals)\n if val:\n if b.ignore > 0:\n b.ignore = b.ignore -1\n # continue\n else:\n return (b, True)\n # else:\n # continue\n except:\n # if eval fails, most conservative thing is to\n # stop on breakpoint regardless of ignore count.\n # Don't delete temporary, as another hint to user.\n return (b, False)\n pass\n pass\n return (None, None)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef open(self, inp, opts=None):\n if isinstance(inp, io.TextIOWrapper):\n self.input = inp\n elif isinstance(inp, 'string'.__class__): # FIXME\n self.name = inp\n self.input = open(inp, 'r')\n else:\n raise IOError(\"Invalid input type (%s) for %s\" %\n (inp.__class__.__name__, inp))\n return", "response": "Use this to set what file to read from."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread a line of input.", "response": "def readline(self, prompt='', use_raw=None):\n \"\"\"Read a line of input. 
Prompt and use_raw exist to be\n compatible with other input routines and are ignored.\n EOFError will be raised on EOF.\n \"\"\"\n line = self.input.readline()\n if not line: raise EOFError\n return line.rstrip(\"\\n\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef confirm(self, prompt, default):\n while True:\n try:\n self.write_confirm(prompt, default)\n reply = self.readline('').strip().lower()\n except EOFError:\n return default\n if reply in ('y', 'yes'):\n return True\n elif reply in ('n', 'no'):\n return False\n else:\n self.msg(\"Please answer y or n.\")\n pass\n pass\n return default", "response": "Called when a dangerous action is about to be done to make sure\n it s okay. prompt is printed and user response is returned."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef t_single_quote_file(self, s):\n r\"'[^'].+'\"\n # Pick out text inside of singe-quoted string\n base = s[1:-1]\n self.add_token('FILENAME', base)\n self.pos += len(s)", "response": "Look for a single quote file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndouble quote a file name.", "response": "def t_double_quote_file(self, s):\n r'\"[^\"]+\"'\n # Pick out text inside of singe-quoted string\n base = s[1:-1]\n self.add_token('FILENAME', base)\n self.pos += len(s)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef t_colon(self, s):\n r':'\n # Used to separate a filename from a line number\n self.add_token('COLON', s)\n self.pos += len(s)", "response": "r Colon is used to separate a filename from a line number."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef t_comma(self, s):\n r','\n # Used in \"list\" to separate first from last\n self.add_token('COMMA', s)\n self.pos += len(s)", "response": "r Comma is used 
in list and list."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses a DIRECTION token.", "response": "def t_direction(self, s):\n r'^[+-]$'\n # Used in the \"list\" command\n self.add_token('DIRECTION', s)\n self.pos += len(s)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef t_offset(self, s):\n r'[+]\\d+'\n pos = self.pos\n self.add_token('OFFSET', s)\n self.pos = pos + len(s)", "response": "r Offset is a version of the RFC 2965 section 2. 3. 1."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nextracts Concepts from a list of sentences and ids.", "response": "def extract_concepts(self, sentences=None, ids=None,\n composite_phrase=4, filename=None,\n file_format='sldi', allow_acronym_variants=False,\n word_sense_disambiguation=False, allow_large_n=False,\n strict_model=False, relaxed_model=False,\n allow_overmatches=False, allow_concept_gaps=False,\n term_processing=False, no_derivational_variants=False,\n derivational_variants=False, ignore_word_order=False,\n unique_acronym_variants=False,\n prefer_multiple_concepts=False,\n ignore_stop_phrases=False, compute_all_mappings=False,\n mm_data_version=False, exclude_sources=[],\n restrict_to_sources=[], restrict_to_sts=[], exclude_sts=[]):\n \"\"\" extract_concepts takes a list of sentences and ids(optional)\n then returns a list of Concept objects extracted via\n MetaMap.\n\n Supported Options:\n Composite Phrase -Q\n Word Sense Disambiguation -y\n use strict model -A\n use relaxed model -C\n allow large N -l\n allow overmatches -o\n allow concept gaps -g\n term processing -z\n No Derivational Variants -d\n All Derivational Variants -D\n Ignore Word Order -i\n Allow Acronym Variants -a\n Unique Acronym Variants -u\n Prefer Multiple Concepts -Y\n Ignore Stop Phrases -K\n Compute All Mappings -b\n MM Data Version -V\n Exclude Sources -e\n Restrict to 
Sources -R\n Restrict to Semantic Types -J\n Exclude Semantic Types -k\n \n\n For information about the available options visit\n http://metamap.nlm.nih.gov/.\n\n Note: If an error is encountered the process will be closed\n and whatever was processed, if anything, will be\n returned along with the error found.\n \"\"\"\n if allow_acronym_variants and unique_acronym_variants:\n raise ValueError(\"You can't use both allow_acronym_variants and \"\n \"unique_acronym_variants.\")\n if (sentences is not None and filename is not None) or \\\n (sentences is None and filename is None):\n raise ValueError(\"You must either pass a list of sentences \"\n \"OR a filename.\")\n if file_format not in ['sldi','sldiID']:\n raise ValueError(\"file_format must be either sldi or sldiID\")\n\n input_file = None\n if sentences is not None:\n input_file = tempfile.NamedTemporaryFile(mode=\"wb\", delete=False)\n else:\n input_file = open(filename, 'r')\n output_file = tempfile.NamedTemporaryFile(mode=\"r\", delete=False)\n error = None\n try:\n if sentences is not None:\n if ids is not None:\n for identifier, sentence in zip(ids, sentences):\n input_file.write('{0!r}|{1!r}\\n'.format(identifier, sentence).encode('utf8'))\n else:\n for sentence in sentences:\n input_file.write('{0!r}\\n'.format(sentence).encode('utf8'))\n input_file.flush()\n\n command = [self.metamap_filename, '-N']\n command.append('-Q')\n command.append(str(composite_phrase))\n if mm_data_version is not False:\n if mm_data_version not in ['Base', 'USAbase', 'NLM']:\n raise ValueError(\"mm_data_version must be Base, USAbase, or NLM.\")\n command.append('-V')\n command.append(str(mm_data_version))\n if word_sense_disambiguation:\n command.append('-y')\n if strict_model:\n command.append('-A')\n if relaxed_model:\n command.append('-C')\n if allow_large_n:\n command.append('-l')\n if allow_overmatches:\n command.append('-o')\n if allow_concept_gaps:\n command.append('-g')\n if term_processing:\n command.append('-z')\n if 
no_derivational_variants:\n command.append('-d')\n if derivational_variants:\n command.append('-D')\n if ignore_word_order:\n command.append('-i')\n if allow_acronym_variants:\n command.append('-a')\n if unique_acronym_variants:\n command.append('-u')\n if prefer_multiple_concepts:\n command.append('-Y')\n if ignore_stop_phrases:\n command.append('-K')\n if compute_all_mappings:\n command.append('-b')\n if len(exclude_sources) > 0:\n command.append('-e')\n command.append(str(','.join(exclude_sources)))\n if len(restrict_to_sources) > 0:\n command.append('-R')\n command.append(str(','.join(restrict_to_sources)))\n if len(restrict_to_sts) > 0:\n command.append('-J')\n command.append(str(','.join(restrict_to_sts)))\n if len(exclude_sts) > 0:\n command.append('-k')\n command.append(str(','.join(exclude_sts)))\n if ids is not None or (file_format == 'sldiID' and\n sentences is None):\n command.append('--sldiID')\n else:\n command.append('--sldi')\n command.append(input_file.name)\n command.append(output_file.name)\n\n metamap_process = subprocess.Popen(command, stdout=subprocess.PIPE)\n while metamap_process.poll() is None:\n stdout = str(metamap_process.stdout.readline())\n if 'ERROR' in stdout:\n metamap_process.terminate()\n error = stdout.rstrip()\n output = str(output_file.read())\n finally:\n if sentences is not None:\n os.remove(input_file.name)\n else:\n input_file.close()\n os.remove(output_file.name)\n\n concepts = Corpus.load(output.splitlines())\n return (concepts, error)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef extract_concepts(self, sentences=None, ids=None, filename=None,\n restrict_to_sts=None, restrict_to_sources=None):\n \"\"\" extract_concepts takes a list of sentences and ids(optional)\n then returns a list of Concept objects extracted via\n MetaMapLite.\n\n Supported Options:\n Restrict to Semantic Types --restrict_to_sts\n Restrict to Sources --restrict_to_sources\n\n For 
information about the available options visit\n http://metamap.nlm.nih.gov/.\n\n Note: If an error is encountered the process will be closed\n and whatever was processed, if anything, will be\n returned along with the error found.\n \"\"\"\n if (sentences is not None and filename is not None) or \\\n (sentences is None and filename is None):\n raise ValueError(\"You must either pass a list of sentences \"\n \"OR a filename.\")\n\n input_file = None\n if sentences is not None:\n input_file = tempfile.NamedTemporaryFile(mode=\"wb\", delete=False)\n else:\n input_file = open(filename, 'r')\n\n # Unlike MetaMap, MetaMapLite does not take an output filename as a parameter.\n # It creates a new output file at same location as \"input_file\" with the default file extension \".mmi\".\n # output_file = tempfile.NamedTemporaryFile(mode=\"r\", delete=False)\n output_file_name = None\n error = None\n try:\n if sentences is not None:\n if ids is not None:\n for identifier, sentence in zip(ids, sentences):\n input_file.write('{0!r}|{1!r}\\n'.format(identifier, sentence).encode('utf8'))\n else:\n for sentence in sentences:\n input_file.write('{0!r}\\n'.format(sentence).encode('utf8'))\n input_file.flush()\n\n command = [\"bash\", os.path.join(self.metamap_filename, \"metamaplite.sh\")]\n if restrict_to_sts:\n if isinstance(restrict_to_sts, str):\n restrict_to_sts = [restrict_to_sts]\n if len(restrict_to_sts) > 0:\n command.append('--restrict_to_sts')\n command.append(str(','.join(restrict_to_sts)))\n\n if restrict_to_sources:\n if isinstance(restrict_to_sources, str):\n restrict_to_sources = [restrict_to_sources]\n if len(restrict_to_sources) > 0:\n command.append('--restrict_to_sources')\n command.append(str(','.join(restrict_to_sources)))\n\n if ids is not None:\n command.append('--inputformat=sldiwi')\n\n command.append(input_file.name)\n # command.append(output_file.name)\n\n metamap_process = subprocess.Popen(command, stdout=subprocess.PIPE)\n while metamap_process.poll() is 
None:\n stdout = str(metamap_process.stdout.readline())\n if 'ERROR' in stdout:\n metamap_process.terminate()\n error = stdout.rstrip()\n\n # print(\"input file name: {0}\".format(input_file.name))\n output_file_name, file_extension = os.path.splitext(input_file.name)\n output_file_name += \".\" + \"mmi\"\n # print(\"output_file_name: {0}\".format(output_file_name))\n with open(output_file_name) as fd:\n output = fd.read()\n # output = str(output_file.read())\n # print(\"output: {0}\".format(output))\n finally:\n if sentences is not None:\n os.remove(input_file.name)\n else:\n input_file.close()\n # os.remove(output_file.name)\n os.remove(output_file_name)\n\n concepts = CorpusLite.load(output.splitlines())\n return concepts, error", "response": "This function extracts Concepts from a list of sentences and ids and returns a list of Concept objects."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef as_future(self, query):\n # concurrent.futures.Future is not compatible with the \"new style\"\n # asyncio Future, and awaiting on such \"old-style\" futures does not\n # work.\n #\n # tornado includes a `run_in_executor` function to help with this\n # problem, but it's only included in version 5+. Hence, we copy a\n # little bit of code here to handle this incompatibility.\n\n if not self._pool:\n self._pool = ThreadPoolExecutor(max_workers=self._max_workers)\n\n old_future = self._pool.submit(query)\n new_future = Future()\n\n IOLoop.current().add_future(\n old_future, lambda f: chain_future(f, new_future)\n )\n\n return new_future", "response": "Wrap a sqlalchemy. orm. query. Query object into a concurrent. futures. 
Future so that it can be yielded on it."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef login_as(user, request, store_original_user=True):\n\n # Save the original user pk before it is replaced in the login method\n original_user_pk = request.user.pk\n\n # Find a suitable backend.\n if not hasattr(user, \"backend\"):\n for backend in django_settings.AUTHENTICATION_BACKENDS:\n if not hasattr(load_backend(backend), \"get_user\"):\n continue\n\n if user == load_backend(backend).get_user(user.pk):\n user.backend = backend\n break\n else:\n raise ImproperlyConfigured(\"Could not found an appropriate authentication backend\")\n\n # Add admin audit log entry\n if original_user_pk:\n change_message = \"User {0} logged in as {1}.\".format(request.user, user)\n LogEntry.objects.log_action(\n user_id=original_user_pk,\n content_type_id=ContentType.objects.get_for_model(user).pk,\n object_id=user.pk,\n object_repr=str(user),\n change_message=change_message,\n action_flag=CHANGE,\n )\n\n # Log the user in.\n if not hasattr(user, \"backend\"):\n return\n\n if la_settings.UPDATE_LAST_LOGIN:\n login(request, user)\n else:\n with no_update_last_login():\n login(request, user)\n\n # Set a flag on the session\n if store_original_user:\n messages.warning(\n request,\n la_settings.MESSAGE_LOGIN_SWITCH.format(username=user.__dict__[username_field]),\n extra_tags=la_settings.MESSAGE_EXTRA_TAGS,\n )\n request.session[la_settings.USER_SESSION_FLAG] = signer.sign(original_user_pk)", "response": "Log the user in and store the original user pk in the session."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nrestores an original login session", "response": "def restore_original_login(request):\n \"\"\"\n Restore an original login session, checking the signed session\n \"\"\"\n original_session = request.session.get(la_settings.USER_SESSION_FLAG)\n logout(request)\n\n if not 
original_session:\n return\n\n try:\n original_user_pk = signer.unsign(\n original_session, max_age=timedelta(days=la_settings.USER_SESSION_DAYS_TIMESTAMP).total_seconds()\n )\n user = get_user_model().objects.get(pk=original_user_pk)\n messages.info(\n request,\n la_settings.MESSAGE_LOGIN_REVERT.format(username=user.__dict__[username_field]),\n extra_tags=la_settings.MESSAGE_EXTRA_TAGS,\n )\n login_as(user, request, store_original_user=False)\n if la_settings.USER_SESSION_FLAG in request.session:\n del request.session[la_settings.USER_SESSION_FLAG]\n except SignatureExpired:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncoding to load create user module. Copied off django - browserid.", "response": "def _load_module(path):\n \"\"\"Code to load create user module. Copied off django-browserid.\"\"\"\n\n i = path.rfind(\".\")\n module, attr = path[:i], path[i + 1 :]\n\n try:\n mod = import_module(module)\n except ImportError:\n raise ImproperlyConfigured(\"Error importing CAN_LOGIN_AS function: {}\".format(module))\n except ValueError:\n raise ImproperlyConfigured(\"Error importing CAN_LOGIN_AS\" \" function. 
Is CAN_LOGIN_AS a\" \" string?\")\n\n try:\n can_login_as = getattr(mod, attr)\n except AttributeError:\n raise ImproperlyConfigured(\"Module {0} does not define a {1} \" \"function.\".format(module, attr))\n return can_login_as"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef iterate_docs(client, expanded=False, progress=False):\n # Get total number of docs from the project record\n num_docs = client.get()['document_count']\n progress_bar = None\n try:\n if progress:\n progress_bar = tqdm(desc='Downloading documents', total=num_docs)\n\n for offset in range(0, num_docs, DOCS_PER_BATCH):\n response = client.get('docs', offset=offset, limit=DOCS_PER_BATCH)\n docs = response['result']\n for doc in docs:\n # Get the appropriate set of fields for each document\n if expanded:\n for field in UNNECESSARY_FIELDS:\n doc.pop(field, None)\n else:\n doc = {field: doc[field] for field in CONCISE_FIELDS}\n\n if progress:\n progress_bar.update()\n yield doc\n\n finally:\n if progress:\n progress_bar.close()", "response": "Yields each document in a Luminoso project in turn."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndownloads all the documents from a LuminosoClient pointing to a project and write them to a JSON file.", "response": "def download_docs(client, output_filename=None, expanded=False):\n \"\"\"\n Given a LuminosoClient pointing to a project and a filename to write to,\n retrieve all its documents in batches, and write them to a JSON lines\n (.jsons) file with one document per line.\n \"\"\"\n if output_filename is None:\n # Find a default filename to download to, based on the project name.\n projname = _sanitize_filename(client.get()['name'])\n output_filename = '{}.jsons'.format(projname)\n\n # If the file already exists, add .1, .2, ..., after the project name\n # to unobtrusively get a unique filename.\n counter = 0\n while os.access(output_filename, 
os.F_OK):\n counter += 1\n output_filename = '{}.{}.jsons'.format(projname, counter)\n\n print('Downloading project to {!r}'.format(output_filename))\n\n with open(output_filename, 'w', encoding='utf-8') as out:\n for doc in iterate_docs(client, expanded=expanded, progress=True):\n print(json.dumps(doc, ensure_ascii=False), file=out)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nhandling arguments for the 'lumi-download' command.", "response": "def _main(argv):\n \"\"\"\n Handle arguments for the 'lumi-download' command.\n \"\"\"\n parser = argparse.ArgumentParser(\n description=DESCRIPTION,\n formatter_class=argparse.RawDescriptionHelpFormatter\n )\n parser.add_argument(\n '-b',\n '--base-url',\n default=URL_BASE,\n help='API root url, default: %s' % URL_BASE,\n )\n parser.add_argument(\n '-e', '--expanded',\n help=\"Include Luminoso's analysis of each document, such as terms and\"\n ' document vectors',\n action='store_true',\n )\n parser.add_argument('-t', '--token', help='API authentication token')\n parser.add_argument(\n '-s',\n '--save-token',\n action='store_true',\n help='save --token for --base-url to ~/.luminoso/tokens.json',\n )\n parser.add_argument(\n 'project_id', help='The ID of the project in the Daylight API'\n )\n parser.add_argument(\n 'output_file', nargs='?', default=None,\n help='The JSON lines (.jsons) file to write to'\n )\n args = parser.parse_args(argv)\n if args.save_token:\n if not args.token:\n raise ValueError(\"error: no token provided\")\n LuminosoClient.save_token(args.token,\n domain=urlparse(args.base_url).netloc)\n\n client = LuminosoClient.connect(url=args.base_url, token=args.token)\n proj_client = client.client_for_path('projects/{}'.format(args.project_id))\n download_docs(proj_client, args.output_file, args.expanded)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef transcode(input_filename, output_filename=None, 
date_format=None):\n if output_filename is None:\n # transcode to standard output\n output = sys.stdout\n else:\n if output_filename.endswith('.json'):\n logger.warning(\"Changing .json to .jsons, because this program \"\n \"outputs a JSON stream format that is not \"\n \"technically JSON itself.\")\n output_filename += 's'\n output = open(output_filename, 'w')\n\n for entry in open_json_or_csv_somehow(input_filename,\n date_format=date_format):\n output.write(json.dumps(entry, ensure_ascii=False).encode('utf-8'))\n output.write('\\n')\n output.close()", "response": "Convert a JSON or CSV file of input to a standard output."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef transcode_to_stream(input_filename, date_format=None):\n tmp = tempfile.TemporaryFile()\n for entry in open_json_or_csv_somehow(input_filename,\n date_format=date_format):\n tmp.write(json.dumps(entry, ensure_ascii=False).encode('utf-8'))\n tmp.write(b'\\n')\n tmp.seek(0)\n return tmp", "response": "Read a JSON or CSV file and convert it into a stream."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nopening a file and return a tuple of the appropriate file format.", "response": "def open_json_or_csv_somehow(filename, date_format=None):\n \"\"\"\n Deduce the format of a file, within reason.\n\n - If the filename ends with .csv or .txt, it's csv.\n - If the filename ends with .jsons, it's a JSON stream (conveniently the\n format we want to output).\n - If the filename ends with .json, it could be a legitimate JSON file, or\n it could be a JSON stream, following a nonstandard convention that many\n people including us are guilty of. 
In that case:\n - If the first line is a complete JSON document, and there is more in the\n file besides the first line, then it is a JSON stream.\n - Otherwise, it is probably really JSON.\n - If the filename does not end with .json, .jsons, or .csv, we have to guess\n whether it's still CSV or tab-separated values or something like that.\n If it's JSON, the first character would almost certainly have to be a\n bracket or a brace. If it isn't, assume it's CSV or similar.\n \"\"\"\n fileformat = None\n if filename.endswith('.csv'):\n fileformat = 'csv'\n elif filename.endswith('.jsons'):\n fileformat = 'jsons'\n else:\n with open(filename) as opened:\n line = opened.readline()\n if line[0] not in '{[' and not filename.endswith('.json'):\n fileformat = 'csv'\n else:\n if (line.count('{') == line.count('}') and\n line.count('[') == line.count(']')):\n # This line contains a complete JSON document. This probably\n # means it's in linewise JSON ('.jsons') format, unless the\n # whole file is on one line.\n char = ' '\n while char.isspace():\n char = opened.read()\n if char == '':\n fileformat = 'json'\n break\n if fileformat is None:\n fileformat = 'jsons'\n else:\n fileformat = 'json'\n\n if fileformat == 'json':\n stream = json.load(open(filename), encoding='utf-8')\n elif fileformat == 'csv':\n stream = open_csv_somehow(filename)\n else:\n stream = stream_json_lines(filename)\n\n return _normalize_data(stream, date_format=date_format)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _convert_date(date_string, date_format):\n if date_format != 'epoch':\n return datetime.strptime(date_string, date_format).timestamp()\n else:\n return float(date_string)", "response": "Convert a date in a given format to epoch time."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef detect_file_encoding(filename):\n with open(filename, 'rb') as opened:\n sample = 
opened.read(2 ** 20)\n _, encoding = ftfy.guess_bytes(sample)\n return encoding", "response": "Detect the encoding of a file based on the file s file format."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef stream_json_lines(file):\n if isinstance(file, string_type):\n file = open(file, 'rb')\n for line in file:\n line = line.strip()\n if line:\n if isinstance(line, bytes):\n line = line.decode('utf-8')\n yield json.loads(line)", "response": "Load a JSON stream and return a generator yielding one object at a time."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts a file in some other encoding into a temporary file that s in UTF - 8.", "response": "def transcode_to_utf8(filename, encoding):\n \"\"\"\n Convert a file in some other encoding into a temporary file that's in\n UTF-8.\n \"\"\"\n tmp = tempfile.TemporaryFile()\n for line in io.open(filename, encoding=encoding):\n tmp.write(line.strip('\\uFEFF').encode('utf-8'))\n\n tmp.seek(0)\n return tmp"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nopening a CSV file using Python 2 s CSV module working around the deficiency where it can t handle null bytes of UTF - 16.", "response": "def open_csv_somehow_py2(filename):\n \"\"\"\n Open a CSV file using Python 2's CSV module, working around the deficiency\n where it can't handle the null bytes of UTF-16.\n \"\"\"\n encoding = detect_file_encoding(filename)\n if encoding.startswith('UTF-16'):\n csvfile = transcode_to_utf8(filename, encoding)\n encoding = 'UTF-8'\n else:\n csvfile = open(filename, 'rU')\n line = csvfile.readline()\n csvfile.seek(0)\n\n if '\\t' in line:\n # tab-separated\n reader = csv.reader(csvfile, delimiter='\\t')\n else:\n reader = csv.reader(csvfile, dialect='excel')\n\n header = reader.next()\n header = [cell.decode(encoding).lower().strip() for cell in header]\n encode_fn = lambda x: x.decode(encoding, 
'replace')\n return _read_csv(reader, header, encode_fn)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngives a CSV reader object a header row that we ve read and an encoding function that we ve detected encoding yield its rows as dictionaries.", "response": "def _read_csv(reader, header, encode_fn):\n \"\"\"\n Given a constructed CSV reader object, a header row that we've read, and\n a detected encoding, yield its rows as dictionaries.\n \"\"\"\n for row in reader:\n if len(row) == 0:\n continue\n row = [encode_fn(cell) for cell in row]\n row_list = zip(header, row)\n row_dict = dict(row_list)\n if len(row_dict['text']) == 0:\n continue\n row_dict['text'] = unicodedata.normalize(\n 'NFKC', row_dict['text'].strip()\n )\n if row_dict.get('title') == '':\n del row_dict['title']\n if 'date' in row_dict:\n # We handle dates further in open_json_or_csv_somehow\n if row_dict['date'] == '':\n del row_dict['date']\n if 'subset' in row_dict:\n subsets = [cell[1] for cell in row_list\n if cell[1] != '' and cell[0] == 'subset']\n if subsets:\n row_dict['subsets'] = subsets\n if 'subset' in row_dict:\n del row_dict['subset']\n yield row_dict"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef main():\n logging.basicConfig(level=logging.INFO)\n import argparse\n parser = argparse.ArgumentParser(\n description=\"Translate CSV or JSON input to a JSON stream, or verify \"\n \"something that is already a JSON stream.\"\n )\n parser.add_argument('input',\n help='A CSV, JSON, or JSON stream file to read.')\n parser.add_argument('output', nargs='?', default=None,\n help=\"The filename to output to. Recommended extension is .jsons. 
\"\n \"If omitted, use standard output.\")\n args = parser.parse_args()\n transcode(args.input, args.output)", "response": "Main function for the a\n script."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef jsonify_parameters(params):\n result = {}\n for param, value in params.items():\n if isinstance(value, (int, str)):\n result[param] = value\n else:\n result[param] = json.dumps(value)\n return result", "response": "JSON - encodes the parameters in a list of dicts."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a new client with a session that can be used to access the Luminoso API.", "response": "def connect(cls, url=None, token_file=None, token=None):\n \"\"\"\n Returns an object that makes requests to the API, authenticated\n with a saved or specified long-lived token, at URLs beginning with\n `url`.\n\n If no URL is specified, or if the specified URL is a path such as\n '/projects' without a scheme and domain, the client will default to\n https://analytics.luminoso.com/api/v5/.\n\n If neither token nor token_file are specified, the client will look\n for a token in $HOME/.luminoso/tokens.json. 
The file should contain\n a single json dictionary of the format\n `{'root_url': 'token', 'root_url2': 'token2', ...}`.\n \"\"\"\n if url is None:\n url = '/'\n\n if url.startswith('http'):\n root_url = get_root_url(url)\n else:\n url = URL_BASE + '/' + url.lstrip('/')\n root_url = URL_BASE\n\n if token is None:\n token_file = token_file or get_token_filename()\n try:\n with open(token_file) as tf:\n token_dict = json.load(tf)\n except FileNotFoundError:\n raise LuminosoAuthError('No token file at %s' % token_file)\n try:\n token = token_dict[urlparse(root_url).netloc]\n except KeyError:\n raise LuminosoAuthError('No token stored for %s' % root_url)\n\n session = requests.session()\n session.auth = _TokenAuth(token)\n return cls(session, url)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef save_token(token, domain='analytics.luminoso.com', token_file=None):\n token_file = token_file or get_token_filename()\n if os.path.exists(token_file):\n saved_tokens = json.load(open(token_file))\n else:\n saved_tokens = {}\n saved_tokens[domain] = token\n directory, filename = os.path.split(token_file)\n if directory and not os.path.exists(directory):\n os.makedirs(directory)\n with open(token_file, 'w') as f:\n json.dump(saved_tokens, f)", "response": "Save a long - lived API token to a local file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef connect_with_username_and_password(cls, url=None, username=None,\n password=None):\n \"\"\"\n Returns an object that makes requests to the API, authenticated with\n a short-lived token retrieved from username and password. If username\n or password is not supplied, the method will prompt for a username\n and/or password to be entered interactively.\n\n See the connect method for more details about the `url` argument.\n\n PLEASE NOTE: This method is being provided as a temporary measure. 
We\n strongly encourage users of the Luminoso API to use a long-lived token\n instead, as explained in the V5_README file.\n \"\"\"\n from .v4_client import LuminosoClient as v4LC\n if username is None:\n username = input('Username: ')\n v4client = v4LC.connect(url=url, username=username, password=password)\n\n if url is None:\n url = '/'\n\n if url.startswith('http'):\n root_url = get_root_url(url)\n else:\n url = URL_BASE + '/' + url.lstrip('/')\n root_url = URL_BASE\n\n return cls(v4client.session, root_url)", "response": "Connect to the API with username and password."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _request(self, req_type, url, **kwargs):\n logger.debug('%s %s' % (req_type, url))\n result = self.session.request(req_type, url, **kwargs)\n try:\n result.raise_for_status()\n except requests.HTTPError:\n error = result.text\n try:\n error = json.loads(error)\n except ValueError:\n pass\n if result.status_code in (401, 403):\n error_class = LuminosoAuthError\n elif result.status_code in (400, 404, 405):\n error_class = LuminosoClientError\n elif result.status_code >= 500:\n error_class = LuminosoServerError\n else:\n error_class = LuminosoError\n raise error_class(error)\n return result", "response": "Make a request via the requests module."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef post(self, path='', **params):\n url = ensure_trailing_slash(self.url + path.lstrip('/'))\n return self._json_request('post', url, data=json.dumps(params),\n headers={'Content-Type': 'application/json'})", "response": "Make a POST request to the given path and return the JSON - decoded version of the result."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete(self, path='', **params):\n params = jsonify_parameters(params)\n url = ensure_trailing_slash(self.url + path.lstrip('/'))\n return 
self._json_request('delete', url, params=params)", "response": "Make a DELETE request to the given path and return the JSON - decoded version of the object represented by this URL."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef client_for_path(self, path):\n if path.startswith('/'):\n url = self.root_url + path\n else:\n url = self.url + path\n return self.__class__(self.session, url)", "response": "Returns a new client for the given path."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndeprecating method for uploading a document to the server.", "response": "def upload(self, path, docs, **params):\n \"\"\"\n A deprecated alias for post(path, docs=docs), included only for\n backward compatibility.\n \"\"\"\n logger.warning('The upload method is deprecated; use post instead.')\n return self.post(path, docs=docs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef wait_for_build(self, interval=5, path=None):\n path = path or ''\n start = time.time()\n next_log = 0\n while True:\n response = self.get(path)['last_build_info']\n if not response:\n raise ValueError('This project is not building!')\n if response['stop_time']:\n if response['success']:\n return response\n else:\n raise LuminosoError(response)\n elapsed = time.time() - start\n if elapsed > next_log:\n logger.info('Still waiting (%d seconds elapsed).', next_log)\n next_log += 120\n time.sleep(interval)", "response": "This method will wait for a build to complete."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsave binary content to a file with name filename.", "response": "def save_to_file(self, path, filename, **params):\n \"\"\"\n Saves binary content to a file with name filename. 
filename should\n include the appropriate file extension, such as .xlsx or .txt, e.g.,\n filename = 'sample.xlsx'.\n\n Useful for downloading .xlsx files.\n \"\"\"\n url = ensure_trailing_slash(self.url + path.lstrip('/'))\n content = self._request('get', url, params=params).content\n with open(filename, 'wb') as f:\n f.write(content)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the root URL for a URL.", "response": "def get_root_url(url, warn=True):\n \"\"\"\n Get the \"root URL\" for a URL, as described in the LuminosoClient\n documentation.\n \"\"\"\n parsed_url = urlparse(url)\n\n # Make sure it's a complete URL, not a relative one\n if not parsed_url.scheme:\n raise ValueError('Please supply a full URL, beginning with http:// '\n 'or https:// .')\n\n # Issue a warning if the path didn't already start with /api/v4\n root_url = '%s://%s/api/v4' % (parsed_url.scheme, parsed_url.netloc)\n if warn and not parsed_url.path.startswith('/api/v4'):\n logger.warning('Using %s as the root url' % root_url)\n return root_url"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef connect(cls, url=None, username=None, password=None, token=None,\n token_file=None):\n \"\"\"\n Returns an object that makes requests to the API, authenticated\n with the provided username/password, at URLs beginning with `url`.\n\n You can leave out the URL and get your 'default URL', a base path\n that is probably appropriate for creating projects on your\n account:\n\n client = LuminosoClient.connect(username=username)\n\n If the URL is simply a path, omitting the scheme and domain, then\n it will default to https://analytics.luminoso.com/api/v4/, which is\n probably what you want:\n\n client = LuminosoClient.connect('/projects/public', username=username)\n\n If you leave out the username, it will use your system username,\n which is convenient if it matches your Luminoso username:\n\n client = 
LuminosoClient.connect()\n \"\"\"\n auto_account = False\n if url is None:\n auto_account = True\n url = '/'\n\n if url.startswith('http'):\n root_url = get_root_url(url)\n else:\n url = URL_BASE + '/' + url.lstrip('/')\n root_url = URL_BASE\n\n auth = cls._get_token_auth(username, password, token, token_file,\n root_url)\n session = requests.session()\n session.auth = auth\n client = cls(session, url)\n if auto_account:\n client = client.change_path('/projects/%s' %\n client._get_default_account())\n return client", "response": "Creates a new LuminosoClient object with the provided username password and token."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nsaves the user s long - lived API token and save it in a local file.", "response": "def save_token(self, token_file=None):\n \"\"\"\n Obtain the user's long-lived API token and save it in a local file.\n If the user has no long-lived API token, one will be created.\n Returns the token that was saved.\n \"\"\"\n tokens = self._json_request('get', self.root_url + '/user/tokens/')\n long_lived = [token['type'] == 'long_lived' for token in tokens]\n if any(long_lived):\n dic = tokens[long_lived.index(True)]\n else:\n # User doesn't have a long-lived token, so create one\n dic = self._json_request('post', self.root_url + '/user/tokens/')\n token = dic['token']\n token_file = token_file or get_token_filename()\n if os.path.exists(token_file):\n saved_tokens = json.load(open(token_file))\n else:\n saved_tokens = {}\n saved_tokens[urlparse(self.root_url).netloc] = token\n directory, filename = os.path.split(token_file)\n if directory and not os.path.exists(directory):\n os.makedirs(directory)\n with open(token_file, 'w') as f:\n json.dump(saved_tokens, f)\n return token"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmakes a request to the specified url and expect a JSON object in response.", "response": "def 
_json_request(self, req_type, url, **kwargs):\n \"\"\"\n Make a request of the specified type and expect a JSON object in\n response.\n\n If the result has an 'error' value, raise a LuminosoAPIError with\n its contents. Otherwise, return the contents of the 'result' value.\n \"\"\"\n response = self._request(req_type, url, **kwargs)\n try:\n json_response = response.json()\n except ValueError:\n logger.error(\"Received response with no JSON: %s %s\" %\n (response, response.content))\n raise LuminosoError('Response body contained no JSON. '\n 'Perhaps you meant to use get_raw?')\n if json_response.get('error'):\n raise LuminosoAPIError(json_response.get('error'))\n return json_response['result']"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef post(self, path='', **params):\n params = jsonify_parameters(params)\n url = ensure_trailing_slash(self.url + path.lstrip('/'))\n return self._json_request('post', url, data=params)", "response": "Make a POST request to the given path and return the JSON - decoded version of the result."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef post_data(self, path, data, content_type, **params):\n params = jsonify_parameters(params)\n url = ensure_trailing_slash(self.url + path.lstrip('/'))\n return self._json_request('post', url,\n params=params,\n data=data,\n headers={'Content-Type': content_type}\n )", "response": "Make a POST request to the given path with data in its body."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef change_path(self, path):\n if path.startswith('/'):\n url = self.root_url + path\n else:\n url = self.url + path\n return self.__class__(self.session, url)", "response": "Return a new LuminosoClient for a subpath of this one."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
_get_default_account(self):\n newclient = self.__class__(self.session, self.root_url)\n account_info = newclient.get('/accounts/')\n if account_info['default_account'] is not None:\n return account_info['default_account']\n valid_accounts = [a['account_id'] for a in account_info['accounts']\n if a['account_id'] != 'public']\n if len(valid_accounts) == 0:\n raise ValueError(\"Can't determine your default URL. \"\n \"Please request a specific URL or ask \"\n \"Luminoso for support.\")\n return valid_accounts[0]", "response": "Get the ID of the default account."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef documentation(self):\n newclient = self.__class__(self.session, self.root_url)\n return newclient.get_raw('/')", "response": "Get the documentation that the server sends for the API."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef upload(self, path, docs, **params):\n json_data = json.dumps(list(docs))\n return self.post_data(path, json_data, 'application/json', **params)", "response": "This method uploads a set of dictionaries representing\n documents to the specified path."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwaits for a job to finish.", "response": "def wait_for(self, job_id, base_path=None, interval=5):\n \"\"\"\n Wait for an asynchronous task to finish.\n\n Unlike the thin methods elsewhere on this object, this one is actually\n specific to how the Luminoso API works. This will poll an API\n endpoint to find out the status of the job numbered `job_id`,\n repeating every 5 seconds (by default) until the job is done. When\n the job is done, it will return an object representing the result of\n that job.\n\n In the Luminoso API, requests that may take a long time return a\n job ID instead of a result, so that your code can continue running\n in the meantime. 
When it needs the job to be done to proceed, it can\n use this method to wait.\n\n The base URL where it looks for that job is by default `jobs/id/`\n under the current URL, assuming that this LuminosoClient's URL\n represents a project. You can specify a different URL by changing\n `base_path`.\n\n If the job failed, will raise a LuminosoError with the job status\n as its message.\n \"\"\"\n if base_path is None:\n base_path = 'jobs/id'\n path = '%s%d' % (ensure_trailing_slash(base_path), job_id)\n start = time.time()\n next_log = 0\n while True:\n response = self.get(path)\n if response['stop_time']:\n if response['success']:\n return response\n else:\n raise LuminosoError(response)\n elapsed = time.time() - start\n if elapsed > next_log:\n logger.info('Still waiting (%d seconds elapsed).', next_log)\n next_log += 120\n time.sleep(interval)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets the raw text of a response.", "response": "def get_raw(self, path, **params):\n \"\"\"\n Get the raw text of a response.\n\n This is only generally useful for specific URLs, such as documentation.\n \"\"\"\n url = ensure_trailing_slash(self.url + path.lstrip('/'))\n return self._request('get', url, params=params).text"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _print_csv(result):\n if type(result) is not list:\n raise TypeError(\"output not able to be displayed as CSV.\")\n first_line = result[0]\n w = csv.DictWriter(sys.stdout, fieldnames=sorted(first_line.keys()))\n w.writeheader()\n for line in result:\n w.writerow(line)", "response": "Print a JSON list of JSON objects in CSV format."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nread parameters from input file - j and - p arguments in that order.", "response": "def _read_params(input_file, json_body, p_params):\n \"\"\"Read parameters from input file, -j, and -p arguments, in that 
order.\"\"\"\n params = {}\n try:\n if input_file:\n params.update(json.load(input_file))\n if json_body is not None:\n params.update(json.loads(json_body))\n except ValueError as e:\n raise ValueError(\"input is not valid JSON: %s\" % e)\n try:\n params.update({p.split('=', 1)[0]: p.split('=', 1)[1] for p in p_params})\n except IndexError:\n raise ValueError(\"--param arguments must have key=value format\")\n return params"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntaking an iterator and yield its contents in groups of size items.", "response": "def _batches(iterable, size):\n \"\"\"\n Take an iterator and yield its contents in groups of `size` items.\n \"\"\"\n sourceiter = iter(iterable)\n while True:\n try:\n batchiter = islice(sourceiter, size)\n yield chain([next(batchiter)], batchiter)\n except StopIteration:\n return"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _simplify_doc(doc):\n # Mutate a copy of the document to fill in missing fields\n doc = dict(doc)\n if 'text' not in doc:\n raise ValueError(\"The document {!r} has no text field\".format(doc))\n return {\n 'text': doc['text'],\n 'metadata': doc.get('metadata', []),\n 'title': doc.get('title', '')\n }", "response": "Simplify a document to just the three fields we should upload."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a new project with the given documents.", "response": "def create_project_with_docs(\n client, docs, language, name, account=None, progress=False\n):\n \"\"\"\n Given an iterator of documents, upload them as a Luminoso project.\n \"\"\"\n description = 'Uploaded using lumi-upload at {}'.format(time.asctime())\n if account is not None:\n proj_record = client.post(\n 'projects',\n name=name,\n language=language,\n description=description,\n account_id=account,\n )\n else:\n proj_record = client.post(\n 'projects', name=name, language=language, 
description=description\n )\n proj_id = proj_record['project_id']\n proj_client = client.client_for_path('projects/' + proj_id)\n try:\n if progress:\n progress_bar = tqdm(desc='Uploading documents')\n else:\n progress_bar = None\n\n for batch in _batches(docs, BATCH_SIZE):\n docs_to_upload = [_simplify_doc(doc) for doc in batch]\n proj_client.post('upload', docs=docs_to_upload)\n if progress:\n progress_bar.update(BATCH_SIZE)\n\n finally:\n if progress:\n progress_bar.close()\n\n print('The server is building project {!r}.'.format(proj_id))\n proj_client.post('build')\n\n while True:\n time.sleep(10)\n proj_status = proj_client.get()\n build_info = proj_status['last_build_info']\n if 'success' in build_info:\n if not build_info['success']:\n raise LuminosoServerError(build_info['reason'])\n return proj_status"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef upload_docs(\n client, input_filename, language, name, account=None, progress=False\n):\n \"\"\"\n Given a LuminosoClient pointing to the root of the API, and a filename to\n read JSON lines from, create a project from the documents in that file.\n \"\"\"\n docs = iterate_json_lines(input_filename)\n return create_project_with_docs(\n client, docs, language, name, account, progress=progress\n )", "response": "Given a LuminosoClient pointing to the root of the API and a filename to read in a JSON file and create a project from the documents in that file."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nhandles arguments for the 'lumi-upload' command.", "response": "def _main(argv):\n \"\"\"\n Handle arguments for the 'lumi-upload' command.\n \"\"\"\n parser = argparse.ArgumentParser(\n description=DESCRIPTION,\n formatter_class=argparse.RawDescriptionHelpFormatter,\n )\n parser.add_argument(\n '-b',\n '--base-url',\n default=URL_BASE,\n help='API root url, default: %s' % URL_BASE,\n )\n parser.add_argument(\n '-a',\n 
'--account-id',\n default=None,\n help='Account ID that should own the project, if not the default',\n )\n parser.add_argument(\n '-l',\n '--language',\n default='en',\n help='The language code for the language the text is in. Default: en',\n )\n parser.add_argument('-t', '--token', help=\"API authentication token\")\n parser.add_argument(\n '-s',\n '--save-token',\n action='store_true',\n help='save --token for --base-url to ~/.luminoso/tokens.json',\n )\n parser.add_argument(\n 'input_filename',\n help='The JSON-lines (.jsons) file of documents to upload',\n )\n parser.add_argument(\n 'project_name',\n nargs='?',\n default=None,\n help='What the project should be called',\n )\n args = parser.parse_args(argv)\n if args.save_token:\n if not args.token:\n raise ValueError(\"error: no token provided\")\n LuminosoClient.save_token(args.token,\n domain=urlparse(args.base_url).netloc)\n\n client = LuminosoClient.connect(url=args.base_url, token=args.token)\n\n name = args.project_name\n if name is None:\n name = input('Enter a name for the project: ')\n if not name:\n print('Aborting because no name was provided.')\n return\n\n result = upload_docs(\n client,\n args.input_filename,\n args.language,\n name,\n account=args.account_id,\n progress=True,\n )\n print(\n 'Project {!r} created with {} documents'.format(\n result['project_id'], result['document_count']\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntake an iterator and yield its contents in groups of size items.", "response": "def batches(iterable, size):\n \"\"\"\n Take an iterator and yield its contents in groups of `size` items.\n \"\"\"\n sourceiter = iter(iterable)\n while True:\n batchiter = islice(sourceiter, size)\n yield chain([next(batchiter)], batchiter)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef upload_stream(stream, server, account, projname, language=None,\n username=None, 
password=None,\n append=False, stage=False):\n \"\"\"\n Given a file-like object containing a JSON stream, upload it to\n Luminoso with the given account name and project name.\n \"\"\"\n client = LuminosoClient.connect(server,\n username=username, password=password)\n if not append:\n # If we're not appending to an existing project, create new project.\n info = client.post('/projects/' + account, name=projname)\n project_id = info['project_id']\n print('New project ID:', project_id)\n else:\n projects = client.get('/projects/' + account, name=projname)\n if len(projects) == 0:\n print('No such project exists!')\n return\n if len(projects) > 1:\n print('Warning: Multiple projects with name \"%s\". ' % projname,\n end='')\n project_id = projects[0]['project_id']\n print('Using existing project with id %s.' % project_id)\n\n project = client.change_path('/projects/' + account + '/' + project_id)\n\n counter = 0\n for batch in batches(stream, 1000):\n counter += 1\n documents = list(batch)\n project.upload('docs', documents)\n print('Uploaded batch #%d' % (counter))\n\n if not stage:\n # Calculate the docs into the assoc space.\n print('Calculating.')\n kwargs = {}\n if language is not None:\n kwargs = {'language': language}\n job_id = project.post('docs/recalculate', **kwargs)\n project.wait_for(job_id)", "response": "Uploads a stream to a Luminoso project."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef upload_file(filename, server, account, projname, language=None,\n username=None, password=None,\n append=False, stage=False, date_format=None):\n \"\"\"\n Upload a file to Luminoso with the given account and project name.\n\n Given a file containing JSON, JSON stream, or CSV data, this verifies\n that we can successfully convert it to a JSON stream, then uploads that\n JSON stream.\n \"\"\"\n stream = transcode_to_stream(filename, date_format)\n upload_stream(stream_json_lines(stream),\n server, account, projname, 
language=language,\n username=username, password=password,\n append=append, stage=stage)", "response": "Uploads a file to Luminoso with the given account and project name."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef main():\n import argparse\n parser = argparse.ArgumentParser()\n parser.add_argument('filename')\n parser.add_argument('account')\n parser.add_argument('project_name')\n parser.add_argument(\n '--append',\n help=(\"If append flag is used, upload documents to existing project, \"\n \"rather than creating a new project.\"),\n action=\"store_true\"\n )\n parser.add_argument(\n '-s', '--stage',\n help=\"If stage flag is used, just upload docs, don't recalculate.\",\n action=\"store_true\"\n )\n parser.add_argument(\n '-a', '--api-url',\n help=\"Specify an alternate API url\",\n default=URL_BASE\n )\n parser.add_argument(\n '-l', '--language',\n help=(\"Two-letter language code to use when recalculating (e.g. 'en' \"\n \"or 'ja')\")\n )\n parser.add_argument(\n '-u', '--username', default=None,\n help=\"username (defaults to your username on your computer)\"\n )\n parser.add_argument(\n '-p', '--password', default=None,\n help=\"password (you can leave this out and type it in later)\"\n )\n parser.add_argument(\n '-d', '--date-format', default='iso',\n help=(\"format string for parsing dates, following \"\n \"http://strftime.org/. Default is 'iso', which is \"\n \"'%%Y-%%m-%%dT%%H:%%M:%%S+00:00'. 
Other shortcuts are 'epoch' \"\n \"for epoch time or 'us-standard' for '%%m/%%d/%%y'\")\n )\n args = parser.parse_args()\n\n # Implement some human-understandable shortcuts for date_format\n date_format_lower = args.date_format.lower()\n if date_format_lower == 'iso':\n date_format = '%Y-%m-%dT%H:%M:%S+00:00'\n elif date_format_lower in ['unix', 'epoch']:\n date_format = 'epoch'\n elif date_format_lower == 'us-standard':\n date_format = '%m/%d/%y'\n else:\n date_format = args.date_format\n\n upload_file(args.filename, args.api_url, args.account, args.project_name,\n language=args.language,\n username=args.username, password=args.password,\n append=args.append, stage=args.stage,\n date_format=date_format)", "response": "This function is called by the command line tool to upload a file to a Luminoso project."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nobtains a short - lived token using a username and password and use that token to create an auth object.", "response": "def from_user_creds(cls, username, password, url=URL_BASE):\n \"\"\"\n Obtain a short-lived token using a username and password, and use that\n token to create an auth object.\n \"\"\"\n session = requests.session()\n token_resp = session.post(url.rstrip('/') + '/user/login/',\n data={'username': username,\n 'password': password})\n if token_resp.status_code != 200:\n error = token_resp.text\n try:\n error = json.loads(error)['error']\n except (KeyError, ValueError):\n pass\n raise LuminosoLoginError(error)\n\n return cls(token_resp.json()['result']['token'])"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_data(self, p_p_resource_id, start_date=None, end_date=None):\n\n data = {\n '_' + REQ_PART + '_dateDebut': start_date,\n '_' + REQ_PART + '_dateFin': end_date\n }\n\n params = {\n 'p_p_id': REQ_PART,\n 'p_p_lifecycle': 2,\n 'p_p_state': 'normal',\n 'p_p_mode': 'view',\n 
'p_p_resource_id': p_p_resource_id,\n 'p_p_cacheability': 'cacheLevelPage',\n 'p_p_col_id': 'column-1',\n 'p_p_col_pos': 1,\n 'p_p_col_count': 3\n }\n\n try:\n raw_res = self._session.post(DATA_URL,\n data=data,\n params=params,\n allow_redirects=False,\n timeout=self._timeout)\n\n if 300 <= raw_res.status_code < 400:\n raw_res = self._session.post(DATA_URL,\n data=data,\n params=params,\n allow_redirects=False,\n timeout=self._timeout)\n except OSError as e:\n raise PyLinkyError(\"Could not access enedis.fr: \" + str(e))\n\n if raw_res.text is \"\":\n raise PyLinkyError(\"No data\")\n\n if 302 == raw_res.status_code and \"/messages/maintenance.html\" in raw_res.text:\n raise PyLinkyError(\"Site in maintenance\")\n\n try:\n json_output = raw_res.json()\n except (OSError, json.decoder.JSONDecodeError, simplejson.errors.JSONDecodeError) as e:\n raise PyLinkyError(\"Impossible to decode response: \" + str(e) + \"\\nResponse was: \" + str(raw_res.text))\n\n if json_output.get('etat').get('valeur') == 'erreur':\n raise PyLinkyError(\"Enedis.fr answered with an error: \" + str(json_output))\n\n return json_output.get('graphe')", "response": "Get data from the enedis. 
fr API."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef fetch_data(self):\n\n for t in [HOURLY, DAILY, MONTHLY, YEARLY]:\n self._data[t] = self.get_data_per_period(t)", "response": "Get the latest data from Enedis."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload the view on first load", "response": "def prepare(self):\n \"\"\" Load the view on first load \"\"\"\n if self.__class__.view:\n return\n \n #: Load the View class from the dotted view name\n with enaml.imports():\n View = pydoc.locate(self.page.view)\n assert View, \"Failed to import View: {}\".format(self.page.view)\n \n #: Set initial view properties\n self.__class__.view = View(\n site=self.site,\n page=self.page,\n request=self.request,\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ninitializes the object with the current view properties.", "response": "def initialize(self):\n \"\"\" Load the view on first load could also load based on session, group, etc.. \n \"\"\"\n if self.__class__.view:\n self.view.handler = self\n self.view.request = self.request\n return\n \n #: Load the View class from the dotted view name\n with enaml.imports():\n from views.index import View\n \n #: Set initial view properties\n self.__class__.view = View(\n company=current_company,\n request=self.request,\n handler=self,\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get(self, *args, **kwargs):\n #: Render view for get request, view is cached for websocket\n \"\"\" Execute the correct handler depending on what is connecting. 
\"\"\"\n if self.is_websocket():\n return super(DemoHandler, self).get(*args, **kwargs)\n else:\n #return tornado.web.RequestHandler.get(self, *args, **kwargs)\n self.write(self.view.render())", "response": "Render view for get request"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef on_message(self, message):\n #: Decode message\n change = tornado.escape.json_decode(message)\n #print change\n #: Get the owner ID\n ref = change.get('ref')\n if not ref:\n return\n \n #: Get the server side representation of the node\n #: If found will return the View declaration node\n node = self.view.xpath('//*[@ref=\"{}\"]'.format(ref), first=True)\n if node is None:\n return\n \n #: Handle the event\n if change.get('type') and change.get('name'):\n if change['type'] == 'event':\n #: Trigger the event\n trigger = getattr(node, change['name'])\n trigger()\n if change['type'] == 'update':\n #: Trigger the update\n setattr(node, change['name'], change['value'])", "response": "Called when a message is received from the enaml.
js server."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _default_handlers(self):\n static_path = os.path.abspath(os.path.join(os.path.dirname(__file__),\"static\"))\n urls = [\n (r\"/static/(.*)\", cyclone.web.StaticFileHandler, {\"path\": static_path}),\n ]\n for p in self.pages:\n handler = p.handler\n handler.site = self\n handler.page = p\n urls.append((p.link.url,handler))\n return urls", "response": "Generate the default handlers for this site"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef on_dom_modified(self, change):\n log.debug(f'Update from enaml: {change}')\n self.write_message(json.dumps(change['value']))", "response": "When an event from enaml occurs send it out the websocket\n so that the client can update it accordingly."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate the toolkit widget for the proxy object.", "response": "def create_widget(self):\n \"\"\" Create the toolkit widget for the proxy object.\n\n This method is called during the top-down pass, just before the\n 'init_widget()' method is called. This method should create the\n toolkit widget and assign it to the 'widget' attribute.\n\n \"\"\"\n self.widget = SubElement(self.parent_widget(), self.declaration.tag)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninitializes the state of the toolkit widget.", "response": "def init_widget(self):\n \"\"\" Initialize the state of the toolkit widget.\n\n This method is called during the top-down pass, just after the\n 'create_widget()' method is called. This method should init the\n state of the widget. 
The child widgets will not yet be created.\n\n \"\"\"\n widget = self.widget\n d = self.declaration\n\n #: Save ref id\n ref = d.ref\n CACHE[ref] = atomref(self)\n widget.set('ref', ref)\n\n if d.text:\n self.set_text(d.text)\n if d.tail:\n self.set_tail(d.tail)\n if d.style:\n self.set_style(d.style)\n if d.cls:\n self.set_cls(d.cls)\n if d.attrs:\n self.set_attrs(d.attrs)\n if d.id:\n widget.set('id', d.id)\n if d.draggable:\n self.set_draggable(d.draggable)\n\n # Set any attributes that may be defined\n for name, member in d.members().items():\n if not member.metadata:\n continue\n meta = member.metadata\n\n # Exclude any attr tags\n if not (meta.get('d_member') and meta.get('d_final')):\n continue\n\n # Skip any items with attr=false\n elif not meta.get('attr', True):\n continue\n\n elif isinstance(member, Event):\n continue\n value = getattr(d, name)\n if value:\n self.set_attribute(name, value)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef child_added(self, child):\n super(WebComponent, self).child_added(child)\n if child.widget is not None:\n # Use insert to put in the correct spot\n for i, c in enumerate(self.children()):\n if c == child:\n self.widget.insert(i, child.widget)\n break", "response": "Handle the child added event from the declaration."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nhandle the child removed event from the declaration.", "response": "def child_removed(self, child):\n \"\"\" Handle the child removed event from the declaration.\n\n This handler will unparent the child toolkit widget. 
Subclasses\n which need more control should reimplement this method.\n\n \"\"\"\n super(WebComponent, self).child_removed(child)\n if child.widget is not None:\n for i, c in enumerate(self.children()):\n if c == child:\n del self.widget[i]\n break"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find(self, query, **kwargs):\n nodes = self.widget.xpath(query, **kwargs)\n if not nodes:\n return []\n matches = []\n for node in nodes:\n aref = CACHE.get(node.attrib.get('ref'))\n obj = aref() if aref else None\n if obj is None:\n continue\n matches.append(obj)\n return matches", "response": "Find the nodes matching the query"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the child toolkit widgets for this object.", "response": "def child_widgets(self):\n \"\"\" Get the child toolkit widgets for this object.\n\n Returns\n -------\n result : iterable of QObject\n The child widgets defined for this object.\n\n \"\"\"\n for child in self.children():\n w = child.widget\n if w is not None:\n yield w"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set_attribute(self, name, value):\n if value is True:\n self.widget.set(name, name)\n elif value is False:\n del self.widget.attrib[name]\n else:\n self.widget.set(name, str(value))", "response": "Set the attribute of the current object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _update_proxy(self, change):\n #: Try default handler\n if change['type'] == 'update' and self.proxy_is_active:\n handler = getattr(self.proxy, 'set_' + change['name'], None)\n if handler is not None:\n handler(change['value'])\n else:\n self.proxy.set_attribute(change['name'], change['value'])\n self._notify_modified(change)", "response": "Update the proxy widget when the data\n changes."} {"SOURCE": 
"codesearchnet", "instruction": "Create a Python 3 function to\nnotify the client of a change when we have a websocket connection active", "response": "def _notify_modified(self, change):\n \"\"\" If a change occurs when we have a websocket connection active\n notify the websocket client of the change.\n \"\"\"\n root = self.root_object()\n if isinstance(root, Html):\n name = change['name']\n change = {\n 'ref': self.ref,\n 'type': change['type'],\n 'name': change['name'],\n 'value': change['value']\n }\n root.modified(change)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef xpath(self, query, **kwargs):\n nodes = self.proxy.find(query, **kwargs)\n return [n.declaration for n in nodes]", "response": "Find nodes matching the given xpath query"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef init_widget(self):\n d = self.declaration\n if d.source:\n self.set_source(d.source)\n else:\n super(RawComponent, self).init_widget()", "response": "Initialize the widget with the source."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set_source(self, source):\n self.widget.clear()\n html = etree.HTML(source)\n self.widget.extend(html[0])\n\n # Clear removes everything so it must be reinitialized\n super(RawComponent, self).init_widget()", "response": "Set the source by parsing the source and inserting it into the component."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _observe_mode(self, change):\n block = self.block\n if block and self.is_initialized and change['type'] == 'update':\n if change['oldvalue'] == 'replace':\n raise NotImplementedError\n for c in self.children:\n block.children.remove(c)\n c.set_parent(None)\n self.refresh_items()", "response": "Called when the mode of the item is changed."} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef read(*pathcomponents):\n with open(join(abspath(dirname(__file__)), *pathcomponents)) as thefile:\n return thefile.read()", "response": "Read the contents of a file located relative to setup.py"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nprints the dictionary returned by a MongoDB Query in the standard output.", "response": "def print_obj(obj, verbose, metadata, mongo_version):\n \"\"\"\n Print the dict returned by a MongoDB Query in the standard output.\n \"\"\"\n if verbose:\n sys.stdout.write(json_encoder.encode(obj) + '\\n')\n sys.stdout.flush()\n else:\n try:\n ts_time = obj['ts']\n operation = obj['op']\n doc = None\n if operation == 'query':\n if mongo_version < \"3.2\":\n doc = obj['ns'].split(\".\")[-1]\n query = json_encoder.encode(obj['query']) if 'query' in obj else \"{}\"\n else:\n if \"query\" in obj:\n cmd = obj['query'] # Mongo 3.2 - 3.4\n else:\n cmd = obj['command'] # Mongo 3.6+\n doc = cmd['find']\n query = json_encoder.encode(cmd['filter']) if 'filter' in cmd else \"{}\"\n if 'sort' in cmd:\n query += ', sort: ' + json_encoder.encode(cmd['sort'])\n query += '. %s returned.' % obj['nreturned']\n elif operation == 'update':\n doc = obj['ns'].split(\".\")[-1]\n if mongo_version < \"3.6\":\n query = json_encoder.encode(obj['query']) if 'query' in obj else \"{}\"\n query += ', ' + json_encoder.encode(obj['updateobj'])\n else:\n query = json_encoder.encode(obj['command']['q']) if 'command' in obj and 'q' in obj['command'] else \"{}\"\n query += ', ' + json_encoder.encode(obj['command']['u'])\n if 'nModified' in obj:\n query += '. %s updated.' % obj['nModified']\n elif 'nMatched' in obj:\n query += '. %s updated.' 
% obj['nMatched']\n elif operation == 'insert':\n if mongo_version < \"3.2\":\n doc = obj['ns'].split(\".\")[-1]\n query = json_encoder.encode(obj['query']) if 'query' in obj else \"{}\"\n else:\n if 'query' in obj:\n doc = obj['query']['insert']\n if 'documents' in obj['query']:\n if isinstance(obj['query']['documents'], collections.abc.Iterable) \\\n and len(obj['query']['documents']) > 1:\n query = json_encoder.encode(obj['query']['documents']) + \". \"\n else:\n query = json_encoder.encode(obj['query']['documents'][0]) + \". \"\n else:\n query = \"\"\n else:\n # Mongo 3.6+ profiler apparently doesn't record insert details (document object), and\n # some tools like Robo 3T (formerly Robomongo) allow duplicating collections,\n # but the profiler doesn't record the element inserted\n doc = obj['ns'].split(\".\")[-1]\n query = \"\"\n query += '%s inserted.' % obj['ninserted']\n elif operation == 'remove':\n doc = obj['ns'].split(\".\")[-1]\n if mongo_version < \"3.6\":\n query = json_encoder.encode(obj['query']) if 'query' in obj else \"{}\"\n else:\n query = json_encoder.encode(obj['command']['q']) if 'command' in obj and 'q' in obj['command'] else \"{}\"\n query += '. %s deleted.' 
% obj['ndeleted']\n elif operation == \"command\":\n if 'count' in obj[\"command\"]:\n operation = \"count\"\n query = json_encoder.encode(obj['command']['query'])\n elif 'aggregate' in obj[\"command\"]:\n operation = \"aggregate\"\n query = json_encoder.encode(obj['command']['pipeline'])\n elif 'distinct' in obj[\"command\"]:\n operation = \"distinct\"\n query = json_encoder.encode(obj['command']['query'])\n query = '\"%s\", %s' % (obj['command']['key'], query)\n elif 'drop' in obj[\"command\"]:\n operation = \"drop\"\n query = \"\"\n elif 'findandmodify' in obj[\"command\"]:\n operation = \"findandmodify\"\n query = \"query: \" + json_encoder.encode(obj['command']['query'])\n if 'sort' in obj[\"command\"]:\n query += \", sort: \" + json_encoder.encode(obj['command']['sort'])\n if 'update' in obj[\"command\"]:\n query += \", update: \" + json_encoder.encode(obj['command']['update'])\n if 'remove' in obj[\"command\"]:\n query += \", remove: \" + str(obj['command']['remove']).lower()\n if 'fields' in obj[\"command\"]:\n query += \", fields: \" + json_encoder.encode(obj['command']['fields'])\n if 'upsert' in obj[\"command\"]:\n query += \", upsert: \" + str(obj['command']['upsert']).lower()\n if 'new' in obj[\"command\"]:\n query += \", new: \" + str(obj['command']['new']).lower()\n elif 'group' in obj[\"command\"]:\n operation = \"group\"\n doc = obj[\"command\"]['group'][\"ns\"]\n if 'key' in obj['command']['group']:\n key = \"key: \" + json_encoder.encode(obj['command']['group']['key'])\n else:\n key = None\n if 'initial' in obj['command']['group']:\n initial = \"initial: \" + json_encoder.encode(obj['command']['group']['initial'])\n else:\n initial = None\n if 'cond' in obj['command']['group']:\n cond = \"cond: \" + json_encoder.encode(obj['command']['group']['cond'])\n else:\n cond = None\n if '$keyf' in obj['command']['group']:\n key_function = \"keyf: \" + min_script(obj['command']['group']['$keyf'])\n else:\n key_function = None\n if '$reduce' in 
obj['command']['group']:\n reduce_func = \"reduce: \" + min_script(obj['command']['group']['$reduce'])\n else:\n reduce_func = None\n if 'finalize' in obj['command']['group']:\n finalize_func = \"finalize: \" + min_script(obj['command']['group']['finalize'])\n else:\n finalize_func = None\n query = \", \".join(list(filter(lambda x: x, (key, reduce_func, initial, key_function, cond, finalize_func))))\n elif 'map' in obj[\"command\"]:\n operation = \"map\"\n doc = obj[\"command\"][\"mapreduce\"]\n del obj[\"command\"][\"mapreduce\"]\n map_func = min_script(obj['command'][\"map\"])\n del obj['command'][\"map\"]\n reduce_func = min_script(obj['command'][\"reduce\"])\n del obj['command'][\"reduce\"]\n query = \"{%s, %s, %s}\" % (map_func, reduce_func, json_encoder.encode(obj['command']))\n else:\n warn('Unknown command operation\\nDump: %s' % json_encoder.encode(obj))\n if not doc:\n doc = obj[\"command\"][operation]\n else:\n warn('Unknown operation \"%s\"\\nDump: %s' % (operation, json_encoder.encode(obj)))\n\n if metadata:\n met = []\n for m in metadata:\n if m in obj and obj[m] != {}:\n q = m + \": \"\n if isinstance(obj[m], str):\n q += '\"%s\"' % obj[m]\n elif isinstance(obj[m], dict):\n q += json_encoder.encode(obj[m])\n else:\n q += str(obj[m])\n met.append(q)\n if met:\n if not query.endswith(\".\"): query += \". \"\n if not query.endswith(\" \"): query += \" \"\n query += \", \".join(met)\n\n sys.stdout.write(\"%s %s [%s] : %s\\n\" % (ts_time.strftime(\"%Y-%m-%d %H:%M:%S.%f\")[:-3],\n operation.upper().ljust(9), doc, query))\n sys.stdout.flush() # Allows piping the output during execution to other tools like 'grep'\n except (KeyError, TypeError):\n warn('Unknown registry\\nDump: %s' % json_encoder.encode(obj))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconnects to a database and returns a tuple with a pymongo.MongoClient object and a pymongo.database.
Database object.", "response": "def connect(address, args):\n \"\"\"\n Connect with `address`, and return a tuple with a :class:`~pymongo.MongoClient`,\n and a :class:`~pymongo.database.Database` object.\n :param address: a string representation with the db address\n :param args: connection arguments:\n - username: username for authentication (optional)\n - password: password for authentication. If username is given and password isn't,\n it's asked from tty.\n - auth_database: authenticate the username and password against that database (optional).\n If not specified, the database specified in address will be used.\n - ssl, ssl_certfile, ssl_keyfile, ssl_cert_reqs, ssl_ca_certs: SSL authentication options\n :return: a tuple with ``(client, db)``\n \"\"\"\n try:\n host, port, dbname = get_res_address(address)\n except AddressError as e:\n error_parsing(str(e).replace(\"resource\", \"database\"))\n\n try:\n options = {}\n if args.ssl:\n options[\"ssl\"] = True\n options[\"ssl_certfile\"] = args.ssl_cert_file\n options[\"ssl_keyfile\"] = args.ssl_key_file\n options[\"ssl_cert_reqs\"] = args.ssl_cert_reqs\n options[\"ssl_ca_certs\"] = args.ssl_ca_certs\n\n client = MongoClient(host=host, port=port, **options)\n except Exception as e:\n error(\"Error trying to connect: %s\" % str(e), ECONNREFUSED)\n\n username = args.username\n password = args.password\n auth_database = args.auth_database\n\n if username:\n if password is None:\n password = getpass.getpass()\n if auth_database is None:\n auth_database = dbname\n try:\n auth_db = client[auth_database]\n auth_db.authenticate(username, password)\n except Exception as e:\n error(\"Error trying to authenticate: %s\" % str(e), -3)\n db = client[dbname]\n return client, db"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nprinting msg error and exit with status exit_code", "response": "def error(msg, exit_code):\n \"\"\"\n Print `msg` error and exit with status `exit_code`\n \"\"\"\n 
sys.stderr.write(\"%s\\ntry 'mongotail --help' for more information\\n\" % msg)\n sys.stderr.flush()\n exit(exit_code)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef error_parsing(msg=\"unknown options\"):\n sys.stderr.write(\"Error parsing command line: %s\\ntry 'mongotail --help' for more information\\n\" % msg)\n sys.stderr.flush()\n exit(EINVAL)", "response": "Print any parsing error and exit with status - 1"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget a Item from the Menu by name.", "response": "def get_product_by_name(self, name):\n '''\n Gets a Item from the Menu by name. Note that the name is not\n case-sensitive but must be spelt correctly.\n\n :param string name: The name of the item.\n :raises StopIteration: Raises exception if no item is found.\n :return: An item object matching the search.\n :rtype: Item\n '''\n return next(i for i in self.items if i.name.lower() == name.lower())"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef new_session(self, session):\n '''\n Clear out the current session on the remote and setup a new one.\n\n :return: A response from having expired the current session.\n :rtype: requests.Response\n '''\n response = self.__get('/Home/SessionExpire')\n self.session = update_session_headers(session)\n\n return response", "response": "Clear out the current session on the remote and setup a new one."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef reset_store(self):\n '''\n Clears out the current store and gets a cookie. 
Set the cross site\n request forgery token for each subsequent request.\n\n :return: A response having cleared the current store.\n :rtype: requests.Response\n '''\n response = self.__get('/Store/Reset')\n\n token = self.session.cookies['XSRF-TOKEN']\n self.session.headers.update({'X-XSRF-TOKEN': token})\n\n return response", "response": "Clears out the current store and gets a cookie."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_stores(self, search_term):\n '''\n Search for dominos pizza stores using a search term.\n\n :param string search: Search term.\n :return: A list of nearby stores matching the search term.\n :rtype: list\n '''\n params = {'SearchText': search_term}\n response = self.__get('/storefindermap/storesearch', params=params)\n\n return Stores(response.json())", "response": "Search for dominos pizza stores using a search term."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the delivery system on the remote server.", "response": "def set_delivery_system(self, store, postcode, fulfilment_method=FULFILMENT_METHOD.DELIVERY):\n '''\n Set local cookies by initialising the delivery system on the remote.\n Requires a store ID and a delivery postcode.\n\n :param Store store: Store id.\n :param string postcode: A postcode.\n :return: A response having initialised the delivery system.\n :rtype: requests.Response\n '''\n method = 'delivery' if fulfilment_method == FULFILMENT_METHOD.DELIVERY else 'collection'\n\n params = {\n 'fulfilmentMethod': method,\n 'postcode': postcode,\n 'storeid': store.store_id\n }\n\n return self.__post('/Journey/Initialize', json=params)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nretrieving the menu from the selected store.", "response": "def get_menu(self, store):\n '''\n Retrieve the menu from the selected store.\n\n :param Store store: A store.\n :return: The store menu.\n :rtype: Menu\n '''\n 
params = {\n 'collectionOnly': not store.delivery_available,\n 'menuVersion': store.menu_version,\n 'storeId': store.store_id,\n }\n\n response = self.__get('/ProductCatalog/GetStoreCatalog', params=params)\n return Menu(response.json())"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_item_to_basket(self, item, variant=VARIANT.MEDIUM, quantity=1):\n '''\n Add an item to the current basket.\n\n :param Item item: Item from menu.\n :param int variant: Item SKU id. Ignored if the item is a side.\n :param int quantity: The quantity of item to be added.\n :return: A response having added an item to the current basket.\n :rtype: requests.Response\n '''\n item_type = item.type\n\n if item_type == 'Pizza':\n return self.add_pizza_to_basket(item, variant, quantity)\n elif item_type == 'Side':\n return self.add_side_to_basket(item, quantity)\n return None", "response": "Add an item to the current basket."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a pizza to the current basket.", "response": "def add_pizza_to_basket(self, item, variant=VARIANT.MEDIUM, quantity=1):\n '''\n Add a pizza to the current basket.\n\n :param Item item: Item from menu.\n :param int variant: Item SKU id. 
Some defaults are defined in the VARIANT enum.\n :param int quantity: The quantity of pizza to be added.\n :return: A response having added a pizza to the current basket.\n :rtype: requests.Response\n '''\n item_variant = item[variant]\n # update() returns None, so mutate the collection first and then reference it\n ingredients = item_variant['ingredients']\n ingredients.update([36, 42])\n\n params = {\n 'stepId': 0,\n 'quantity': quantity,\n 'sizeId': variant,\n 'productId': item.item_id,\n 'ingredients': ingredients,\n 'productIdHalfTwo': 0,\n 'ingredientsHalfTwo': [],\n 'recipeReferrer': 0\n }\n\n return self.__post('/Basket/AddPizza', json=params)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_side_to_basket(self, item, quantity=1):\n '''\n Add a side to the current basket.\n\n :param Item item: Item from menu.\n :param int quantity: The quantity of side to be added.\n :return: A response having added a side to the current basket.\n :rtype: requests.Response\n '''\n item_variant = item[VARIANT.PERSONAL]\n\n params = {\n 'productSkuId': item_variant['productSkuId'],\n 'quantity': quantity,\n 'ComplimentaryItems': []\n }\n\n return self.__post('/Basket/AddProduct', json=params)", "response": "Add a side to the current basket."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nremove an item from the current basket.", "response": "def remove_item_from_basket(self, idx):\n '''\n Remove an item from the current basket.\n\n :param int idx: Basket item id.\n :return: A response having removed an item from the current basket.\n :rtype: requests.Response\n '''\n params = {\n 'basketItemId': idx,\n 'wizardItemDelete': False\n }\n\n return self.__post('/Basket/RemoveBasketItem', json=params)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the payment method of the current node.", "response": "def set_payment_method(self, method=PAYMENT_METHOD.CASH_ON_DELIVERY):\n '''\n Select the payment method going to be used to 
make a purchase.\n\n :param int method: Payment method id.\n :return: A response having set the payment option.\n :rtype: requests.Response\n '''\n params = {'paymentMethod': method}\n return self.__post('/PaymentOptions/SetPaymentMethod', json=params)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nproceeds with payment using the payment method selected earlier. :return: A response having processes the payment. :rtype: requests.Response", "response": "def process_payment(self):\n '''\n Proceed with payment using the payment method selected earlier.\n\n :return: A response having processes the payment.\n :rtype: requests.Response\n '''\n params = {\n '__RequestVerificationToken': self.session.cookies,\n 'method': 'submit'\n }\n\n return self.__post('/PaymentOptions/Proceed', json=params)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmake a HTTP GET request to the Dominos UK API with the given parameters", "response": "def __get(self, path, **kargs):\n '''\n Make a HTTP GET request to the Dominos UK API with the given parameters\n for the current session.\n\n :param string path: The API endpoint path.\n :params list kargs: A list of arguments.\n :return: A response from the Dominos UK API.\n :rtype: response.Response\n '''\n return self.__call_api(self.session.get, path, **kargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmake a HTTP POST request to the Dominos UK API with the given path and parameters.", "response": "def __post(self, path, **kargs):\n '''\n Make a HTTP POST request to the Dominos UK API with the given\n parameters for the current session.\n\n :param string path: The API endpoint path.\n :params list kargs: A list of arguments.\n :return: A response from the Dominos UK API.\n :rtype: response.Response\n '''\n return self.__call_api(self.session.post, path, **kargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function 
for\nmaking a HTTP request to the Dominos UK API with the given parameters for the current session.", "response": "def __call_api(self, verb, path, **kargs):\n '''\n Make a HTTP request to the Dominos UK API with the given parameters for\n the current session.\n\n :param verb func: HTTP method on the session.\n :param string path: The API endpoint path.\n :params list kargs: A list of arguments.\n :return: A response from the Dominos UK API.\n :rtype: response.Response\n '''\n response = verb(self.__url(path), **kargs)\n\n if response.status_code != 200:\n raise ApiError('{}: {}'.format(response.status_code, response))\n\n return response"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef append_item(self, item):\n did_remove = self.remove_exit()\n item.menu = self\n self.items.append(item)\n if did_remove:\n self.add_exit()\n if self.screen:\n max_row, max_cols = self.screen.getmaxyx()\n if max_row < 6 + len(self.items):\n self.screen.resize(6 + len(self.items), max_cols)\n self.draw()", "response": "Add an item to the end of the menu before the exit"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_exit(self):\n if self.items:\n if self.items[-1] is not self.exit_item:\n self.items.append(self.exit_item)\n return True\n return False", "response": "Add the exit item if necessary. Used to make sure there aren t multiple exit items."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nredraw the menu and refreshes the screen. Should be called whenever something changes that needs to be redrawn.", "response": "def draw(self):\n \"\"\"\n Redraws the menu and refreshes the screen. 
Should be called whenever something changes that needs to be redrawn.\n \"\"\"\n\n self.screen.border(0)\n if self.title is not None:\n self.screen.addstr(2, 2, self.title, curses.A_STANDOUT)\n if self.subtitle is not None:\n self.screen.addstr(4, 2, self.subtitle, curses.A_BOLD)\n\n for index, item in enumerate(self.items):\n if self.current_option == index:\n text_style = self.highlight\n else:\n text_style = self.normal\n self.screen.addstr(5 + index, 4, item.show(index), text_style)\n\n screen_rows, screen_cols = CursesMenu.stdscr.getmaxyx()\n top_row = 0\n if 6 + len(self.items) > screen_rows:\n if screen_rows + self.current_option < 6 + len(self.items):\n top_row = self.current_option\n else:\n top_row = 6 + len(self.items) - screen_rows\n\n self.screen.refresh(top_row, 0, 0, 0, screen_rows - 1, screen_cols - 1)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef process_user_input(self):\n user_input = self.get_input()\n\n go_to_max = ord(\"9\") if len(self.items) >= 9 else ord(str(len(self.items)))\n\n if ord('1') <= user_input <= go_to_max:\n self.go_to(user_input - ord('0') - 1)\n elif user_input == curses.KEY_DOWN:\n self.go_down()\n elif user_input == curses.KEY_UP:\n self.go_up()\n elif user_input == ord(\"\\n\"):\n self.select()\n\n return user_input", "response": "Process the user input and decides what to do with it\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef select(self):\n self.selected_option = self.current_option\n self.selected_item.set_up()\n self.selected_item.action()\n self.selected_item.clean_up()\n self.returned_value = self.selected_item.get_return()\n self.should_exit = self.selected_item.should_exit\n\n if not self.should_exit:\n self.draw()", "response": "Select the current item and run it"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef action(self):\n 
commandline = \"{0} {1}\".format(self.command, \" \".join(self.arguments))\n try:\n completed_process = subprocess.run(commandline, shell=True)\n self.exit_status = completed_process.returncode\n except AttributeError:\n self.exit_status = subprocess.call(commandline, shell=True)", "response": "This method is used to run a command and set the exit_status attribute of the object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parse_old_menu(menu_data):\n menu_title = menu_data['title']\n menu = CursesMenu(menu_title)\n for item in menu_data[\"options\"]:\n item_type = item[\"type\"]\n item_title = item[\"title\"]\n if item_type == menuItem.COMMAND:\n item_command = item[\"command\"]\n menu.append_item(CommandItem(item_title, item_command, menu))\n elif item_type == menuItem.FUNCTION:\n item_function = item[\"function\"]\n menu.append_item(FunctionItem(item_title, item_function, menu))\n elif item_type == menuItem.EXITMENU:\n menu.append_item(ExitItem(item_title, menu))\n elif item_type == menuItem.NUMBER:\n menu.append_item(SelectionItem(item_title, menu))\n elif item_type == menuItem.MENU:\n new_menu = parse_old_menu(item)\n menu.append_item(SubmenuItem(item_title, menu, new_menu))\n\n return menu", "response": "Takes an old - style menuData dictionary and returns a CursesMenu object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_up(self):\n self.menu.pause()\n curses.def_prog_mode()\n self.menu.clear_screen()", "response": "This method is called by the base class to set up the curses screen."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_aggregation_columns(\n df, *,\n group_cols: Union[str, List[str]],\n aggregations: Dict[str, Agg]\n):\n \"\"\"\n Add new columns containing aggregations values on existing columns\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `group_cols` (*str* or *list*): 
columns used to aggregate the data\n - `aggregations` (*dict*): keys are name of new columns and values are aggregation functions\n Examples of aggregation functions : 'sum', 'max'\n Available aggregation functions are listed [here](\n https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#aggregation)\n\n ---\n\n ### Example\n\n **Input**\n\n | ENTITY | YEAR | VALUE_1 | VALUE_2 |\n |:------:|:----:|:-------:|:-------:|\n | A | 2017 | 10 | 3 |\n | A | 2017 | 20 | 1 |\n | A | 2018 | 10 | 5 |\n | A | 2018 | 30 | 4 |\n | B | 2017 | 60 | 4 |\n | B | 2017 | 40 | 3 |\n | B | 2018 | 50 | 7 |\n | B | 2018 | 60 | 6 |\n\n ```cson\n add_aggregation_columns:\n group_cols: ['ENTITY', 'YEAR']\n aggregations:\n sum_value1:\n VALUE_1: 'sum' # sum of `VALUE_1` put in `sum_value1` column\n max_value1:\n VALUE_1: 'max' # max of `VALUE_1` put in `max_value1` column\n mean_value2:\n VALUE_2: 'mean' # mean of `VALUE_2` put in `mean_value2` column\n ]\n ```\n\n **Output**\n\n | ENTITY | YEAR | VALUE_1 | VALUE_2 | sum_value1 | max_value1 | mean_value2 |\n |:------:|:----:|:-------:|:-------:|:----------:|:----------:|:-----------:|\n | A | 2017 | 10 | 3 | 30 | 20 | 2.0 |\n | A | 2017 | 20 | 1 | 30 | 20 | 2.0 |\n | A | 2018 | 10 | 5 | 40 | 30 | 4.5 |\n | A | 2018 | 30 | 4 | 40 | 30 | 4.5 |\n | B | 2017 | 60 | 4 | 100 | 60 | 3.5 |\n | B | 2017 | 40 | 3 | 100 | 60 | 3.5 |\n | B | 2018 | 50 | 7 | 110 | 60 | 6.5 |\n | B | 2018 | 60 | 6 | 110 | 60 | 6.5 |\n\n \"\"\"\n group = df.groupby(group_cols)\n for new_col, aggs in aggregations.items():\n assert len(aggs) == 1\n (col, agg), *_ = aggs.items()\n df[new_col] = group[col].transform(agg)\n return df", "response": "Adds new columns containing aggregations values on existing columns"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef top(\n df,\n value: str,\n limit: int,\n order: str = 'asc',\n group: Union[str, List[str]] = None\n):\n \"\"\"\n Get the top or flop N results 
based on a column value for each specified group columns\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `value` (*str*): column name on which you will rank the results\n - `limit` (*int*): Number to specify the N results you want to retrieve.\n Use a positive number x to retrieve the first x results.\n Use a negative number -x to retrieve the last x results.\n\n *optional :*\n - `order` (*str*): `\"asc\"` or `\"desc\"` to sort by ascending ou descending order. By default : `\"asc\"`.\n - `group` (*str*, *list of str*): name(s) of columns on which you want to perform the group operation.\n\n ---\n\n ### Example\n\n **Input**\n\n | variable | Category | value |\n |:--------:|:--------:|:-----:|\n | lili | 1 | 50 |\n | lili | 1 | 20 |\n | toto | 1 | 100 |\n | toto | 1 | 200 |\n | toto | 1 | 300 |\n | lala | 1 | 100 |\n | lala | 1 | 150 |\n | lala | 1 | 250 |\n | lala | 2 | 350 |\n | lala | 2 | 450 |\n\n\n ```cson\n top:\n value: 'value'\n limit: 4\n order: 'asc'\n ```\n\n **Output**\n\n | variable | Category | value |\n |:--------:|:--------:|:-----:|\n | lala | 1 | 250 |\n | toto | 1 | 300 |\n | lala | 2 | 350 |\n | lala | 2 | 450 |\n \"\"\"\n ascending = order != 'desc'\n limit = int(limit)\n filter_func = 'nlargest' if (limit > 0) ^ ascending else 'nsmallest'\n\n def _top(df):\n return getattr(df, filter_func)(abs(limit), value).sort_values(by=value,\n ascending=ascending)\n\n if group is None:\n df = _top(df)\n else:\n df = df.groupby(group).apply(_top)\n\n return df", "response": "Return the top or flop N results for a specified column value"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef top_group(\n df,\n aggregate_by: List[str],\n value: str,\n limit: int,\n order: str = 'asc',\n function: str = 'sum',\n group: Union[str, List[str]] = None\n):\n \"\"\"\n Get the top or flop N results based on a function and a column value that agregates the input.\n The result is composed by all the original lines including 
only lines corresponding\n to the top groups\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `value` (*str*): Name of the column name on which you will rank the results.\n - `limit` (*int*): Number to specify the N results you want to retrieve from the sorted values.\n - Use a positive number x to retrieve the first x results.\n - Use a negative number -x to retrieve the last x results.\n - `aggregate_by` (*list of str*)): name(s) of columns you want to aggregate\n\n *optional :*\n - `order` (*str*): `\"asc\"` or `\"desc\"` to sort by ascending ou descending order. By default : `\"asc\"`.\n - `group` (*str*, *list of str*): name(s) of columns on which you want to perform the group operation.\n - `function` : Function to use to group over the group column\n\n ---\n\n ### Example\n\n **Input**\n\n | variable | Category | value |\n |:--------:|:--------:|:-----:|\n | lili | 1 | 50 |\n | lili | 1 | 20 |\n | toto | 1 | 100 |\n | toto | 1 | 200 |\n | toto | 1 | 300 |\n | lala | 1 | 100 |\n | lala | 1 | 150 |\n | lala | 1 | 250 |\n | lala | 2 | 350 |\n | lala | 2 | 450 |\n\n ```cson\n top_group:\n group: [\"Category\"]\n value: 'value'\n aggregate_by: [\"variable\"]\n limit: 2\n order: \"desc\"\n ```\n\n **Output**\n\n | variable | Category | value |\n |:--------:|:--------:|:-----:|\n | toto | 1 | 100 |\n | toto | 1 | 200 |\n | toto | 1 | 300 |\n | lala | 1 | 100 |\n | lala | 1 | 150 |\n | lala | 1 | 250 |\n | lala | 2 | 350 |\n | lala | 2 | 450 |\n \"\"\"\n aggregate_by = aggregate_by or []\n group_top = group or []\n df2 = df.groupby(group_top + aggregate_by).agg(function).reset_index()\n df2 = top(df2, group=group, value=value, limit=limit, order=order).reset_index(drop=True)\n df2 = df2[group_top + aggregate_by]\n df = df2.merge(df, on=group_top + aggregate_by)\n\n return df", "response": "Returns the top group of the result of a function on the given column value."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts 
string column into datetime column---", "response": "def convert_str_to_datetime(df, *, column: str, format: str):\n \"\"\"\n Convert string column into datetime column\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column to format\n - `format` (*str*): current format of the values (see [available formats](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n \"\"\"\n df[column] = pd.to_datetime(df[column], format=format)\n return df"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts datetime column into string", "response": "def convert_datetime_to_str(df, *, column: str, format: str, new_column: str = None):\n \"\"\"\n Convert datetime column into string column\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - column (*str*): name of the column to format\n - format (*str*): format of the result values (see [available formats](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n\n *optional :*\n - new_column (*str*): name of the output column. 
By default `column` is overwritten.\n \"\"\"\n new_column = new_column or column\n df[new_column] = df[column].dt.strftime(format)\n return df"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef change_date_format(\n df, *,\n column: str,\n output_format: str,\n input_format: str = None,\n new_column: str = None,\n new_time_zone=None\n):\n \"\"\"\n Convert the format of a date\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column to change the format\n - `output_format` (*str*): format of the output values (see [available formats](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n\n *optional :*\n - `input_format` (*str*): format of the input values (by default let the parser detect it)\n - `new_column` (*str*): name of the output column (by default overwrite `column`)\n - `new_time_zone` (*str*): name of new time zone (by default no time zone conversion is done)\n\n ---\n\n ### Example\n\n **Input**\n\n label | date\n :------:|:----:\n France | 2017-03-22\n Europe | 2016-03-22\n\n ```cson\n change_date_format:\n column: 'date'\n input_format: '%Y-%m-%d'\n output_format: '%Y-%m'\n ```\n\n Output :\n\n label | date\n :------:|:----:\n France | 2017-03\n Europe | 2016-03\n \"\"\"\n new_column = new_column or column\n df[new_column] = (pd.to_datetime(df[column], format=input_format, utc=True)\n .dt.tz_convert(new_time_zone)\n .dt.strftime(output_format))\n return df", "response": "Change the format of a date column in a DataFrame"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting column s type into type", "response": "def cast(df, column: str, type: str, new_column=None):\n \"\"\"\n Convert column's type into type\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column to convert\n - `type` (*str*): output type. 
It can be :\n - `\"int\"` : integer type\n - `\"float\"` : general number type\n - `\"str\"` : text type\n\n *optional :*\n - `new_column` (*str*): name of the output column.\n By default the `column` arguments is modified.\n\n ---\n\n ### Example\n\n **Input**\n\n | Column 1 | Column 2 | Column 3 |\n |:-------:|:--------:|:--------:|\n | 'one' | '2014' | 30.0 |\n | 'two' | 2015.0 | '1' |\n | 3.1 | 2016 | 450 |\n\n ```cson\n postprocess: [\n cast:\n column: 'Column 1'\n type: 'str'\n cast:\n column: 'Column 2'\n type: 'int'\n cast:\n column: 'Column 3'\n type: 'float'\n ]\n ```\n\n **Output**\n\n | Column 1 | Column 2 | Column 3 |\n |:-------:|:------:|:--------:|\n | 'one' | 2014 | 30.0 |\n | 'two' | 2015 | 1.0 |\n | '3.1' | 2016 | 450.0 |\n \"\"\"\n new_column = new_column or column\n df[new_column] = df[column].astype(type)\n return df"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef compute_evolution_by_frequency(\n df,\n id_cols: List[str],\n date_col: Union[str, Dict[str, str]],\n value_col: str,\n freq=1,\n method: str = 'abs',\n format: str = 'column',\n offseted_suffix: str = '_offseted',\n evolution_col_name: str = 'evolution_computed',\n missing_date_as_zero: bool = False,\n raise_duplicate_error: bool = True\n):\n \"\"\"\n This function answers the question: how has a value changed on a weekly, monthly, yearly basis ?\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `id_cols` (*list*): name of the columns used to create each group.\n - `date_col` (*str or dict*): either directly the name of the column containing the date or a dictionary with:\n - `selector` (*str*): the name of the column\n - `format` (*str*): the format of the date (see [pandas doc](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n - `value_col` (*str*): name of the column containing the value to compare.\n\n *optional :*\n - `freq` (*int/pd.DateOffset/pd.Serie/dict*): the frequency at which we 
calculate evolutions\n - `method` (*str*): either `\"abs\"` for absolute values or `\"pct\"` for the evolution in percentage of previous value.\n - `offseted_suffix` (*str*): suffix of the offseted column. By default, `\"_offseted\"`.\n - `evolution_col_name` (*str*): name given to the evolution column. By default, `\"evolution_computed\"`.\n - `missing_date_as_zero` (*boolean*): add missing date with zero value.\n - `raise_duplicate_error` (*boolean*): raise an error when the dataset has duplicated values with the given `id_cols`.\n - `format` (*str*): `'df'` # Do not change it !!!\n\n ---\n\n ### Example\n\n **Input**\n\n | id_cols | value_col | date_col|\n |:---------:|:------------:|:----------:|\n | A | 20 | 2010|\n | | 7 | 2011|\n | B | 200 | 2010|\n | | 220 | 2011|\n | C | 100 | 2011|\n\n ```cson\n compute_evolution_by_frequency:\n id_cols: \"id_cols\"\n date_col: \"date_col\"\n value_col: \"value_col\"\n ```\n\n **Output**\n\n | id_cols | value_col | date_col| evolution|\n |:---------:|:------------:|:----------:|:---------:|\n | A | 20 | 2010| null|\n | | 7 | 2011| -13|\n | B | 200 | 2010| null|\n | | 220 | 2011| 20|\n | C | 100 | 2011| null|\n \"\"\"\n if missing_date_as_zero:\n how = 'outer'\n fillna = 0\n else:\n how = 'left'\n fillna = None\n\n return __compute_evolution(\n df=df,\n id_cols=id_cols,\n value_col=value_col,\n date_col=date_col,\n freq=freq,\n method=method,\n format=format,\n offseted_suffix=offseted_suffix,\n evolution_col_name=evolution_col_name,\n how=how,\n fillna=fillna,\n raise_duplicate_error=raise_duplicate_error\n )", "response": "This function calculates the evolution of a group by frequency."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef __compute_evolution(\n df,\n id_cols,\n value_col,\n date_col=None,\n freq=1,\n compare_to=None,\n method='abs',\n format='column',\n offseted_suffix='_offseted',\n evolution_col_name='evolution_computed',\n how='left',\n fillna=None,\n 
raise_duplicate_error=True\n):\n \"\"\"\n Compute an evolution column :\n - against a period distant from a fixed frequency.\n - against a part of the df\n\n Unfortunately, pandas doesn't allow .change() and .pct_change() to be\n executed with a MultiIndex.\n\n Args:\n df (pd.DataFrame):\n id_cols (list(str)):\n value_col (str):\n date_col (str/dict): default None\n freq (int/pd.DateOffset/pd.Serie): default 1\n compare_to (str): default None\n method (str): default ``'abs'`` can be also ``'pct'``\n format(str): default 'column' can be also 'df'\n offseted_suffix(str): default '_offseted'\n evolution_col_name(str): default 'evolution_computed'\n how(str): default 'left'\n fillna(str/int): default None\n \"\"\"\n if date_col is not None:\n is_date_to_format = isinstance(date_col, dict) or (df[date_col].dtype == np.object)\n if is_date_to_format:\n if isinstance(date_col, dict):\n date_format = date_col.get('format', None)\n date_col = date_col['selector']\n else:\n date_format = None\n df['_'+date_col + '_copy_'] = pd.to_datetime(df[date_col], format=date_format)\n date_col = '_'+date_col + '_copy_'\n\n is_freq_dict = isinstance(freq, dict)\n if is_freq_dict:\n freq = pd.DateOffset(**{k: int(v) for k, v in freq.items()})\n\n check_params_columns_duplicate(id_cols + [value_col, date_col])\n # create df_offseted\n group_cols = id_cols + [date_col]\n df_offseted = df[group_cols + [value_col]].copy()\n df_offseted[date_col] += freq\n\n df_with_offseted_values = apply_merge(\n df, df_offseted, group_cols, how, offseted_suffix,\n raise_duplicate_error\n )\n if is_date_to_format:\n del df_with_offseted_values[date_col]\n\n elif compare_to is not None:\n # create df_offseted\n check_params_columns_duplicate(id_cols + [value_col])\n group_cols = id_cols\n df_offseted = df.query(compare_to).copy()\n df_offseted = df_offseted[group_cols + [value_col]]\n\n df_with_offseted_values = apply_merge(\n df, df_offseted, group_cols, how, offseted_suffix,\n raise_duplicate_error\n )\n\n 
apply_fillna(df_with_offseted_values, value_col, offseted_suffix, fillna)\n apply_method(df_with_offseted_values, evolution_col_name, value_col, offseted_suffix, method)\n return apply_format(df_with_offseted_values, evolution_col_name, format)", "response": "Compute an evolution column for a single entry in the log."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef rank(\n df,\n value_cols: Union[str, List[str]],\n group_cols: List[str] = None,\n rank_cols_names: List[str] = None,\n method='min',\n ascending: bool = True\n):\n \"\"\"\n This function creates rank columns based on numeric values to be ranked.\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `value_cols` (*list*): name(s) of the columns used\n\n *optional :*\n - `group_cols` (*list*): name(s) of the column(s) used to\n create each group inside which independent ranking needs to be applied\n - `rank_cols_names` (*list*): the names of the added ranking columns.\n If not filled, the ranking will be named after the value_cols with a '_rank' suffix\n - `method` (*str*): method to use when encountering equal values:\n - `'min'` (default): lowest rank in group\n - `'max'`: highest rank in group\n - `'average'`: average rank of group\n - `'first'`: ranks assigned in order the values appear in the series\n - `'dense'`: like 'min', but rank always increases by 1 between groups\n - `ascending` (*boolean*): whether the rank should be determined based on\n ascending (default) or descending order\n\n ---\n\n ### Example\n\n **Input**\n\n | ENTITY | YEAR | VALUE_1 | VALUE_2 |\n | :---: | :---: | :---: | :---: |\n | A | 2017 | 10 | 3 |\n | A | 2017 | 20 | 1 |\n | A | 2018 | 10 | 5 |\n | A | 2018 | 30 | 4 |\n | B | 2017 | 60 | 4 |\n | B | 2017 | 40 | 3 |\n | B | 2018 | 50 | 7 |\n | B | 2018 | 50 | 6 |\n\n ```cson\n rank :\n value_cols: 'VALUE_1'\n ```\n\n **Output**\n\n | ENTITY | YEAR | VALUE_1 | VALUE_2 | VALUE_1_rank\n | :---: | :---: | :---: | :---: 
| :---: |\n | A | 2017 | 10 | 3 | 1 |\n | A | 2017 | 20 | 1 | 3 |\n | A | 2018 | 10 | 5 | 1 |\n | A | 2018 | 30 | 4 | 4 |\n | B | 2017 | 60 | 4 | 8 |\n | B | 2017 | 40 | 3 | 5 |\n | B | 2018 | 50 | 7 | 6 |\n | B | 2018 | 50 | 6 | 6 |\n \"\"\"\n\n value_cols = [value_cols] if not isinstance(value_cols, list) else value_cols\n for col in value_cols:\n if not np.issubdtype(df[col].dtype, np.number):\n raise TypeError(col + \" specified in value_cols must be of numeric type\")\n\n if rank_cols_names is None:\n rank_cols_names = [x + '_rank' for x in value_cols]\n\n if group_cols is None:\n df[rank_cols_names] = df[value_cols].rank(method=method, ascending=ascending)\n else:\n df[rank_cols_names] = (df.groupby(group_cols)[value_cols]\n .rank(method=method, ascending=ascending))\n\n if method != 'average':\n df[rank_cols_names] = df[rank_cols_names].astype('int')\n\n return df", "response": "This function creates a new ranking column based on numeric values in the specified column."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a line for each bars of a waterfall chart, totals, groups, subgroups. Compute the variation and variation rate for each line. 
--- ### Parameters *mandatory :* - `date` (*str*): name of the column that id the period of each lines - `value` (*str*): name of the column that contains the vaue for each lines - `start` (*dict*): - `label`: text displayed under the first master column - `id`: value in the date col that id lines for the first period - `end` (*dict*): - `label`: text displayed under the last master column - `id`: value in the date col that id lines for the second period *optional :* - `upperGroup` (*dict*): - `id`: name of the column that contains upperGroups unique IDs - `label`: not required, text displayed under each upperGroups bars, using ID when it's absent - `groupsOrder`: not required, order of upperGroups - `insideGroup` (*dict*): - `id`: name of the column that contains insideGroups unique IDs - `label`: not required, text displayed under each insideGroups bars, using ID when it's absent - `groupsOrder`: not required, order of insideGroups - `filters` (*list*): columns to filters on --- ### Example **Input** | product_id | played | date | ord | category_id | category_name | |:------------:|:--------:|:------:|:-----:|:-------------:|:---------------:| | super clap | 12 | t1 | 1 | clap | Clap | | clap clap | 1 | t1 | 10 | clap | Clap | | tac | 1 | t1 | 1 | snare | Snare | | super clap | 10 | t2 | 1 | clap | Clap | | tac | 100 | t2 | 1 | snare | Snare | | bom | 1 | t2 | 1 | tom | Tom | ```cson waterfall: upperGroup: id: 'category_id' label: 'category_name' insideGroup: id: 'product_id' groupsOrder: 'ord' date: 'date' value: 'played' start: label: 'Trimestre 1' id: 't1' end: label: 'Trimester 2' id: 't2' ``` **Output** | value | label | variation | groups | type | order | |:-------:|:-----------:|:-----------:|:--------:|:------:|:-------:| | 14 | Trimestre 1 | NaN | NaN | NaN | NaN | | -3 | Clap | -0.230769 | clap | parent | NaN | | -2 | super clap | -0.166667 | clap | child | 1 | | -1 | clap clap | -1 | clap | child | 10 | | 99 | Snare | 99 | snare | parent | NaN | | 99 | 
tac | 99 | snare | child | 1 | | 1 | Tom | inf | tom | parent | NaN | | 1 | bom | inf | tom | child | 1 | | 111 | Trimester 2 | NaN | NaN | NaN | NaN |", "response": "def waterfall(\n df,\n date: str,\n value: str,\n start: Dict[str, str],\n end: Dict[str, str],\n upperGroup: Dict[str, str],\n insideGroup: Dict[str, str] = None,\n filters: List[str] = None\n):\n \"\"\"\n Return a line for each bars of a waterfall chart, totals, groups, subgroups.\n Compute the variation and variation rate for each line.\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `date` (*str*): name of the column that id the period of each lines\n - `value` (*str*): name of the column that contains the vaue for each lines\n - `start` (*dict*):\n - `label`: text displayed under the first master column\n - `id`: value in the date col that id lines for the first period\n - `end` (*dict*):\n - `label`: text displayed under the last master column\n - `id`: value in the date col that id lines for the second period\n\n *optional :*\n - `upperGroup` (*dict*):\n - `id`: name of the column that contains upperGroups unique IDs\n - `label`: not required, text displayed under each upperGroups bars,\n using ID when it's absent\n - `groupsOrder`: not required, order of upperGroups\n - `insideGroup` (*dict*):\n - `id`: name of the column that contains insideGroups unique IDs\n - `label`: not required, text displayed under each insideGroups bars,\n using ID when it's absent\n - `groupsOrder`: not required, order of insideGroups\n - `filters` (*list*): columns to filters on\n\n ---\n\n ### Example\n\n **Input**\n\n | product_id | played | date | ord | category_id | category_name |\n |:------------:|:--------:|:------:|:-----:|:-------------:|:---------------:|\n | super clap | 12 | t1 | 1 | clap | Clap |\n | clap clap | 1 | t1 | 10 | clap | Clap |\n | tac | 1 | t1 | 1 | snare | Snare |\n | super clap | 10 | t2 | 1 | clap | Clap |\n | tac | 100 | t2 | 1 | snare | Snare |\n | bom | 1 | t2 | 1 | tom | Tom |\n\n\n 
```cson\n waterfall:\n upperGroup:\n id: 'category_id'\n label: 'category_name'\n insideGroup:\n id: 'product_id'\n groupsOrder: 'ord'\n date: 'date'\n value: 'played'\n start:\n label: 'Trimestre 1'\n id: 't1'\n end:\n label: 'Trimester 2'\n id: 't2'\n ```\n\n **Output**\n\n | value | label | variation | groups | type | order |\n |:-------:|:-----------:|:-----------:|:--------:|:------:|:-------:|\n | 14 | Trimestre 1 | NaN | NaN | NaN | NaN |\n | -3 | Clap | -0.230769 | clap | parent | NaN |\n | -2 | super clap | -0.166667 | clap | child | 1 |\n | -1 | clap clap | -1 | clap | child | 10 |\n | 99 | Snare | 99 | snare | parent | NaN |\n | 99 | tac | 99 | snare | child | 1 |\n | 1 | Tom | inf | tom | parent | NaN |\n | 1 | bom | inf | tom | child | 1 |\n | 111 | Trimester 2 | NaN | NaN | NaN | NaN |\n \"\"\"\n\n if len(df) == 0:\n return df\n\n if filters is not None:\n if isinstance(filters, str):\n filters = [filters]\n\n def sub_waterfall(df):\n wa_df = waterfall(df, date, value, start, end, upperGroup, insideGroup)\n for filters_col in filters:\n wa_df[filters_col] = df[filters_col].values[0]\n return wa_df\n\n # filters df into a list of sub_df\n list_of_sub_df = [df[(df[filters].values == i).all(axis=1)]\n for i in df[filters].drop_duplicates().values]\n\n return pd.concat([sub_waterfall(df) for df in list_of_sub_df], sort=False)\n\n groups = {\n 'upperGroup': {\n 'type': 'parent',\n 'id': 'upperGroup',\n 'order': {\n 'by': ['upperGroup_order', 'groups'],\n 'ascending': [True, True]\n },\n 'obj': upperGroup\n }\n }\n if insideGroup is not None:\n groups['insideGroup'] = {\n 'type': 'child',\n 'id': 'insideGroup',\n 'order': {\n 'by': ['type', 'insideGroup_order', 'label'],\n 'ascending': [False, True, True]\n },\n 'obj': insideGroup\n }\n # prepare the dataframe with standard column names\n df = _compute_rename(df, date, value, groups)\n\n agg_conf = {'value': sum}\n agg_conf.update({f'{col}_label': 'first' for col in groups.keys()})\n 
agg_conf.update({f'{col}_order': 'first' for col in groups.keys()})\n df = df.groupby(list(groups.keys()) + ['date']).agg(agg_conf).reset_index()\n\n df_start, df_end = _compute_start_end(df, start, end)\n\n df = _compute_value_diff(df, start, end, groups)\n\n middle = _compute_upper_group(df)\n if insideGroup is not None:\n middle = pd.concat([middle, _compute_inside_group(df)])\n\n ret = _compute_order(df_start, df_end, middle, groups)\n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncompute two dataframes with value for start and end", "response": "def _compute_start_end(df, start, end):\n \"\"\"\n Compute two dataframes with value for start and end\n Args:\n totals(dataframe):\n\n Returns: Dataframe, Dataframe\n\n \"\"\"\n result = {}\n time_dict = {'start': start, 'end': end}\n totals = df.groupby('date').agg({'value': sum}).reset_index()\n for time_name, time in time_dict.items():\n if not totals[totals['date'] == time['id']].empty:\n value = totals.loc[\n totals['date'] == time['id'], 'value'\n ].values[0]\n else:\n value = 0\n result[time_name] = pd.DataFrame([{\n 'value': value,\n 'label': time['label'],\n 'groups': time['label']\n }])\n return result['start'], result['end']"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncompute diff value between start and end", "response": "def _compute_value_diff(df, start, end, groups):\n \"\"\"\n Compute diff value between start and end\n Args:\n df(dataframe):\n\n Returns: Dataframe\n\n \"\"\"\n start_values = df[df['date'] == start['id']].copy()\n end_values = df[df['date'] == end['id']].copy()\n\n merge_on = []\n for key, group in groups.items():\n merge_on = merge_on + [key, f'{key}_label', f'{key}_order']\n\n df = start_values.merge(end_values,\n on=merge_on,\n how='outer',\n suffixes=('_start', '_end'), )\n\n # necessary before calculating variation\n df[['value_start', 'value_end']] = df[['value_start', 
'value_end']].fillna(0)\n df['value'] = df['value_end'] - df['value_start']\n df.drop(['date_start', 'date_end', 'value_end'], axis=1, inplace=True)\n df.rename(columns={'upperGroup': 'groups'}, inplace=True)\n return df"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncomputes inside Group Args: df(dataframe): Returns: Dataframe", "response": "def _compute_inside_group(df):\n \"\"\"\n Compute inside Group\n Args:\n df(dataframe):\n\n Returns: Dataframe\n\n \"\"\"\n inside_group = df.copy()\n inside_group['type'] = 'child'\n inside_group['variation'] = inside_group['value'] / inside_group[\n 'value_start']\n inside_group.drop(['upperGroup_label', 'insideGroup', 'value_start'],\n axis=1, inplace=True)\n inside_group.rename(columns={'insideGroup_label': 'label'},\n inplace=True)\n return inside_group"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _compute_upper_group(df):\n upper_group = df.groupby(['groups']).agg({\n 'value': sum,\n 'value_start': sum,\n 'upperGroup_label': 'first',\n 'upperGroup_order': 'first'\n }).reset_index()\n upper_group['type'] = 'parent'\n upper_group['variation'] = upper_group['value'] / upper_group[\n 'value_start']\n upper_group.drop(['value_start'], axis=1, inplace=True)\n upper_group.rename(columns={'upperGroup_label': 'label'}, inplace=True)\n return upper_group", "response": "Compute the upper group of the items in the log file"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add(df, new_column, column_1, column_2):\n return _basic_math_operation(df, new_column, column_1, column_2, op='add')", "response": "Add a column to another column."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef subtract(df, new_column, column_1, column_2):\n return _basic_math_operation(df, new_column, column_1, column_2, 
op='sub')", "response": "Subtract a column from a DataFrame."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef multiply(df, new_column, column_1, column_2):\n return _basic_math_operation(df, new_column, column_1, column_2, op='mul')", "response": "Multiplies a DataFrame by another column."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef divide(df, new_column, column_1, column_2):\n return _basic_math_operation(df, new_column, column_1, column_2, op='truediv')", "response": "Divide a dataframe by two columns."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndoes mathematic operations on columns (add, subtract, multiply or divide) --- ### Parameters *mandatory:* - `new_column` (*str*): name of the output column - `formula` (*str*): Operation on column. Use name of column and special character: - `+` for addition - `-` for subtraction - `*` for multiplication - `/` for division **Note:** - your column name can contain spaces. - if your column name is a number, you must use a quote mark : `\"` or `'` (cf. 
example) --- ### Examples **Input** | variable | valueA | valueB | My rate | |:--------:|:--------:|:-----:|:------:| | toto | 20 | 100 | 10 | | toto | 30 | 200 | 10 | | toto | 10 | 300 | 10 | ```cson formula: new_column: 'valueD' formula: '(valueB + valueA ) / My rate' ``` **Output** | variable | valueA | valueB | My rate | valueD | |:--------:|:--------:|:------:|:-------:|:-------:| | toto | 20 | 100 | 10 | 12 | | toto | 30 | 200 | 10 | 23 | | toto | 10 | 300 | 10 | 31 | --- **Input** | variable | 2018 | 2019 | |:--------:|:--------:|:-----:| | toto | 20 | 100 | | toto | 30 | 200 | | toto | 10 | 300 | ```cson formula: new_column: 'Evolution' formula: \"'2019' - '2018'\" ``` **Output** | variable | 2018 | 2019 | Evolution | |:--------:|:--------:|:-----:|:-----:| | toto | 20 | 100 | 80 | | toto | 30 | 200 | 170 | | toto | 10 | 300 | 290 |", "response": "def formula(df, *, new_column: str, formula: str):\n \"\"\"\n Do mathematic operations on columns (add, subtract, multiply or divide)\n\n ---\n\n ### Parameters\n\n *mandatory:*\n - `new_column` (*str*): name of the output column\n - `formula` (*str*): Operation on column. Use name of column and special character:\n - `+` for addition\n - `-` for subtraction\n - `*` for multiplication\n - `/` for division\n\n **Note:**\n - your column name can contain spaces.\n - if your column name is a number, you must use a quote mark : `\"` or `'` (cf. 
example)\n\n ---\n\n ### Examples\n\n **Input**\n\n | variable | valueA | valueB | My rate |\n |:--------:|:--------:|:-----:|:------:|\n | toto | 20 | 100 | 10 |\n | toto | 30 | 200 | 10 |\n | toto | 10 | 300 | 10 |\n\n\n ```cson\n formula:\n new_column: 'valueD'\n formula: '(valueB + valueA ) / My rate'\n ```\n\n **Output**\n\n | variable | valueA | valueB | My rate | valueD |\n |:--------:|:--------:|:------:|:-------:|:-------:|\n | toto | 20 | 100 | 10 | 12 |\n | toto | 30 | 200 | 10 | 23 |\n | toto | 10 | 300 | 10 | 31 |\n\n ---\n\n **Input**\n\n | variable | 2018 | 2019 |\n |:--------:|:--------:|:-----:|\n | toto | 20 | 100 |\n | toto | 30 | 200 |\n | toto | 10 | 300 |\n\n ```cson\n formula:\n new_column: 'Evolution'\n formula: \"'2019' - '2018'\"\n ```\n\n **Output**\n\n | variable | 2018 | 2019 | Evolution |\n |:--------:|:--------:|:-----:|:-----:|\n | toto | 20 | 100 | 80 |\n | toto | 30 | 200 | 170 |\n | toto | 10 | 300 | 290 |\n\n \"\"\"\n tokens = _parse_formula(formula)\n expression_splitted = []\n for t in tokens:\n # To use a column name with only digits, it has to be quoted!\n # Otherwise it is considered as a regular number\n if not t.quoted and (t in MATH_CHARACTERS or is_float(t)):\n expression_splitted.append(t)\n elif t in df.columns:\n expression_splitted.append(f'df[\"{t}\"]')\n else:\n raise FormulaError(f'\"{t}\" is not a valid column name')\n expression = ''.join(expression_splitted)\n df[new_column] = eval(expression)\n return df"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef round_values(df, *, column: str, decimals: int, new_column: str = None):\n new_column = new_column or column\n df[new_column] = df[column].round(decimals)\n return df", "response": "Round each value of a column in a DataFrame to a specified number of decimals."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef absolute_values(df, *, column: str, new_column: str = 
None):\n    new_column = new_column or column\n    df[new_column] = abs(df[column])\n    return df", "response": "Returns the absolute numeric value of each element of a column in a DataFrame."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef pivot(df, index: List[str], column: str, value: str, agg_function: str = 'mean'):\n    if df.dtypes[value].type == np.object_:\n        df = pd.pivot_table(df, index=index,\n                            columns=column,\n                            values=value,\n                            aggfunc=lambda x: ' '.join(x))\n    else:\n        df = pd.pivot_table(df, index=index,\n                            columns=column,\n                            values=value,\n                            aggfunc=agg_function)\n    df = df.reset_index()\n    return df", "response": "Pivot the data in the DataFrame df."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef pivot_by_group(\n        df,\n        variable,\n        value,\n        new_columns,\n        groups,\n        id_cols=None\n):\n    """\n    Pivot a dataframe by group of variables\n\n    ---\n\n    ### Parameters\n\n    *mandatory :*\n    * `variable` (*str*): name of the column used to create the groups.\n    * `value` (*str*): name of the column containing the value to fill the pivoted df.\n    * `new_columns` (*list of str*): names of the new columns.\n    * `groups` (*dict*): names of the groups with their corresponding variables.\n        **Warning**: the list of variables must have the same order as `new_columns`\n\n    *optional :*\n    * `id_cols` (*list of str*) : names of other columns to keep, default `None`.\n\n    ---\n\n    ### Example\n\n    **Input**\n\n    | type | variable | montant |\n    |:----:|:----------:|:-------:|\n    | A | var1 | 5 |\n    | A | var1_evol | 0.3 |\n    | A | var2 | 6 |\n    | A | var2_evol | 0.2 |\n\n    ```cson\n    pivot_by_group :\n      id_cols: ['type']\n      variable: 'variable'\n      value: 'montant'\n      new_columns: ['value', 'variation']\n      groups:\n        'Group 1' : ['var1', 'var1_evol']\n        'Group 2' : ['var2', 'var2_evol']\n    ```\n\n    **Output**\n\n    | type | variable | value | variation |\n    |:----:|:----------:|:-------:|:---------:|\n    | A | 
Group 1 | 5 | 0.3 |\n    | A | Group 2 | 6 | 0.2 |\n\n    """\n    if id_cols is None:\n        index = [variable]\n    else:\n        index = [variable] + id_cols\n\n    param = pd.DataFrame(groups, index=new_columns)\n    temporary_column = 'tmp'\n\n    df[temporary_column] = df[variable]\n    for column in param.columns:\n        df.loc[df[variable].isin(param[column]), variable] = column\n\n    param = param.T\n    for column in param.columns:\n        df.loc[\n            df[temporary_column].isin(param[column]), temporary_column] = column\n\n    df = pivot(df, index, temporary_column, value)\n    return df", "response": "Pivot a dataframe by group of variables."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngroups rows and aggregates values by columns.", "response": "def groupby(df, *, group_cols: Union[str, List[str]],\n            aggregations: Dict[str, Union[str, List[str]]]):\n    """\n    Aggregate values by groups.\n\n    ---\n\n    ### Parameters\n\n    *mandatory :*\n    - `group_cols` (*list*): list of columns used to group data\n    - `aggregations` (*dict*): dictionary of value columns to group as keys and aggregation\n      function to use as values (See the [list of aggregation functions](\n      https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#aggregation))\n\n    ---\n\n    ### Example\n\n    **Input**\n\n    | ENTITY | YEAR | VALUE_1 | VALUE_2 |\n    |:------:|:----:|:-------:|:-------:|\n    | A | 2017 | 10 | 3 |\n    | A | 2017 | 20 | 1 |\n    | A | 2018 | 10 | 5 |\n    | A | 2018 | 30 | 4 |\n    | B | 2017 | 60 | 4 |\n    | B | 2017 | 40 | 3 |\n    | B | 2018 | 50 | 7 |\n    | B | 2018 | 60 | 6 |\n\n    ```cson\n    groupby:\n      group_cols: ['ENTITY', 'YEAR']\n      aggregations:\n        'VALUE_1': 'sum',\n        'VALUE_2': 'mean'\n    ```\n\n    **Output**\n\n    | ENTITY | YEAR | VALUE_1 | VALUE_2 |\n    |:------:|:----:|:-------:|:-------:|\n    | A | 2017 | 30 | 2.0 |\n    | A | 2018 | 40 | 4.5 |\n    | B | 2017 | 100 | 3.5 |\n    | B | 2018 | 110 | 6.5 |\n\n    """\n    df = df.groupby(group_cols, as_index=False).agg(aggregations)\n\n    # When several aggregations are performed on the same 
column, pandas returns\n    # a multi-indexed dataframe, so we need to flatten the columns index to get\n    # back to a unique level header\n    if df.columns.nlevels == 2:\n        level_0 = df.columns.get_level_values(0)\n        level_1 = df.columns.get_level_values(1)\n        new_columns = [(f'{x}_{y}' if x else y) for (x, y)\n                       in zip(level_1, level_0)]\n        df.columns = new_columns\n    return df"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncompute the cumulative sum of a column in a DataFrame.", "response": "def cumsum(df, new_column: str, column: str, index: list, date_column: str, date_format: str):\n    """\n    DEPRECATED - please use `compute_cumsum` instead\n    """\n    logging.getLogger(__name__).warning("DEPRECATED: use compute_cumsum")\n    date_temp = '__date_temp__'\n    if isinstance(index, str):\n        index = [index]\n    levels = list(range(0, len(index)))\n    df[date_temp] = pd.to_datetime(df[date_column], format=date_format)\n    reference_cols = [date_temp, date_column]\n    df = df.groupby(index + reference_cols).sum()\n    df[new_column] = df.groupby(level=levels)[column].cumsum()\n    df.reset_index(inplace=True)\n    del df[date_temp]\n\n    return df"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_missing_row(\n    df: pd.DataFrame,\n    id_cols: List[str],\n    reference_col: str,\n    complete_index: Union[Dict[str, str], List[str]] = None,\n    method: str = None,\n    cols_to_keep: List[str] = None\n) -> pd.DataFrame:\n    """\n    Add missing rows to a df based on a reference column\n\n    ---\n\n    ### Parameters\n\n    *mandatory :*\n    - `id_cols` (*list of str*): names of the columns used to create each group\n    - `reference_col` (*str*): name of the column used to identify missing rows\n\n    *optional :*\n    - `complete_index` (*list* or *dict*): [A, B, C] a list of values used to add missing rows.\n      It can also be a dict to declare a date range.\n      By default, use all values of reference_col.\n    - `method` (*str*): by default 
all missing rows are added. The possible values are :\n - `\"between\"` : add missing rows having their value between min and max values for each group,\n - `\"between_and_after\"` : add missing rows having their value bigger than min value for each group.\n - `\"between_and_before\"` : add missing rows having their value smaller than max values for each group.\n - `cols_to_keep` (*list of str*): name of other columns to keep, linked to the reference_col.\n\n ---\n\n ### Example\n\n **Input**\n\n YEAR | MONTH | NAME\n :---:|:---:|:--:\n 2017|1|A\n 2017|2|A\n 2017|3|A\n 2017|1|B\n 2017|3|B\n\n ```cson\n add_missing_row:\n id_cols: ['NAME']\n reference_col: 'MONTH'\n ```\n\n **Output**\n\n YEAR | MONTH | NAME\n :---:|:---:|:--:\n 2017|1|A\n 2017|2|A\n 2017|3|A\n 2017|1|B\n 2017|2|B\n 2017|3|B\n\n \"\"\"\n if cols_to_keep is None:\n cols_for_index = [reference_col]\n else:\n cols_for_index = [reference_col] + cols_to_keep\n check_params_columns_duplicate(id_cols + cols_for_index)\n\n if method == 'between' or method == 'between_and_after':\n df['start'] = df.groupby(id_cols)[reference_col].transform(min)\n id_cols += ['start']\n if method == 'between' or method == 'between_and_before':\n df['end'] = df.groupby(id_cols)[reference_col].transform(max)\n id_cols += ['end']\n\n names = id_cols + cols_for_index\n new_df = df.set_index(names)\n index_values = df.groupby(id_cols).sum().index.values\n\n if complete_index is None:\n complete_index = df.groupby(cols_for_index).sum().index.values\n elif isinstance(complete_index, dict):\n if complete_index['type'] == 'date':\n freq = complete_index['freq']\n date_format = complete_index['format']\n start = complete_index['start']\n end = complete_index['end']\n if isinstance(freq, dict):\n freq = pd.DateOffset(**{k: int(v) for k, v in freq.items()})\n complete_index = pd.date_range(start=start, end=end, freq=freq)\n complete_index = complete_index.strftime(date_format)\n else:\n raise ParamsValueError(f'Unknown complete index 
type: '\n                                   f'{complete_index["type"]}')\n\n    if not isinstance(index_values[0], tuple):\n        index_values = [(x,) for x in index_values]\n    if not isinstance(complete_index[0], tuple):\n        complete_index = [(x,) for x in complete_index]\n    new_tuples_index = [x + y for x in index_values for y in complete_index]\n\n    new_index = pd.MultiIndex.from_tuples(new_tuples_index, names=names)\n    new_df = new_df.reindex(new_index).reset_index()\n\n    if method == 'between' or method == 'between_and_after':\n        new_df = new_df[new_df[reference_col] >= new_df['start']]\n        del new_df['start']\n    if method == 'between' or method == 'between_and_before':\n        new_df = new_df[new_df[reference_col] <= new_df['end']]\n        del new_df['end']\n\n    return new_df", "response": "Adds missing rows to a DataFrame."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nextract the contents of a zip file into a dictionary.", "response": "def extract_zip(zip_file_path):\n    """\n    Returns:\n        dict: Dict[str, DataFrame]\n    """\n    dfs = {}\n    with zipfile.ZipFile(zip_file_path, mode='r') as z_file:\n        names = z_file.namelist()\n        for name in names:\n            content = z_file.read(name)\n            _, tmp_file_path = tempfile.mkstemp()\n            try:\n                with open(tmp_file_path, 'wb') as tmp_file:\n                    tmp_file.write(content)\n\n                dfs[name] = joblib.load(tmp_file_path)\n            finally:\n                # mkstemp creates a file, not a directory: os.remove, not shutil.rmtree\n                os.remove(tmp_file_path)\n    return dfs"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nextracting the tables from the given data.", "response": "def extract(data):\n    """\n    Args:\n        data (str | byte):\n\n    Returns:\n        dict: Dict[str, DataFrame]\n\n    """\n    _, tmp_file_path = tempfile.mkstemp()\n    try:\n        with open(tmp_file_path, 'wb') as tmp_file:\n            tmp_file.write(data)\n\n        if zipfile.is_zipfile(tmp_file_path):\n            return extract_zip(tmp_file_path)\n        else:\n            raise DataSdkError('Unsupported file type')\n    finally:\n        # mkstemp creates a file, not a directory: os.remove, not shutil.rmtree\n        os.remove(tmp_file_path)"} {"SOURCE": 
"codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef read_from_cache(self, domains=None):\n logger.info(f'Reading data from cache ({self.EXTRACTION_CACHE_PATH})')\n if domains is not None and isinstance(domains, list):\n dfs = {domain: self.read_entry(domain) for domain in domains}\n else:\n dfs = {name: self.read_entry(name)\n for name in os.listdir(self.EXTRACTION_CACHE_PATH)}\n return dfs", "response": "Reads the data from the cache."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nread a cache entry from the given file.", "response": "def read_entry(self, file_name):\n \"\"\"\n Args:\n file_name (str):\n\n Returns:\n pd.DataFrame:\n \"\"\"\n file_path = os.path.join(self.EXTRACTION_CACHE_PATH, file_name)\n logger.info(f'Reading cache entry: {file_path}')\n return joblib.load(file_path)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write(self, dfs):\n if not os.path.exists(self.EXTRACTION_CACHE_PATH):\n os.makedirs(self.EXTRACTION_CACHE_PATH)\n\n for name, df in dfs.items():\n file_path = os.path.join(self.EXTRACTION_CACHE_PATH, name)\n joblib.dump(df, filename=file_path)\n logger.info(f'Cache entry added: {file_path}')", "response": "Writes the data to the cache directory."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef compute_ffill_by_group(\n df,\n id_cols: List[str],\n reference_cols: List[str],\n value_col: str\n):\n \"\"\"\n Compute `ffill` with `groupby`\n Dedicated method as there is a performance issue with a simple groupby/fillna (2017/07)\n The method `ffill` propagates last valid value forward to next values.\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `id_cols` (*list of str*): names of columns used to create each group.\n - `reference_cols` (*list of str*): names of columns used to sort.\n - `value_col` (*str*): name of 
the column to fill.\n\n    ---\n\n    ### Example\n\n    **Input**\n\n    name | rank | value\n    :------:|:--------------:|:--------:\n    A | 1 | 2\n    A | 2 | 5\n    A | 3 | null\n    B | 1 | null\n    B | 2 | 7\n\n    ```cson\n    compute_ffill_by_group:\n      id_cols: ['name']\n      reference_cols: ['rank']\n      value_col: 'value'\n    ```\n\n    **Output**\n\n    name | rank | value\n    :------:|:--------------:|:--------:\n    A | 1 | 2\n    A | 2 | 5\n    A | 3 | 5\n    B | 1 | null\n    B | 2 | 7\n    """\n    check_params_columns_duplicate(id_cols + reference_cols + [value_col])\n    df = df.sort_values(by=id_cols + reference_cols)\n    df = df.set_index(id_cols)\n    df['fill'] = 1 - df[value_col].isnull().astype(int)\n    # group on every level of the index (one level per id column)\n    df['fill'] = df.groupby(\n        level=list(range(0, len(id_cols)))\n    )['fill'].cumsum()\n    df[value_col] = df[value_col].ffill()\n    df.loc[df['fill'] == 0, value_col] = None\n    del df['fill']\n    return df.reset_index()", "response": "Forward-fill the last valid value of a column within each group of a DataFrame."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef two_values_melt(\n    df,\n    first_value_vars: List[str],\n    second_value_vars: List[str],\n    var_name: str,\n    value_name: str\n):\n    """\n    Transforms one or multiple columns into rows.\n    Unlike melt function, two value columns can be returned by\n    the function (e.g. 
an evolution column and a price column)\n\n    ---\n\n    ### Parameters\n\n    *mandatory :*\n    - `first_value_vars` (*list of str*): names of the columns corresponding to the first returned value column\n    - `second_value_vars` (*list of str*): names of the columns corresponding to the second returned value column\n    - `var_name` (*str*): name of the column containing values in first_value_vars\n    - `value_name` (*str*): suffix of the two value columns (suffix_first / suffix_second)\n    ---\n\n    ### Example\n\n    **Input**\n\n    | Region | avg | total | evo_avg | evo_total |\n    |:---------:|:--------:|:-----------:|:--------:|:-----------:|\n    | A | 50| 100 | 1 | 4 |\n    | B | 40 | 250 | 2 | 5 |\n\n\n    ```cson\n    two_values_melt:\n      first_value_vars: ["avg", "total"]\n      second_value_vars: ["evo_avg", "evo_total"]\n      var_name: "type"\n      value_name: "value"\n    ```\n\n    **Output**\n\n    | Region | type | value_first | value_second |\n    |:---------:|:--------:|:------------:|:-------------:|\n    | A | avg| 50 | 1 |\n    | A | total| 100 | 4 |\n    | B | avg| 40 | 2 |\n    | B | total| 250 | 5 |\n    """\n    value_name_first = value_name + '_first'\n    value_name_second = value_name + '_second'\n\n    # Melt on the first value columns\n    melt_first_value = pd.melt(df,\n                               id_vars=[col for col in list(df) if\n                                        col not in first_value_vars],\n                               value_vars=first_value_vars,\n                               var_name=var_name,\n                               value_name=value_name_first)\n    melt_first_value.drop(second_value_vars, axis=1, inplace=True)\n\n    # Melt on the second value columns\n    melt_second_value = pd.melt(df,\n                                id_vars=[col for col in list(df) if\n                                         col not in second_value_vars],\n                                value_vars=second_value_vars,\n                                var_name=var_name,\n                                value_name=value_name_second)\n\n    # Since there are two value columns, there is no need to keep the\n    # second_value_vars names. 
And it will make things easier for the merge.\n normalize_types = {k: v for k, v in zip(second_value_vars, first_value_vars)}\n melt_second_value.replace(normalize_types, inplace=True)\n melt_second_value.drop(first_value_vars, axis=1, inplace=True)\n\n on_cols = list(melt_first_value)\n on_cols.remove(value_name_first)\n return pd.merge(melt_first_value, melt_second_value, on=on_cols, how='outer')", "response": "This function transforms one or multiple columns into two values."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconcatenates columns element - wise with a new column", "response": "def concat(\n df,\n *,\n columns: List[str],\n new_column: str,\n sep: str = None\n):\n \"\"\"\n Concatenate `columns` element-wise\n See [pandas doc](\n https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.cat.html) for more information\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `columns` (*list*): list of columns to concatenate (at least 2 columns)\n - `new_column` (*str*): the destination column\n\n *optional :*\n - `sep` (*str*): the separator\n \"\"\"\n if len(columns) < 2:\n raise ValueError('The `columns` parameter needs to have at least 2 columns')\n first_col, *other_cols = columns\n df.loc[:, new_column] = df[first_col].astype(str).str.cat(df[other_cols].astype(str), sep=sep)\n return df"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef contains(\n df,\n column: str,\n *,\n pat: str,\n new_column: str = None,\n case: bool = True,\n na: Any = None,\n regex: bool = True\n):\n \"\"\"\n Test if pattern or regex is contained within strings of `column`\n See [pandas doc](\n https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html) for more information\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): the column\n - `pat` (*str*): character sequence or regular expression.\n\n *optional :*\n 
- `new_column` (*str*): the destination column (if not set, `column` will be used)\n    - `case` (*boolean*): if true, case sensitive.\n    - `na`: fill value for missing values.\n    - `regex` (*boolean*): default true\n    """\n    new_column = new_column or column\n    df.loc[:, new_column] = df[column].str.contains(pat, case=case, na=na, regex=regex)\n    return df", "response": "Test if pattern or regex is contained within strings of column."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef repeat(\n    df,\n    column: str,\n    *,\n    times: int,\n    new_column: str = None\n):\n    """\n    Duplicate each string in `column` by the indicated number of times\n    See [pandas doc](\n    https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.repeat.html) for more information\n\n    ---\n\n    ### Parameters\n\n    *mandatory :*\n    - `column` (*str*): the column\n    - `times` (*int*): times to repeat the string\n\n    *optional :*\n    - `new_column` (*str*): the destination column (if not set, `column` will be used)\n    """\n    new_column = new_column or column\n    df.loc[:, new_column] = df[column].str.repeat(times)\n    return df", "response": "Duplicate each string in column by the indicated number of times."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreplaces occurrences of pattern or regular expression in column with some other string.", "response": "def replace_pattern(\n    df,\n    column: str,\n    *,\n    pat: str,\n    repl: str,\n    new_column: str = None,\n    case: bool = True,\n    regex: bool = True\n):\n    """\n    Replace occurrences of pattern/regex in `column` with some other string\n    See [pandas doc](\n    https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.replace.html) for more information\n\n    ---\n\n    ### Parameters\n\n    *mandatory :*\n    - `column` (*str*): the column\n    - `pat` (*str*): character sequence or regular expression\n    - `repl` (*str*): replacement string\n\n    *optional :*\n    - 
`new_column` (*str*): the destination column (if not set, `column` will be used)\n    - `case` (*boolean*): if true, case sensitive.\n    - `regex` (*boolean*): default true\n    """\n    new_column = new_column or column\n    df.loc[:, new_column] = df[column].str.replace(pat, repl, case=case, regex=regex)\n    return df"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef log_time(logger):\n    def decorator(func):\n        @wraps(func)\n        def wrapper(*args, **kwargs):\n            start = time.time()\n            result = func(*args, **kwargs)\n            end = time.time()\n            _log_time(logger, func.__name__, start, end)\n            return result\n        return wrapper\n    return decorator", "response": "Decorator to log the execution time of a function"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef log(logger=None, start_message='Starting...', end_message='Done...'):\n    def actual_log(f, real_logger=logger):\n        logger = real_logger or _logger\n\n        @wraps(f)\n        def timed(*args, **kwargs):\n            logger.info(f'{f.__name__} - {start_message}')\n            start = time.time()\n            res = f(*args, **kwargs)\n            end = time.time()\n            logger.info(f'{f.__name__} - {end_message} (took {end - start:.2f}s)')\n            return res\n\n        return timed\n\n    if callable(logger):\n        return actual_log(logger, real_logger=None)\n    return actual_log", "response": "A log decorator that logs a start message, an end message and the execution time of the decorated function."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nallows to apply a function f(df: DataFrame) -> DataFrame on dfs by specifying the key E.g. instead of writing: def process_domain1(dfs): df = dfs['domain1'] # actual process dfs['domain1'] = df return dfs You can write: @domain('domain1') def process_domain1(df): #actual process return df", "response": "def domain(domain_name):\n    """\n    Allow to apply a function f(df: DataFrame) -> DataFrame on dfs by specifying the key\n    E.g. instead of 
writing:\n        def process_domain1(dfs):\n            df = dfs['domain1']\n            # actual process\n            dfs['domain1'] = df\n            return dfs\n\n    You can write:\n        @domain('domain1')\n        def process_domain1(df):\n            # actual process\n            return df\n    """\n    def decorator(func):\n        @wraps(func)\n        def wrapper(*args, **kwargs):\n            dfs, *args = args\n            if not isinstance(dfs, dict):\n                raise TypeError(f'{dfs} is not a dict')\n            df = dfs.pop(domain_name)\n            df = func(df, *args, **kwargs)\n            return {domain_name: df, **dfs}\n\n        return wrapper\n\n    return decorator"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef cache(  # noqa: C901\n    requires=None,\n    disabled=False,\n    applied_on_method=False,\n    check_param=True,\n    limit=None\n):\n    """ Avoid recomputing a function if its parameters and its source code haven't changed.\n\n    Args:\n        requires: list of dependencies (functions or function names)\n        disabled (bool): disable the cache mechanism for this function (useful if you\n            only want to use the dependency mechanism)\n        applied_on_method (bool): ignore the first argument (useful to ignore "self")\n        check_param (True, False or a str): the name of the parameter to check.\n            False to not check any of them.\n            True (default) to check all of them.\n        limit (int or None): number of cache entries to keep (no limit by default)\n    """\n    if not requires:\n        requires = []\n    elif isinstance(requires, collections.Callable):\n        requires = [requires]\n\n    if not isinstance(check_param, (bool, str)):\n        raise TypeError("'check_param' must be a str (name of the param to check) or a bool")\n    if limit is not None and not isinstance(limit, int):\n        raise TypeError("'limit' must be an int (number of cache entries to keep) or None")\n\n    # We keep data in the function attributes so that this data\n    # is not erased between two calls:\n    if not hasattr(cache, 'funcs_references'):\n        cache.funcs_references = {}  # dict of {function_name -> function_object (or None)}\n    if not 
hasattr(cache, 'dependencies'):\n cache.dependencies = {} # dict of {function_name -> [list of function names]}\n if not hasattr(cache, 'memories'):\n cache.memories = {} # dict of {thread_id -> joblib.Memory object}\n\n def decorator(func):\n \"\"\" This code is executed when the augment module is read (when decorator is applied).\n Here we populate cache.funcs_references and cache.dependencies to use them later. \"\"\"\n cache.funcs_references[func.__name__] = get_orig_function(func)\n dependencies_names = []\n for requirement in requires:\n if isinstance(requirement, collections.Callable):\n req_name = requirement.__name__\n cache.funcs_references[req_name] = get_orig_function(requirement)\n elif requirement not in cache.funcs_references:\n req_name = requirement\n cache.funcs_references[req_name] = None\n dependencies_names.append(req_name)\n\n cache.dependencies[func.__name__] = dependencies_names\n\n @wraps(func)\n def wrapper(*args, **kwargs):\n \"\"\" This code is executed when a decorated function is actually executed.\n It uses the previously built dependency tree (see above). 
\"\"\"\n current_memory = cache.memories.get(current_thread().name)\n if disabled is True or current_memory is None:\n return func(*args, **kwargs)\n\n # if cache is enabled, we compute the md5 hash of the concatenated source codes\n # of all the dependencies.\n concatenated_source_code = ''\n dependencies = resolve_dependencies(func.__name__, cache.dependencies)\n for func_name in dependencies:\n function = cache.funcs_references[func_name]\n if function is None:\n raise Exception(f\"Can't get source code of function '{func_name}'\")\n source_code = get_func_sourcecode(function)\n concatenated_source_code += source_code\n md5_hash = md5(str.encode(concatenated_source_code)).hexdigest()\n\n # Add extra parameters so that joblib checks they didnt have changed:\n tmp_extra_kwargs = {\n '__func_dependencies_hash__': md5_hash,\n '__original_func_name__': func.__name__,\n }\n\n if check_param is True:\n kwargs.update(tmp_extra_kwargs)\n\n if applied_on_method:\n self_arg, args = args[0], args[1:]\n\n @wraps(func)\n def f(*args, **kwargs):\n # delete the extra parameters that the underlying function doesnt expect:\n for k in tmp_extra_kwargs.keys():\n del kwargs[k]\n\n if applied_on_method:\n args = (self_arg,) + args\n return func(*args, **kwargs)\n\n f = current_memory.cache(f)\n result = f(*args, **kwargs)\n else:\n if isinstance(check_param, str):\n check_only_param_value = get_param_value_from_func_call(\n param_name=check_param,\n func=func,\n call_args=args,\n call_kwargs=kwargs,\n )\n tmp_extra_kwargs['__check_only__'] = check_only_param_value\n\n @wraps(func)\n def f(**tmp_extra_kwargs):\n return func(*args, **kwargs)\n\n f = current_memory.cache(f)\n result = f(**tmp_extra_kwargs)\n\n if limit is not None:\n clean_cachedir_old_entries(f.store_backend, func.__name__, limit)\n\n return result\n\n return wrapper\n\n return decorator", "response": "A decorator that creates a cache object for the given function."} {"SOURCE": "codesearchnet", "instruction": "Make a 
summary of the following Python 3 code\ndef setup_cachedir(cachedir, mmap_mode=None, bytes_limit=None):\n    if not hasattr(cache, 'memories'):\n        cache.memories = {}\n\n    memory = joblib.Memory(\n        location=cachedir,\n        verbose=0,\n        mmap_mode=mmap_mode,\n        bytes_limit=bytes_limit,\n    )\n    cache.memories[current_thread().name] = memory\n    return memory", "response": "This function creates a joblib.Memory object and registers it in the cache object for the current thread."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef melt(\n    df,\n    id: List[str],\n    value: List[str],\n    dropna=False\n):\n    """\n    A melt will transform a dataset by creating a column "variable" and a column "value".\n    This function is useful to transform a dataset into a format where one or more columns\n    are identifier variables, while all other columns, considered measured\n    variables (value_vars), are \u201cunpivoted\u201d to the row axis, leaving just two\n    non-identifier columns, `"variable"` and `"value"`.\n\n    ---\n\n    ### Parameters\n\n    *mandatory :*\n    - `id` (*list of str*): names of the columns that must be kept in column.\n    - `value` (*list of str*): names of the columns that will be transformed in long format (in rows).\n\n    *optional :*\n    - `dropna` (*boolean*): It allows you to drop missing values.\n\n    ---\n\n    ### Example\n\n    **Input**\n\n    | my_label | my_value | my_column_1 | my_column_2 | info_1 | info_2 | info_3 |\n    |:--------:|:--------:|:-----------:|:-----------:|:------:|:------:|:------:|\n    | toto | 10 | S45 | Lalaland | 10 | 20 | None |\n\n    ```cson\n    melt:\n      id: ['my_label', 'my_value', 'my_column_1', 'my_column_2']\n      value: ['info_1', 'info_2', 'info_3']\n      dropna: true\n    ```\n\n    **Output**\n\n    | my_label | my_value | my_column_1 | my_column_2 | variable | value |\n    |:--------:|:--------:|:-----------:|:-----------:|:--------:|:------:|\n    | toto | 10 | S45 | Lalaland | info_1 | 10 |\n    | toto | 10 | S45 | Lalaland | info_2 | 20 |\n    """\n    df = df[(id + 
value)]\n df = pd.melt(df, id_vars=id, value_vars=value)\n if dropna:\n df = df.dropna(subset=['value'])\n\n return df", "response": "A melt transform function for a given set of columns."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef rename(\n df,\n values: Dict[str, Dict[str, str]] = None,\n columns: Dict[str, Dict[str, str]] = None,\n locale: str = None\n):\n \"\"\"\n Replaces data values and column names according to the locale\n\n ---\n\n ### Parameters\n\n - `values` (optional: dict):\n - key: term to be replaced\n - value:\n - key: the locale e.g. 'en' or 'fr'\n - value: term's translation\n - `columns` (optional: dict):\n - key: columns name to be replaced\n - value:\n - key: the locale e.g. 'en' or 'fr'\n - value: column name's translation\n - `locale` (optional: str): the locale you want to use.\n By default the client locale is used.\n\n ---\n\n ### Example\n\n **Input**\n\n | label | value |\n |:----------------:|:-----:|\n | France | 100 |\n | Europe wo France | 500 |\n\n ```cson\n rename:\n values:\n 'Europe wo France':\n 'en': 'Europe excl. France'\n 'fr': 'Europe excl. France'\n columns:\n 'value':\n 'en': 'revenue'\n 'fr': 'revenue'\n ```\n\n **Output**\n\n | label | revenue |\n |:-------------------:|:-------:|\n | France | 100 |\n | Europe excl. 
France | 500 |\n\n    """\n    if values:\n        to_replace = list(values.keys())\n        value = [values[term][locale] for term in values]\n        df = df.replace(to_replace=to_replace, value=value)\n    if columns:\n        _keys = list(columns.keys())\n        _values = [column[locale] for column in columns.values()]\n        columns = dict(list(zip(_keys, _values)))\n        df = df.rename(columns=columns)\n    return df", "response": "Replaces data values and renames columns according to the given locale."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncompute the cumulative sum of a group of columns.", "response": "def compute_cumsum(\n    df,\n    id_cols: List[str],\n    reference_cols: List[str],\n    value_cols: List[str],\n    new_value_cols: List[str] = None,\n    cols_to_keep: List[str] = None\n):\n    """\n    Compute cumsum for a group of columns.\n\n    ---\n\n    ### Parameters\n\n    *mandatory :*\n    - `id_cols` (*list*): the columns id to create each group\n    - `reference_cols` (*list*): the columns to order the cumsum\n    - `value_cols` (*list*): the columns to cumsum\n\n    *optional :*\n    - `new_value_cols` (*list*): the new columns with the result cumsum\n    - `cols_to_keep` (*list*): other columns to keep in the dataset.\n      This option can be used if there is only one row by group [id_cols + reference_cols]\n\n    ---\n\n    ### Example\n\n    **Input**\n\n    MONTH | DAY | NAME | VALUE | X\n    :---:|:---:|:--:|:---:|:---:\n    1 | 1 | A | 1 | lo\n    2 | 1 | A | 1 | lo\n    2 | 15 | A | 1 | la\n    1 | 15 | B | 1 | la\n\n    ```cson\n    compute_cumsum:\n      id_cols: ['NAME']\n      reference_cols: ['MONTH', 'DAY']\n      value_cols: ['VALUE']\n      cols_to_keep: ['X']\n    ```\n\n    **Output**\n\n    NAME | MONTH | DAY | X | VALUE\n    :---:|:---:|:--:|:---:|:---:\n    A | 1 | 1 | lo | 1\n    A | 2 | 1 | lo | 2\n    A | 2 | 15 | la | 3\n    B | 1 | 15 | la | 1\n    """\n    if cols_to_keep is None:\n        cols_to_keep = []\n\n    if new_value_cols is None:\n        new_value_cols = value_cols\n    if len(value_cols) != len(new_value_cols):\n        raise ParamsValueError('`value_cols` and `new_value_cols` need '\n                               'to 
have the same number of elements')\n\n    check_params_columns_duplicate(id_cols + reference_cols + cols_to_keep + value_cols)\n\n    levels = list(range(0, len(id_cols)))\n\n    df = df.groupby(id_cols + reference_cols + cols_to_keep).sum()\n    df[new_value_cols] = df.groupby(level=levels)[value_cols].cumsum()\n\n    return df.reset_index()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef combine_columns_aggregation(\n    df,\n    id_cols: List[str],\n    cols_for_combination: Dict[str, str],\n    agg_func: Union[str, List[str], Dict[str, str]] = 'sum'\n):\n    """\n    Aggregates data to reproduce "All" category for requester\n\n    ---\n\n    ### Parameters\n\n    *mandatory :*\n    - `id_cols` (*list*): the id columns to group\n    - `cols_for_combination` (*dict*): columns corresponding to\n      the filters as key and their default value as value\n\n    *optional :*\n    - `agg_func` (*str*, *list* or *dict*): the function(s) to use for aggregating the data.\n      Accepted combinations are:\n      - string function name\n      - list of functions and/or function names, e.g. 
[np.sum, 'mean']\n - dict of axis labels -> functions, function names or list of such.\n \"\"\"\n requesters_cols = list(cols_for_combination.keys())\n requester_combination = [\n list(item) for i in range(0, len(requesters_cols) + 1)\n for item in itertools.combinations(requesters_cols, i)]\n dfs_result = []\n for comb in requester_combination:\n df_tmp = df.groupby(id_cols + comb).agg(agg_func).reset_index()\n for key in (set(cols_for_combination.keys()) - set(comb)):\n df_tmp[key] = cols_for_combination[key]\n dfs_result.append(df_tmp)\n\n return pd.concat(dfs_result, sort=False, ignore_index=True)", "response": "Combine the columns of the given dataframe with the aggregation function."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_param_value_from_func_call(param_name, func, call_args, call_kwargs):\n signature = inspect.signature(func)\n params_list = signature.parameters.keys()\n if param_name not in params_list:\n raise TypeError(f\"'{param_name}' not found in {func.__name__} \"\n f\"parameters list ([{params_list}])\")\n call = signature.bind(*call_args, **call_kwargs)\n call.apply_defaults()\n return call.arguments[param_name]", "response": "Get the value of a parameter from a function call."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_func_sourcecode(func):\n\n def getsource(func):\n lines, lnum = getsourcelines(func)\n return ''.join(lines)\n\n def getsourcelines(func):\n lines, lnum = findsource(func)\n return inspect.getblock(lines[lnum:]), lnum + 1\n\n def findsource(func):\n file = getfile(func) # file path\n module = inspect.getmodule(func, file)\n lines = linecache.getlines(file, module.__dict__)\n code = func.__code__\n lnum = code.co_firstlineno - 1\n pat = re.compile(r'^(\\s*def\\s)|(\\s*async\\s+def\\s)|(.*(?<!\\w)lambda(:|\\s))|^(\\s*@)')\n while lnum > 
0:\n if pat.match(lines[lnum]):\n break\n lnum = lnum - 1 # pragma: no cover\n return lines, lnum\n\n def getfile(func):\n module = inspect.getmodule(func)\n return module.__file__\n\n try:\n return inspect.getsource(func)\n except Exception:\n return getsource(func)", "response": "Try to get the sourcecode using standard inspect.getsource, with a fallback implementation if it fails."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a slugified name", "response": "def slugify(name, separator='-'):\n \"\"\"Returns a slugified name (we allow _ to be used)\"\"\"\n return _slugify(name, regex_pattern=re.compile('[^-_a-z0-9]+'), separator=separator)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nresolving the dependencies for a given function name and a mapping of function dependencies.", "response": "def resolve_dependencies(func_name, dependencies):\n \"\"\" Given a function name and a mapping of function dependencies,\n returns a list of *all* the dependencies for this function. 
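The recursive dependency resolution described above can be exercised with a small standalone sketch. The dependency mapping used below is hypothetical, for illustration only; the function body mirrors the implementation shown in this record so the snippet runs on its own:

```python
# Standalone sketch of the recursive dependency resolution described above.
# The `deps` mapping is a made-up example, not part of the library.

def resolve_dependencies(func_name, dependencies):
    """Return the sorted list of *all* (transitive) dependencies of func_name."""
    def _resolve_deps(name, acc):
        if name in acc:  # already visited: also protects against cycles
            return
        acc.append(name)
        for dep in dependencies.get(name, []):
            _resolve_deps(dep, acc)

    acc = []
    _resolve_deps(func_name, acc)
    return sorted(acc)

deps = {'c': ['a', 'b'], 'b': ['a'], 'a': []}
print(resolve_dependencies('c', deps))  # -> ['a', 'b', 'c']
```

Note that the function itself is included in the result, which is why resolving `'c'` returns three names rather than two.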
\"\"\"\n\n def _resolve_deps(func_name, func_deps):\n \"\"\" Append dependencies recursively to func_deps (accumulator) \"\"\"\n if func_name in func_deps:\n return\n\n func_deps.append(func_name)\n for dep in dependencies.get(func_name, []):\n _resolve_deps(dep, func_deps)\n\n func_deps = []\n _resolve_deps(func_name, func_deps)\n return sorted(func_deps)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef clean_cachedir_old_entries(cachedir: StoreBackendBase, func_name: str, limit: int) -> int:\n if limit < 1:\n raise ValueError(\"'limit' must be greater or equal to 1\")\n\n cache_entries = get_cachedir_entries(cachedir, func_name)\n cache_entries = sorted(cache_entries, key=lambda e: e.last_access, reverse=True)\n cache_entries_to_remove = cache_entries[limit:]\n for entry in cache_entries_to_remove:\n shutil.rmtree(entry.path, ignore_errors=True)\n\n return len(cache_entries_to_remove)", "response": "Remove old entries from the cache"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef roll_up(\n df,\n levels: List[str],\n groupby_vars: List[str],\n extra_groupby_cols: List[str] = None,\n var_name: str = 'type',\n value_name: str = 'value',\n agg_func: str = 'sum',\n drop_levels: List[str] = None\n):\n \"\"\"\n Creates aggregates following a given hierarchy\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `levels` (*list of str*): name of the columns composing the hierarchy (from the top to the bottom level).\n - `groupby_vars` (*list of str*): name of the columns with value to aggregate.\n - `extra_groupby_cols` (*list of str*) optional: other columns used to group in each level.\n\n *optional :*\n - `var_name` (*str*) : name of the result variable column. By default, `\u201ctype\u201d`.\n - `value_name` (*str*): name of the result value column. By default, `\u201cvalue\u201d`.\n - `agg_func` (*str*): name of the aggregation operation. 
By default, `\u201csum\u201d`.\n - `drop_levels` (*list of str*): the names of the levels that you may want to discard from the output.\n\n ---\n\n ### Example\n\n **Input**\n\n | Region | City | Population |\n |:---------:|:--------:|:-----------:|\n | Idf | Panam| 200 |\n | Idf | Antony | 50 |\n | Nord | Lille | 20 |\n\n ```cson\n roll_up:\n levels: [\"Region\", \"City\"]\n groupby_vars: \"Population\"\n ```\n\n **Output**\n\n | Region | City | Population | value | type |\n |:---------:|:--------:|:-----------:|:--------:|:------:|\n | Idf | Panam| 200 | Panam | City |\n | Idf | Antony | 50 | Antony | City |\n | Nord | Lille | 20 | Lille | City |\n | Idf | Nan | 250 | Idf | Region |\n | Nord | Nan | 20 | Nord | Region |\n \"\"\"\n dfs = list()\n groupby_cols_cpy = list(levels)\n levels_cpy = list(levels)\n levels_cpy.reverse()\n\n extra_groupby_cols = extra_groupby_cols or []\n drop_levels = drop_levels or []\n previous_level = None\n for top_level in levels_cpy:\n # Aggregation\n gb_df = getattr(\n df.groupby(groupby_cols_cpy + extra_groupby_cols)[groupby_vars],\n agg_func)().reset_index()\n\n # Melt-like columns\n gb_df[var_name] = top_level\n gb_df[value_name] = gb_df[top_level]\n dfs.append(gb_df)\n if previous_level in drop_levels:\n del dfs[-2]\n previous_level = top_level\n\n # Remove one level each time in the groupby: lowest level column needs\n # a groupby with every levels, the next level needs every one except\n # the lowest, etc. 
until the top level column that needs only itself\n # inside the groupby.\n groupby_cols_cpy.pop()\n return pd.concat(dfs, sort=False).reset_index()", "response": "Creates an aggregate of the given levels and returns the resulting tree."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef argmax(df, column: str, groups: Union[str, List[str]] = None):\n if groups is None:\n df = df[df[column] == df[column].max()].reset_index(drop=True)\n else:\n group_max = df.groupby(groups)[column].transform('max')\n df = (df\n .loc[df[column] == group_max, :]\n .drop_duplicates()\n .reset_index(drop=True)\n )\n return df", "response": "Return the DataFrame with the maximum value in a column."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the row of the dataframe corresponding to the minimal value in a column.", "response": "def argmin(df, column: str, groups: Union[str, List[str]] = None):\n \"\"\"\n Keep the row of the data corresponding to the minimal value in a column\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (str): name of the column whose minimal value you want to keep\n\n *optional :*\n - `groups` (*str or list(str)*): name of the column(s) used for 'groupby' logic\n (the function will return the argmin by group)\n ---\n\n ### Example\n\n **Input**\n\n | variable | wave | year | value |\n |:--------:|:-------:|:--------:|:-----:|\n | toto | wave 1 | 2014 | 300 |\n | toto | wave 1 | 2015 | 250 |\n | toto | wave 1 | 2016 | 450 |\n\n ```cson\n argmin:\n column: 'value'\n ```\n\n **Output**\n\n | variable | wave | year | value |\n |:--------:|:-------:|:--------:|:-----:|\n | toto | wave 1 | 2015 | 250 |\n \"\"\"\n if groups is None:\n df = df[df[column] == df[column].min()].reset_index(drop=True)\n else:\n group_min = df.groupby(groups)[column].transform('min')\n df = (df\n .loc[df[column] == group_min, :]\n .drop_duplicates()\n 
.reset_index(drop=True)\n )\n return df"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fillna(df, column: str, value=None, column_value=None):\n if column not in df.columns:\n df[column] = nan\n\n if value is not None and column_value is not None:\n raise ValueError('You cannot set both the parameters value and column_value')\n\n if value is not None:\n df[column] = df[column].fillna(value)\n\n if column_value is not None:\n if column_value not in df.columns:\n raise ValueError(f'\"{column_value}\" is not a valid column name')\n df[column] = df[column].fillna(df[column_value])\n\n return df", "response": "Fill NaN values from a column in a DataFrame with a given value or a column_value."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef date_requester_generator(\n df: pd.DataFrame,\n date_column: str,\n frequency: str,\n date_column_format: str = None,\n format: str = '%Y-%m-%d',\n granularities: Dict[str, str] = None,\n others_format: Dict[str, str] = None,\n times_delta: Dict[str, str] = None\n) -> pd.DataFrame:\n \"\"\"\n From a dataset containing dates in a column, return a dataset\n with at least 3 columns :\n - \"DATE\" : Label of date\n - \"DATETIME\" : Date in datetime dtype\n - \"GRANULARITY\" : Granularity of date\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `date_column` (*str*): name of column containing the date in the dataframe\n - `frequency` (*str*): see [pandas doc](\n http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases)\n\n *optional :*\n - `date_column_format` (*str*): format of the date in date_column\n - `format` (*str*): format of the date (see [pandas doc](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n By default, the format is set to `'%Y-%m-%d'`\n **WARNING**: only use if `granularities` is None.\n - `granularities` (*dict*):\n - key (*str*): name of the 
granularity\n - value (*str*): Format of the granularity e.g. '%d/%m/%Y' (see [pandas doc](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n - `others_format` (*dict*) : Add new columns for each key\n - key (*str*) : name of the column\n - value (*str*): format of the granularity e.g. '%d/%m/%Y' (see [pandas doc](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior))\n - `times_delta` (*dict*) : Add new columns for each key\n - key (*str*) : name of the column\n - value (*str*): time delta (e.g. '+1 day', '+3 day', '-4 month')\n\n ---\n\n ### Example\n\n **Input**\n\n date | kpi\n :---:|:-----:\n 2018-01-01 | 1\n 2018-01-05 | 2\n 2018-01-04 | 3\n 2018-01-03 | 4\n 2018-01-02 | 5\n\n ```cson\n date_requester_generator:\n date_column: 'date'\n frequency: 'D'\n granularities:\n 'day': '%d/%m/%Y'\n 'Semaine': '%W'\n others_format:\n 'year': '%Y'\n ```\n\n **Ouput**\n\n DATE | DATETIME | GRANULARITY | year\n :---------:|:----------:|:-----------:|:---:\n 01/01/2018 | 2018-01-01 | day | 2018\n 02/01/2018 | 2018-01-02 | day | 2018\n 03/01/2018 | 2018-01-03 | day | 2018\n 04/01/2018 | 2018-01-04 | day | 2018\n 05/01/2018 | 2018-01-05 | day | 2018\n 01 | 2018-01-01 | Semaine | 2018\n \"\"\"\n\n start_date = pd.to_datetime(df[date_column], format=date_column_format).min()\n end_date = pd.to_datetime(df[date_column], format=date_column_format).max()\n\n granularities = granularities or {'date': format}\n others_format = others_format or {}\n times_delta = times_delta or {}\n\n # Base DataFrame\n columns_list = ['DATE', 'DATETIME', 'GRANULARITY', *others_format, *times_delta]\n result_df = {col_name: [] for col_name in columns_list}\n\n # Generate the range\n date_range = pd.date_range(start=start_date, end=end_date, freq=frequency)\n\n for granularity_name, granularity_format in granularities.items():\n date_range_label = date_range.strftime(granularity_format)\n a = list(set(date_range_label))\n index_unique = 
list(set([a.index(x) for x in date_range_label]))\n date_range_datetime = date_range[index_unique]\n date_range_label = date_range_label.unique()\n\n result_df['DATE'] += list(date_range_label)\n result_df['DATETIME'] += list(date_range_datetime)\n result_df['GRANULARITY'] += [granularity_name] * len(date_range_label)\n\n for col_name, other_format in others_format.items():\n result_df[col_name] += list(date_range_datetime.strftime(other_format))\n\n for col_name, time_delta in times_delta.items():\n result_df[col_name] += list((date_range_datetime + pd.Timedelta(time_delta))\n .strftime(granularity_format))\n\n return pd.DataFrame(result_df)", "response": "Generate a date requester for a given date column."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nnormalize symbolic date values into a valid date.", "response": "def _norm_date(datestr: str, date_fmt: str) -> date:\n \"\"\"normalize symbolic date values (e.g. 'TODAY')\n\n Convert a symbolic value into a valid date.\n Currently known symbolic values are 'TODAY', 'YESTERDAY' and 'TOMORROW'.\n\n NOTE: This function will return `date` (not `datetime`) instances.\n\n Parameters:\n `datestr`: the date to parse, formatted as `date_fmt`\n `date_fmt`: expected output date format\n\n Returns:\n The interpreted date as a datetime.date object.\n If `datestr` doesn't match any of the known symbolic names, it just parses it.\n\n \"\"\"\n try:\n days = {'TODAY': 0, 'YESTERDAY': -1, 'TOMORROW': 1}[datestr.upper()]\n return date.today() + pd.Timedelta(days=days)\n except KeyError:\n return datetime.strptime(datestr, date_fmt).date()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef add_offset(dateobj, hr_offset: str, sign: str):\n sign_coeff = 1 if sign == '+' else -1\n try:\n return dateobj + sign_coeff * pd.Timedelta(hr_offset)\n except ValueError:\n # pd.Timedelta could not parse the offset, let's try harder\n match = 
TIMEDELTA_RGX.match(hr_offset)\n if match is not None:\n groups = match.groupdict()\n unit = groups['unit'].lower()[0]\n num = sign_coeff * int(groups['num'])\n # is it a week ?\n if unit == 'w':\n return dateobj + num * timedelta(weeks=1)\n # or a month ?\n if unit == 'm':\n return add_months(dateobj, num)\n # or a year ?\n if unit == 'y':\n return add_years(dateobj, num)\n # we did what we could, just re-raise the original exception\n raise", "response": "add a human readable offset to dateobj and return corresponding date."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a date object with nb_months months added.", "response": "def add_months(dateobj, nb_months: int):\n \"\"\"return `dateobj` + `nb_months`\n\n If landing date doesn't exist (e.g. february, 30th), return the last\n day of the landing month.\n\n >>> add_months(date(2018, 1, 1), 1)\n datetime.date(2018, 2, 1)\n >>> add_months(date(2018, 1, 1), -1)\n datetime.date(2017, 12, 1)\n >>> add_months(date(2018, 1, 1), 25)\n datetime.date(2020, 2, 1)\n >>> add_months(date(2018, 1, 1), -25)\n datetime.date(2015, 12, 1)\n >>> add_months(date(2018, 1, 31), 1)\n datetime.date(2018, 2, 28)\n \"\"\"\n nb_years, nb_months = divmod(nb_months, 12)\n month = dateobj.month + nb_months\n if month > 12:\n nb_years += 1\n month -= 12\n year = dateobj.year + nb_years\n lastday = monthrange(year, month)[1]\n return dateobj.replace(year=year, month=month, day=min(lastday, dateobj.day))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd nb_years to dateobj.", "response": "def add_years(dateobj, nb_years):\n \"\"\"return `dateobj` + `nb_years`\n\n If landing date doesn't exist (e.g. 
february, 30th), return the last\n day of the landing month.\n\n >>> add_years(date(2018, 1, 1), 1)\n datetime.date(2019, 1, 1)\n >>> add_years(date(2018, 1, 1), -1)\n datetime.date(2017, 1, 1)\n >>> add_years(date(2020, 2, 29), 1)\n datetime.date(2021, 2, 28)\n >>> add_years(date(2020, 2, 29), -1)\n datetime.date(2019, 2, 28)\n \"\"\"\n year = dateobj.year + nb_years\n lastday = monthrange(year, dateobj.month)[1]\n return dateobj.replace(year=year, day=min(lastday, dateobj.day))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parse_date(datestr: str, date_fmt: str) -> date:\n rgx = re.compile(r'\\((?P<date>.*)\\)(\\s*(?P<sign>[+-])(?P<offset>.*))?$')\n datestr = datestr.strip()\n match = rgx.match(datestr)\n # if regexp doesn't match, date must match the expected format\n if match is None:\n return _norm_date(datestr, date_fmt)\n datestr = match.group('date').strip()\n dateobj = _norm_date(datestr, date_fmt)\n offset = match.group('offset')\n if offset:\n return add_offset(dateobj, offset, match.group('sign'))\n return dateobj", "response": "parse a date string and return corresponding date object."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef filter_by_date(\n df,\n date_col: str,\n date_format: str = '%Y-%m-%d',\n start: str = None,\n stop: str = None,\n atdate: str = None\n):\n \"\"\"\n Filter your dataframe by date.\n\n This function will interpret `start`, `stop` and `atdate` and build\n the corresponding date range. The caller must specify either:\n\n - `atdate`: keep all rows matching this date exactly,\n - `start`: keep all rows matching this date onwards.\n - `stop`: keep all rows matching dates before this one.\n - `start` and `stop`: keep all rows between `start` and `stop`,\n\n Any other combination will raise an error. 
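The month-end clamping behaviour documented in the `add_months` doctests above can be checked with a small standalone run. The helper is copied here (stdlib only) so the snippet is self-contained; the example dates are arbitrary:

```python
# Self-contained copy of the month-arithmetic helper described above:
# when the landing day doesn't exist (e.g. February 30th), it clamps
# to the last day of the landing month.
from calendar import monthrange
from datetime import date

def add_months(dateobj, nb_months):
    nb_years, nb_months = divmod(nb_months, 12)
    month = dateobj.month + nb_months
    if month > 12:
        nb_years += 1
        month -= 12
    year = dateobj.year + nb_years
    lastday = monthrange(year, month)[1]  # number of days in the landing month
    return dateobj.replace(year=year, month=month, day=min(lastday, dateobj.day))

print(add_months(date(2018, 1, 31), 1))   # clamped to 2018-02-28
print(add_months(date(2018, 11, 30), 3))  # crosses the year boundary: 2019-02-28
```

Note how `divmod(nb_months, 12)` also handles negative offsets: `divmod(-1, 12)` is `(-1, 11)`, so subtracting a month becomes "go back a year, then forward eleven months".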
The lower bound of the date range\n will be included, the upper bound will be excluded.\n\n When specified, `start`, `stop` and `atdate` values are expected to match the\n `date_format` format or a known symbolic value (i.e. 'TODAY', 'YESTERDAY' or 'TOMORROW').\n\n Additionally, the offset syntax \"(date) + offset\" is also supported (Mind\n the parenthesis around the date string). In that case, the offset must be\n one of the syntax supported by `pandas.Timedelta` (see [pandas doc](\n http://pandas.pydata.org/pandas-docs/stable/timedeltas.html))\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `date_col` (*str*): the name of the dataframe's column to filter on\n\n *optional :*\n - `date_format` (*str*): expected date format in column `date_col` (see [available formats](\n https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior)\n - `start` (*str*): if specified, lower bound (included) of the date range\n - `stop` (*str*): if specified, upper bound (excluded) of the date range\n - `atdate` (*str*): if specified, the exact date we're filtering on\n \"\"\"\n mask = None\n if start is None and stop is None and atdate is None:\n raise TypeError('either \"start\", \"stop\" or \"atdate\" must be specified')\n if start is not None and atdate is not None:\n raise TypeError('\"start\" and \"atdate\" are mutually exclusive')\n if stop is not None and atdate is not None:\n raise TypeError('\"stop\" and \"atdate\" are mutually exclusive')\n # add a new column that will hold actual date objects instead of strings.\n # This column is just temporary and will be removed before returning the\n # filtered dataframe.\n filtercol = str(uuid4())\n df[filtercol] = pd.to_datetime(df[date_col], format=date_format)\n if atdate is not None:\n mask = df[filtercol] == parse_date(atdate, date_format)\n elif start is not None and stop is not None:\n mask = ((df[filtercol] >= parse_date(start, date_format)) &\n (df[filtercol] < parse_date(stop, date_format)))\n elif stop 
is None:\n mask = df[filtercol] >= parse_date(start, date_format)\n elif start is None:\n mask = df[filtercol] < parse_date(stop, date_format)\n return df[mask].drop(filtercol, axis=1)", "response": "Filter dataframe your data by date."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreplacing a column in a DataFrame.", "response": "def replace(df, column: str, new_column: str = None, **kwargs):\n \"\"\"\n Change the label of a value or a columns within your data source.\n (Similar to `rename` but does not have the notion of locale)\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the column to modify.\n - `to_replace` (*dict*): keys of this dict are old values pointing on substitute.\n\n *optional :*\n - `new_column` (*str*): name of the output column. By default the `column` arguments is modified.\n\n **Note**: extra parameters can be used (see [pandas doc](\n https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html))\n\n ---\n\n ### Example\n\n **Input**\n\n article | rating\n :------:|:------:\n book | 1\n puzzle | 3\n food | 5\n\n We want to split the ratings in three categories: \"good\", \"average\" and \"poor\".\n\n ```cson\n replace:\n column: \"rating\"\n new_column: \"rating_category\" # create a new column with replaced data\n to_replace:\n 1: \"poor\"\n 2: \"poor\"\n 3: \"average\"\n 4: \"good\"\n 5: \"good\"\n ```\n\n **Ouput**\n\n article | rating | rating_category\n :------:|:------:|:--------------:\n book | 1 | poor\n puzzle | 3 | average\n food | 5 | good\n \"\"\"\n new_column = new_column or column\n df.loc[:, new_column] = df[column].replace(**kwargs)\n return df"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a column to the dataframe according to the groupby logic on group_cols --- ### Parameters *mandatory :* - `column` (*str*): name of the desired column you need percentage on *optional :* - `group_cols` (*list*): 
names of columns for the groupby logic - `new_column` (*str*): name of the output column. By default `column` will be overwritten. --- **Input** | gender | sport | number | |:------:|:----------:|:------:| | male | bicycle | 17 | | female | basketball | 17 | | male | basketball | 3 | | female | football | 7 | | female | running | 30 | | male | running | 20 | | male | football | 21 | | female | bicycle | 17 | ```cson percentage: new_column: 'number_percentage' column: 'number' group_cols: ['sport'] ``` **Output** | gender | sport | number | number_percentage | |:------:|:----------:|:------:|:-----------------:| | male | bicycle | 17 | 50.0 | | female | basketball | 17 | 85.0 | | male | basketball | 3 | 15.0 | | female | football | 7 | 25.0 | | female | running | 30 | 60.0 | | male | running | 20 | 40.0 | | male | football | 21 | 75.0 | | female | bicycle | 17 | 50.0 |", "response": "def percentage(\n df,\n column: str,\n group_cols: Union[str, List[str]] = None,\n new_column: str = None\n):\n \"\"\"\n Add a column to the dataframe according to the groupby logic on group_cols\n\n ---\n\n ### Parameters\n\n *mandatory :*\n - `column` (*str*): name of the desired column you need percentage on\n\n *optional :*\n - `group_cols` (*list*): names of columns for the groupby logic\n - `new_column` (*str*): name of the output column. 
By default `column` will be overwritten.\n\n ---\n\n **Input**\n\n | gender | sport | number |\n |:------:|:----------:|:------:|\n | male | bicycle | 17 |\n | female | basketball | 17 |\n | male | basketball | 3 |\n | female | football | 7 |\n | female | running | 30 |\n | male | running | 20 |\n | male | football | 21 |\n | female | bicycle | 17 |\n\n ```cson\n percentage:\n new_column: 'number_percentage'\n column: 'number'\n group_cols: ['sport']\n ```\n\n **Output**\n\n | gender | sport | number | number_percentage |\n |:------:|:----------:|:------:|:-----------------:|\n | male | bicycle | 17 | 50.0 |\n | female | basketball | 17 | 85.0 |\n | male | basketball | 3 | 15.0 |\n | female | football | 7 | 25.0 |\n | female | running | 30 | 60.0 |\n | male | running | 20 | 40.0 |\n | male | football | 21 | 75.0 |\n | female | bicycle | 17 | 50.0 |\n \"\"\"\n new_column = new_column or column\n if group_cols is None:\n df[new_column] = 100. * df[column] / sum(df[column])\n else:\n df[new_column] = 100. 
* df[column] / df.groupby(group_cols)[column].transform(sum)\n return df"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\noptimize by SGD, AdaGrad, or AdaDelta.", "response": "def ada_family_core(params, gparams, learning_rate = 0.01, eps= 1e-6, rho=0.95, method=\"ADADELTA\",\n beta=0.0, gsum_regularization = 0.0001):\n \"\"\"\n Optimize by SGD, AdaGrad, or AdaDelta.\n \"\"\"\n\n _, _, _, args = inspect.getargvalues(inspect.currentframe())\n logging.info(\"ada_family_core: %s\" % str(args.items()))\n free_parameters = []\n\n if method == \"FINETUNING_ADAGRAD\":\n method = \"ADAGRAD\"\n gsum_regularization = 0\n\n oneMinusBeta = 1 - beta\n\n gsums = [theano.shared(np.zeros_like(param.get_value(borrow=True), dtype=FLOATX), name=\"gsum_%s\" % param.name) if (method == 'ADADELTA' or method == 'ADAGRAD') else None for param in params]\n xsums = [theano.shared(np.zeros_like(param.get_value(borrow=True), dtype=FLOATX), name=\"xsum_%s\" % param.name) if method == 'ADADELTA' else None for param in params]\n\n # Fix for AdaGrad, init gsum to 1\n if method == 'ADAGRAD':\n for gsum in gsums:\n gsum.set_value(gsum.get_value() ** 0)\n\n updates = OrderedDict()\n # Updates\n for gparam, param, gsum, xsum in zip(gparams, params, gsums, xsums):\n\n if method == 'ADADELTA':\n updates[gsum] = rho * gsum + (1. - rho) * (gparam **2)\n dparam = -T.sqrt((xsum + eps) / (updates[gsum] + eps)) * gparam\n updates[xsum] =rho * xsum + (1. 
- rho) * (dparam **2)\n updates[param] = param * oneMinusBeta + dparam\n elif method == 'ADAGRAD':\n updates[gsum] = gsum + (gparam **2) - gsum_regularization * gsum\n updates[param] = param * oneMinusBeta - learning_rate * (gparam / (T.sqrt(updates[gsum] + eps)))\n\n else:\n updates[param] = param * oneMinusBeta - gparam * learning_rate\n # Add free parameters\n if method == 'ADADELTA':\n free_parameters.extend(gsums + xsums)\n elif method == 'ADAGRAD':\n free_parameters.extend(gsums)\n # Check dtype\n for k in updates:\n if updates[k].dtype != FLOATX:\n updates[k] = updates[k].astype(FLOATX)\n return updates.items(), free_parameters"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef optimize_updates(params, gradients, config=None, shapes=None):\n if config and isinstance(config, dict):\n config = TrainerConfig(config)\n\n # Clipping\n if config:\n clip_value = config.get(\"gradient_clipping\", None)\n\n if clip_value:\n clip_constant = T.constant(clip_value, dtype=FLOATX)\n\n if config.avoid_compute_embed_norm:\n grad_norm = multiple_l2_norm([t[1] for t in zip(params, gradients) if not t[0].name.startswith(\"W_embed\")])\n else:\n grad_norm = multiple_l2_norm(gradients)\n isnan = T.or_(T.isnan(grad_norm), T.isinf(grad_norm))\n multiplier = ifelse(grad_norm < clip_constant,\n T.constant(1., dtype=FLOATX), clip_constant / (grad_norm + EPSILON))\n\n # Clip\n clipped_gradients = []\n for param, g in zip(params, gradients):\n g = multiplier * g\n if config.avoid_nan:\n g = T.switch(isnan, np.float32(0.1) * param, g)\n if config.gradient_tolerance:\n g = ifelse(grad_norm > config.gradient_tolerance, T.zeros_like(g) + EPSILON, g)\n clipped_gradients.append(g)\n\n gradients = clipped_gradients\n # Regularization\n if config and config.weight_l2:\n regularized_gradients = []\n for param, grad in zip(params, gradients):\n grad = grad + (2 * config.weight_l2 * param)\n regularized_gradients.append(grad)\n gradients 
= regularized_gradients\n\n # Avoid nan but not computing the norm\n # This is not recommended\n if config and config.avoid_nan and not config.gradient_clipping:\n logging.info(\"avoid NaN gradients\")\n new_gradients = []\n for grad in gradients:\n new_grad = ifelse(T.isnan(grad).any(), T.zeros_like(grad) + EPSILON, grad)\n new_gradients.append(new_grad)\n gradients = new_gradients\n\n\n # Find method\n method = \"SGD\"\n if config:\n method = config.get(\"method\", method).upper()\n # Get Function\n func = None\n if method in [\"SGD\", \"ADAGRAD\", \"ADADELTA\", \"FINETUNING_ADAGRAD\"]:\n from cores.ada_family import ada_family_core\n func = ada_family_core\n elif method == \"ADAM\":\n from cores.adam import adam_core\n func = adam_core\n elif method == \"RMSPROP\":\n from cores.rmsprop import rmsprop_core\n func = rmsprop_core\n elif method == \"MOMENTUM\":\n from cores.momentum import momentum_core\n func = momentum_core\n\n if not func:\n raise NotImplementedError(\"method '%s' is not supported\" % method)\n\n logging.info(\"optimize method=%s parameters=%s\" % (method, str(params)))\n\n free_parameters = []\n return_vals = wrap_core(func, config, params, gradients)\n if type(return_vals) == list and type(return_vals[0]) == list:\n updates, free_parameters = return_vals\n else:\n updates = return_vals\n\n # No free param recording\n if config and not config.record_free_params:\n free_parameters = []\n\n # Weight bound\n if config.weight_bound:\n logging.info(\"apply weight bound of %.2f\" % config.weight_bound)\n new_updates = []\n for param, update_value in updates:\n bounded_value = (update_value * (T.abs_(update_value) <= config.weight_bound) +\n config.weight_bound * (update_value > config.weight_bound) +\n -config.weight_bound * (update_value < -config.weight_bound))\n new_updates.append((param, bounded_value))\n updates = new_updates\n return updates, free_parameters", "response": "General optimization function for Theano."} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate the documentation for the following Python 3 function\ndef optimize_function(params, config=None):\n gs = [dim_to_var(p.ndim) for p in params]\n updates, _ = optimize_updates(params, gs, config)\n return theano.function(gs, [], updates=updates)", "response": "Create a optimizing function receives gradients."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn updates in the training.", "response": "def _learning_updates(self):\n \"\"\"\n Return updates in the training.\n \"\"\"\n params = self.training_params()\n gradients = self.get_gradients(params)\n return self.optimization_updates(params, gradients)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets parameters to be optimized.", "response": "def training_params(self):\n \"\"\"\n Get parameters to be optimized.\n \"\"\"\n params = self.network.parameters\n # Freeze parameters\n if self.config.fixed_parameters:\n logging.info(\"fixed parameters: %s\" % \", \".join(map(str, self.config.fixed_parameters)))\n params = [p for p in params if p not in self.config.fixed_parameters]\n return params"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn updates from optimization.", "response": "def optimization_updates(self, params, gradients):\n \"\"\"\n Return updates from optimization.\n \"\"\"\n updates, free_parameters = optimize_updates(params, gradients, self.config)\n self.network.free_parameters.extend(free_parameters)\n logging.info(\"Added %d free parameters for optimization\" % len(free_parameters))\n return updates"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the learning function.", "response": "def learning_function(self):\n \"\"\"\n Get the learning function.\n :param func:\n :return:\n \"\"\"\n network_updates = list(self.network.updates) + list(self.network.training_updates)\n learning_updates = 
list(self._learning_updates())\n update_list = network_updates + learning_updates\n\n logging.info(\"network updates: %s\" % \" \".join(map(str, [x[0] for x in network_updates])))\n logging.info(\"learning updates: %s\" % \" \".join(map(str, [x[0] for x in learning_updates])))\n\n variables = self.network.input_variables + self.network.target_variables\n givens = None\n return theano.function(\n variables,\n map(lambda v: theano.Out(v, borrow=True), self.training_variables),\n updates=update_list, allow_input_downcast=True,\n mode=self.config.get(\"theano_mode\", None),\n givens=givens)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfunctions to compute the GLIMPSE sensor from image and focus vector.", "response": "def _glimpse_sensor(self, x_t, l_p):\n \"\"\"\n Parameters:\n x_t - 28x28 image\n l_p - 2x1 focus vector\n Returns:\n 4x12 matrix\n \"\"\"\n # Turn l_p to the left-top point of rectangle\n l_p = l_p * 14 + 14 - 2\n l_p = T.cast(T.round(l_p), \"int32\")\n\n l_p = l_p * (l_p >= 0)\n l_p = l_p * (l_p < 24) + (l_p >= 24) * 23\n l_p2 = l_p - 2\n l_p2 = l_p2 * (l_p2 >= 0)\n l_p2 = l_p2 * (l_p2 < 20) + (l_p2 >= 20) * 19\n l_p3 = l_p - 6\n l_p3 = l_p3 * (l_p3 >= 0)\n l_p3 = l_p3 * (l_p3 < 16) + (l_p3 >= 16) * 15\n glimpse_1 = x_t[l_p[0]: l_p[0] + 4][:, l_p[1]: l_p[1] + 4]\n glimpse_2 = x_t[l_p2[0]: l_p2[0] + 8][:, l_p2[1]: l_p2[1] + 8]\n glimpse_2 = theano.tensor.signal.downsample.max_pool_2d(glimpse_2, (2,2))\n glimpse_3 = x_t[l_p3[0]: l_p3[0] + 16][:, l_p3[1]: l_p3[1] + 16]\n glimpse_3 = theano.tensor.signal.downsample.max_pool_2d(glimpse_3, (4,4))\n return T.concatenate([glimpse_1, glimpse_2, glimpse_3])"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _refined_glimpse_sensor(self, x_t, l_p):\n # Turn l_p to the left-top point of rectangle\n l_p = l_p * 14 + 14 - 4\n l_p = T.cast(T.round(l_p), \"int32\")\n\n l_p = l_p * (l_p >= 0)\n l_p = l_p * (l_p 
< 21) + (l_p >= 21) * 20\n glimpse_1 = x_t[l_p[0]: l_p[0] + 7][:, l_p[1]: l_p[1] + 7]\n # glimpse_2 = theano.tensor.signal.downsample.max_pool_2d(x_t, (4,4))\n # return T.concatenate([glimpse_1, glimpse_2])\n return glimpse_1", "response": "Computes the refined glimpse sensor: crops a single 7x7 patch of the image around the focus point."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _glimpse_network(self, x_t, l_p):\n sensor_output = self._refined_glimpse_sensor(x_t, l_p)\n sensor_output = T.flatten(sensor_output)\n h_g = self._relu(T.dot(sensor_output, self.W_g0))\n h_l = self._relu(T.dot(l_p, self.W_g1))\n g = self._relu(T.dot(h_g, self.W_g2_hg) + T.dot(h_l, self.W_g2_hl))\n return g", "response": "Function that computes the glimpse network output for a single image and a single focus vector."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _action_network(self, h_t):\n z = self._relu(T.dot(h_t, self.W_a) + self.B_a)\n return self._softmax(z)", "response": "Function that computes the action network: a softmax over action classes from the hidden state."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_network(model=None, std=0.005, disable_reinforce=False, random_glimpse=False):\n network = NeuralClassifier(input_dim=28 * 28)\n network.stack_layer(FirstGlimpseLayer(std=std, disable_reinforce=disable_reinforce, random_glimpse=random_glimpse))\n if model and os.path.exists(model):\n network.load_params(model)\n return network", "response": "Get baseline model.\n Parameters:\n model - model path\n Returns:\n network"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncomputes first glimpse sensor.", "response": "def _first_glimpse_sensor(self, x_t):\n \"\"\"\n Compute first glimpse position using down-sampled image.\n \"\"\"\n downsampled_img = theano.tensor.signal.downsample.max_pool_2d(x_t, (4,4))\n downsampled_img = downsampled_img.flatten()\n 
first_l = T.dot(downsampled_img, self.W_f)\n if self.disable_reinforce:\n wf_grad = self.W_f\n if self.random_glimpse:\n first_l = self.srng.uniform((2,), low=-1.7, high=1.7)\n else:\n sampled_l_t = self._sample_gaussian(first_l, self.cov)\n sampled_pdf = self._multi_gaussian_pdf(disconnected_grad(sampled_l_t), first_l)\n wf_grad = T.grad(T.log(sampled_pdf), self.W_f)\n first_l = sampled_l_t\n return first_l, wf_grad"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfunctions to compute the core network.", "response": "def _core_network(self, l_p, h_p, x_t):\n \"\"\"\n Parameters:\n x_t - 28x28 image\n l_p - 2x1 focus vector\n h_p - 256x1 vector\n Returns:\n h_t, 256x1 vector\n \"\"\"\n g_t = self._glimpse_network(x_t, l_p)\n h_t = self._tanh(T.dot(g_t, self.W_h_g) + T.dot(h_p, self.W_h) + self.B_h)\n l_t = self._location_network(h_t)\n\n if not self.disable_reinforce:\n sampled_l_t = self._sample_gaussian(l_t, self.cov)\n sampled_pdf = self._multi_gaussian_pdf(disconnected_grad(sampled_l_t), l_t)\n wl_grad = T.grad(T.log(sampled_pdf), self.W_l)\n else:\n sampled_l_t = l_t\n wl_grad = self.W_l\n\n if self.random_glimpse and self.disable_reinforce:\n sampled_l_t = self.srng.uniform((2,), low=-1.7, high=1.7)\n\n a_t = self._action_network(h_t)\n\n return sampled_l_t, h_t, a_t, wl_grad"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npreparing the internal layer and the internal layer classifiers.", "response": "def prepare(self):\n \"\"\"\n All codes that create parameters should be put into 'setup' function.\n \"\"\"\n self.output_dim = 10\n self.encoder = Chain(self.input_dim).stack(Dense(self.internal_layer_size, 'tanh'))\n self.decoder = Chain(self.internal_layer_size).stack(Dense(self.input_dim))\n self.classifier = Chain(self.internal_layer_size).stack(Dense(50, 'tanh'),\n Dense(self.output_dim),\n Softmax())\n\n self.register_inner_layers(self.encoder, self.decoder, self.classifier)\n\n 
self.target_input = T.ivector('target')\n self.register_external_inputs(self.target_input)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncomputing the cost of a single tensor.", "response": "def compute_tensor(self, x):\n \"\"\"\n Build the computation graph here.\n \"\"\"\n internal_variable = self.encoder.compute_tensor(x)\n\n decoding_output = self.decoder.compute_tensor(internal_variable)\n\n classification_output = self.classifier.compute_tensor(internal_variable)\n\n auto_encoder_cost = AutoEncoderCost(decoding_output, x).get()\n\n classification_cost = CrossEntropyCost(classification_output, self.target_input).get()\n\n final_cost = 0.01 * auto_encoder_cost + classification_cost\n\n error_rate = ErrorRateCost(classification_output, self.target_input).get()\n\n self.register_monitors((\"err\", error_rate),\n (\"encoder_cost\", auto_encoder_cost),\n (\"classify_cost\", classification_cost))\n\n return final_cost"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef map(self, func):\n if self._train_set:\n self._train_set = map(func, self._train_set)\n if self._valid_set:\n self._valid_set = map(func, self._valid_set)\n if self._test_set:\n self._test_set = map(func, self._test_set)", "response": "Process all data with given function."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmakes targets be one-hot vectors.", "response": "def vectorize_target(self, size):\n \"\"\"\n Make targets be one-hot vectors.\n \"\"\"\n if self._train_set:\n self._train_set = self._vectorize_set(self._train_set, size)\n if self._valid_set:\n self._valid_set = self._vectorize_set(self._valid_set, size)\n if self._test_set:\n self._test_set = self._vectorize_set(self._test_set, size)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef train(self, train_set, valid_set=None, 
test_set=None, train_size=None):\n '''We train over mini-batches and evaluate periodically.'''\n iteration = 0\n while True:\n if not iteration % self.config.test_frequency and test_set:\n try:\n self.test(iteration, test_set)\n except KeyboardInterrupt:\n logging.info('interrupted!')\n break\n\n if not iteration % self.validation_frequency and valid_set:\n try:\n if not self.evaluate(iteration, valid_set):\n logging.info('patience elapsed, bailing out')\n break\n except KeyboardInterrupt:\n logging.info('interrupted!')\n break\n\n train_message = \"\"\n try:\n train_message = self.train_func(train_set)\n except KeyboardInterrupt:\n logging.info('interrupted!')\n break\n if not iteration % self.config.monitor_frequency:\n logging.info('monitor (iter=%i) %s', iteration + 1, train_message)\n\n iteration += 1\n if hasattr(self.network, \"iteration_callback\"):\n self.network.iteration_callback()\n\n yield train_message\n\n if valid_set:\n self.set_params(self.best_params)\n if test_set:\n self.test(0, test_set)", "response": "Train over mini-batches and evaluate periodically."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsamples outputs from LM.", "response": "def sample(self, input, steps):\n \"\"\"\n Sample outputs from LM.\n \"\"\"\n inputs = [[onehot(self.input_dim, x) for x in input]]\n for _ in range(steps):\n target = self.compute(inputs)[0,-1].argmax()\n input.append(target)\n inputs[0].append(onehot(self.input_dim, target))\n return input"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef compute_tensor(self, x):\n # Target class\n class_matrix = self.target_tensor // self.output_size\n class_vector = class_matrix.reshape((-1,))\n # Target index\n target_matrix = self.target_tensor % self.output_size\n target_vector = target_matrix.reshape((-1,))\n # Input matrix\n input_matrix = x.reshape((-1, self.input_dim))\n # Output matrix\n output_tensor3d = 
self.output_layer.compute_tensor(x)\n output_matrix = output_tensor3d.reshape((-1, self.class_size, self.output_size))\n arange_vec = self.arange_cache[:output_matrix.shape[0]]\n sub_output_matrix = output_matrix[arange_vec, class_vector]\n # Softmax\n softmax_output_matrix = self.softmax_layer.compute_tensor(sub_output_matrix)\n # Class prediction\n class_output_matrix = self.class_layer.compute_tensor(x)\n # Costs\n output_cost = LMCost(softmax_output_matrix, target_vector).get()\n class_cost = LMCost(class_output_matrix, class_matrix).get()\n final_cost = output_cost + class_cost\n\n return final_cost", "response": "Compute the class-factored language-model cost tensor for the given batch."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncomputes the alignment weights based on the previous state and the precomputed values.", "response": "def compute_alignments(self, prev_state, precomputed_values, mask=None):\n \"\"\"\n Compute the alignment weights based on the previous state.\n \"\"\"\n\n WaSp = T.dot(prev_state, self.Wa)\n UaH = precomputed_values\n # For test time the UaH will be (time, output_dim)\n if UaH.ndim == 2:\n preact = WaSp[:, None, :] + UaH[None, :, :]\n else:\n preact = WaSp[:, None, :] + UaH\n act = T.activate(preact, 'tanh')\n align_scores = T.dot(act, self.Va) # ~ (batch, time)\n if mask:\n mask = (1 - mask) * -99.00\n if align_scores.ndim == 3:\n align_scores += mask[None, :]\n else:\n align_scores += mask\n align_weights = T.nnet.softmax(align_scores)\n return align_weights"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncomputes the context vector with soft attention.", "response": "def compute_context_vector(self, prev_state, inputs, precomputed_values=None, mask=None):\n \"\"\"\n Compute the context vector with soft attention.\n \"\"\"\n precomputed_values = precomputed_values if precomputed_values else self.precompute(inputs)\n align_weights = 
self.compute_alignments(prev_state, precomputed_values, mask)\n context_vector = T.sum(align_weights[:, :, None] * inputs, axis=1)\n return context_vector"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntrain the model in multi - GPU environment.", "response": "def train(self, train_set, valid_set=None, test_set=None, train_size=None):\n \"\"\"\n Train the model in multi-GPU environment.\n \"\"\"\n from platoon.channel import Worker\n from platoon.param_sync import EASGD, ASGD\n server_port = self._port\n param_map = self.create_param_map()\n # Initialize the worker\n worker = Worker(control_port=server_port)\n if self.config.learning_rate:\n worker.send_req({'init_schedule': self._schedule_params})\n self.sync_hyperparams(worker.send_req('sync_hyperparams')['sync_hyperparams'])\n easgd_alpha = worker.send_req('get_easgd_alpha')\n if self._using_easgd:\n self.logger.info(\"using EASGD with alpha={}\".format(easgd_alpha))\n else:\n self.logger.info(\"using ASGD rule\")\n rule = EASGD(easgd_alpha) if self._using_easgd else ASGD()\n worker.init_shared_params(param_map.values(), param_sync_rule=rule)\n worker.send_req({\n \"set_names\": None,\n \"training_names\": self.training_names,\n \"evaluation_names\": self.evaluation_names\n })\n # Load all training batches, consume vast memory here\n self.logger.info(\"started process {}\".format(os.getpid()))\n self.logger.info(\"(proc {}) load training data\".format(os.getpid()))\n train_batches = list(train_set)\n network_callback = bool(self.network.training_callbacks)\n trainer_callback = bool(self._iter_controllers)\n # Start from valid, so the performance when a worked join can be known\n worker.copy_to_local()\n if valid_set:\n self._run_valid(self.epoch, valid_set, dry_run=True)\n self.fix_costs()\n worker.send_req({\n \"valid_done\": None,\n \"valid_costs\": self.last_run_costs,\n \"auto_save\": self.config.auto_save\n })\n worker.copy_to_local()\n # Begin the loop\n while True:\n 
resp = worker.send_req('next')\n if resp == 'stop':\n break\n elif resp == 'wait':\n time.sleep(1)\n elif resp == 'get_num_batches':\n worker.send_req({'get_num_batches_done': len(train_batches)})\n elif 'eval' in resp:\n self.best_cost = resp['best_valid_cost']\n worker.copy_to_local()\n valid_costs = None\n test_costs = None\n if valid_set:\n self._run_valid(self.epoch, valid_set)\n self.fix_costs()\n valid_costs = self.last_run_costs\n if test_set:\n self._run_test(self.epoch, test_set)\n self.fix_costs()\n test_costs = self.last_run_costs\n worker.send_req({\n \"eval_done\": None,\n \"valid_costs\": valid_costs,\n \"test_costs\": test_costs,\n \"auto_save\": self.config.auto_save\n })\n elif 'valid' in resp:\n self.best_cost = resp['best_valid_cost']\n worker.copy_to_local()\n if valid_set:\n self._run_valid(self.epoch, valid_set, dry_run=True)\n self.fix_costs()\n worker.send_req({\n \"valid_done\": None,\n \"valid_costs\": self.last_run_costs,\n \"auto_save\": self.config.auto_save\n })\n elif 'train' in resp:\n batch_ids = resp['train']\n batch_costs = [[] for _ in self.training_names]\n for batch_id in batch_ids:\n x = train_batches[batch_id]\n cost_x = self.learn(*x)\n for i, cost in enumerate(cost_x):\n batch_costs[i].append(cost)\n self.last_cost = cost_x[0]\n if network_callback:\n self.network.training_callback()\n if trainer_callback:\n for func in self._iter_controllers:\n func(self)\n worker.sync_params(synchronous=True)\n worker.send_req({'train_done': None, 'costs': [float(np.mean(c)) for c in batch_costs]})\n elif 'sync_hyperparams' in resp:\n self.sync_hyperparams(resp['sync_hyperparams'])\n worker.close()\n return []"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef var(tensor_type, last_dim=0, test_shape=None):\n # Create tensor\n from deepy.core.neural_var import NeuralVariable\n from deepy.core.env import env\n from theano.tensor.var import TensorVariable\n if 
isinstance(tensor_type, NeuralVariable):\n var = tensor_type\n if last_dim != 0:\n var.output_dim = last_dim\n elif isinstance(tensor_type, TensorVariable):\n var = NeuralVariable(tensor_type, dim=last_dim)\n elif isinstance(tensor_type, str):\n theano_tensor = getattr(TT, tensor_type)()\n var = NeuralVariable(theano_tensor, dim=last_dim)\n else:\n raise Exception(\"tensor_type shall be a string or a NeuralVariable\")\n # Set test value\n if test_shape:\n if type(test_shape) != list and type(test_shape) != tuple:\n # May be it's a value\n var.set_test_value(test_shape)\n else:\n test_val = env.numpy_rand.rand(*test_shape)\n if len(test_shape) > 0:\n test_val = test_val.astype(var.tensor.dtype)\n elif var.tensor.dtype.startswith(\"int\"):\n test_val = 1\n var.set_test_value(test_val)\n else:\n # Create a general test_shape\n dims = [(d + 1) * 3 for d in range(var.tensor.ndim)]\n if var.dim() != 0:\n dims[-1] = var.dim()\n test_val = env.numpy_rand.rand(*dims)\n if len(dims) > 0:\n test_val = test_val.astype(var.tensor.dtype)\n elif var.tensor.dtype.startswith(\"int\"):\n test_val = 1\n var.set_test_value(test_val)\n return var", "response": "Wrap a Theano tensor into a NeuralVariable object for defining neural network."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\npads the dataset to given length in the left or right side.", "response": "def _pad(self, side, length):\n \"\"\"\n Pad sequences to given length in the left or right side.\n \"\"\"\n if self._train_set:\n self._train_set = pad_dataset(self._train_set, side, length)\n if self._valid_set:\n self._valid_set = pad_dataset(self._valid_set, side, length)\n if self._test_set:\n self._test_set = pad_dataset(self._test_set, side, length)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef pad_dataset(subset, side=\"right\", length=-1):\n assert length == -1 or length > 0\n if type(subset[0][0][0]) in 
[float, int, np.int64, np.int32, np.float32]:\n return _pad_2d(subset, side, length)\n else:\n return _pad_3d(subset, side, length)", "response": "Pads a dataset to specified length."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npreparing for one epoch.", "response": "def prepare_epoch(self):\n \"\"\"\n Prepare for one epoch.\n Returns:\n bool: False if to stop the training.\n \"\"\"\n self.epoch += 1\n if self.epoch >= self.epoch_start_halving and ((self.epoch - self.epoch_start_halving) % self._halving_freq == 0):\n self._lr *= 0.5\n self._current_iter = 0\n self._iters_from_last_valid = 0\n self._train_costs = []\n self.prepared_worker_pool.clear()\n self.batch_pool = range(self.num_train_batches)\n self.rand.shuffle(self.batch_pool)\n if self.epoch > self.end_at:\n self.log(\"Training is done, wait all workers to stop\")\n return False\n else:\n self.log(\"start epoch {} with lr={}\".format(self.epoch, self._lr))\n return True"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef handle_control(self, req, worker_id, req_info):\n if self.start_time is None: self.start_time = time.time()\n response = \"\"\n\n if req == 'next':\n if self.num_train_batches == 0:\n response = \"get_num_batches\"\n elif self._done:\n response = \"stop\"\n self.worker_is_done(worker_id)\n elif self._evaluating:\n response = 'wait'\n elif not self.batch_pool:\n # End of one iter\n if self._train_costs:\n with self._lock:\n sys.stdout.write(\"\\r\")\n sys.stdout.flush()\n mean_costs = []\n for i in range(len(self._training_names)):\n mean_costs.append(np.mean([c[i] for c in self._train_costs]))\n self.log(\"train (epoch={:2d}) {}\".format(\n self.epoch,\n self.get_monitor_string(zip(self._training_names, mean_costs)))\n )\n response = {'eval': None, 'best_valid_cost': self._best_valid_cost}\n self._evaluating = True\n else:\n # Continue training\n if worker_id not in self.prepared_worker_pool:\n response = 
{\"sync_hyperparams\": self.feed_hyperparams()}\n self.prepared_worker_pool.add(worker_id)\n elif self._iters_from_last_valid >= self._valid_freq:\n response = {'valid': None, 'best_valid_cost': self._best_valid_cost}\n self._iters_from_last_valid = 0\n else:\n response = {\"train\": self.feed_batches()}\n elif 'eval_done' in req:\n with self._lock:\n self._evaluating = False\n sys.stdout.write(\"\\r\")\n sys.stdout.flush()\n if 'test_costs' in req and req['test_costs']:\n self.log(\"test (epoch={:2d}) {} (worker {})\".format(\n self.epoch,\n self.get_monitor_string(req['test_costs']),\n worker_id)\n )\n if 'valid_costs' in req and req['valid_costs']:\n valid_J = req['valid_costs'][0][1]\n if valid_J < self._best_valid_cost:\n self._best_valid_cost = valid_J\n star_str = \"*\"\n else:\n star_str = \"\"\n self.log(\"valid (epoch={:2d}) {} {} (worker {})\".format(\n self.epoch,\n self.get_monitor_string(req['valid_costs']),\n star_str,\n worker_id))\n # if star_str and 'auto_save' in req and req['auto_save']:\n # self.log(\"(worker {}) save the model to {}\".format(\n # worker_id,\n # req['auto_save']\n # ))\n continue_training = self.prepare_epoch()\n self._epoch_start_time = time.time()\n if not continue_training:\n self._done = True\n self.log(\"training time {:.4f}s\".format(time.time() - self.start_time))\n response = \"stop\"\n elif 'valid_done' in req:\n with self._lock:\n sys.stdout.write(\"\\r\")\n sys.stdout.flush()\n if 'valid_costs' in req:\n valid_J = req['valid_costs'][0][1]\n if valid_J < self._best_valid_cost:\n self._best_valid_cost = valid_J\n star_str = \"*\"\n else:\n star_str = \"\"\n self.log(\"valid ( dryrun ) {} {} (worker {})\".format(\n self.get_monitor_string(req['valid_costs']),\n star_str,\n worker_id\n ))\n # if star_str and 'auto_save' in req and req['auto_save']:\n # self.log(\"(worker {}) save the model to {}\".format(\n # worker_id,\n # req['auto_save']\n # ))\n elif 'train_done' in req:\n costs = req['costs']\n 
self._train_costs.append(costs)\n sys.stdout.write(\"\\x1b[2K\\r> %d%% | J=%.2f | %.1f batch/s\" % (\n self._current_iter * 100 / self.num_train_batches,\n costs[0], float(len(self._train_costs) * self.sync_freq) / (time.time() - self._epoch_start_time)))\n sys.stdout.flush()\n elif 'get_num_batches_done' in req:\n self.num_train_batches = req['get_num_batches_done']\n elif 'get_easgd_alpha' in req:\n response = self._easgd_alpha\n elif 'sync_hyperparams' in req:\n response = {\"sync_hyperparams\": self.feed_hyperparams()}\n elif 'init_schedule' in req:\n with self._lock:\n sys.stdout.write(\"\\r\")\n sys.stdout.flush()\n self.log(\"worker {} connected\".format(worker_id))\n if self.epoch == 0:\n schedule_params = req['init_schedule']\n sch_str = \" \".join(\"{}={}\".format(a, b) for (a, b) in schedule_params.items())\n self.log(\"initialize the schedule with {}\".format(sch_str))\n for key, val in schedule_params.items():\n if not val: continue\n if key == 'learning_rate':\n self._lr = val\n elif key == 'start_halving_at':\n self.epoch_start_halving = val\n elif key == 'halving_freq':\n self._halving_freq = val\n elif key == 'end_at':\n self.end_at = val\n elif key == 'sync_freq':\n self.sync_freq = val\n elif key == 'valid_freq':\n self._valid_freq = val\n\n elif 'set_names' in req:\n self._training_names = req['training_names']\n self._evaluation_names = req['evaluation_names']\n\n\n return response", "response": "Handles a control request from a worker."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncompares to previous records and return whether the given cost is a new best.", "response": "def compare(self, cost_map):\n \"\"\"\n Compare to previous records and return whether the given cost is a new best.\n :return: True if the given cost is a new best\n \"\"\"\n cri_val = cost_map[self._criteria]\n if self._best_criteria is None:\n self._best_criteria = cri_val\n return True\n else:\n if self._smaller_is_better and 
cri_val < self._best_criteria:\n self._best_criteria = cri_val\n return True\n elif not self._smaller_is_better and cri_val > self._best_criteria:\n self._best_criteria = cri_val\n return True\n else:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nrun the model with validation data and return the cost of the model.", "response": "def run(self, data_x):\n \"\"\"\n Run the model with validation data and return costs.\n \"\"\"\n output_vars = self.compute(*data_x)\n return self._extract_costs(output_vars)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef invoke(self):\n self._counter += 1\n if self._counter % self._freq == 0:\n cnt = 0.\n sum_map = defaultdict(float)\n for x in self._trainer.get_data(self._data_split):\n val_map = self.run(x)\n if not isinstance(val_map, dict):\n raise Exception(\"Monitor.run must return a dict.\")\n for k, val in val_map.items():\n sum_map[k] += val\n cnt += 1\n for k in sum_map:\n sum_map[k] /= cnt\n new_best = self.compare(sum_map)\n self._trainer.report(sum_map, self._data_split, new_best=new_best)\n if new_best:\n self._trainer.save_checkpoint(self._save_path)", "response": "This function is called after each iteration."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate inner loop variables.", "response": "def _build_loop_vars(self):\n \"\"\"\n Create inner loop variables.\n \"\"\"\n from theano.tensor.var import TensorVariable\n from deepy.core.neural_var import NeuralVariable\n if not self._loop_vars:\n self._ordered_out_keys = self._outputs.keys()\n seq_keys = self._sequences.keys()\n filled_out_keys = [k for k in self._ordered_out_keys if self._outputs[k]]\n nonseq_keys = self._non_sequences.keys()\n dummy_tensors, self._scan_local_vars = get_dummy_args(\n sequences=[self._sequences[k].tensor for k in seq_keys],\n outputs_info=[self._outputs[k].tensor for k in self._ordered_out_keys],\n 
non_sequences=[self._non_sequences[k].tensor for k in nonseq_keys],\n **self._kwargs\n )\n dummy_map = dict(zip(seq_keys + filled_out_keys + nonseq_keys, dummy_tensors))\n arg_map = self._sequences.copy()\n arg_map.update(self._outputs)\n arg_map.update(self._non_sequences)\n self._loop_vars = LoopVars()\n for k, dummy_tensor in dummy_map.items():\n dummy_var = NeuralVariable(dummy_tensor, dim=arg_map[k].dim())\n self._loop_vars[k] = dummy_var"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_outputs(self, *args):\n if args:\n output_vars = map(self._scan_outputs.get, args)\n if len(output_vars) == 1:\n return output_vars[0]\n else:\n return output_vars\n else:\n return self._scan_outputs", "response": "Get the outputs of the loop."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nexecuting then_branch when training.", "response": "def iftrain(self, then_branch, else_branch):\n \"\"\"\n Execute `then_branch` when training.\n \"\"\"\n return ifelse(self._training_flag, then_branch, else_branch, name=\"iftrain\")"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nswitching training mode. 
:param flag: switch on training mode when flag is True.", "response": "def switch_training(self, flag):\n \"\"\"\n Switch training mode.\n :param flag: switch on training mode when flag is True.\n \"\"\"\n if self._is_training == flag: return\n self._is_training = flag\n if flag:\n self._training_flag.set_value(1)\n else:\n self._training_flag.set_value(0)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef nag_core(params, J, momentum=0.9, learning_rate=0.01):\n # TODO: this requires some refractorings.\n for param in params:\n step = theano.shared(np.zeros_like(param.get_value()), name=param.name + '_step')\n velocity = theano.shared(np.zeros_like(param.get_value()), name=param.name + '_vel')\n yield step, momentum * velocity\n yield param, param + step\n yield velocity, step - learning_rate * T.grad(J, param)\n yield param, param + velocity - step", "response": "Generator for NAG core"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nskip N batches in the training set", "response": "def skip(self, n_batches, n_epochs=0):\n \"\"\"\n Skip N batches in the training.\n \"\"\"\n logging.info(\"skip %d epochs and %d batches\" % (n_epochs, n_batches))\n self._skip_batches = n_batches\n self._skip_epochs = n_epochs"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nloads parameters for the training.", "response": "def load_params(self, path, exclude_free_params=False):\n \"\"\"\n Load parameters for the training.\n This method can load free parameters and resume the training progress.\n \"\"\"\n self.network.load_params(path, exclude_free_params=exclude_free_params)\n self.best_params = self.copy_params()\n # Resume the progress\n if self.network.train_logger.progress() > 0 or self.network.train_logger.epoch() > 0:\n self.skip(self.network.train_logger.progress(), self.network.train_logger.epoch() - 1)"} {"SOURCE": "codesearchnet", "instruction": "Can you 
create a Python 3 function that\nadds iteration callbacks function.", "response": "def add_iter_controllers(self, *controllers):\n \"\"\"\n Add iteration callbacks function (receives an argument of the trainer).\n :param controllers: can be a `TrainingController` or a function.\n :type funcs: list of TrainingContoller\n \"\"\"\n for controller in controllers:\n if isinstance(controller, TrainingController):\n controller.bind(self)\n self._iter_controllers.append(controller)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds epoch controllers function.", "response": "def add_epoch_controllers(self, *controllers):\n \"\"\"\n Add epoch callbacks function.\n :param controllers: can be a `TrainingController` or a function.\n \"\"\"\n for controller in controllers:\n if isinstance(controller, TrainingController):\n controller.bind(self)\n self._epoch_controllers.append(controller)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef train(self, train_set, valid_set=None, test_set=None, train_size=None):\n self._epoch = 0\n while True:\n if self._skip_epochs > 0:\n logging.info(\"skipping one epoch ...\")\n self._skip_epochs -= 1\n self._epoch += 1\n yield None\n continue\n # Test\n if not self._epoch % self.config.test_frequency and test_set:\n try:\n self._run_test(self._epoch, test_set)\n except KeyboardInterrupt:\n logging.info('interrupted!')\n break\n # Validate\n if not self._epoch % self.validation_frequency and valid_set:\n try:\n\n if not self._run_valid(self._epoch, valid_set):\n logging.info('patience elapsed, bailing out')\n break\n except KeyboardInterrupt:\n logging.info('interrupted!')\n break\n # Train one step\n\n try:\n costs = self._run_train(self._epoch, train_set, train_size)\n except KeyboardInterrupt:\n logging.info('interrupted!')\n break\n # Check costs\n if np.isnan(costs[0][1]):\n logging.info(\"NaN detected in costs, rollback to last parameters\")\n 
self.set_params(*self.checkpoint)\n else:\n self._epoch += 1\n self.network.epoch_callback()\n\n yield dict(costs)\n\n if valid_set and self.config.get(\"save_best_parameters\", True):\n self.set_params(*self.best_params)\n if test_set:\n self._run_test(-1, test_set)", "response": "Train the model and return the costs."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _run_train(self, epoch, train_set, train_size=None):\n self.network.train_logger.record_epoch(epoch + 1)\n costs = self.train_step(train_set, train_size)\n if not epoch % self.config.monitor_frequency:\n self.report(dict(costs), \"train\", epoch)\n self.last_run_costs = costs\n return costs", "response": "Runs one training iteration."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _run_valid(self, epoch, valid_set, dry_run=False, save_path=None):\n costs = self.valid_step(valid_set)\n # this is the same as: (J_i - J_f) / J_i > min improvement\n _, J = costs[0]\n new_best = False\n if self.best_cost - J > self.best_cost * self.min_improvement:\n # save the best cost and parameters\n self.best_params = self.copy_params()\n new_best = True\n if not dry_run:\n self.best_cost = J\n self.best_epoch = epoch\n self.save_checkpoint(save_path)\n\n self.report(dict(costs), type=\"valid\", epoch=0 if dry_run else epoch, new_best=new_best)\n self.last_run_costs = costs\n return epoch - self.best_epoch < self.patience", "response": "Run one valid iteration return True if to continue training."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef report(self, score_map, type=\"valid\", epoch=-1, new_best=False):\n type_str = type\n if len(type_str) < 5:\n type_str += \" \" * (5 - len(type_str))\n info = \" \".join(\"%s=%.2f\" % el for el in score_map.items())\n current_epoch = epoch if epoch > 0 else self.current_epoch()\n 
epoch_str = \"epoch={}\".format(current_epoch + 1)\n if epoch < 0:\n epoch_str = \"dryrun\"\n sys.stdout.write(\"\\r\")\n sys.stdout.flush()\n marker = \" *\" if new_best else \"\"\n message = \"{} ({}) {}{}\".format(type_str, epoch_str, info, marker)\n self.network.train_logger.record(message)\n logging.info(message)", "response": "Report the scores and record them in the log."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the data from the specified split of data.", "response": "def get_data(self, data_split=\"train\"):\n \"\"\"\n Get specified split of data.\n \"\"\"\n if data_split == 'train':\n return self._current_train_set\n elif data_split == 'valid':\n return self._current_valid_set\n elif data_split == 'test':\n return self._current_test_set\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run(self, train_set, valid_set=None, test_set=None, train_size=None, epoch_controllers=None):\n epoch_controllers = epoch_controllers if epoch_controllers else []\n epoch_controllers += self._epoch_controllers\n if isinstance(train_set, Dataset):\n dataset = train_set\n train_set = dataset.train_set()\n valid_set = dataset.valid_set()\n test_set = dataset.test_set()\n train_size = dataset.train_size()\n self._current_train_set = train_set\n self._current_valid_set = valid_set\n self._current_test_set = test_set\n if epoch_controllers:\n for controller in epoch_controllers:\n controller.bind(self)\n timer = Timer()\n for _ in self.train(train_set, valid_set=valid_set, test_set=test_set, train_size=train_size):\n if epoch_controllers:\n for controller in epoch_controllers:\n controller.invoke()\n if self._ended:\n break\n if self._report_time:\n timer.report()", "response": "Runs the training and test set."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _cut_to_pieces(self, bunch_stack):\n 
stack_len = len(bunch_stack[0])\n for i in xrange(0, stack_len, self.fragment_length):\n yield np.array(map(lambda stack: stack[i: i + self.fragment_length], bunch_stack))", "response": "Yields the pieces of the bunch_stack."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npadding the bunch_stack with zeros.", "response": "def _pad_zeros(self, bunch_stack):\n \"\"\"\n :type bunch_stack: list of list\n \"\"\"\n min_len = min(map(len, bunch_stack))\n for i in range(len(bunch_stack)):\n bunch_stack[i] = bunch_stack[i][:min_len]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef apply(self, func, dim=None):\n output_dim = dim if dim else self.output_dim\n return NeuralVariable(func(self.tensor), output_dim)", "response": "Apply a function to tensors."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef rprop_core(params, gradients, rprop_increase=1.01, rprop_decrease=0.99, rprop_min_step=0, rprop_max_step=100,\n learning_rate=0.01):\n \"\"\"\n Rprop optimizer.\n See http://sci2s.ugr.es/keel/pdf/algorithm/articulo/2003-Neuro-Igel-IRprop+.pdf.\n \"\"\"\n for param, grad in zip(params, gradients):\n grad_tm1 = theano.shared(np.zeros_like(param.get_value()), name=param.name + '_grad')\n step_tm1 = theano.shared(np.zeros_like(param.get_value()) + learning_rate, name=param.name+ '_step')\n\n test = grad * grad_tm1\n same = T.gt(test, 0)\n diff = T.lt(test, 0)\n step = T.minimum(rprop_max_step, T.maximum(rprop_min_step, step_tm1 * (\n T.eq(test, 0) +\n same * rprop_increase +\n diff * rprop_decrease)))\n grad = grad - diff * grad\n yield param, param - T.sgn(grad) * step\n yield grad_tm1, grad\n yield step_tm1, step", "response": "Generator for core Rprop optimizer."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef report(self):\n if self.logger:\n 
self.logger.info(\"accessed parameters:\")\n for key in self.used_parameters:\n self.logger.info(\" - %s %s\" % (key, \"(undefined)\" if key in self.undefined_parameters else \"\"))", "response": "Report usage of training parameters."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef new_block(self, *layers, **kwargs):\n from deepy.layers.block import Block\n block = Block(*layers, **kwargs)\n return block", "response": "Create a new block with some layers and kwargs"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef var(self, tensor_type, last_dim=0, test_shape=None):\n from deepy.tensor import var\n return var(tensor_type, last_dim=last_dim, test_shape=test_shape)", "response": "Returns the variable of the given type."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_vars_from_data(self, dataset, split=\"train\"):\n from deepy.core.neural_var import NeuralVariable\n vars = []\n if split == \"valid\":\n data_split = dataset.valid_set()\n elif split == \"test\":\n data_split = dataset.test_set()\n else:\n data_split = dataset.train_set()\n first_data_piece = list(data_split)[0]\n for i, numpy_tensor in enumerate(first_data_piece):\n if numpy_tensor.dtype == \"int64\":\n numpy_tensor = numpy_tensor.astype(\"int32\")\n if numpy_tensor.dtype == \"float64\":\n numpy_tensor = numpy_tensor.astype(env.FLOATX)\n type_map = {\n 0: \"scalar\",\n 1: \"vector\",\n 2: \"matrix\",\n 3: \"tensor3\",\n 4: \"tensor4\",\n 5: \"tensor5\",\n }\n tensor_type = type_map[numpy_tensor.ndim] if numpy_tensor.ndim in type_map else type_map[0]\n if numpy_tensor.dtype.kind == \"i\":\n tensor_type = \"i\" + tensor_type\n theano_tensor = getattr(TT, tensor_type)(\"input_{}_{}\".format(i + 1, tensor_type))\n last_dim = numpy_tensor.shape[-1]\n var = NeuralVariable(theano_tensor, 
dim=last_dim)\n var.set_test_value(numpy_tensor)\n vars.append(var)\n return vars", "response": "Create vars given a dataset and set test values."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef scan(self, func, sequences=None, outputs=None, non_sequences=None, block=None, **kwargs):\n results, updates = Scanner(func, sequences, outputs, non_sequences, neural_computation=True, **kwargs).compute()\n if block and updates:\n if type(updates) == dict:\n updates = updates.items()\n block.register_updates(*updates)\n return results", "response": "A loop function that uses the same usage with the theano one."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef loop(self, sequences=None, outputs=None, non_sequences=None, block=None, **kwargs):\n from loop import Loop\n return Loop(sequences, outputs, non_sequences, block, **kwargs)", "response": "Start a loop.\n Usage:\n ```\n with deepy.graph.loop(sequences={\"x\": x}, outputs={\"o\": None}) as vars:\n vars.o = vars.x + 1\n loop_outputs = deepy.graph.loop_outputs()\n result = loop_outputs.o\n ```"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_trainer(self, model, method='sgd', config=None, annealer=None, validator=None):\n from deepy.trainers import GeneralNeuralTrainer\n return GeneralNeuralTrainer(model, method=method, config=config, annealer=annealer, validator=validator)", "response": "Get a trainer to optimize given model."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef shared(self, value, name=None):\n if type(value) == int:\n final_value = np.array(value, dtype=\"int32\")\n elif type(value) == float:\n final_value = np.array(value, dtype=env.FLOATX)\n else:\n final_value = value\n\n return theano.shared(final_value, name=name)", "response": "Create a shared theano 
scalar value."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fill_parameters(self, path, blocks, exclude_free_params=False, check_parameters=False):\n if not os.path.exists(path):\n raise Exception(\"model {} does not exist\".format(path))\n # Decide which parameters to load\n normal_params = sum([nn.parameters for nn in blocks], [])\n all_params = sum([nn.all_parameters for nn in blocks], [])\n # Load parameters\n if path.endswith(\".gz\"):\n opener = gzip.open if path.lower().endswith('.gz') else open\n handle = opener(path, 'rb')\n saved_params = pickle.load(handle)\n handle.close()\n # Write parameters\n if len(all_params) != len(saved_params):\n logging.warning(\n \"parameters in the network: {}, parameters in the dumped model: {}\".format(len(all_params),\n len(saved_params)))\n for target, source in zip(all_params, saved_params):\n if not exclude_free_params or target not in normal_params:\n target.set_value(source)\n elif path.endswith(\".npz\"):\n arrs = np.load(path)\n # Write parameters\n if len(all_params) != len(arrs.keys()):\n logging.warning(\n \"parameters in the network: {}, parameters in the dumped model: {}\".format(len(all_params),\n len(arrs.keys())))\n for target, idx in zip(all_params, range(len(arrs.keys()))):\n if not exclude_free_params or target not in normal_params:\n source = arrs['arr_%d' % idx]\n target.set_value(source)\n else:\n raise Exception(\"File format of %s is not supported, use '.gz' or '.npz' or '.uncompressed.gz'\" % path)", "response": "Load parameters from file to fill all blocks sequentially."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the size of training data.", "response": "def train_size(self):\n \"\"\"\n Return size of training data. 
(optional)\n :rtype: number\n \"\"\"\n train_set = self.train_set()\n if isinstance(train_set, collections.Iterable):\n return len(list(train_set))\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef invoke(self):\n self._iter += 1\n if self._iter - max(self._trainer.best_iter, self._annealed_iter) >= self._patience:\n if self._annealed_times >= self._anneal_times:\n logging.info(\"ending\")\n self._trainer.exit()\n else:\n self._trainer.set_params(*self._trainer.best_params)\n self._learning_rate.set_value(self._learning_rate.get_value() * 0.5)\n self._annealed_times += 1\n self._annealed_iter = self._iter\n logging.info(\"annealed learning rate to %f\" % self._learning_rate.get_value())", "response": "This method is called by the training thread when training is complete."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nruns it return whether to end training.", "response": "def invoke(self):\n \"\"\"\n Run it, return whether to end training.\n \"\"\"\n self._iter += 1\n logging.info(\"{} epochs left to run\".format(self._patience - self._iter))\n if self._iter >= self._patience:\n self._trainer.exit()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef stack_reparameterization_layer(self, layer_size):\n self.rep_layer = ReparameterizationLayer(layer_size, sample=self.sample)\n self.stack_encoders(self.rep_layer)", "response": "Perform reparameterization trick for latent variables."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef stack_encoders(self, *layers):\n self.stack(*layers)\n self.encoding_layes.extend(layers)", "response": "Stack encoding layers, this must be done before stacking decoding layers."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create_2d_gaussian(dim, 
sigma):\n\n # check if the dimension is odd\n if dim % 2 == 0:\n raise ValueError(\"Kernel dimension should be odd\")\n\n # initialize the kernel\n kernel = np.zeros((dim, dim), dtype=np.float16)\n\n # calculate the center point\n center = dim/2\n\n # calculate the variance\n variance = sigma ** 2\n\n # calculate the normalization coefficient\n coeff = 1. / (2 * variance)\n\n # create the kernel\n for x in range(0, dim):\n for y in range(0, dim):\n x_val = abs(x - center)\n y_val = abs(y - center)\n numerator = x_val**2 + y_val**2\n denom = 2*variance\n\n kernel[x,y] = coeff * np.exp(-1. * numerator/denom)\n\n return kernel/sum(sum(kernel))", "response": "This function creates a 2d gaussian kernel whose standard deviation is denoted by sigma."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef stack_layer(self, layer, no_setup=False):\n if layer.name:\n layer.name += \"%d\" % (len(self.layers) + 1)\n if not self.layers:\n layer.init(self.input_dim, no_prepare=no_setup)\n else:\n layer.init(self.layers[-1].output_dim, no_prepare=no_setup)\n self._output = layer.compute_tensor(self._output)\n self._test_output = layer.compute_tensor(self._test_output)\n self._hidden_outputs.append(self._output)\n self.register_layer(layer)\n self.layers.append(layer)", "response": "Stack a neural layer."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nregistering the given layer so that its params will be trained.", "response": "def register_layer(self, layer):\n \"\"\"\n Register the layer so that its params will be trained.\n But the output of the layer will not be stacked.\n \"\"\"\n if type(layer) == Block:\n layer.fix()\n self.parameter_count += layer.parameter_count\n self.parameters.extend(layer.parameters)\n self.free_parameters.extend(layer.free_parameters)\n self.training_monitors.extend(layer.training_monitors)\n 
self.testing_monitors.extend(layer.testing_monitors)\n self.updates.extend(layer.updates)\n self.training_updates.extend(layer.training_updates)\n self.input_variables.extend(layer.external_inputs)\n self.target_variables.extend(layer.external_targets)\n\n self.training_callbacks.extend(layer.training_callbacks)\n self.testing_callbacks.extend(layer.testing_callbacks)\n self.epoch_callbacks.extend(layer.epoch_callbacks)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef monitor_layer_outputs(self):\n for layer, hidden in zip(self.layers, self._hidden_outputs):\n self.training_monitors.append(('mean(%s)' % (layer.name), abs(hidden).mean()))", "response": "Monitoring the outputs of each layer."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncomputes the network output.", "response": "def compute(self, *x):\n \"\"\"\n Return network output.\n \"\"\"\n self._compile()\n outs = self._compute(*x)\n if self._output_keys:\n return MapDict(dict(zip(self._output_keys, outs)))\n else:\n return outs"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef save_params(self, path, new_thread=False):\n save_logger.info(path)\n param_variables = self.all_parameters\n params = [p.get_value().copy() for p in param_variables]\n if new_thread:\n thread = Thread(target=save_network_params, args=(params, path))\n thread.start()\n else:\n save_network_params(params, path)\n self.train_logger.save(path)", "response": "Save parameters to file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_params(self, path, exclude_free_params=False):\n if not os.path.exists(path): return;\n logging.info(\"loading parameters from %s\" % path)\n # Decide which parameters to load\n if exclude_free_params:\n params_to_load = self.parameters\n else:\n params_to_load = self.all_parameters\n # Load 
parameters\n if path.endswith(\".gz\"):\n opener = gzip.open if path.lower().endswith('.gz') else open\n handle = opener(path, 'rb')\n saved_params = pickle.load(handle)\n handle.close()\n # Write parameters\n for target, source in zip(params_to_load, saved_params):\n logging.info('%s: setting value %s', target.name, source.shape)\n target.set_value(source)\n elif path.endswith(\".npz\"):\n arrs = np.load(path)\n # Write parameters\n for target, idx in zip(params_to_load, range(len(arrs.keys()))):\n source = arrs['arr_%d' % idx]\n logging.info('%s: setting value %s', target.name, source.shape)\n target.set_value(source)\n else:\n raise Exception(\"File format of %s is not supported, use '.gz' or '.npz' or '.uncompressed.gz'\" % path)\n\n self.train_logger.load(path)", "response": "Load parameters from file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef init(self, input_dim=0, input_dims=None, no_prepare=False):\n if self.initialized:\n return\n # configure input dimensions\n if input_dims:\n self.input_dims = input_dims\n self.input_dim = input_dims[0]\n else:\n self.input_dim = input_dim\n self.input_dims = [input_dims]\n # set default output dimension\n if self.output_dim == 0:\n self.output_dim = self.input_dim\n self.initialized = True\n # call prepare\n if not no_prepare:\n self.prepare()\n return self", "response": "Initialize the layer.\n :param no_prepare: avoid calling preparation function"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef compute(self, *inputs, **kwargs):\n from deepy.core.neural_var import NeuralVariable\n from deepy.core.graph import graph\n if type(inputs[0]) != NeuralVariable:\n raise SystemError(\"The input of `compute` must be NeuralVar\")\n\n dims = [t.dim() for t in inputs]\n if len(inputs) == 1:\n self.init(input_dim=dims[0])\n else:\n self.init(input_dims=dims)\n # Check block\n if self.parameters and 
not self._linked_block:\n self.belongs_to(graph.default_block())\n # convert kwargs\n train_kwargs, _, _ = convert_to_theano_var(kwargs)\n\n output = self.compute_tensor(*[t.tensor for t in inputs], **train_kwargs)\n\n if type(output) != list and type(output) != tuple:\n return NeuralVariable(output, dim=self.output_dim)\n else:\n return [NeuralVariable(*item) for item in zip(output, self.output_dims)]", "response": "Compute based on NeuralVariable."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef belongs_to(self, block):\n if self._linked_block:\n raise SystemError(\"The layer {} already belongs to {}\".format(self.name, self._linked_block.name))\n self._linked_block = block\n block.register_layer(self)\n return self", "response": "This method is used to make sure that this layer belongs to the given block or network."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef register_updates(self, *updates):\n for key, node in updates:\n if key not in self._registered_updates:\n self.updates.append((key, node))\n self._registered_updates.add(key)", "response": "Register updates that will be executed in each iteration."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef register_training_updates(self, *updates):\n for key, node in updates:\n if key not in self._registered_training_updates:\n self.training_updates.append((key, node))\n self._registered_training_updates.add(key)", "response": "Register updates that will only be executed in the training phase."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef register_monitors(self, *monitors):\n for key, node in monitors:\n if key not in self._registered_monitors:\n node *= 1.0 # Avoid CudaNdarray\n self.training_monitors.append((key, node))\n self.testing_monitors.append((key, node))\n 
self._registered_monitors.add(key)", "response": "Register monitors they should be tuple of name and Theano variable."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the L2 norm of multiple tensors.", "response": "def multiple_l2_norm(tensors):\n \"\"\"\n Get the L2 norm of multiple tensors.\n This function is taken from blocks.\n \"\"\"\n # Another way for doing this, I don't know which one is fast\n # return T.sqrt(sum(T.sum(t ** 2) for t in tensors))\n flattened = [T.as_tensor_variable(t).flatten() for t in tensors]\n flattened = [(t if t.ndim > 0 else t.dimshuffle('x'))\n for t in flattened]\n joined = T.join(0, *flattened)\n return T.sqrt(T.sqr(joined).sum())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef dump_one(elt_to_pickle, file_obj):\n pickled_elt_str = dumps(elt_to_pickle)\n file_obj.write(pickled_elt_str)\n # record separator is a blank line\n # (since pickled_elt_str might contain its own newlines)\n file_obj.write('\\n\\n')", "response": "Dumps one element to file_obj"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads contents from file_obj returning a generator that yields one element at a time", "response": "def load(file_obj):\n \"\"\"\n load contents from file_obj, returning a generator that yields one\n element at a time\n \"\"\"\n cur_elt = []\n for line in file_obj:\n cur_elt.append(line)\n\n if line == '\\n':\n pickled_elt_str = ''.join(cur_elt)\n cur_elt = []\n try:\n elt = loads(pickled_elt_str)\n except ValueError:\n continue\n\n yield elt"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfix the block, register all the parameters of sub layers. 
:return:", "response": "def fix(self):\n \"\"\"\n Fix the block, register all the parameters of sub layers.\n :return:\n \"\"\"\n if not self.fixed:\n for layer in self.layers:\n if not layer.initialized:\n raise Exception(\"All sub layers in a block must be initialized when fixing it.\")\n self.register_inner_layers(layer)\n self.fixed = True"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nregister one connected layer.", "response": "def register_layer(self, layer):\n \"\"\"\n Register one connected layer.\n :type layer: NeuralLayer\n \"\"\"\n if self.fixed:\n raise Exception(\"After a block is fixed, no more layers can be registered.\")\n self.layers.append(layer)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading parameters to the block.", "response": "def load_params(self, path, exclude_free_params=False):\n from deepy.core import graph\n \"\"\"\n Load parameters to the block.\n \"\"\"\n from deepy.core.comp_graph import ComputationalGraph\n model = graph.compile(blocks=[self])\n model.load_params(path, exclude_free_params=exclude_free_params)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef compute_step(self, state, lstm_cell=None, input=None, additional_inputs=None):\n if not self.initialized:\n input_dim = None\n if input and hasattr(input.tag, 'last_dim'):\n input_dim = input.tag.last_dim\n self.init(input_dim)\n\n input_map = self.merge_inputs(input, additional_inputs=additional_inputs)\n input_map.update({\"state\": state, \"lstm_cell\": lstm_cell})\n output_map = self.compute_new_state(input_map)\n outputs = [output_map.pop(\"state\")]\n outputs += output_map.values()\n for tensor in outputs:\n tensor.tag.last_dim = self.hidden_size\n if len(outputs) == 1:\n return outputs[0]\n else:\n return outputs", "response": "Compute one step in the RNN and GRU."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 
function for\ngetting initial states for the cluster.", "response": "def get_initial_states(self, input_var, init_state=None):\n \"\"\"\n :type input_var: T.var\n :rtype: dict\n \"\"\"\n initial_states = {}\n for state in self.state_names:\n if state != \"state\" or not init_state:\n if self._input_type == 'sequence' and input_var.ndim == 2:\n init_state = T.alloc(np.cast[env.FLOATX](0.), self.hidden_size)\n else:\n init_state = T.alloc(np.cast[env.FLOATX](0.), input_var.shape[0], self.hidden_size)\n initial_states[state] = init_state\n return initial_states"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a dictionary of step inputs for the given input variable.", "response": "def get_step_inputs(self, input_var, states=None, mask=None, additional_inputs=None):\n \"\"\"\n :type input_var: T.var\n :rtype: dict\n \"\"\"\n step_inputs = {}\n if self._input_type == \"sequence\":\n if not additional_inputs:\n additional_inputs = []\n if mask:\n step_inputs['mask'] = mask.dimshuffle(1, 0)\n step_inputs.update(self.merge_inputs(input_var, additional_inputs=additional_inputs))\n else:\n # step_inputs[\"mask\"] = mask.dimshuffle((1,0)) if mask else None\n if additional_inputs:\n step_inputs.update(self.merge_inputs(None, additional_inputs=additional_inputs))\n if states:\n for name in self.state_names:\n step_inputs[name] = states[name]\n\n return step_inputs"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting a NeuralVariable or list or dict to theano vars.", "response": "def convert_to_theano_var(obj):\n \"\"\"\n Convert neural vars to theano vars.\n :param obj: NeuralVariable or list or dict or tuple\n :return: theano var, test var, tensor found, neural var found\n \"\"\"\n from deepy.core.neural_var import NeuralVariable\n if type(obj) == tuple:\n return tuple(convert_to_theano_var(list(obj)))\n if type(obj) == list:\n unpacked_list = map(convert_to_theano_var, obj)\n normal_list = []\n 
test_list = []\n theano_var_found = False\n neural_var_found = False\n for normal_var, tensor_found, neural_found in unpacked_list:\n normal_list.append(normal_var)\n if tensor_found: theano_var_found = True\n if neural_found: neural_var_found = True\n return normal_list, theano_var_found, neural_var_found\n elif type(obj) == dict:\n normal_map = {}\n theano_var_found = False\n neural_var_found = False\n for key in obj:\n normal_var, tensor_found, neural_found = convert_to_theano_var(obj[key])\n normal_map[key] = normal_var\n if tensor_found: theano_var_found = True\n if neural_found: neural_var_found = True\n return normal_map, theano_var_found, neural_var_found\n elif type(obj) == MapDict:\n normal_map = {}\n theano_var_found = False\n neural_var_found = False\n for key in obj:\n normal_var, tensor_found, neural_found = convert_to_theano_var(obj[key])\n normal_map[key] = normal_var\n if tensor_found: theano_var_found = True\n if neural_found: neural_var_found = True\n return MapDict(normal_map), theano_var_found, neural_var_found\n elif type(obj) == NeuralVariable:\n theano_tensor = obj.tensor\n theano_tensor.tag.last_dim = obj.dim()\n return theano_tensor, False, True\n elif type(obj) == TensorVariable:\n return obj, True, False\n elif type(obj) == slice:\n normal_args = []\n theano_var_found = False\n neural_var_found = False\n for arg in [obj.start, obj.stop, obj.step]:\n normal_var, tensor_found, neural_found = convert_to_theano_var(arg)\n normal_args.append(normal_var)\n if tensor_found: theano_var_found = True\n if neural_found: neural_var_found = True\n return slice(*normal_args), theano_var_found, neural_var_found\n else:\n return obj, False, False"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts object and a test object into neural var.", "response": "def convert_to_neural_var(obj):\n \"\"\"\n Convert object and a test object into neural var.\n :param obj: tensor or list or dict or tuple\n :param 
test_obj: NeuralVar or list or dict or tuple\n :return:\n \"\"\"\n from theano.tensor.var import TensorVariable\n from deepy.core.neural_var import NeuralVariable\n if type(obj) == list:\n return [convert_to_neural_var(item) for item in obj]\n elif type(obj) == tuple:\n return tuple(convert_to_neural_var(list(obj)))\n elif type(obj) == dict:\n merged_map = {}\n for key in obj:\n merged_map[key] = convert_to_neural_var(obj[key])\n return merged_map\n elif type(obj) == MapDict:\n merged_map = {}\n for key in obj:\n merged_map[key] = convert_to_neural_var(obj[key])\n return MapDict(merged_map)\n elif type(obj) == TensorVariable:\n deepy_var = NeuralVariable(obj)\n if hasattr(obj, 'tag') and hasattr(obj.tag, 'last_dim'):\n deepy_var.output_dim = obj.tag.last_dim\n return deepy_var\n else:\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef neural_computation(original_func, prefer_tensor=False):\n\n def wrapper(*args, **kwargs):\n normal_args, tensor_found_in_args, neural_found_in_args = convert_to_theano_var(args)\n normal_kwargs, tensor_found_in_kwargs, neural_found_in_kwargs = convert_to_theano_var(kwargs)\n\n tensor_found = tensor_found_in_args or tensor_found_in_kwargs\n neural_found = neural_found_in_args or neural_found_in_kwargs\n\n if tensor_found and neural_found:\n raise Exception(\"Theano tensor variables can not be used together with neural variables.\")\n\n normal_result = original_func(*normal_args, **normal_kwargs)\n\n if tensor_found or (not neural_found and prefer_tensor):\n # No neural variables are inputted, so output tensors\n return normal_result\n else:\n # Output neural variables, auto set output_dim\n result_var = convert_to_neural_var(normal_result)\n if (isinstance(normal_result, TensorVariable) and\n hasattr(normal_result.tag, \"test_value\") and\n hasattr(normal_result.tag.test_value, \"shape\") and\n normal_result.tag.test_value.shape):\n result_var.output_dim = 
normal_result.tag.test_value.shape[-1]\n return result_var\n return wrapper", "response": "Decorator for theano - based neural computation of a single neural variable."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef onehot_tensor(i_matrix, vocab_size):\n dim0, dim1 = i_matrix.shape\n i_vector = i_matrix.reshape((-1,))\n hot_matrix = T.extra_ops.to_one_hot(i_vector, vocab_size).reshape((dim0, dim1, vocab_size))\n return hot_matrix", "response": "One - hot tensor for a batch x time series."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating |oauth2| request elements.", "response": "def create_request_elements(\n cls, request_type, credentials, url, method='GET', params=None,\n headers=None, body='', secret=None, redirect_uri='', scope='',\n csrf='', user_state=''\n ):\n \"\"\"\n Creates |oauth2| request elements.\n \"\"\"\n\n headers = headers or {}\n params = params or {}\n\n consumer_key = credentials.consumer_key or ''\n consumer_secret = credentials.consumer_secret or ''\n token = credentials.token or ''\n refresh_token = credentials.refresh_token or credentials.token or ''\n\n # Separate url base and query parameters.\n url, base_params = cls._split_url(url)\n\n # Add params extracted from URL.\n params.update(dict(base_params))\n\n if request_type == cls.USER_AUTHORIZATION_REQUEST_TYPE:\n # User authorization request.\n # TODO: Raise error for specific message for each missing argument.\n if consumer_key and redirect_uri and (\n csrf or not cls.supports_csrf_protection):\n params['client_id'] = consumer_key\n params['redirect_uri'] = redirect_uri\n params['scope'] = scope\n if cls.supports_user_state:\n params['state'] = base64.urlsafe_b64encode(\n json.dumps(\n {\"csrf\": csrf, \"user_state\": user_state}\n ).encode('utf-8')\n )\n else:\n params['state'] = csrf\n params['response_type'] = 'code'\n\n # Add authorization header\n 
headers.update(cls._authorization_header(credentials))\n else:\n raise OAuth2Error(\n 'Credentials with valid consumer_key and arguments '\n 'redirect_uri, scope and state are required to create '\n 'OAuth 2.0 user authorization request elements!')\n\n elif request_type == cls.ACCESS_TOKEN_REQUEST_TYPE:\n # Access token request.\n if consumer_key and consumer_secret:\n params['code'] = token\n params['client_id'] = consumer_key\n params['client_secret'] = consumer_secret\n params['redirect_uri'] = redirect_uri\n params['grant_type'] = 'authorization_code'\n\n # TODO: Check whether all providers accept it\n headers.update(cls._authorization_header(credentials))\n else:\n raise OAuth2Error(\n 'Credentials with valid token, consumer_key, '\n 'consumer_secret and argument redirect_uri are required '\n 'to create OAuth 2.0 access token request elements!')\n\n elif request_type == cls.REFRESH_TOKEN_REQUEST_TYPE:\n # Refresh access token request.\n if refresh_token and consumer_key and consumer_secret:\n params['refresh_token'] = refresh_token\n params['client_id'] = consumer_key\n params['client_secret'] = consumer_secret\n params['grant_type'] = 'refresh_token'\n else:\n raise OAuth2Error(\n 'Credentials with valid refresh_token, consumer_key, '\n 'consumer_secret are required to create OAuth 2.0 '\n 'refresh token request elements!')\n\n elif request_type == cls.PROTECTED_RESOURCE_REQUEST_TYPE:\n # Protected resource request.\n\n # Add Authorization header. 
See:\n # http://tools.ietf.org/html/rfc6749#section-7.1\n if credentials.token_type == cls.BEARER:\n # http://tools.ietf.org/html/rfc6750#section-2.1\n headers.update(\n {'Authorization': 'Bearer {0}'.format(credentials.token)})\n\n elif token:\n params['access_token'] = token\n else:\n raise OAuth2Error(\n 'Credentials with valid token are required to create '\n 'OAuth 2.0 protected resources request elements!')\n\n request_elements = core.RequestElements(\n url, method, params, headers, body)\n\n return cls._x_request_elements_filter(\n request_type, request_elements, credentials)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndecoding state and return param.", "response": "def decode_state(cls, state, param='user_state'):\n \"\"\"\n Decode state and return param.\n\n :param str state:\n state parameter passed through by provider\n\n :param str param:\n key to query from decoded state variable. Options include 'csrf'\n and 'user_state'.\n\n :returns:\n string value from decoded state\n\n \"\"\"\n if state and cls.supports_user_state:\n # urlsafe_b64 may include = which the browser quotes so must\n # unquote. Cast to str to avoid b64decode translation error. 
Base64\n # should be str compatible.\n return json.loads(base64.urlsafe_b64decode(\n unquote(str(state))).decode('utf-8'))[param]\n else:\n return state if param == 'csrf' else ''"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nrefresh the credentials for the current user.", "response": "def refresh_credentials(self, credentials):\n \"\"\"\n Refreshes :class:`.Credentials` if it makes sense.\n\n :param credentials:\n :class:`.Credentials` to be refreshed.\n\n :returns:\n :class:`.Response`.\n\n \"\"\"\n\n if not self._x_refresh_credentials_if(credentials):\n return\n\n # We need consumer key and secret to make this kind of request.\n cfg = credentials.config.get(credentials.provider_name)\n credentials.consumer_key = cfg.get('consumer_key')\n credentials.consumer_secret = cfg.get('consumer_secret')\n\n request_elements = self.create_request_elements(\n request_type=self.REFRESH_TOKEN_REQUEST_TYPE,\n credentials=credentials,\n url=self.access_token_url,\n method='POST'\n )\n\n self._log(logging.INFO, u'Refreshing credentials.')\n response = self._fetch(*request_elements)\n\n # We no longer need consumer info.\n credentials.consumer_key = None\n credentials.consumer_secret = None\n\n # Extract the refreshed data.\n access_token = response.data.get('access_token')\n refresh_token = response.data.get('refresh_token')\n\n # Update credentials only if there is an access token.\n if access_token:\n credentials.token = access_token\n credentials.expire_in = response.data.get('expires_in')\n\n # Update refresh token only if there is a new one.\n if refresh_token:\n credentials.refresh_token = refresh_token\n\n # Handle different naming conventions across providers.\n credentials = self._x_credentials_parser(\n credentials, response.data)\n\n return response"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing the credentials from the data.", "response": "def _x_credentials_parser(credentials, 
data):\n \"\"\"\n We need to override this method to fix Facebooks naming deviation.\n \"\"\"\n\n # Facebook returns \"expires\" instead of \"expires_in\".\n credentials.expire_in = data.get('expires')\n\n if data.get('token_type') == 'bearer':\n # TODO: cls is not available here, hardcode for now.\n credentials.token_type = 'Bearer'\n\n return credentials"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nfilter out the request elements that are not needed by Google.", "response": "def _x_request_elements_filter(cls, request_type, request_elements,\n credentials):\n \"\"\"\n Google doesn't accept client ID and secret to be at the same time in\n request parameters and in the basic authorization header in the access\n token request.\n \"\"\"\n if request_type is cls.ACCESS_TOKEN_REQUEST_TYPE:\n params = request_elements[2]\n del params['client_id']\n del params['client_secret']\n return request_elements"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nlogging-in handler for the user.", "response": "def login(provider_name):\n \"\"\"\n Login handler, must accept both GET and POST to be able to use OpenID.\n \"\"\"\n\n # We need response object for the WerkzeugAdapter.\n response = make_response()\n\n # Log the user in, pass it the adapter and the provider name.\n result = authomatic.login(\n WerkzeugAdapter(\n request,\n response),\n provider_name)\n\n # If there is no LoginResult object, the login procedure is still pending.\n if result:\n if result.user:\n # We need to update the user to get more info.\n result.user.update()\n\n # The rest happens inside the template.\n return render_template('login.html', result=result)\n\n # Don't forget to return the response.\n return response"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nnormalizing a dictionary of items that are single - item iterables with the value of its index 0.", "response": "def normalize_dict(dict_):\n \"\"\"\n 
Replaces all values that are single-item iterables with the value of its\n index 0.\n\n :param dict dict_:\n Dictionary to normalize.\n\n :returns:\n Normalized dictionary.\n\n \"\"\"\n\n return dict([(k, v[0] if not isinstance(v, str) and len(v) == 1 else v)\n for k, v in list(dict_.items())])"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts list of tuples to dictionary with duplicate keys converted to lists.", "response": "def items_to_dict(items):\n \"\"\"\n Converts list of tuples to dictionary with duplicate keys converted to\n lists.\n\n :param list items:\n List of tuples.\n\n :returns:\n :class:`dict`\n\n \"\"\"\n\n res = collections.defaultdict(list)\n\n for k, v in items:\n res[k].append(v)\n\n return normalize_dict(dict(res))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef json_qs_parser(body):\n try:\n # Try JSON first.\n return json.loads(body)\n except (OverflowError, TypeError, ValueError):\n pass\n\n try:\n # Then XML.\n return ElementTree.fromstring(body)\n except (ElementTree.ParseError, TypeError, ValueError):\n pass\n\n # Finally query string.\n return dict(parse.parse_qsl(body))", "response": "Parses response body from JSON XML or query string."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nresolve a provider class by name.", "response": "def resolve_provider_class(class_):\n \"\"\"\n Returns a provider class.\n\n :param class_name: :class:`string` or\n :class:`authomatic.providers.BaseProvider` subclass.\n\n \"\"\"\n\n if isinstance(class_, str):\n # prepare path for authomatic.providers package\n path = '.'.join([__package__, 'providers', class_])\n\n # try to import class by string from providers module or by fully\n # qualified path\n return import_string(class_, True) or import_string(path)\n else:\n return class_"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is 
the following Python 3 function doing\ndef id_to_name(config, short_name):\n\n for k, v in list(config.items()):\n if v.get('id') == short_name:\n return k\n\n raise Exception(\n 'No provider with id={0} found in the config!'.format(short_name))", "response": "Returns the provider key based on its id value."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_cookie(self, delete=None):\n value = 'deleted' if delete else self._serialize(self.data)\n split_url = parse.urlsplit(self.adapter.url)\n domain = split_url.netloc.split(':')[0]\n\n # Work-around for issue #11, failure of WebKit-based browsers to accept\n # cookies set as part of a redirect response in some circumstances.\n if '.' not in domain:\n template = '{name}={value}; Path={path}; HttpOnly{secure}{expires}'\n else:\n template = ('{name}={value}; Domain={domain}; Path={path}; '\n 'HttpOnly{secure}{expires}')\n\n return template.format(\n name=self.name,\n value=value,\n domain=domain,\n path=split_url.path,\n secure='; Secure' if self.secure else '',\n expires='; Expires=Thu, 01-Jan-1970 00:00:01 GMT' if delete else ''\n )", "response": "Creates the value for the Set-Cookie HTTP header."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef save(self):\n if self.data:\n cookie = self.create_cookie()\n cookie_len = len(cookie)\n\n if cookie_len > 4093:\n raise SessionError('Cookie too long! 
The cookie size {0} '\n 'is more than 4093 bytes.'\n .format(cookie_len))\n\n self.adapter.set_header('Set-Cookie', cookie)\n\n # Reset data\n self._data = {}", "response": "Saves the current state of the object to the adapter."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_data(self):\n cookie = self.adapter.cookies.get(self.name)\n return self._deserialize(cookie) if cookie else {}", "response": "Extracts the session data from the cookie."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the session data lazily.", "response": "def data(self):\n \"\"\"\n Gets session data lazily.\n \"\"\"\n if not self._data:\n self._data = self._get_data()\n # Always return a dict, even if deserialization returned nothing\n if self._data is None:\n self._data = {}\n return self._data"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a signature for the session.", "response": "def _signature(self, *parts):\n \"\"\"\n Creates signature for the session.\n \"\"\"\n signature = hmac.new(six.b(self.secret), digestmod=hashlib.sha1)\n signature.update(six.b('|'.join(parts)))\n return signature.hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _serialize(self, value):\n\n # data = copy.deepcopy(value)\n data = value\n\n # 1. Serialize\n serialized = pickle.dumps(data).decode('latin-1')\n\n # 2. Encode\n # Percent encoding produces a smaller result than urlsafe base64.\n encoded = parse.quote(serialized, '')\n\n # 3. 
Concatenate\n timestamp = str(int(time.time()))\n signature = self._signature(self.name, encoded, timestamp)\n concatenated = '|'.join([encoded, timestamp, signature])\n\n return concatenated", "response": "Converts the value to a signed string with timestamp."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _deserialize(self, value):\n\n # 3. Split\n encoded, timestamp, signature = value.split('|')\n\n # Verify signature\n if not signature == self._signature(self.name, encoded, timestamp):\n raise SessionError('Invalid signature \"{0}\"!'.format(signature))\n\n # Verify timestamp\n if int(timestamp) < int(time.time()) - self.max_age:\n return None\n\n # 2. Decode\n decoded = parse.unquote(encoded)\n\n # 1. Deserialize\n deserialized = pickle.loads(decoded.encode('latin-1'))\n\n return deserialized", "response": "Deserializes and verifies the value created by _serialize."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert the user instance to a dict.", "response": "def to_dict(self):\n \"\"\"\n Converts the :class:`.User` instance to a :class:`dict`.\n\n :returns:\n :class:`dict`\n\n \"\"\"\n\n # copy the dictionary\n d = copy.copy(self.__dict__)\n\n # Keep only the provider name to avoid circular reference\n d['provider'] = self.provider.name\n d['credentials'] = self.credentials.serialize(\n ) if self.credentials else None\n d['birth_date'] = str(d['birth_date'])\n\n # Remove content\n d.pop('content')\n\n if isinstance(self.data, ElementTree.Element):\n d['data'] = None\n\n return d"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets the expiration time of the resource.", "response": "def expire_in(self, value):\n \"\"\"\n Computes :attr:`.expiration_time` when the value is set.\n \"\"\"\n\n # pylint:disable=attribute-defined-outside-init\n if value:\n self._expiration_time = int(time.time()) + int(value)\n self._expire_in = 
value"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef valid(self):\n\n if self.expiration_time:\n return self.expiration_time > int(time.time())\n else:\n return True", "response": "Returns True if credentials are valid False otherwise."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn True if the credentials expire sooner than specified number of seconds.", "response": "def expire_soon(self, seconds):\n \"\"\"\n Returns ``True`` if credentials expire sooner than specified.\n\n :param int seconds:\n Number of seconds.\n\n :returns:\n ``True`` if credentials expire sooner than specified,\n else ``False``.\n\n \"\"\"\n\n if self.expiration_time:\n return self.expiration_time < int(time.time()) + int(seconds)\n else:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef refresh(self, force=False, soon=86400):\n\n if hasattr(self.provider_class, 'refresh_credentials'):\n if force or self.expire_soon(soon):\n logging.info('PROVIDER NAME: {0}'.format(self.provider_name))\n return self.provider_class(\n self, None, self.provider_name).refresh_credentials(self)", "response": "Refreshes the credentials of the user in the cache."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef serialize(self):\n\n if self.provider_id is None:\n raise ConfigError(\n 'To serialize credentials you need to specify a '\n 'unique integer under the \"id\" key in the config '\n 'for each provider!')\n\n # Get the provider type specific items.\n rest = self.provider_type_class().to_tuple(self)\n\n # Provider ID and provider type ID are always the first two items.\n result = (self.provider_id, self.provider_type_id) + rest\n\n # Make sure that all items are strings.\n stringified = [str(i) for i in result]\n\n # Concatenate by newline.\n concatenated = '\\n'.join(stringified)\n\n # 
Percent encode.\n return parse.quote(concatenated, '')", "response": "Converts the credentials to a percent encoded string to be stored for later use."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef popup_js(self, callback_name=None, indent=None,\n custom=None, stay_open=False):\n \"\"\"\n Returns JavaScript that:\n\n #. Triggers the ``options.onLoginComplete(result, closer)``\n handler set with the :ref:`authomatic.setup() `\n function of :ref:`javascript.js `.\n #. Calls the JavaScript callback specified by :data:`callback_name`\n on the opener of the *login handler popup* and passes it the\n *login result* JSON object as first argument and the `closer`\n function which you should call in your callback to close the popup.\n\n :param str callback_name:\n The name of the javascript callback e.g. ``foo.bar.loginCallback``\n will result in ``window.opener.foo.bar.loginCallback(result);``\n in the HTML.\n\n :param int indent:\n The number of spaces to indent the JSON result object.\n If ``0`` or negative, only newlines are added.\n If ``None``, no newlines are added.\n\n :param custom:\n Any JSON serializable object that will be passed to the\n ``result.custom`` attribute.\n\n :param bool stay_open:\n If ``True``, the popup will stay open.\n\n :returns:\n :class:`str` with JavaScript.\n\n \"\"\"\n\n custom_callback = \"\"\"\n try {{ window.opener.{cb}(result, closer); }} catch(e) {{}}\n \"\"\".format(cb=callback_name) if callback_name else ''\n\n # TODO: Move the window.close() to the opener\n return \"\"\"\n (function(){{\n\n closer = function(){{\n window.close();\n }};\n\n var result = {result};\n result.custom = {custom};\n\n {custom_callback}\n\n try {{\n window.opener.authomatic.loginComplete(result, closer);\n }} catch(e) {{}}\n\n }})();\n\n \"\"\".format(result=self.to_json(indent),\n custom=json.dumps(custom),\n custom_callback=custom_callback,\n stay_open='// ' if stay_open else '')", "response": "Returns 
JavaScript that will display the user in the popup."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef popup_html(self, callback_name=None, indent=None,\n title='Login | {0}', custom=None, stay_open=False):\n \"\"\"\n Returns HTML with JavaScript that:\n\n #. Triggers the ``options.onLoginComplete(result, closer)`` handler\n set with the :ref:`authomatic.setup() ` function of\n :ref:`javascript.js `.\n #. Calls the JavaScript callback specified by :data:`callback_name`\n on the opener of the *login handler popup* and passes it the\n *login result* JSON object as first argument and the `closer`\n function which you should call in your callback to close the popup.\n\n :param str callback_name:\n The name of the javascript callback e.g. ``foo.bar.loginCallback``\n will result in ``window.opener.foo.bar.loginCallback(result);``\n in the HTML.\n\n :param int indent:\n The number of spaces to indent the JSON result object.\n If ``0`` or negative, only newlines are added.\n If ``None``, no newlines are added.\n\n :param str title:\n The text of the HTML title. 
You can use ``{0}`` tag inside,\n which will be replaced by the provider name.\n\n :param custom:\n Any JSON serializable object that will be passed to the\n ``result.custom`` attribute.\n\n :param bool stay_open:\n If ``True``, the popup will stay open.\n\n :returns:\n :class:`str` with HTML.\n\n \"\"\"\n\n return \"\"\"\n <!DOCTYPE html>\n <html>\n <head><title>{title}</title></head>\n <body>\n <script type=\"text/javascript\">\n {js}\n </script>\n </body>\n </html>\n \"\"\".format(\n title=title.format(self.provider.name if self.provider else ''),\n js=self.popup_js(callback_name, indent, custom, stay_open)\n )", "response": "Returns a popup HTML page with JavaScript that will be used to display the login result in the browser."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef is_binary_string(content):\n\n textchars = (bytearray([7, 8, 9, 10, 12, 13, 27]) +\n bytearray(range(0x20, 0x100)))\n return bool(content.translate(None, textchars))", "response": "Returns true if the string is binary data."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the content of the response.", "response": "def content(self):\n \"\"\"\n The whole response content.\n \"\"\"\n\n if not self._content:\n content = self.httplib_response.read()\n if self.is_binary_string(content):\n self._content = content\n else:\n self._content = content.decode('utf-8')\n return self._content"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef data(self):\n\n if not self._data:\n self._data = self.content_parser(self.content)\n return self._data", "response": "A dict of data parsed from content."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlogin to the specified provider.", "response": "def login(self, adapter, provider_name, callback=None,\n session=None, session_saver=None, **kwargs):\n \"\"\"\n If :data:`provider_name` specified, launches the login procedure for\n corresponding :doc:`provider ` and returns\n :class:`.LoginResult`.\n\n If 
:data:`provider_name` is empty, acts like\n :meth:`.Authomatic.backend`.\n\n .. warning::\n\n The method redirects the **user** to the **provider** which in\n turn redirects **him/her** back to the *request handler* where\n it has been called.\n\n :param str provider_name:\n Name of the provider as specified in the keys of the :doc:`config`.\n\n :param callable callback:\n If specified the method will call the callback with\n :class:`.LoginResult` passed as argument and will return nothing.\n\n :param bool report_errors:\n\n .. note::\n\n Accepts additional keyword arguments that will be passed to\n :doc:`provider ` constructor.\n\n :returns:\n :class:`.LoginResult`\n\n \"\"\"\n\n if provider_name:\n # retrieve required settings for current provider and raise\n # exceptions if missing\n provider_settings = self.config.get(provider_name)\n if not provider_settings:\n raise ConfigError('Provider name \"{0}\" not specified!'\n .format(provider_name))\n\n if not (session is None or session_saver is None):\n session = session\n session_saver = session_saver\n else:\n session = Session(adapter=adapter,\n secret=self.secret,\n max_age=self.session_max_age,\n name=self.prefix,\n secure=self.secure_cookie)\n\n session_saver = session.save\n\n # Resolve provider class.\n class_ = provider_settings.get('class_')\n if not class_:\n raise ConfigError(\n 'The \"class_\" key not specified in the config'\n ' for provider {0}!'.format(provider_name))\n ProviderClass = resolve_provider_class(class_)\n\n # FIXME: Find a nicer solution\n ProviderClass._logger = self._logger\n\n # instantiate provider class\n provider = ProviderClass(self,\n adapter=adapter,\n provider_name=provider_name,\n callback=callback,\n session=session,\n session_saver=session_saver,\n **kwargs)\n\n # return login result\n return provider.login()\n\n else:\n # Act like backend.\n self.backend(adapter)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\naccesses 
protected resource on behalf of the user.", "response": "def access(self, credentials, url, params=None, method='GET',\n headers=None, body='', max_redirects=5, content_parser=None):\n \"\"\"\n Accesses **protected resource** on behalf of the **user**.\n\n :param credentials:\n The **user's** :class:`.Credentials` (serialized or normal).\n\n :param str url:\n The **protected resource** URL.\n\n :param str method:\n HTTP method of the request.\n\n :param dict headers:\n HTTP headers of the request.\n\n :param str body:\n Body of ``POST``, ``PUT`` and ``PATCH`` requests.\n\n :param int max_redirects:\n Maximum number of HTTP redirects to follow.\n\n :param function content_parser:\n A function to be used to parse the :attr:`.Response.data`\n from :attr:`.Response.content`.\n\n :returns:\n :class:`.Response`\n\n \"\"\"\n\n # Deserialize credentials.\n credentials = Credentials.deserialize(self.config, credentials)\n\n # Resolve provider class.\n ProviderClass = credentials.provider_class\n logging.info('ACCESS HEADERS: {0}'.format(headers))\n # Access resource and return response.\n\n provider = ProviderClass(\n self, adapter=None, provider_name=credentials.provider_name)\n provider.credentials = credentials\n\n return provider.access(url=url,\n params=params,\n method=method,\n headers=headers,\n body=body,\n max_redirects=max_redirects,\n content_parser=content_parser)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef request_elements(\n self, credentials=None, url=None, method='GET', params=None,\n headers=None, body='', json_input=None, return_json=False\n ):\n \"\"\"\n Creates request elements for accessing **protected resource of a\n user**. Required arguments are :data:`credentials` and :data:`url`. 
You\n can pass :data:`credentials`, :data:`url`, :data:`method`, and\n :data:`params` as a JSON object.\n\n :param credentials:\n The **user's** credentials (can be serialized).\n\n :param str url:\n The url of the protected resource.\n\n :param str method:\n The HTTP method of the request.\n\n :param dict params:\n Dictionary of request parameters.\n\n :param dict headers:\n Dictionary of request headers.\n\n :param str body:\n Body of ``POST``, ``PUT`` and ``PATCH`` requests.\n\n :param str json_input:\n you can pass :data:`credentials`, :data:`url`, :data:`method`,\n :data:`params` and :data:`headers` in a JSON object.\n Values from arguments will be used for missing properties.\n\n ::\n\n {\n \"credentials\": \"###\",\n \"url\": \"https://example.com/api\",\n \"method\": \"POST\",\n \"params\": {\n \"foo\": \"bar\"\n },\n \"headers\": {\n \"baz\": \"bing\",\n \"Authorization\": \"Bearer ###\"\n },\n \"body\": \"Foo bar baz bing.\"\n }\n\n :param bool return_json:\n if ``True`` the function returns a json object.\n\n ::\n\n {\n \"url\": \"https://example.com/api\",\n \"method\": \"POST\",\n \"params\": {\n \"access_token\": \"###\",\n \"foo\": \"bar\"\n },\n \"headers\": {\n \"baz\": \"bing\",\n \"Authorization\": \"Bearer ###\"\n },\n \"body\": \"Foo bar baz bing.\"\n }\n\n :returns:\n :class:`.RequestElements` or JSON string.\n\n \"\"\"\n\n # Parse values from JSON\n if json_input:\n parsed_input = json.loads(json_input)\n\n credentials = parsed_input.get('credentials', credentials)\n url = parsed_input.get('url', url)\n method = parsed_input.get('method', method)\n params = parsed_input.get('params', params)\n headers = parsed_input.get('headers', headers)\n body = parsed_input.get('body', body)\n\n if not credentials and url:\n raise RequestElementsError(\n 'To create request elements, you must provide credentials '\n 'and URL either as keyword arguments or in the JSON object!')\n\n # Get the provider class\n credentials = 
Credentials.deserialize(self.config, credentials)\n ProviderClass = credentials.provider_class\n\n # Create request elements\n request_elements = ProviderClass.create_request_elements(\n ProviderClass.PROTECTED_RESOURCE_REQUEST_TYPE,\n credentials=credentials,\n url=url,\n method=method,\n params=params,\n headers=headers,\n body=body)\n\n if return_json:\n return request_elements.to_json()\n\n else:\n return request_elements", "response": "Creates request elements for accessing a protected resource of a user."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef backend(self, adapter):\n\n AUTHOMATIC_HEADER = 'Authomatic-Response-To'\n\n # Collect request params\n request_type = adapter.params.get('type', 'auto')\n json_input = adapter.params.get('json')\n credentials = adapter.params.get('credentials')\n url = adapter.params.get('url')\n method = adapter.params.get('method', 'GET')\n body = adapter.params.get('body', '')\n\n params = adapter.params.get('params')\n params = json.loads(params) if params else {}\n\n headers = adapter.params.get('headers')\n headers = json.loads(headers) if headers else {}\n\n ProviderClass = Credentials.deserialize(\n self.config, credentials).provider_class\n\n if request_type == 'auto':\n # If there is a \"callback\" param, it's a JSONP request.\n jsonp = params.get('callback')\n\n # JSONP is possible only with GET method.\n if ProviderClass.supports_jsonp and method == 'GET':\n request_type = 'elements'\n else:\n # Remove the JSONP callback\n if jsonp:\n params.pop('callback')\n request_type = 'fetch'\n\n if request_type == 'fetch':\n # Access protected resource\n response = self.access(\n credentials, url, params, method, headers, body)\n result = response.content\n\n # Forward status\n adapter.status = str(response.status) + ' ' + str(response.reason)\n\n # Forward headers\n for k, v in response.getheaders():\n logging.info(' {0}: {1}'.format(k, v))\n 
adapter.set_header(k, v)\n\n elif request_type == 'elements':\n # Create request elements\n if json_input:\n result = self.request_elements(\n json_input=json_input, return_json=True)\n else:\n result = self.request_elements(credentials=credentials,\n url=url,\n method=method,\n params=params,\n headers=headers,\n body=body,\n return_json=True)\n\n adapter.set_header('Content-Type', 'application/json')\n else:\n result = '{\"error\": \"Bad Request!\"}'\n\n # Add the authomatic header\n adapter.set_header(AUTHOMATIC_HEADER, request_type)\n\n # Write result to response\n adapter.write(result)", "response": "Converts a webapp2.RequestHandler to a JSON backend which can be used with a specific HTTP method."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _normalize_params(params):\n\n if isinstance(params, dict):\n params = list(params.items())\n\n # remove \"realm\" and \"oauth_signature\"\n params = sorted([\n (k, v) for k, v in params\n if k not in ('oauth_signature', 'realm')\n ])\n # sort\n # convert to query string\n qs = parse.urlencode(params)\n # replace \"+\" with \"%20\"\n qs = qs.replace('+', '%20')\n # replace \"%7E\" with \"~\"\n qs = qs.replace('%7E', '~')\n\n return qs", "response": "Normalizes the parameters of the current request."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate the base string for an HMAC-SHA1 signature as specified in OAuth 1.0a section 9.1.3.", "response": "def _create_base_string(method, base, params):\n \"\"\"\n Returns base string for HMAC-SHA1 signature as specified in:\n http://oauth.net/core/1.0a/#rfc.section.9.1.3.\n \"\"\"\n\n normalized_qs = _normalize_params(params)\n return _join_by_ampersand(method, base, normalized_qs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_signature(cls, method, base, params,\n consumer_secret, token_secret=''):\n \"\"\"\n Returns HMAC-SHA1 signature as specified at:\n http://oauth.net/core/1.0a/#rfc.section.9.2.\n\n :param str method:\n HTTP method of the request to be signed.\n\n :param str base:\n Base URL of the request without query string and fragment.\n\n :param dict params:\n Dictionary or list of tuples of the request parameters.\n\n :param str consumer_secret:\n :attr:`.core.Consumer.secret`\n\n :param str token_secret:\n Access token secret as specified in\n http://oauth.net/core/1.0a/#anchor3.\n\n :returns:\n The signature string.\n\n \"\"\"\n\n base_string = _create_base_string(method, base, params)\n key = cls._create_key(consumer_secret, token_secret)\n\n hashed = hmac.new(\n six.b(key),\n base_string.encode('utf-8'),\n hashlib.sha1)\n\n base64_encoded = binascii.b2a_base64(hashed.digest())[:-1]\n\n return base64_encoded", "response": "Creates a base64-encoded HMAC-SHA1 signature for the request."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef create_request_elements(\n cls, request_type, credentials, url, params=None, headers=None,\n body='', method='GET', verifier='', callback=''\n ):\n \"\"\"\n Creates |oauth1| request elements.\n \"\"\"\n\n params = params or {}\n headers = headers or {}\n\n consumer_key = credentials.consumer_key or ''\n consumer_secret = credentials.consumer_secret or ''\n token = credentials.token or ''\n token_secret = credentials.token_secret or ''\n\n # separate url base and query parameters\n url, 
base_params = cls._split_url(url)\n\n # add extracted params to future params\n params.update(dict(base_params))\n\n if request_type == cls.USER_AUTHORIZATION_REQUEST_TYPE:\n # no need for signature\n if token:\n params['oauth_token'] = token\n else:\n raise OAuth1Error(\n 'Credentials with valid token are required to create '\n 'User Authorization URL!')\n else:\n # signature needed\n if request_type == cls.REQUEST_TOKEN_REQUEST_TYPE:\n # Request Token URL\n if consumer_key and consumer_secret and callback:\n params['oauth_consumer_key'] = consumer_key\n params['oauth_callback'] = callback\n else:\n raise OAuth1Error(\n 'Credentials with valid consumer_key, consumer_secret '\n 'and callback are required to create Request Token '\n 'URL!')\n\n elif request_type == cls.ACCESS_TOKEN_REQUEST_TYPE:\n # Access Token URL\n if consumer_key and consumer_secret and token and verifier:\n params['oauth_token'] = token\n params['oauth_consumer_key'] = consumer_key\n params['oauth_verifier'] = verifier\n else:\n raise OAuth1Error(\n 'Credentials with valid consumer_key, '\n 'consumer_secret, token and argument verifier'\n ' are required to create Access Token URL!')\n\n elif request_type == cls.PROTECTED_RESOURCE_REQUEST_TYPE:\n # Protected Resources URL\n if consumer_key and consumer_secret and token and token_secret:\n params['oauth_token'] = token\n params['oauth_consumer_key'] = consumer_key\n else:\n raise OAuth1Error(\n 'Credentials with valid consumer_key, ' +\n 'consumer_secret, token and token_secret are required '\n 'to create Protected Resources URL!')\n\n # Sign request.\n # http://oauth.net/core/1.0a/#anchor13\n\n # Prepare parameters for signature base string\n # http://oauth.net/core/1.0a/#rfc.section.9.1\n params['oauth_signature_method'] = cls._signature_generator.method\n params['oauth_timestamp'] = str(int(time.time()))\n params['oauth_nonce'] = cls.csrf_generator(str(uuid.uuid4()))\n params['oauth_version'] = '1.0'\n\n # add signature to params\n 
params['oauth_signature'] = cls._signature_generator.create_signature( # noqa\n method, url, params, consumer_secret, token_secret)\n\n request_elements = core.RequestElements(\n url, method, params, headers, body)\n\n return cls._x_request_elements_filter(\n request_type, request_elements, credentials)", "response": "Creates |oauth1| request elements."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _access_user_info(self):\n response = super(Bitbucket, self)._access_user_info()\n\n response.data.setdefault(\"email\", None)\n\n email_response = self.access(self.user_email_url)\n if email_response.data:\n for item in email_response.data:\n if item.get(\"primary\", False):\n response.data.update(email=item.get(\"email\", None))\n\n return response", "response": "Get user info from Bitbucket"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _access_user_info(self):\n response = super(Vimeo, self)._access_user_info()\n uid = response.data.get('oauth', {}).get('user', {}).get('id')\n if uid:\n return self.access('http://vimeo.com/api/v2/{0}/info.json'\n .format(uid))\n return response", "response": "Access the user info endpoint"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef login(self, *login_args, **login_kwargs):\n\n def decorator(f):\n @wraps(f)\n def decorated(*args, **kwargs):\n self.response = make_response()\n adapter = WerkzeugAdapter(request, self.response)\n login_kwargs.setdefault('session', session)\n login_kwargs.setdefault('session_saver', self.session_saver)\n self.result = super(FlaskAuthomatic, self).login(\n adapter,\n *login_args,\n **login_kwargs)\n return f(*args, **kwargs)\n return decorated\n return decorator", "response": "Decorator for Flask view functions."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain 
what it does\ndef get(cls, key, default=None):\n\n # Query datastore.\n result = cls.query(cls.provider_name == key).get()\n\n if result:\n result_dict = result.to_dict()\n\n # Use NDBOpenIDStore by default\n result_dict['store'] = NDBOpenIDStore\n\n # Convert coma-separated values to list. Currently only scope is\n # csv.\n for i in ('scope', ):\n prop = result_dict.get(i)\n if prop:\n result_dict[i] = [s.strip() for s in prop.split(',')]\n else:\n result_dict[i] = None\n\n return result_dict\n else:\n return default", "response": "Returns a configuration dictionary for the specified provider."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef values(cls):\n\n # get all items\n results = cls.query().fetch()\n # return list of dictionaries\n return [result.to_dict() for result in results]", "response": "Returns a list of dictionaries containing all items in the current database."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef initialize(cls):\n\n if not len(cls.query().fetch()):\n\n example = cls.get_or_insert('Example')\n\n example.class_ = 'Provider class e.g. ' + \\\n '\"authomatic.providers.oauth2.Facebook\".'\n example.provider_name = 'Your custom provider name e.g. \"fb\".'\n\n # AuthorizationProvider\n example.consumer_key = 'Consumer key.'\n example.consumer_secret = 'Consumer secret'\n example.provider_id = 1\n\n # OAuth2\n example.scope = 'coma, separated, list, of, scopes'\n\n # AuthenticationProvider\n example.identifier_param = 'Querystring parameter for claimed ' + \\\n 'id. default is \"id\"'\n\n # Save the example\n example.put()\n\n # Raise an information error.\n raise GAEError(\n 'A NDBConfig data model was created! 
Go to Datastore Viewer '\n 'in your dashboard and populate it with data!')", "response": "Initializes the data model for the given NDBConfig class."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef login(self):\n\n if self.params.get(self.identifier_param):\n # =================================================================\n # Phase 1 before redirect.\n # =================================================================\n self._log(\n logging.INFO,\n u'Starting OpenID authentication procedure.')\n\n url = users.create_login_url(\n dest_url=self.url, federated_identity=self.identifier)\n\n self._log(logging.INFO, u'Redirecting user to {0}.'.format(url))\n\n self.redirect(url)\n else:\n # =================================================================\n # Phase 2 after redirect.\n # =================================================================\n\n self._log(\n logging.INFO,\n u'Continuing OpenID authentication procedure after redirect.')\n\n user = users.get_current_user()\n\n if user:\n self._log(logging.INFO, u'Authentication successful.')\n self._log(logging.INFO, u'Creating user.')\n self.user = core.User(self,\n id=user.federated_identity(),\n email=user.email(),\n gae_user=user)\n\n # =============================================================\n # We're done\n # =============================================================\n else:\n raise FailureError(\n 'Unable to authenticate identifier \"{0}\"!'.format(\n self.identifier))", "response": "Launches the OpenID authentication procedure."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _error_traceback_html(exc_info, traceback_):\n\n html = \"\"\"\n \n \n ERROR: {error}\n \n \n

<h4>\n The Authomatic library encountered an error!\n </h4>\n \n <h1>\n {error}\n </h1>\n \n <pre>{traceback}</pre>
\n \n \n \"\"\"\n\n return html.format(error=exc_info[1], traceback=traceback_)", "response": "Generates error traceback HTML."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndecorate the :meth:`.BaseProvider.login` implementations with this decorator. Provides mechanism for error reporting and returning result which makes the :meth:`.BaseProvider.login` implementation cleaner.", "response": "def login_decorator(func):\n \"\"\"\n Decorate the :meth:`.BaseProvider.login` implementations with this\n decorator.\n\n Provides mechanism for error reporting and returning result which\n makes the :meth:`.BaseProvider.login` implementation cleaner.\n\n \"\"\"\n\n def wrap(provider, *args, **kwargs):\n error = None\n result = authomatic.core.LoginResult(provider)\n\n try:\n func(provider, *args, **kwargs)\n except Exception as e: # pylint:disable=broad-except\n if provider.settings.report_errors:\n error = e\n if not isinstance(error, CancellationError):\n provider._log(\n logging.ERROR,\n u'Reported suppressed exception: {0}!'.format(\n repr(error)),\n exc_info=1)\n else:\n if provider.settings.debug:\n # TODO: Check whether it actually works without middleware\n provider.write(\n _error_traceback_html(\n sys.exc_info(),\n traceback.format_exc()))\n raise\n\n # If there is user or error the login procedure has finished\n if provider.user or error:\n result = authomatic.core.LoginResult(provider)\n # Add error to result\n result.error = error\n\n # delete session cookie\n if isinstance(provider.session, authomatic.core.Session):\n provider.session.delete()\n\n provider._log(logging.INFO, u'Procedure finished.')\n\n if provider.callback:\n provider.callback(result)\n return result\n else:\n # Save session\n provider.save_session()\n\n return wrap"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef to_dict(self):\n\n return dict(name=self.name,\n id=getattr(self, 'id', None),\n 
type_id=self.type_id,\n type=self.get_type(),\n scope=getattr(self, 'scope', None),\n user=self.user.id if self.user else None)", "response": "Converts the provider instance to a dict."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nresolves keyword arguments from constructor or config.", "response": "def _kwarg(self, kwargs, kwname, default=None):\n \"\"\"\n Resolves keyword arguments from constructor or :doc:`config`.\n\n .. note::\n\n The keyword arguments take this order of precedence:\n\n 1. Arguments passed to constructor through the\n :func:`authomatic.login`.\n 2. Provider specific arguments from :doc:`config`.\n 3. Arguments from :doc:`config` set in the ``__defaults__`` key.\n 2. The value from :data:`default` argument.\n\n :param dict kwargs:\n Keyword arguments dictionary.\n :param str kwname:\n Name of the desired keyword argument.\n\n \"\"\"\n\n return kwargs.get(kwname) or \\\n self.settings.config.get(self.name, {}).get(kwname) or \\\n self.settings.config.get('__defaults__', {}).get(kwname) or \\\n default"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _session_key(self, key):\n\n return '{0}:{1}:{2}'.format(self.settings.prefix, self.name, key)", "response": "Generates session key string."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset a value to the session.", "response": "def _session_set(self, key, value):\n \"\"\"\n Saves a value to session.\n \"\"\"\n\n self.session[self._session_key(key)] = value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef csrf_generator(secret):\n\n # Create hash from random string plus salt.\n hashed = hashlib.md5(uuid.uuid4().bytes + six.b(secret)).hexdigest()\n\n # Each time return random portion of the hash.\n span = 5\n shift = random.randint(0, span)\n return hashed[shift:shift - span - 1]", "response": 
"Generates a random CSRF token."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlogs a message with pre - formatted prefix.", "response": "def _log(cls, level, msg, **kwargs):\n \"\"\"\n Logs a message with pre-formatted prefix.\n\n :param int level:\n Logging level as specified in the\n `login module `_ of\n Python standard library.\n\n :param str msg:\n The actual message.\n\n \"\"\"\n\n logger = getattr(cls, '_logger', None) or authomatic.core._logger\n logger.log(\n level, ': '.join(\n ('authomatic', cls.__name__, msg)), **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _fetch(self, url, method='GET', params=None, headers=None,\n body='', max_redirects=5, content_parser=None):\n \"\"\"\n Fetches a URL.\n\n :param str url:\n The URL to fetch.\n\n :param str method:\n HTTP method of the request.\n\n :param dict params:\n Dictionary of request parameters.\n\n :param dict headers:\n HTTP headers of the request.\n\n :param str body:\n Body of ``POST``, ``PUT`` and ``PATCH`` requests.\n\n :param int max_redirects:\n Number of maximum HTTP redirects to follow.\n\n :param function content_parser:\n A callable to be used to parse the :attr:`.Response.data`\n from :attr:`.Response.content`.\n\n \"\"\"\n # 'magic' using _kwarg method\n # pylint:disable=no-member\n params = params or {}\n params.update(self.access_params)\n\n headers = headers or {}\n headers.update(self.access_headers)\n\n scheme, host, path, query, fragment = parse.urlsplit(url)\n query = parse.urlencode(params)\n\n if method in ('POST', 'PUT', 'PATCH'):\n if not body:\n # Put querystring to body\n body = query\n query = ''\n headers.update(\n {'Content-Type': 'application/x-www-form-urlencoded'})\n request_path = parse.urlunsplit(('', '', path or '', query or '', ''))\n\n self._log(logging.DEBUG, u' \\u251C\\u2500 host: {0}'.format(host))\n self._log(\n logging.DEBUG,\n u' \\u251C\\u2500 
path: {0}'.format(request_path))\n self._log(logging.DEBUG, u' \\u251C\\u2500 method: {0}'.format(method))\n self._log(logging.DEBUG, u' \\u251C\\u2500 body: {0}'.format(body))\n self._log(logging.DEBUG, u' \\u251C\\u2500 params: {0}'.format(params))\n self._log(logging.DEBUG, u' \\u2514\\u2500 headers: {0}'.format(headers))\n\n # Connect\n if scheme.lower() == 'https':\n connection = http_client.HTTPSConnection(host)\n else:\n connection = http_client.HTTPConnection(host)\n\n try:\n connection.request(method, request_path, body, headers)\n except Exception as e:\n raise FetchError('Fetching URL failed',\n original_message=str(e),\n url=request_path)\n\n response = connection.getresponse()\n location = response.getheader('Location')\n\n if response.status in (300, 301, 302, 303, 307) and location:\n if location == url:\n raise FetchError('Url redirects to itself!',\n url=location,\n status=response.status)\n\n elif max_redirects > 0:\n remaining_redirects = max_redirects - 1\n\n self._log(logging.DEBUG, u'Redirecting to {0}'.format(url))\n self._log(logging.DEBUG, u'Remaining redirects: {0}'\n .format(remaining_redirects))\n\n # Call this method again.\n response = self._fetch(url=location,\n params=params,\n method=method,\n headers=headers,\n max_redirects=remaining_redirects)\n\n else:\n raise FetchError('Max redirects reached!',\n url=location,\n status=response.status)\n else:\n self._log(logging.DEBUG, u'Got response:')\n self._log(logging.DEBUG, u' \\u251C\\u2500 url: {0}'.format(url))\n self._log(\n logging.DEBUG,\n u' \\u251C\\u2500 status: {0}'.format(\n response.status))\n self._log(\n logging.DEBUG,\n u' \\u2514\\u2500 headers: {0}'.format(\n response.getheaders()))\n\n return authomatic.core.Response(response, content_parser)", "response": "Fetches a URL.\n\n :param str url:\n The URL to fetch.\n\n :param str method:\n HTTP method of the request.\n\n :param dict params:\n Dictionary of request parameters.\n\n :param dict headers:\n HTTP headers of the 
request.\n\n :param str body:\n Body of ``POST``, ``PUT`` and ``PATCH`` requests.\n\n :param int max_redirects:\n Number of maximum HTTP redirects to follow.\n\n :param function content_parser:\n A callable to be used to parse the :attr:`.Response.data`\n from :attr:`.Response.content`."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating or creates a user object.", "response": "def _update_or_create_user(self, data, credentials=None, content=None):\n \"\"\"\n Updates or creates :attr:`.user`.\n\n :returns:\n :class:`.User`\n\n \"\"\"\n\n if not self.user:\n self.user = authomatic.core.User(self, credentials=credentials)\n\n self.user.content = content\n self.user.data = data\n\n # Update.\n for key in self.user.__dict__:\n # Exclude data.\n if key not in ('data', 'content'):\n # Extract every data item whose key matches the user\n # property name, but only if it has a value.\n value = data.get(key)\n if value:\n setattr(self.user, key, value)\n\n # Handle different structure of data by different providers.\n self.user = self._x_user_parser(self.user, data)\n\n if self.user.id:\n self.user.id = str(self.user.id)\n\n # TODO: Move to User\n # If there is no user.name,\n if not self.user.name:\n if self.user.first_name and self.user.last_name:\n # Create it from first name and last name if available.\n self.user.name = ' '.join((self.user.first_name,\n self.user.last_name))\n else:\n # Or use one of these.\n self.user.name = (self.user.username or self.user.nickname or\n self.user.first_name or self.user.last_name)\n\n if not self.user.location:\n if self.user.city and self.user.country:\n self.user.location = '{0}, {1}'.format(self.user.city,\n self.user.country)\n else:\n self.user.location = self.user.city or self.user.country\n\n return self.user"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _http_status_in_category(status, category):\n\n assert category 
< 10, 'HTTP status category must be a one-digit int!'\n cat = category * 100\n return status >= cat and status < cat + 100", "response": "Checks whether a HTTP status code is in the specified category."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfetches a protected resource from the API.", "response": "def access(self, url, params=None, method='GET', headers=None,\n body='', max_redirects=5, content_parser=None):\n \"\"\"\n Fetches the **protected resource** of an authenticated **user**.\n\n :param credentials:\n The **user's** :class:`.Credentials` (serialized or normal).\n\n :param str url:\n The URL of the **protected resource**.\n\n :param str method:\n HTTP method of the request.\n\n :param dict headers:\n HTTP headers of the request.\n\n :param str body:\n Body of ``POST``, ``PUT`` and ``PATCH`` requests.\n\n :param int max_redirects:\n Maximum number of HTTP redirects to follow.\n\n :param function content_parser:\n A function to be used to parse the :attr:`.Response.data`\n from :attr:`.Response.content`.\n\n :returns:\n :class:`.Response`\n\n \"\"\"\n\n if not self.user and not self.credentials:\n raise CredentialsError(u'There is no authenticated user!')\n\n headers = headers or {}\n\n self._log(\n logging.INFO,\n u'Accessing protected resource {0}.'.format(url))\n\n request_elements = self.create_request_elements(\n request_type=self.PROTECTED_RESOURCE_REQUEST_TYPE,\n credentials=self.credentials,\n url=url,\n body=body,\n params=params,\n headers=headers,\n method=method\n )\n\n response = self._fetch(*request_elements,\n max_redirects=max_redirects,\n content_parser=content_parser)\n\n self._log(\n logging.INFO,\n u'Got response. 
HTTP status = {0}.'.format(\n response.status))\n return response"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef async_access(self, *args, **kwargs):\n\n return authomatic.core.Future(self.access, *args, **kwargs)", "response": "Same as authomatic. core. access but runs asynchronously in a separate thread."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the user object.", "response": "def update_user(self):\n \"\"\"\n Updates the :attr:`.BaseProvider.user`.\n\n .. warning::\n Fetches the :attr:`.user_info_url`!\n\n :returns:\n :class:`.UserInfoResponse`\n\n \"\"\"\n if self.user_info_url:\n response = self._access_user_info()\n self.user = self._update_or_create_user(response.data,\n content=response.content)\n return authomatic.core.UserInfoResponse(self.user,\n response.httplib_response)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating the authorization header if the provider supports it.", "response": "def _authorization_header(cls, credentials):\n \"\"\"\n Creates authorization headers if the provider supports it. 
See:\n http://en.wikipedia.org/wiki/Basic_access_authentication.\n\n :param credentials:\n :class:`.Credentials`\n\n :returns:\n Headers as :class:`dict`.\n\n \"\"\"\n\n if cls._x_use_authorization_header:\n res = ':'.join(\n (credentials.consumer_key,\n credentials.consumer_secret))\n res = base64.b64encode(six.b(res)).decode()\n return {'Authorization': 'Basic {0}'.format(res)}\n else:\n return {}"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nvalidate the consumer key and secret", "response": "def _check_consumer(self):\n \"\"\"\n Validates the :attr:`.consumer`.\n \"\"\"\n\n # 'magic' using _kwarg method\n # pylint:disable=no-member\n if not self.consumer.key:\n raise ConfigError(\n 'Consumer key not specified for provider {0}!'.format(\n self.name))\n\n if not self.consumer.secret:\n raise ConfigError(\n 'Consumer secret not specified for provider {0}!'.format(\n self.name))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsplit given url to base and params converted to list of tuples.", "response": "def _split_url(url):\n \"\"\"\n Splits given url to url base and params converted to list of tuples.\n \"\"\"\n\n split = parse.urlsplit(url)\n base = parse.urlunsplit((split.scheme, split.netloc, split.path, 0, 0))\n params = parse.parse_qsl(split.query, True)\n\n return base, params"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _access_user_info(self):\n url = self.user_info_url.format(**self.user.__dict__)\n return self.access(url)", "response": "Access the user info url."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the CORS headers for the response object.", "response": "def set_cors_headers(req, resp, context, options):\n \"\"\"\n Performs the actual evaluation of Sanic-CORS options and actually\n modifies the response object.\n\n This function is used both in the decorator and the after_request\n 
callback\n :param sanic.request.Request req:\n\n \"\"\"\n try:\n request_context = context.request[id(req)]\n except AttributeError:\n LOG.debug(\"Cannot find the request context. Is request already finished?\")\n return resp\n # If CORS has already been evaluated via the decorator, skip\n evaluated = request_context.get(SANIC_CORS_EVALUATED, False)\n if evaluated:\n LOG.debug('CORS have been already evaluated, skipping')\n return resp\n\n # `resp` can be None in the case of using Websockets\n # however this case should have been handled in the `extension` and `decorator` methods\n # before getting here. This is a final failsafe check to prevent crashing\n if resp is None:\n return None\n\n if resp.headers is None:\n resp.headers = CIMultiDict()\n\n headers_to_set = get_cors_headers(options, req.headers, req.method)\n\n LOG.debug('Settings CORS headers: %s', str(headers_to_set))\n\n # dict .extend() does not work on CIDict so\n # iterate over them and add them individually.\n try:\n resp.headers.extend(headers_to_set)\n except Exception as e1:\n for k, v in headers_to_set.items():\n try:\n resp.headers.add(k, v)\n except Exception as e2:\n resp.headers[k] = v\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_app_kwarg_dict(appInstance):\n # In order to support blueprints which do not have a config attribute\n app_config = getattr(appInstance, 'config', {})\n return dict(\n (k.lower().replace('cors_', ''), app_config.get(k))\n for k in CONFIG_OPTIONS\n if app_config.get(k) is not None\n )", "response": "Returns the dictionary of CORS specific app configurations."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef flexible_str(obj):\n if obj is None:\n return None\n elif(not isinstance(obj, str)\n and isinstance(obj, collections.abc.Iterable)):\n return ', '.join(str(item) for item in sorted(obj))\n else:\n return 
str(obj)", "response": "A more flexible str function which intelligently handles stringifying\n strings lists and other iterables."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nensures that an instance is iterable.", "response": "def ensure_iterable(inst):\n \"\"\"\n Wraps scalars or string types as a list, or returns the iterable instance.\n \"\"\"\n if isinstance(inst, str):\n return [inst]\n elif not isinstance(inst, collections.abc.Iterable):\n return [inst]\n else:\n return inst"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef isclose(a, b, *, rel_tol=1e-09, abs_tol=0.0):\n try:\n return math.isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)\n except AttributeError:\n # Running on older version of python, fall back to hand-rolled implementation\n if (rel_tol < 0.0) or (abs_tol < 0.0):\n raise ValueError(\"Tolerances must be non-negative, but are rel_tol: {} and abs_tol: {}\".format(rel_tol, abs_tol))\n if math.isnan(a) or math.isnan(b):\n return False # NaNs are never close to anything, even other NaNs\n if (a == b):\n return True\n if math.isinf(a) or math.isinf(b):\n return False # Infinity is only close to itself, and we already handled that case\n diff = abs(a - b)\n return (diff <= rel_tol * abs(b)) or (diff <= rel_tol * abs(a)) or (diff <= abs_tol)", "response": "Python 3. 4 does not have math. isclose but we have to steal it and add it here."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nattempts to deserialize a bytestring into an audiosegment. :param bstr: The bytestring serialized via an audiosegment's serialize() method. 
:returns: An AudioSegment object deserialized from `bstr`.", "response": "def deserialize(bstr):\n \"\"\"\n Attempts to deserialize a bytestring into an audiosegment.\n\n :param bstr: The bytestring serialized via an audiosegment's serialize() method.\n :returns: An AudioSegment object deserialized from `bstr`.\n \"\"\"\n d = pickle.loads(bstr)\n seg = pickle.loads(d['seg'])\n return AudioSegment(seg, d['name'])"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns an AudioSegment object from the given file based on its file extension.", "response": "def from_file(path):\n \"\"\"\n Returns an AudioSegment object from the given file based on its file extension.\n If the extension is wrong, this will throw some sort of error.\n\n :param path: The path to the file, including the file extension.\n :returns: An AudioSegment instance from the file.\n \"\"\"\n _name, ext = os.path.splitext(path)\n ext = ext.lower()[1:]\n seg = pydub.AudioSegment.from_file(path, ext)\n return AudioSegment(seg, path)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates an audio segment from a numpy array.", "response": "def from_numpy_array(nparr, framerate):\n \"\"\"\n Returns an AudioSegment created from the given numpy array.\n\n The numpy array must have shape = (num_samples, num_channels).\n\n :param nparr: The numpy array to create an AudioSegment from.\n :returns: An AudioSegment created from the given array.\n \"\"\"\n # interleave the audio across all channels and collapse\n if nparr.dtype.itemsize not in (1, 2, 4):\n raise ValueError(\"Numpy Array must contain 8, 16, or 32 bit values.\")\n if len(nparr.shape) == 1:\n arrays = [nparr]\n elif len(nparr.shape) == 2:\n arrays = [nparr[i,:] for i in range(nparr.shape[0])]\n else:\n raise ValueError(\"Numpy Array must be one or two dimensional. 
Shape must be: (num_samples, num_channels).\")\n interleaved = np.vstack(arrays).reshape((-1,), order='F')\n dubseg = pydub.AudioSegment(interleaved.tobytes(),\n frame_rate=framerate,\n sample_width=interleaved.dtype.itemsize,\n channels=len(interleaved.shape)\n )\n return AudioSegment(dubseg, \"\")"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating an audio segment filled with pure silence.", "response": "def silent(duration=1000, frame_rate=11025):\n \"\"\"\n Creates an AudioSegment object of the specified duration/frame_rate filled with digital silence.\n\n :param duration: The duration of the returned object in ms.\n :param frame_rate: The samples per second of the returned object.\n :returns: AudioSegment object filled with pure digital silence.\n \"\"\"\n seg = pydub.AudioSegment.silent(duration=duration, frame_rate=frame_rate)\n return AudioSegment(seg, \"\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncalculates the SPL measurement of the audio segment.", "response": "def spl(self):\n \"\"\"\n Sound Pressure Level - defined as 20 * log10(p/p0),\n where p is the RMS of the sound wave in Pascals and p0 is\n 20 micro Pascals.\n\n Since we would need to know calibration information about the\n microphone used to record the sound in order to transform\n the PCM values of this audiosegment into Pascals, we can't really\n give an accurate SPL measurement.\n\n However, we can give a reasonable guess that can certainly be used\n to compare two sounds taken from the same microphone set up.\n\n Be wary about using this to compare sounds taken under different recording\n conditions however, except as a simple approximation.\n\n Returns a scalar float representing the dB SPL of this audiosegment.\n \"\"\"\n arr = self.to_numpy_array()\n if len(arr) == 0:\n return 0.0\n else:\n rms = self.rms\n ratio = rms / P_REF_PCM\n return 20.0 * np.log10(ratio + 1E-9)"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 function for\nreturning a numpy array of shape (nfilters, nsamples), where each row of data is the result of bandpass filtering the audiosegment around a particular frequency. The frequencies are spaced from `lower_bound_hz` to `upper_bound_hz` and are returned with the np array. The particular spacing of the frequencies depends on `mode`, which can be either: 'linear', 'mel', or 'log'. .. note:: This method is an approximation of a gammatone filterbank until I get around to writing an actual gammatone filterbank function. .. code-block:: python # Example usage import audiosegment import matplotlib.pyplot as plt import numpy as np def visualize(spect, frequencies, title=\"\"): # Visualize the result of calling seg.filter_bank() for any number of filters i = 0 for freq, (index, row) in zip(frequencies[::-1], enumerate(spect[::-1, :])): plt.subplot(spect.shape[0], 1, index + 1) if i == 0: plt.title(title) i += 1 plt.ylabel(\"{0:.0f}\".format(freq)) plt.plot(row) plt.show() seg = audiosegment.from_file(\"some_audio.wav\").resample(sample_rate_Hz=24000, sample_width=2, channels=1) spec, frequencies = seg.filter_bank(nfilters=5) visualize(spec, frequencies) .. image:: images/filter_bank.png :param lower_bound_hz: The lower bound of the frequencies to use in the bandpass filters. :param upper_bound_hz: The upper bound of the frequencies to use in the bandpass filters. :param nfilters: The number of filters to apply. This will determine which frequencies are used as well, as they are interpolated between `lower_bound_hz` and `upper_bound_hz` based on `mode`. :param mode: The way the frequencies are spaced. Options are: `linear`, in which case the frequencies are linearly interpolated between `lower_bound_hz` and `upper_bound_hz`, `mel`, in which case the mel frequencies are used, or `log`, in which case they are log-10 spaced. 
:returns: A numpy array of the form (nfilters, nsamples), where each row is the audiosegment, bandpass-filtered around a particular frequency, and the list of frequencies. I.e., returns (spec, freqs).", "response": "def filter_bank(self, lower_bound_hz=50, upper_bound_hz=8E3, nfilters=128, mode='mel'):\n \"\"\"\n Returns a numpy array of shape (nfilters, nsamples), where each\n row of data is the result of bandpass filtering the audiosegment\n around a particular frequency. The frequencies are\n spaced from `lower_bound_hz` to `upper_bound_hz` and are returned with\n the np array. The particular spacing of the frequencies depends on `mode`,\n which can be either: 'linear', 'mel', or 'log'.\n\n .. note:: This method is an approximation of a gammatone filterbank\n until I get around to writing an actual gammatone filterbank\n function.\n\n .. code-block:: python\n\n # Example usage\n import audiosegment\n import matplotlib.pyplot as plt\n import numpy as np\n\n def visualize(spect, frequencies, title=\"\"):\n # Visualize the result of calling seg.filter_bank() for any number of filters\n i = 0\n for freq, (index, row) in zip(frequencies[::-1], enumerate(spect[::-1, :])):\n plt.subplot(spect.shape[0], 1, index + 1)\n if i == 0:\n plt.title(title)\n i += 1\n plt.ylabel(\"{0:.0f}\".format(freq))\n plt.plot(row)\n plt.show()\n\n seg = audiosegment.from_file(\"some_audio.wav\").resample(sample_rate_Hz=24000, sample_width=2, channels=1)\n spec, frequencies = seg.filter_bank(nfilters=5)\n visualize(spec, frequencies)\n\n .. image:: images/filter_bank.png\n\n :param lower_bound_hz: The lower bound of the frequencies to use in the bandpass filters.\n :param upper_bound_hz: The upper bound of the frequencies to use in the bandpass filters.\n :param nfilters: The number of filters to apply. 
This will determine which frequencies\n are used as well, as they are interpolated between\n `lower_bound_hz` and `upper_bound_hz` based on `mode`.\n :param mode: The way the frequencies are spaced. Options are: `linear`, in which case\n the frequencies are linearly interpolated between `lower_bound_hz` and\n `upper_bound_hz`, `mel`, in which case the mel frequencies are used,\n or `log`, in which case they are log-10 spaced.\n :returns: A numpy array of the form (nfilters, nsamples), where each row is the\n audiosegment, bandpass-filtered around a particular frequency,\n and the list of frequencies. I.e., returns (spec, freqs).\n \"\"\"\n # Logspace to get all the frequency channels we are after\n data = self.to_numpy_array()\n if mode.lower() == 'mel':\n frequencies = librosa.core.mel_frequencies(n_mels=nfilters, fmin=lower_bound_hz, fmax=upper_bound_hz)\n elif mode.lower() == 'linear':\n frequencies = np.linspace(lower_bound_hz, upper_bound_hz, num=nfilters, endpoint=True)\n elif mode.lower() == 'log':\n start = np.log10(lower_bound_hz)\n stop = np.log10(upper_bound_hz)\n frequencies = np.logspace(start, stop, num=nfilters, endpoint=True, base=10.0)\n else:\n raise ValueError(\"'mode' must be one of: (mel, linear, or log), but was {}\".format(mode))\n\n # Do a band-pass filter in each frequency\n rows = [filters.bandpass_filter(data, freq*0.8, freq*1.2, self.frame_rate) for freq in frequencies]\n rows = np.array(rows)\n spect = np.vstack(rows)\n return spect, frequencies"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef auditory_scene_analysis(self, debug=False, debugplot=False):\n normalized = self.normalize_spl_by_average(db=60)\n\n def printd(*args, **kwargs):\n if debug:\n print(*args, **kwargs)\n\n # Create a spectrogram from a filterbank: [nfreqs, nsamples]\n printd(\"Making filter bank. 
This takes a little bit.\")\n spect, frequencies = normalized.filter_bank(nfilters=128) # TODO: replace with correct number from paper\n\n # Half-wave rectify each frequency channel so that each value is 0 or greater - we are looking to get a temporal\n # envelope in each frequency channel\n printd(\"Half-wave rectifying\")\n with warnings.catch_warnings(): # Ignore the annoying Numpy runtime warning for less than\n warnings.simplefilter(\"ignore\")\n spect[spect < 0] = 0\n\n # Low-pass filter each frequency channel to remove a bunch of noise - we are only looking for large changes\n printd(\"Low pass filtering\")\n low_boundary = 30\n order = 6\n spect = np.apply_along_axis(filters.lowpass_filter, 1, spect, low_boundary, self.frame_rate, order)\n\n # Downsample each frequency\n printd(\"Downsampling\")\n downsample_freq_hz = 400\n if self.frame_rate > downsample_freq_hz:\n step = int(round(self.frame_rate / downsample_freq_hz))\n spect = spect[:, ::step]\n\n # Smoothing - we will smooth across time and frequency to further remove noise.\n # But we need to do it with several different combinations of kernels to get the best idea of what's going on\n # Scales are (sc, st), meaning (frequency scale, time scale)\n scales = [(6, 1/4), (6, 1/14), (1/2, 1/14)]\n\n # For each (sc, st) scale, smooth across time using st, then across frequency using sc\n gaussian = lambda x, mu, sig: np.exp(-np.power(x - mu, 2.0) / (2 * np.power(sig, 2.0)))\n gaussian_kernel = lambda sig: gaussian(np.linspace(-10, 10, len(frequencies) / 2), 0, sig)\n spectrograms = []\n printd(\"For each scale...\")\n for sc, st in scales:\n printd(\" -> Scale:\", sc, st)\n printd(\" -> Time and frequency smoothing\")\n time_smoothed = np.apply_along_axis(filters.lowpass_filter, 1, spect, 1/st, downsample_freq_hz, 6)\n freq_smoothed = np.apply_along_axis(np.convolve, 0, time_smoothed, gaussian_kernel(sc), 'same')\n\n # Remove especially egregious artifacts\n printd(\" -> Removing egregious filtering 
artifacts\")\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n freq_smoothed[freq_smoothed > 1E3] = 1E3\n freq_smoothed[freq_smoothed < -1E3] = -1E3\n spectrograms.append(freq_smoothed)\n\n # Onset/Offset Detection and Matching\n segmasks = []\n printd(\"For each scale...\")\n for spect, (sc, st) in zip(spectrograms, scales):\n printd(\" -> Scale:\", sc, st)\n printd(\" -> Getting the onsets\")\n # Compute sudden upward changes in spect, these are onsets of events\n onsets, gradients = asa._compute_peaks_or_valleys_of_first_derivative(spect)\n\n # Compute sudden downward changes in spect, these are offsets of events\n printd(\" -> Getting the offsets\")\n offsets, _ = asa._compute_peaks_or_valleys_of_first_derivative(spect, do_peaks=False)\n\n # Correlate offsets with onsets so that we have a 1:1 relationship\n printd(\" -> Lining up the onsets and offsets\")\n offsets = asa._correlate_onsets_and_offsets(onsets, offsets, gradients)\n\n # Create onset/offset fronts\n # Do this by connecting onsets across frequency channels if they occur within 20ms of each other\n printd(\" -> Create vertical contours (fronts)\")\n onset_fronts = asa._form_onset_offset_fronts(onsets, sample_rate_hz=downsample_freq_hz, threshold_ms=20)\n offset_fronts = asa._form_onset_offset_fronts(offsets, sample_rate_hz=downsample_freq_hz, threshold_ms=20)\n\n # Break poorly matched onset fronts\n printd(\" -> Breaking onset fronts between poorly matched frequencies\")\n asa._break_poorly_matched_fronts(onset_fronts)\n\n printd(\" -> Getting segmentation mask\")\n segmentation_mask = asa._match_fronts(onset_fronts, offset_fronts, onsets, offsets, debug=debug)\n segmasks.append(segmentation_mask)\n break # TODO: We currently don't bother using the multiscale integration, so we should only do one of the scales\n\n # Multiscale Integration, seems to conglomerate too well and take too long\n #finished_segmentation_mask = asa._integrate_segmentation_masks(segmasks) # TODO: doesn't 
work well and takes too long.\n finished_segmentation_mask = segmasks[0]\n if debugplot:\n asa.visualize_segmentation_mask(finished_segmentation_mask, spect, frequencies)\n\n # Change the segmentation mask's domain to that of the STFT, so we can invert it into a wave form\n ## Get the times\n times = np.arange(2 * downsample_freq_hz * len(self) / MS_PER_S)\n printd(\"Times vs segmentation_mask's times:\", times.shape, finished_segmentation_mask.shape[1])\n\n ## Determine the new times and frequencies\n nsamples_for_each_fft = 2 * finished_segmentation_mask.shape[0]\n printd(\"Converting self into STFT\")\n stft_frequencies, stft_times, stft = signal.stft(self.to_numpy_array(), self.frame_rate, nperseg=nsamples_for_each_fft)\n printd(\"STFTs shape:\", stft.shape)\n printd(\"Frequencies:\", stft_frequencies.shape)\n printd(\"Times:\", stft_times.shape)\n\n ## Due to rounding, the STFT frequency may be one more than we want\n if stft_frequencies.shape[0] > finished_segmentation_mask.shape[0]:\n stft_frequencies = stft_frequencies[:finished_segmentation_mask.shape[0]]\n stft = stft[:stft_frequencies.shape[0], :]\n\n ## Downsample one into the other's times (if needed)\n finished_segmentation_mask, times, stft, stft_times = asa._downsample_one_or_the_other(stft, stft_times, finished_segmentation_mask, times)\n printd(\"Adjusted STFTs shape:\", stft.shape)\n printd(\"Adjusted STFTs frequencies:\", stft_frequencies.shape)\n printd(\"Adjusted STFTs times:\", stft_times.shape)\n printd(\"Segmentation mask:\", finished_segmentation_mask.shape)\n\n ## Interpolate to map the data into the new domain\n printd(\"Attempting to map mask of shape\", finished_segmentation_mask.shape, \"into shape\", (stft_frequencies.shape[0], stft_times.shape[0]))\n finished_segmentation_mask = asa._map_segmentation_mask_to_stft_domain(finished_segmentation_mask, times, frequencies, stft_times, stft_frequencies)\n\n # Separate the mask into a bunch of single segments\n printd(\"Separating masks and 
throwing out inconsequential ones...\")\n masks = asa._separate_masks(finished_segmentation_mask)\n printd(\"N separate masks:\", len(masks))\n\n # If we couldn't segment into masks after thresholding,\n # there wasn't more than a single audio stream\n # Just return us as the only audio stream\n if len(masks) == 0:\n clone = from_numpy_array(self.to_numpy_array(), self.frame_rate)\n return [clone]\n\n # TODO: Group masks that belong together... somehow...\n\n # Now multiprocess the rest, since it takes forever and is easily parallelizable\n try:\n ncpus = multiprocessing.cpu_count()\n except NotImplementedError:\n ncpus = 2\n\n ncpus = len(masks) if len(masks) < ncpus else ncpus\n\n chunks = np.array_split(masks, ncpus)\n assert len(chunks) == ncpus\n queue = multiprocessing.Queue()\n printd(\"Using {} processes to convert {} masks into linear STFT space and then time domain.\".format(ncpus, len(masks)))\n for i in range(ncpus):\n p = multiprocessing.Process(target=asa._asa_task,\n args=(queue, chunks[i], stft, self.sample_width, self.frame_rate, nsamples_for_each_fft),\n daemon=True)\n p.start()\n\n results = []\n dones = []\n while len(dones) < ncpus:\n item = queue.get()\n if type(item) == str and item == \"DONE\":\n dones.append(item)\n else:\n wav = from_numpy_array(item, self.frame_rate)\n results.append(wav)\n\n return results", "response": "This method performs the actual auditory scene analysis of the audio segments in the audio source."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndetect voice segment of the audio file.", "response": "def detect_voice(self, prob_detect_voice=0.5):\n \"\"\"\n Returns self as a list of tuples:\n [('v', voiced segment), ('u', unvoiced segment), (etc.)]\n\n The overall order of the AudioSegment is preserved.\n\n :param prob_detect_voice: The raw probability that any random 20ms window of the audio file\n contains voice.\n :returns: The described list.\n \"\"\"\n assert self.frame_rate in (48000, 
32000, 16000, 8000), \"Try resampling to one of the allowed frame rates.\"\n assert self.sample_width == 2, \"Try resampling to 16 bit.\"\n assert self.channels == 1, \"Try resampling to one channel.\"\n\n class model_class:\n def __init__(self, aggressiveness):\n self.v = webrtcvad.Vad(int(aggressiveness))\n\n def predict(self, vector):\n if self.v.is_speech(vector.raw_data, vector.frame_rate):\n return 1\n else:\n return 0\n\n model = model_class(aggressiveness=2)\n pyesno = 0.3 # Probability of the next 20 ms being unvoiced given that this 20 ms was voiced\n pnoyes = 0.2 # Probability of the next 20 ms being voiced given that this 20 ms was unvoiced\n p_realyes_outputyes = 0.4 # WebRTCVAD has a very high FP rate - just because it says yes, doesn't mean much\n p_realyes_outputno = 0.05 # If it says no, we can be very certain that it really is a no\n p_yes_raw = prob_detect_voice\n filtered = self.detect_event(model=model,\n ms_per_input=20,\n transition_matrix=(pyesno, pnoyes),\n model_stats=(p_realyes_outputyes, p_realyes_outputno),\n event_length_s=0.25,\n prob_raw_yes=p_yes_raw)\n ret = []\n for tup in filtered:\n t = ('v', tup[1]) if tup[0] == 'y' else ('u', tup[1])\n ret.append(t)\n return ret"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef dice(self, seconds, zero_pad=False):\n try:\n total_s = sum(seconds)\n if not (self.duration_seconds <= total_s + 1 and self.duration_seconds >= total_s - 1):\n raise ValueError(\"`seconds` does not sum to within one second of the duration of this AudioSegment.\\\n given total seconds: %s and self.duration_seconds: %s\" % (total_s, self.duration_seconds))\n starts = []\n stops = []\n time_ms = 0\n for dur in seconds:\n starts.append(time_ms)\n time_ms += dur * MS_PER_S\n stops.append(time_ms)\n zero_pad = False\n except TypeError:\n # `seconds` is not a list\n starts = range(0, int(round(self.duration_seconds * MS_PER_S)), int(round(seconds * MS_PER_S)))\n 
stops = (min(self.duration_seconds * MS_PER_S, start + seconds * MS_PER_S) for start in starts)\n outs = [self[start:stop] for start, stop in zip(starts, stops)]\n out_lens = [out.duration_seconds for out in outs]\n # Check if our last slice is within one ms of expected - if so, we don't need to zero pad\n if zero_pad and not (out_lens[-1] <= seconds + 1 / MS_PER_S and out_lens[-1] >= seconds - 1 / MS_PER_S):\n num_zeros = int(round(self.frame_rate * (seconds - out_lens[-1])))\n outs[-1] = outs[-1].zero_extend(num_samples=num_zeros)\n return outs", "response": "Returns a list of audio segments that are at most the given number of seconds."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndetecting an event in a given model.", "response": "def detect_event(self, model, ms_per_input, transition_matrix, model_stats, event_length_s,\n start_as_yes=False, prob_raw_yes=0.5):\n \"\"\"\n A list of tuples of the form [('n', AudioSegment), ('y', AudioSegment), etc.] is returned, where tuples\n of the form ('n', AudioSegment) are the segments of sound where the event was not detected,\n while ('y', AudioSegment) tuples were the segments of sound where the event was detected.\n\n .. 
code-block:: python\n\n # Example usage\n import audiosegment\n import keras\n import keras.models\n import numpy as np\n import sys\n\n class Model:\n def __init__(self, modelpath):\n self.model = keras.models.load_model(modelpath)\n\n def predict(self, seg):\n _bins, fft_vals = seg.fft()\n fft_vals = np.abs(fft_vals) / len(fft_vals)\n predicted_np_form = self.model.predict(np.array([fft_vals]), batch_size=1)\n prediction_as_int = int(round(predicted_np_form[0][0]))\n return prediction_as_int\n\n modelpath = sys.argv[1]\n wavpath = sys.argv[2]\n model = Model(modelpath)\n seg = audiosegment.from_file(wavpath).resample(sample_rate_Hz=32000, sample_width=2, channels=1)\n pyes_to_no = 0.3 # The probability of one 30 ms sample being an event, and the next one not\n pno_to_yes = 0.2 # The probability of one 30 ms sample not being an event, and the next one yes\n ptrue_pos_rate = 0.8 # The true positive rate (probability of a predicted yes being right)\n pfalse_neg_rate = 0.3 # The false negative rate (probability of a predicted no being wrong)\n raw_prob = 0.7 # The raw probability of seeing the event in any random 30 ms slice of this file\n events = seg.detect_event(model, ms_per_input=30, transition_matrix=[pyes_to_no, pno_to_yes],\n model_stats=[ptrue_pos_rate, pfalse_neg_rate], event_length_s=0.25,\n prob_raw_yes=raw_prob)\n nos = [event[1] for event in events if event[0] == 'n']\n yeses = [event[1] for event in events if event[0] == 'y']\n if len(nos) > 1:\n notdetected = nos[0].reduce(nos[1:])\n notdetected.export(\"notdetected.wav\", format=\"WAV\")\n if len(yeses) > 1:\n detected = yeses[0].reduce(yeses[1:])\n detected.export(\"detected.wav\", format=\"WAV\")\n\n\n :param model: The model. The model must have a predict() function which takes an AudioSegment\n of `ms_per_input` number of ms and which outputs 1 if the audio event is detected\n in that input, and 0 if not. 
Make sure to resample the AudioSegment to the right\n values before calling this function on it.\n\n :param ms_per_input: The number of ms of AudioSegment to be fed into the model at a time. If this does not\n come out even, the last AudioSegment will be zero-padded.\n\n :param transition_matrix: An iterable of the form: [p(yes->no), p(no->yes)]. That is, the probability of moving\n from a 'yes' state to a 'no' state and the probability of vice versa.\n\n :param model_stats: An iterable of the form: [p(reality=1|output=1), p(reality=1|output=0)]. That is,\n the probability of the ground truth really being a 1, given that the model output a 1,\n and the probability of the ground truth being a 1, given that the model output a 0.\n\n :param event_length_s: The typical duration of the event you are looking for in seconds (can be a float).\n\n :param start_as_yes: If True, the first `ms_per_input` will be in the 'y' category. Otherwise it will be\n in the 'n' category.\n\n :param prob_raw_yes: The raw probability of finding the event in any given `ms_per_input` vector.\n\n :returns: A list of tuples of the form [('n', AudioSegment), ('y', AudioSegment), etc.],\n where over the course of the list, the AudioSegment in tuple 3 picks up\n where the one in tuple 2 left off.\n\n :raises: ValueError if `ms_per_input` is negative or larger than the number of ms in this\n AudioSegment; if `transition_matrix` or `model_stats` do not have a __len__ attribute\n or are not length 2; if the values in `transition_matrix` or `model_stats` are not\n in the closed interval [0.0, 1.0].\n \"\"\"\n if ms_per_input < 0 or ms_per_input / MS_PER_S > self.duration_seconds:\n raise ValueError(\"ms_per_input cannot be negative and cannot be longer than the duration of the AudioSegment.\"\\\n \" The given value was \" + str(ms_per_input))\n elif not hasattr(transition_matrix, \"__len__\") or len(transition_matrix) != 2:\n raise ValueError(\"transition_matrix must be an iterable of length 2.\")\n 
elif not hasattr(model_stats, \"__len__\") or len(model_stats) != 2:\n raise ValueError(\"model_stats must be an iterable of length 2.\")\n elif any([True for prob in transition_matrix if prob > 1.0 or prob < 0.0]):\n raise ValueError(\"Values in transition_matrix are probabilities, and so must be in the range [0.0, 1.0].\")\n elif any([True for prob in model_stats if prob > 1.0 or prob < 0.0]):\n raise ValueError(\"Values in model_stats are probabilities, and so must be in the range [0.0, 1.0].\")\n elif prob_raw_yes > 1.0 or prob_raw_yes < 0.0:\n raise ValueError(\"`prob_raw_yes` is a probability, and so must be in the range [0.0, 1.0]\")\n\n # Get the yeses or nos for when the filter is triggered (when the event is on/off)\n filter_indices = [yes_or_no for yes_or_no in detect._get_filter_indices(self,\n start_as_yes,\n prob_raw_yes,\n ms_per_input,\n model,\n transition_matrix,\n model_stats)]\n\n # Run a homogeneity filter over the values to make local regions more self-similar (reduce noise)\n ret = detect._homogeneity_filter(filter_indices, window_size=int(round(0.25 * MS_PER_S / ms_per_input)))\n\n # Group the consecutive ones together\n ret = detect._group_filter_values(self, ret, ms_per_input)\n\n # Take the groups and turn them into AudioSegment objects\n real_ret = []\n for i, (this_yesno, next_timestamp) in enumerate(ret):\n if i > 0:\n _next_yesno, timestamp = ret[i - 1]\n else:\n timestamp = 0\n\n ms_per_s = 1000\n data = self[timestamp * ms_per_s:next_timestamp * ms_per_s].raw_data\n seg = AudioSegment(pydub.AudioSegment(data=data, sample_width=self.sample_width,\n frame_rate=self.frame_rate, channels=self.channels), self.name)\n real_ret.append((this_yesno, seg))\n return real_ret"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _execute_sox_cmd(self, cmd, console_output=False):\n on_windows = platform.system().lower() == \"windows\"\n\n # On Windows, a temporary file cannot be shared outside the 
process that creates it\n # so we need to create a \"permanent\" file that we will use and delete afterwards\n def _get_random_tmp_file():\n if on_windows:\n rand_string = \"\".join(random.choice(string.ascii_uppercase + string.digits) for _ in range(8))\n tmp = self.name + \"_\" + rand_string\n WinTempFile = collections.namedtuple(\"WinTempFile\", \"name\")\n tmp = WinTempFile(tmp)\n else:\n tmp = tempfile.NamedTemporaryFile()\n return tmp\n\n # Get a temp file to put our data and a temp file to store the result\n tmp = _get_random_tmp_file()\n othertmp = _get_random_tmp_file()\n\n # Store our data in the temp file\n self.export(tmp.name, format=\"WAV\")\n\n # Write the command to sox\n stdout = stderr = subprocess.PIPE if console_output else subprocess.DEVNULL\n command = cmd.format(inputfile=tmp.name, outputfile=othertmp.name)\n res = subprocess.call(command.split(' '), stdout=stdout, stderr=stderr)\n assert res == 0, \"Sox did not work as intended, or perhaps you don't have Sox installed?\"\n\n # Create a new AudioSegment from the other temp file (where Sox put the result)\n other = AudioSegment(pydub.AudioSegment.from_wav(othertmp.name), self.name)\n\n # Clean up the temp files\n if on_windows:\n os.remove(tmp.name)\n os.remove(othertmp.name)\n else:\n tmp.close()\n othertmp.close()\n\n return other", "response": "Executes a command in a platform - independent manner."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a copy of this AudioSegment, but whose silence has been removed. .. note:: This method requires that you have the program 'sox' installed. .. warning:: This method uses the program 'sox' to perform the task. While this is very fast for a single function call, the IO may add up for large numbers of AudioSegment objects. :param duration_s: The number of seconds of \"silence\" that must be present in a row to be stripped. 
:param threshold_percentage: Silence is defined as any samples whose absolute value is below `threshold_percentage * max(abs(samples in this segment))`. :param console_output: If True, will pipe all sox output to the console. :returns: A copy of this AudioSegment, but whose silence has been removed.", "response": "def filter_silence(self, duration_s=1, threshold_percentage=1, console_output=False):\n \"\"\"\n Returns a copy of this AudioSegment, but whose silence has been removed.\n\n .. note:: This method requires that you have the program 'sox' installed.\n\n .. warning:: This method uses the program 'sox' to perform the task. While this is very fast for a single\n function call, the IO may add up for large numbers of AudioSegment objects.\n\n :param duration_s: The number of seconds of \"silence\" that must be present in a row to\n be stripped.\n :param threshold_percentage: Silence is defined as any samples whose absolute value is below\n `threshold_percentage * max(abs(samples in this segment))`.\n :param console_output: If True, will pipe all sox output to the console.\n :returns: A copy of this AudioSegment, but whose silence has been removed.\n \"\"\"\n command = \"sox {inputfile} -t wav {outputfile} silence -l 1 0.1 \"\\\n + str(threshold_percentage) + \"% -1 \" + str(float(duration_s)) + \" \" + str(threshold_percentage) + \"%\"\n try:\n result = self._execute_sox_cmd(command)\n except pydub.exceptions.CouldntDecodeError:\n warnings.warn(\"After silence filtering, the resultant WAV file is corrupted, and so its data cannot be retrieved. 
Perhaps try a smaller threshold value.\", stacklevel=2)\n # Return a copy of us\n result = AudioSegment(self.seg, self.name)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fft(self, start_s=None, duration_s=None, start_sample=None, num_samples=None, zero_pad=False):\n if start_s is not None and start_sample is not None:\n raise ValueError(\"Only one of start_s and start_sample can be specified.\")\n if duration_s is not None and num_samples is not None:\n raise ValueError(\"Only one of duration_s and num_samples can be specified.\")\n if start_s is None and start_sample is None:\n start_sample = 0\n if duration_s is None and num_samples is None:\n num_samples = len(self.get_array_of_samples()) - int(start_sample)\n\n if duration_s is not None:\n num_samples = int(round(duration_s * self.frame_rate))\n if start_s is not None:\n start_sample = int(round(start_s * self.frame_rate))\n\n end_sample = start_sample + num_samples # end_sample is excluded\n if end_sample > len(self.get_array_of_samples()) and not zero_pad:\n raise ValueError(\"The combination of start and duration will run off the end of the AudioSegment object.\")\n elif end_sample > len(self.get_array_of_samples()) and zero_pad:\n arr = np.array(self.get_array_of_samples())\n zeros = np.zeros(end_sample - len(arr))\n arr = np.append(arr, zeros)\n else:\n arr = np.array(self.get_array_of_samples())\n\n audioslice = np.array(arr[start_sample:end_sample])\n fft_result = np.fft.fft(audioslice)[range(int(round(num_samples/2)) + 1)]\n step_size = self.frame_rate / num_samples\n bins = np.arange(0, int(round(num_samples/2)) + 1, 1.0) * step_size\n return bins, fft_result", "response": "This function transforms the audio segment into a frequency domain and returns the bins and values of the first 3 seconds of the audio segment."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function 
does\ndef generate_frames(self, frame_duration_ms, zero_pad=True):\n Frame = collections.namedtuple(\"Frame\", \"bytes timestamp duration\")\n\n # (samples/sec) * (seconds in a frame) * (bytes/sample)\n bytes_per_frame = int(self.frame_rate * (frame_duration_ms / 1000) * self.sample_width)\n offset = 0 # where we are so far in self's data (in bytes)\n timestamp = 0.0 # where we are so far in self (in seconds)\n # (bytes/frame) * (sample/bytes) * (sec/samples)\n frame_duration_s = (bytes_per_frame / self.frame_rate) / self.sample_width\n while offset + bytes_per_frame < len(self.raw_data):\n yield Frame(self.raw_data[offset:offset + bytes_per_frame], timestamp, frame_duration_s)\n timestamp += frame_duration_s\n offset += bytes_per_frame\n\n if zero_pad:\n rest = self.raw_data[offset:]\n zeros = bytes(bytes_per_frame - len(rest))\n yield Frame(rest + zeros, timestamp, frame_duration_s)", "response": "Yields the audio data in chunks of frame_duration_ms."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef generate_frames_as_segments(self, frame_duration_ms, zero_pad=True):\n for frame in self.generate_frames(frame_duration_ms, zero_pad=zero_pad):\n seg = AudioSegment(pydub.AudioSegment(data=frame.bytes, sample_width=self.sample_width,\n frame_rate=self.frame_rate, channels=self.channels), self.name)\n yield seg, frame.timestamp", "response": "Generates a sequence of audio segments from this instance."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef human_audible(self):\n hist_bins, hist_vals = self.fft()\n hist_vals_real_normed = np.abs(hist_vals) / len(hist_vals)\n f_characteristic = hist_bins[np.argmax(hist_vals_real_normed)]\n\n threshold_fc = 40.11453 - (0.01683697 * f_characteristic) + (1.406211e-6 * f_characteristic ** 2) - (2.371512e-11 * f_characteristic ** 3)\n\n return self.spl >= threshold_fc", "response": "Returns a boolean indicating whether this 
sound is mostly human audible or not."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef normalize_spl_by_average(self, db):\n arr = self.to_numpy_array().copy()\n if len(arr) == 0:\n raise ValueError(\"Cannot normalize the SPL of an empty AudioSegment\")\n\n def rms(x):\n return np.sqrt(np.mean(np.square(x)))\n\n # Figure out what RMS we would like\n desired_rms = P_REF_PCM * ((10 ** (db/20.0)) - 1E-9)\n\n # Use successive approximation to solve\n ## Keep trying different multiplication factors until we get close enough or run out of time\n max_ntries = 50\n res_rms = 0.0\n ntries = 0\n factor = 0.1\n left = 0.0\n right = desired_rms\n while (ntries < max_ntries) and not util.isclose(res_rms, desired_rms, abs_tol=0.1):\n res_rms = rms(arr * factor)\n if res_rms < desired_rms:\n left = factor\n else:\n right = factor\n factor = 0.5 * (left + right)\n ntries += 1\n\n dtype_dict = {1: np.int8, 2: np.int16, 4: np.int32}\n dtype = dtype_dict[self.sample_width]\n new_seg = from_numpy_array(np.array(arr * factor, dtype=dtype), self.frame_rate)\n return new_seg", "response": "Normalizes the values in the AudioSegment so that its spl property gives the given db."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef reduce(self, others):\n ret = AudioSegment(self.seg, self.name)\n selfdata = [self.seg._data]\n otherdata = [o.seg._data for o in others]\n ret.seg._data = b''.join(selfdata + otherdata)\n\n return ret", "response": "Reduces this segment into the others and returns the result."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef resample(self, sample_rate_Hz=None, sample_width=None, channels=None, console_output=False):\n if sample_rate_Hz is None:\n sample_rate_Hz = self.frame_rate\n if sample_width is None:\n sample_width = self.sample_width\n if channels is None:\n channels = self.channels\n\n # 
TODO: Replace this with librosa's implementation to remove SOX dependency here\n command = \"sox {inputfile} -b \" + str(sample_width * 8) + \" -r \" + str(sample_rate_Hz) \\\n + \" -t wav {outputfile} channels \" + str(channels)\n\n return self._execute_sox_cmd(command, console_output=console_output)", "response": "Resamples the audio file to the specified size."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nserialize into a bytestring.", "response": "def serialize(self):\n \"\"\"\n Serializes into a bytestring.\n\n :returns: An object of type Bytes.\n \"\"\"\n d = self.__getstate__()\n return pickle.dumps({\n 'name': d['name'],\n 'seg': pickle.dumps(d['seg'], protocol=-1),\n }, protocol=-1)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nplot a series of FFTs from start_s duration_s and num_samples.", "response": "def spectrogram(self, start_s=None, duration_s=None, start_sample=None, num_samples=None,\n window_length_s=None, window_length_samples=None, overlap=0.5, window=('tukey', 0.25)):\n \"\"\"\n Does a series of FFTs from `start_s` or `start_sample` for `duration_s` or `num_samples`.\n Effectively, transforms a slice of the AudioSegment into the frequency domain across different\n time bins.\n\n .. code-block:: python\n\n # Example for plotting a spectrogram using this function\n import audiosegment\n import matplotlib.pyplot as plt\n\n #...\n seg = audiosegment.from_file(\"somebodytalking.wav\")\n freqs, times, amplitudes = seg.spectrogram(window_length_s=0.03, overlap=0.5)\n amplitudes = 10 * np.log10(amplitudes + 1e-9)\n\n # Plot\n plt.pcolormesh(times, freqs, amplitudes)\n plt.xlabel(\"Time in Seconds\")\n plt.ylabel(\"Frequency in Hz\")\n plt.show()\n\n .. image:: images/spectrogram.png\n\n :param start_s: The start time. Starts at the beginning if neither this nor `start_sample` is specified.\n :param duration_s: The duration of the spectrogram in seconds. 
Goes to the end if neither this nor\n `num_samples` is specified.\n :param start_sample: The index of the first sample to use. Starts at the beginning if neither this nor\n `start_s` is specified.\n :param num_samples: The number of samples in the spectrogram. Goes to the end if neither this nor\n `duration_s` is specified.\n :param window_length_s: The length of each FFT in seconds. If the total number of samples in the spectrogram\n is not a multiple of the window length in samples, the last window will be zero-padded.\n :param window_length_samples: The length of each FFT in number of samples. If the total number of samples in the\n spectrogram is not a multiple of the window length in samples, the last window will\n be zero-padded.\n :param overlap: The fraction of each window to overlap.\n :param window: See Scipy's spectrogram-function_.\n This parameter is passed as-is directly into the Scipy spectrogram function. Its documentation is reproduced here:\n Desired window to use. If window is a string or tuple, it is passed to get_window to generate the window values,\n which are DFT-even by default. 
See get_window for a list of windows and required parameters.\n If window is array_like it will be used directly as the window and its length must be\n `window_length_samples`.\n Defaults to a Tukey window with shape parameter of 0.25.\n :returns: Three np.ndarrays: The frequency values in Hz (the y-axis in a spectrogram), the time values starting\n at start time and then increasing by `duration_s` each step (the x-axis in a spectrogram), and\n the dB of each time/frequency bin as a 2D array of shape [len(frequency values), len(duration)].\n :raises ValueError: If `start_s` and `start_sample` are both specified, if `duration_s` and `num_samples` are both\n specified, if the first window's duration plus start time lead to running off the end\n of the AudioSegment, or if `window_length_s` and `window_length_samples` are either\n both specified or if they are both not specified.\n\n .. _spectrogram-function: https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.spectrogram.html\n \"\"\"\n if start_s is not None and start_sample is not None:\n raise ValueError(\"Only one of start_s and start_sample may be specified.\")\n if duration_s is not None and num_samples is not None:\n raise ValueError(\"Only one of duration_s and num_samples may be specified.\")\n if window_length_s is not None and window_length_samples is not None:\n raise ValueError(\"Only one of window_length_s and window_length_samples may be specified.\")\n if window_length_s is None and window_length_samples is None:\n raise ValueError(\"You must specify a window length, either in window_length_s or in window_length_samples.\")\n\n # Determine the start sample\n if start_s is None and start_sample is None:\n start_sample = 0\n elif start_s is not None:\n start_sample = int(round(start_s * self.frame_rate))\n\n # Determine the number of samples\n if duration_s is None and num_samples is None:\n num_samples = len(self.get_array_of_samples()) - int(start_sample)\n elif duration_s is not None:\n 
num_samples = int(round(duration_s * self.frame_rate))\n\n # Determine the number of samples per window\n if window_length_s is not None:\n window_length_samples = int(round(window_length_s * self.frame_rate))\n\n # Check validity of number of samples\n if start_sample + num_samples > len(self.get_array_of_samples()):\n raise ValueError(\"The combination of start and duration will run off the end of the AudioSegment object.\")\n\n # Create a Numpy Array out of the correct samples\n arr = self.to_numpy_array()[start_sample:start_sample+num_samples]\n\n # Use Scipy spectrogram and return\n fs, ts, sxx = signal.spectrogram(arr, self.frame_rate, scaling='spectrum', nperseg=window_length_samples,\n noverlap=int(round(overlap * window_length_samples)),\n mode='magnitude', window=window)\n return fs, ts, sxx"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert the object to a numpy array.", "response": "def to_numpy_array(self):\n \"\"\"\n Convenience function for `np.array(self.get_array_of_samples())` while\n keeping the appropriate dtype.\n \"\"\"\n dtype_dict = {\n 1: np.int8,\n 2: np.int16,\n 4: np.int32\n }\n dtype = dtype_dict[self.sample_width]\n return np.array(self.get_array_of_samples(), dtype=dtype)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a number of zeros to the AudioSegment.", "response": "def zero_extend(self, duration_s=None, num_samples=None):\n \"\"\"\n Adds a number of zeros (digital silence) to the AudioSegment (returning a new one).\n\n :param duration_s: The number of seconds of zeros to add. If this is specified, `num_samples` must be None.\n :param num_samples: The number of zeros to add. 
If this is specified, `duration_s` must be None.\n :returns: A new AudioSegment object that has been zero extended.\n :raises: ValueError if duration_s and num_samples are both specified.\n \"\"\"\n if duration_s is not None and num_samples is not None:\n raise ValueError(\"`duration_s` and `num_samples` cannot both be specified.\")\n elif duration_s is not None:\n num_samples = self.frame_rate * duration_s\n seg = AudioSegment(self.seg, self.name)\n zeros = silent(duration=num_samples / self.frame_rate, frame_rate=self.frame_rate)\n return zeros.overlay(seg)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompute the peak or valley of the first derivative of a time bin in a spectrogram.", "response": "def _compute_peaks_or_valleys_of_first_derivative(s, do_peaks=True):\n \"\"\"\n Takes a spectrogram and returns a 2D array of the form:\n\n 0 0 0 1 0 0 1 0 0 0 1 <-- Frequency 0\n 0 0 1 0 0 0 0 0 0 1 0 <-- Frequency 1\n 0 0 0 0 0 0 1 0 1 0 0 <-- Frequency 2\n *** Time axis *******\n\n Where a 1 means that the value in that time bin in the spectrogram corresponds to\n a peak/valley in the first derivative.\n\n This function is used as part of the ASA algorithm and is not meant to be used publicly.\n \"\"\"\n # Get the first derivative of each frequency in the time domain\n gradient = np.nan_to_num(np.apply_along_axis(np.gradient, 1, s), copy=False)\n\n # Calculate the value we will use for determining whether something is an event or not\n threshold = np.squeeze(np.nanmean(gradient, axis=1) + np.nanstd(gradient, axis=1))\n\n # Look for relative extrema along the time dimension\n half_window = 4\n if do_peaks:\n indexes = [signal.argrelextrema(gradient[i, :], np.greater, order=half_window)[0] for i in range(gradient.shape[0])]\n else:\n indexes = [signal.argrelextrema(gradient[i, :], np.less, order=half_window)[0] for i in range(gradient.shape[0])]\n\n # indexes should now contain the indexes of possible extrema\n # But we need to filter out 
values that are not large enough, and we want the end result\n # to be a 1 or 0 mask corresponding to locations of extrema\n extrema = np.zeros(s.shape)\n for row_index, index_array in enumerate(indexes):\n # Each index_array is a list of indexes corresponding to all the extrema in a given row\n for col_index in index_array:\n if do_peaks and (gradient[row_index, col_index] > threshold[row_index]):\n extrema[row_index, col_index] = 1\n elif not do_peaks:\n # Note that we do not remove under-threshold values from the offsets - these will be taken care of later in the algo\n extrema[row_index, col_index] = 1\n return extrema, gradient"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntaking an array of onsets and an array of offsets, of the shape [nfrequencies, nsamples], where each item in these arrays is either a 0 (not an on/offset) or a 1 (a possible on/offset). This function returns a new offsets array, where there is a one-to-one correlation between onsets and offsets, such that each onset has exactly one offset that occurs after it in the time domain (the second dimension of the array). 
The gradients array is used to decide which offset to use in the case of multiple possibilities.", "response": "def _correlate_onsets_and_offsets(onsets, offsets, gradients):\n \"\"\"\n Takes an array of onsets and an array of offsets, of the shape [nfrequencies, nsamples], where\n each item in these arrays is either a 0 (not an on/offset) or a 1 (a possible on/offset).\n\n This function returns a new offsets array, where there is a one-to-one correlation between\n onsets and offsets, such that each onset has exactly one offset that occurs after it in\n the time domain (the second dimension of the array).\n\n The gradients array is used to decide which offset to use in the case of multiple possibilities.\n \"\"\"\n # For each freq channel:\n for freq_index, (ons, offs) in enumerate(zip(onsets[:, :], offsets[:, :])):\n # Scan along onsets[f, :] until we find the first 1\n indexes_of_all_ones = np.reshape(np.where(ons == 1), (-1,))\n\n # Zero out anything in the offsets up to (and including) this point\n # since we can't have an offset before the first onset\n last_idx = indexes_of_all_ones[0]\n offs[0:last_idx + 1] = 0\n\n if len(indexes_of_all_ones) > 1:\n # Do the rest of this only if we have more than one onset in this frequency band\n for next_idx in indexes_of_all_ones[1:]:\n # Get all the indexes of possible offsets from onset index to next onset index\n offset_choices = offs[last_idx:next_idx]\n offset_choice_indexes = np.where(offset_choices == 1)\n\n # If there are no offset choices between these two onsets, move on to the next onset\n if not np.any(offset_choices):\n continue\n\n # If we have more than one choice, the offset index is the one that corresponds to the most negative gradient value\n # Convert the offset_choice_indexes to indexes in the whole offset array, rather than just the offset_choices array\n offset_choice_indexes = np.reshape(last_idx + offset_choice_indexes, (-1,))\n 
assert np.all(offsets[freq_index, offset_choice_indexes])\n gradient_values = gradients[freq_index, offset_choice_indexes]\n index_of_largest_from_gradient_values = np.where(gradient_values == np.min(gradient_values))[0]\n index_of_largest_offset_choice = offset_choice_indexes[index_of_largest_from_gradient_values]\n assert offsets[freq_index, index_of_largest_offset_choice] == 1\n\n # Zero the others\n offsets[freq_index, offset_choice_indexes] = 0\n offsets[freq_index, index_of_largest_offset_choice] = 1\n last_idx = next_idx\n else:\n # We only have one onset in this frequency band, so the offset will be the very last sample\n offsets[freq_index, :] = 0\n offsets[freq_index, -1] = 1\n return offsets"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nforming the onsets or offsets of a single onset or offset.", "response": "def _form_onset_offset_fronts(ons_or_offs, sample_rate_hz, threshold_ms=20):\n \"\"\"\n Takes an array of onsets or offsets (shape = [nfrequencies, nsamples], where a 1 corresponds to an on/offset,\n and samples are 0 otherwise), and returns a new array of the same shape, where each 1 has been replaced by\n either a 0, if the on/offset has been discarded, or a non-zero positive integer, such that\n each front within the array has a unique ID - for example, all 2s in the array will be the front for on/offset\n front 2, and all the 15s will be the front for on/offset front 15, etc.\n\n Due to implementation details, there will be no 1 IDs.\n \"\"\"\n threshold_s = threshold_ms / 1000\n threshold_samples = sample_rate_hz * threshold_s\n\n ons_or_offs = np.copy(ons_or_offs)\n\n claimed = []\n this_id = 2\n # For each frequency,\n for frequency_index, row in enumerate(ons_or_offs[:, :]):\n ones = np.reshape(np.where(row == 1), (-1,))\n\n # for each 1 in that frequency,\n for top_level_frequency_one_index in ones:\n claimed.append((frequency_index, top_level_frequency_one_index))\n\n found_a_front = False\n # for each 
frequencies[i:],\n for other_frequency_index, other_row in enumerate(ons_or_offs[frequency_index + 1:, :], start=frequency_index + 1):\n\n # for each non-claimed 1 which is less than theshold_ms away in time,\n upper_limit_index = top_level_frequency_one_index + threshold_samples\n lower_limit_index = top_level_frequency_one_index - threshold_samples\n other_ones = np.reshape(np.where(other_row == 1), (-1,)) # Get the indexes of all the 1s in row\n tmp = np.reshape(np.where((other_ones >= lower_limit_index) # Get the indexes in the other_ones array of all items in bounds\n & (other_ones <= upper_limit_index)), (-1,))\n other_ones = other_ones[tmp] # Get the indexes of all the 1s in the row that are in bounds\n if len(other_ones) > 0:\n unclaimed_idx = other_ones[0] # Take the first one\n claimed.append((other_frequency_index, unclaimed_idx))\n elif len(claimed) < 3:\n # revert the top-most 1 to 0\n ons_or_offs[frequency_index, top_level_frequency_one_index] = 0\n claimed = []\n break # Break from the for-each-frequencies[i:] loop so we can move on to the next item in the top-most freq\n elif len(claimed) >= 3:\n found_a_front = True\n # this group of so-far-claimed forms a front\n claimed_as_indexes = tuple(np.array(claimed).T)\n ons_or_offs[claimed_as_indexes] = this_id\n this_id += 1\n claimed = []\n break # Move on to the next item in the top-most array\n # If we never found a frequency that did not have a matching offset, handle that case here\n if len(claimed) >= 3:\n claimed_as_indexes = tuple(np.array(claimed).T)\n ons_or_offs[claimed_as_indexes] = this_id\n this_id += 1\n claimed = []\n elif found_a_front:\n this_id += 1\n else:\n ons_or_offs[frequency_index, top_level_frequency_one_index] = 0\n claimed = []\n\n return ons_or_offs"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _lookup_offset_by_onset_idx(onset_idx, onsets, offsets):\n assert len(onset_idx) == 2, \"Onset_idx must be a tuple 
of the form (freq_idx, sample_idx)\"\n frequency_idx, sample_idx = onset_idx\n offset_sample_idxs = np.reshape(np.where(offsets[frequency_idx, :] == 1), (-1,))\n # get the offsets which occur after onset\n offset_sample_idxs = offset_sample_idxs[offset_sample_idxs > sample_idx]\n if len(offset_sample_idxs) == 0:\n # There is no offset in this frequency that occurs after the onset, just return the last sample\n chosen_offset_sample_idx = offsets.shape[1] - 1\n assert offsets[frequency_idx, chosen_offset_sample_idx] == 0\n else:\n # Return the closest offset to the onset\n chosen_offset_sample_idx = offset_sample_idxs[0]\n assert offsets[frequency_idx, chosen_offset_sample_idx] != 0\n return frequency_idx, chosen_offset_sample_idx", "response": "Given a frequency index and a list of onsets and offsets returns the offset index that occurs after the given onset."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_front_idxs_from_id(fronts, id):\n if id == -1:\n # This is the only special case.\n # -1 is the index of the catch-all final column offset front.\n freq_idxs = np.arange(fronts.shape[0], dtype=np.int64)\n sample_idxs = np.ones(len(freq_idxs), dtype=np.int64) * (fronts.shape[1] - 1)\n else:\n freq_idxs, sample_idxs = np.where(fronts == id)\n return [(f, i) for f, i in zip(freq_idxs, sample_idxs)]", "response": "Return a list of tuples of the form frequency_idx sample_idx corresponding to all the indexes of the given front."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _choose_front_id_from_candidates(candidate_offset_front_ids, offset_fronts, offsets_corresponding_to_onsets):\n noverlaps = [] # will contain tuples of the form (number_overlapping, offset_front_id)\n for offset_front_id in candidate_offset_front_ids:\n offset_front_f_idxs, offset_front_s_idxs = np.where(offset_fronts == offset_front_id)\n offset_front_idxs = [(f, i) 
for f, i in zip(offset_front_f_idxs, offset_front_s_idxs)]\n noverlap_this_id = len(set(offset_front_idxs).intersection(set(offsets_corresponding_to_onsets)))\n noverlaps.append((noverlap_this_id, offset_front_id))\n _overlapped, chosen_offset_front_id = max(noverlaps, key=lambda t: t[0])\n return int(chosen_offset_front_id)", "response": "Returns the candidate offset front ID whose offsets overlap the most\n with the offsets that correspond to the given onset front."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_offset_front_id_after_onset_sample_idx(onset_sample_idx, offset_fronts):\n # get all the offset_front_ids\n offset_front_ids = [i for i in np.unique(offset_fronts) if i != 0]\n\n best_id_so_far = -1\n closest_offset_sample_idx = sys.maxsize\n for offset_front_id in offset_front_ids:\n # get all that offset front's indexes\n offset_front_idxs = _get_front_idxs_from_id(offset_fronts, offset_front_id)\n\n # get the sample indexes\n offset_front_sample_idxs = [s for _f, s in offset_front_idxs]\n\n # if each sample index is greater than onset_sample_idx, keep this offset front if it is the best one so far\n min_sample_idx = min(offset_front_sample_idxs)\n if min_sample_idx > onset_sample_idx and min_sample_idx < closest_offset_sample_idx:\n closest_offset_sample_idx = min_sample_idx\n best_id_so_far = offset_front_id\n\n assert best_id_so_far > 1 or best_id_so_far == -1\n return best_id_so_far", "response": "Returns the offset_front_id which corresponds to the offset front after the given onset sample_idx."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_offset_front_id_after_onset_front(onset_front_id, onset_fronts, offset_fronts):\n # get the onset idxs for this front\n onset_idxs = _get_front_idxs_from_id(onset_fronts, onset_front_id)\n\n # get the sample idxs for this front\n onset_sample_idxs = [s for _f, s in onset_idxs]\n\n # get the latest onset in this 
onset front\n latest_onset_in_front = max(onset_sample_idxs)\n\n offset_front_id_after_this_onset_front = _get_offset_front_id_after_onset_sample_idx(latest_onset_in_front, offset_fronts)\n\n return int(offset_front_id_after_this_onset_front)", "response": "Get the ID corresponding to the offset after the given onset front."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngives a onset_front_id and a list of offsets and a list of onsets and offsets return the first matching onset_front_id.", "response": "def _match_offset_front_id_to_onset_front_id(onset_front_id, onset_fronts, offset_fronts, onsets, offsets):\n \"\"\"\n Find all offset fronts which are composed of at least one offset which corresponds to one of the onsets in the\n given onset front.\n The offset front which contains the most of such offsets is the match.\n If there are no such offset fronts, return -1.\n \"\"\"\n # find all offset fronts which are composed of at least one offset which corresponds to one of the onsets in the onset front\n # the offset front which contains the most of such offsets is the match\n\n # get the onsets that make up front_id\n onset_idxs = _get_front_idxs_from_id(onset_fronts, onset_front_id)\n\n # get the offsets that match the onsets in front_id\n offset_idxs = [_lookup_offset_by_onset_idx(i, onsets, offsets) for i in onset_idxs]\n\n # get all offset_fronts which contain at least one of these offsets\n candidate_offset_front_ids = set([int(offset_fronts[f, i]) for f, i in offset_idxs])\n\n # It is possible that offset_idxs contains offset indexes that correspond to offsets that did not\n # get formed into a front - those will have a front ID of 0. 
Remove them.\n candidate_offset_front_ids = [id for id in candidate_offset_front_ids if id != 0]\n\n if candidate_offset_front_ids:\n chosen_offset_front_id = _choose_front_id_from_candidates(candidate_offset_front_ids, offset_fronts, offset_idxs)\n else:\n chosen_offset_front_id = _get_offset_front_id_after_onset_front(onset_front_id, onset_fronts, offset_fronts)\n\n return chosen_offset_front_id"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nyields lists of the form [(f, s), (f, s), ...] from a front, such that each yielded list is consecutive in frequency.", "response": "def _get_consecutive_portions_of_front(front):\n \"\"\"\n Yields lists of the form [(f, s), (f, s)], one at a time from the given front (which is a list of the same form),\n such that each list yielded is consecutive in frequency.\n \"\"\"\n last_f = None\n ls = []\n for f, s in front:\n if last_f is not None and f != last_f + 1:\n yield ls\n ls = []\n ls.append((f, s))\n last_f = f\n yield ls"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the overlapping consecutive portions of an onset front and an offset front.", "response": "def _get_consecutive_and_overlapping_fronts(onset_fronts, offset_fronts, onset_front_id, offset_front_id):\n \"\"\"\n Gets an onset_front and an offset_front such that they both occupy at least some of the same\n frequency channels, then returns the portion of each that overlaps with the other.\n \"\"\"\n # Get the onset front of interest\n onset_front = _get_front_idxs_from_id(onset_fronts, onset_front_id)\n\n # Get the offset front of interest\n offset_front = _get_front_idxs_from_id(offset_fronts, offset_front_id)\n\n # Keep trying consecutive portions of this onset front until we find a consecutive portion\n # that overlaps with part of the offset front\n consecutive_portions_of_onset_front = [c for c in _get_consecutive_portions_of_front(onset_front)]\n for 
consecutive_portion_of_onset_front in consecutive_portions_of_onset_front:\n # Only get the segment of this front that overlaps in frequencies with the onset front of interest\n onset_front_frequency_indexes = [f for f, _ in consecutive_portion_of_onset_front]\n overlapping_offset_front = [(f, s) for f, s in offset_front if f in onset_front_frequency_indexes]\n\n # Only get as much of this overlapping portion as is actually consecutive\n for consecutive_portion_of_offset_front in _get_consecutive_portions_of_front(overlapping_offset_front):\n if consecutive_portion_of_offset_front:\n # Just return the first one we get - if we get any it means we found a portion of overlap\n return consecutive_portion_of_onset_front, consecutive_portion_of_offset_front\n return [], []"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning an updated segmentation mask such that the input `segmentation_mask` has been updated by segmenting between `onset_front_id` and `offset_front_id`, as found in `onset_fronts` and `offset_fronts`, respectively. This function also returns the onset_fronts and offset_fronts matrices, updated so that any fronts that are of less than 3 channels wide are removed. This function also returns a boolean value indicating whether the onset channel went to completion. Specifically, segments by doing the following: - Going across frequencies in the onset_front, - add the segment mask ID (the onset front ID) to all samples between the onset_front and the offset_front, if the offset_front is in that frequency. 
Possible scenarios: Fronts line up completely: :: | | S S S | | => S S S | | S S S | | S S S Onset front starts before offset front: :: | | | | S S S | | => S S S | | S S S Onset front ends after offset front: :: | | S S S | | => S S S | | S S S | | Onset front starts before and ends after offset front: :: | | | | => S S S | | S S S | | The above three options in reverse: :: | |S S| | |S S| |S S| |S S| |S S| |S S| |S S| |S S| | | There is one last scenario: :: | | \\ / \\ / / \\ | | Where the offset and onset fronts cross one another. If this happens, we simply reverse the indices and accept: :: |sss| \\sss/ \\s/ /s\\ |sss| The other option would be to destroy the offset front from the crossover point on, and then search for a new offset front for the rest of the onset front.", "response": "def _update_segmentation_mask(segmentation_mask, onset_fronts, offset_fronts, onset_front_id, offset_front_id_most_overlap):\n \"\"\"\n Returns an updated segmentation mask such that the input `segmentation_mask` has been updated by segmenting between\n `onset_front_id` and `offset_front_id`, as found in `onset_fronts` and `offset_fronts`, respectively.\n\n This function also returns the onset_fronts and offset_fronts matrices, updated so that any fronts that are of\n less than 3 channels wide are removed.\n\n This function also returns a boolean value indicating whether the onset channel went to completion.\n\n Specifically, segments by doing the following:\n\n - Going across frequencies in the onset_front,\n - add the segment mask ID (the onset front ID) to all samples between the onset_front and the offset_front,\n if the offset_front is in that frequency.\n\n Possible scenarios:\n\n Fronts line up completely:\n\n ::\n\n | | S S S\n | | => S S S\n | | S S S\n | | S S S\n\n Onset front starts before offset front:\n\n ::\n\n | |\n | | S S S\n | | => S S S\n | | S S S\n\n Onset front ends after offset front:\n\n ::\n\n | | S S S\n | | => S S S\n | | S S S\n | |\n\n Onset front 
starts before and ends after offset front:\n\n ::\n\n | |\n | | => S S S\n | | S S S\n | |\n\n The above three options in reverse:\n\n ::\n\n | |S S| |\n |S S| |S S| |S S|\n |S S| |S S| |S S|\n |S S| | |\n\n There is one last scenario:\n\n ::\n\n | |\n \\ /\n \\ /\n / \\\n | |\n\n Where the offset and onset fronts cross one another. If this happens, we simply\n reverse the indices and accept:\n\n ::\n\n |sss|\n \\sss/\n \\s/\n /s\\\n |sss|\n\n The other option would be to destroy the offset front from the crossover point on, and\n then search for a new offset front for the rest of the onset front.\n \"\"\"\n # Get the portions of the onset and offset fronts that overlap and are consecutive\n onset_front_overlap, offset_front_overlap = _get_consecutive_and_overlapping_fronts(onset_fronts, offset_fronts, onset_front_id, offset_front_id_most_overlap)\n onset_front = _get_front_idxs_from_id(onset_fronts, onset_front_id)\n offset_front = _get_front_idxs_from_id(offset_fronts, offset_front_id_most_overlap)\n msg = \"Onset front {} and offset front {} result in consecutive overlapping portions of (on) {} and (off) {}, one of which is empty\".format(\n onset_front, offset_front, onset_front_overlap, offset_front_overlap\n )\n assert onset_front_overlap, msg\n assert offset_front_overlap, msg\n onset_front = onset_front_overlap\n offset_front = offset_front_overlap\n\n # Figure out which frequencies will go in the segment\n flow_on, _slow_on = onset_front[0]\n fhigh_on, _shigh_on = onset_front[-1]\n flow_off, _slow_off = offset_front[0]\n fhigh_off, _shigh_off = offset_front[-1]\n flow = max(flow_on, flow_off)\n fhigh = min(fhigh_on, fhigh_off)\n\n # Update all the masks with the segment\n for fidx, _freqchan in enumerate(segmentation_mask[flow:fhigh + 1, :], start=flow):\n assert fidx >= flow, \"Frequency index is {}, but we should have started at {}\".format(fidx, flow)\n assert (fidx - flow) < len(onset_front), \"Frequency index {} minus starting frequency {} is too 
large for nfrequencies {} in onset front {}\".format(\n fidx, flow, len(onset_front), onset_front\n )\n assert (fidx - flow) < len(offset_front), \"Frequency index {} minus starting frequency {} is too large for nfrequencies {} in offset front {}\".format(\n fidx, flow, len(offset_front), offset_front\n )\n _, beg = onset_front[fidx - flow]\n _, end = offset_front[fidx - flow]\n if beg > end:\n end, beg = beg, end\n assert end >= beg\n segmentation_mask[fidx, beg:end + 1] = onset_front_id\n onset_fronts[fidx, (beg + 1):(end + 1)] = 0\n offset_fronts[fidx, (beg + 1):(end + 1)] = 0\n nfreqs_used_in_onset_front = (fidx - flow) + 1\n\n # Update the other masks to delete fronts that have been used\n indexes = np.arange(flow, fhigh + 1, 1, dtype=np.int64)\n onset_front_sample_idxs_across_freqs = np.array([s for _, s in onset_front])\n onset_front_sample_idxs_across_freqs_up_to_break = onset_front_sample_idxs_across_freqs[:nfreqs_used_in_onset_front]\n offset_front_sample_idxs_across_freqs = np.array([s for _, s in offset_front])\n offset_front_sample_idxs_across_freqs_up_to_break = offset_front_sample_idxs_across_freqs[:nfreqs_used_in_onset_front]\n\n ## Remove the offset front from where we started to where we ended\n offset_fronts[indexes[:nfreqs_used_in_onset_front], offset_front_sample_idxs_across_freqs_up_to_break] = 0\n\n ## Remove the onset front from where we started to where we ended\n onset_fronts[indexes[:nfreqs_used_in_onset_front], onset_front_sample_idxs_across_freqs_up_to_break] = 0\n\n # Determine if we matched the entire onset front by checking if there is any more of this onset front in onset_fronts\n whole_onset_front_matched = onset_front_id not in np.unique(onset_fronts)\n\n return whole_onset_front_matched"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _front_id_from_idx(front, index):\n fidx, sidx = index\n id = front[fidx, sidx]\n if id == 0:\n return -1\n else:\n return id", 
"response": "Returns the front ID found in front at the given index."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_front_ids_one_at_a_time(onset_fronts):\n yielded_so_far = set()\n for row in onset_fronts:\n for id in row:\n if id != 0 and id not in yielded_so_far:\n yield id\n yielded_so_far.add(id)", "response": "Yields one onset front ID at a time until they are gone."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the offsets that occur as close as possible to the onsets in the given onset - front.", "response": "def _get_corresponding_offsets(onset_fronts, onset_front_id, onsets, offsets):\n \"\"\"\n Gets the offsets that occur as close as possible to the onsets in the given onset-front.\n \"\"\"\n corresponding_offsets = []\n for index in _get_front_idxs_from_id(onset_fronts, onset_front_id):\n offset_fidx, offset_sidx = _lookup_offset_by_onset_idx(index, onsets, offsets)\n corresponding_offsets.append((offset_fidx, offset_sidx))\n return corresponding_offsets"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns all the offset fronts that are composed of at least one of the given offset indexes.", "response": "def _get_all_offset_fronts_from_offsets(offset_fronts, corresponding_offsets):\n \"\"\"\n Returns all the offset fronts that are composed of at least one of the given offset indexes.\n Also returns a dict of the form {offset_front_id: ntimes saw}\n \"\"\"\n all_offset_fronts_of_interest = []\n ids_ntimes_seen = {}\n for offset_index in corresponding_offsets:\n offset_id = _front_id_from_idx(offset_fronts, offset_index)\n if offset_id not in ids_ntimes_seen:\n offset_front_idxs = _get_front_idxs_from_id(offset_fronts, offset_id)\n all_offset_fronts_of_interest.append(offset_front_idxs)\n ids_ntimes_seen[offset_id] = 1\n else:\n ids_ntimes_seen[offset_id] += 1\n return 
all_offset_fronts_of_interest, ids_ntimes_seen"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _remove_overlaps(segmentation_mask, fronts):\n fidxs, sidxs = np.where((segmentation_mask != fronts) & (segmentation_mask != 0) & (fronts != 0))\n fronts[fidxs, sidxs] = 0", "response": "Removes all points in the segmented fronts that overlap with the segmentation mask."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns segmentation mask for the frequency - band of the onset - fronts.", "response": "def _match_fronts(onset_fronts, offset_fronts, onsets, offsets, debug=False):\n \"\"\"\n Returns a segmentation mask, which looks like this:\n frequency 1: 0 0 4 4 4 4 4 0 0 5 5 5\n frequency 2: 0 4 4 4 4 4 0 0 0 0 5 5\n frequency 3: 0 4 4 4 4 4 4 4 5 5 5 5\n\n That is, each item in the array is either a 0 (not part of a segment) or a positive\n integer which indicates which segment the sample in that frequency band belongs to.\n \"\"\"\n def printd(*args, **kwargs):\n if debug:\n print(*args, **kwargs)\n\n # Make copies of everything, so we can do whatever we want with them\n onset_fronts = np.copy(onset_fronts)\n offset_fronts = np.copy(offset_fronts)\n onsets = np.copy(onsets)\n offsets = np.copy(offsets)\n\n # This is what we will return\n segmentation_mask = np.zeros_like(onset_fronts)\n\n # - Take the first frequency in the onset_fronts matrix\n # [ s s s s s s s s s] <-- This frequency\n # [ s s s s s s s s s]\n # [ s s s s s s s s s]\n # [ s s s s s s s s s]\n # [ s s s s s s s s s]\n\n # - Follow it along in time like this:\n\n # first sample last sample\n # v --> v\n # [ s s s s s s s s s]\n # [ s s s s s s s s s]\n # [ s s s s s s s s s]\n # [ s s s s s s s s s]\n # [ s s s s s s s s s]\n\n # until you get to the first onset front in that frequency\n\n # Here it is!\n # v\n # [ . O . . . . . . .]\n # [ . . O . . . . . .]\n # [ . . O . . . . . .]\n # [ . 
O . . . . . . .]\n # [ O . . . . . . . .]\n\n resulting_onset_fronts = np.copy(onset_fronts)\n printd(\" -> Dealing with onset fronts...\")\n for onset_front_id in _get_front_ids_one_at_a_time(onset_fronts):\n printd(\" -> Dealing with onset front\", int(onset_front_id))\n front_is_complete = False\n while not front_is_complete:\n # - Now, starting at this onset front in each frequency, find that onset's corresponding offset\n\n # [ . O . . . . F . .]\n # [ . . O . . . F . .]\n # [ . . O . F . . . .]\n # [ . O . F . . . . .]\n # [ O F . . . . . . .]\n\n corresponding_offsets = _get_corresponding_offsets(resulting_onset_fronts, onset_front_id, onsets, offsets)\n\n # It is possible that onset_front_id has been removed from resulting_onset_fronts,\n # if so, skip it and move on to the next onset front (we are iterating over the original\n # to keep the iterator valid)\n if not corresponding_offsets:\n break\n\n # - Get all the offset fronts that are composed of at least one of these offset times\n\n # [ . O . . . . 1 . .]\n # [ . . O 3 . . 1 . .]\n # [ . . O 3 F . 1 . .]\n # [ . O . 3 . . . 1 .]\n # [ O F 3 . . . . . .]\n\n _all_offset_fronts_of_interest, ids_ntimes_seen = _get_all_offset_fronts_from_offsets(offset_fronts, corresponding_offsets)\n\n # - Check how many of these offset times each of the offset fronts are composed of:\n\n # [ . O . . . . Y . .]\n # [ . . O 3 . . Y . .]\n # [ . . O 3 F . 1 . .]\n # [ . O . X . . . 1 .]\n # [ O F 3 . . . . . .]\n\n # In this example, offset front 1 is made up of 4 offset times, 2 of which (the Y's) are offset times\n # that correspond to onsets in the onset front we are currently dealing with. Meanwhile, offset\n # front 3 is made up of 4 offset times, only one of which (the X) is one of the offsets that corresponds\n # to the onset front.\n\n # - Choose the offset front which matches the most offset time candidates. 
In this example, offset front 1\n # is chosen because it has 2 of these offset times.\n # If there is a tie, we choose the ID with the lower number\n ntimes_seen_sorted = sorted([(k, v) for k, v in ids_ntimes_seen.items()], key=lambda tup: (-1 * tup[1], tup[0]))\n assert len(ntimes_seen_sorted) > 0, \"We somehow got an empty dict of offset front IDs\"\n\n # Only use the special final front (the -1, catch-all front composed of final samples in each frequency) if necessary\n offset_front_id, _ntimes_seen = ntimes_seen_sorted[0]\n if offset_front_id == -1 and len(ntimes_seen_sorted) > 1:\n offset_front_id, _ntimes_seen = ntimes_seen_sorted[1]\n offset_front_id_most_overlap = offset_front_id\n\n # - Finally, update the segmentation mask to follow the offset\n # front from where it first overlaps in frequency with the onset front to where it ends or to where\n # the onset front ends, whichever happens first.\n\n # [ . S S S S S S . .]\n # [ . . S S S S S . .]\n # [ . . S S S S S . .]\n # [ . S S S S S S S .]\n # [ O F 3 . . . . . .] <-- This frequency has not yet been matched with an offset front\n front_is_complete = _update_segmentation_mask(segmentation_mask,\n resulting_onset_fronts,\n offset_fronts,\n onset_front_id,\n offset_front_id_most_overlap)\n\n # Remove any onsets that are covered by the new segmentation mask\n _remove_overlaps(segmentation_mask, resulting_onset_fronts)\n\n # Remove any offsets that are covered by the new segmentation mask\n _remove_overlaps(segmentation_mask, offset_fronts)\n\n # - Repeat this algorithm, restarting in the first frequency channel that did not match (the last frequency in\n # the above example). 
Do this until you have finished with this onset front.\n\n # - Repeat for each onset front in the rest of this frequency\n # - Repeat for each frequency\n\n return segmentation_mask"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves all fronts from the given list that are strictly smaller than size consecutive frequencies in length.", "response": "def _remove_fronts_that_are_too_small(fronts, size):\n \"\"\"\n Removes all fronts from `fronts` which are strictly smaller than\n `size` consecutive frequencies in length.\n \"\"\"\n ids = np.unique(fronts)\n for id in ids:\n if id == 0 or id == -1:\n continue\n front = _get_front_idxs_from_id(fronts, id)\n if len(front) < size:\n indexes = ([f for f, _ in front], [s for _, s in front])\n fronts[indexes] = 0"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _break_poorly_matched_fronts(fronts, threshold=0.1, threshold_overlap_samples=3):\n assert threshold_overlap_samples > 0, \"Number of samples of overlap must be greater than zero\"\n breaks_after = {}\n for front_id in _get_front_ids_one_at_a_time(fronts):\n front = _get_front_idxs_from_id(fronts, front_id)\n for i, (f, s) in enumerate(front):\n if i < len(front) - 1:\n # Get the signal from f, s to f, s+1 and the signal from f+1, s to f+1, s+1\n next_f, next_s = front[i + 1]\n low_s = min(s, next_s)\n high_s = max(s, next_s)\n sig_this_f = fronts[f, low_s:high_s]\n sig_next_f = fronts[next_f, low_s:high_s]\n assert len(sig_next_f) == len(sig_this_f)\n\n if len(sig_next_f) > threshold_overlap_samples:\n # If these two signals are not sufficiently close in form, this front should be broken up\n correlation = signal.correlate(sig_this_f, sig_next_f, mode='same')\n assert len(correlation) > 0\n correlation = correlation / max(correlation + 1E-9)\n similarity = np.sum(correlation) / len(correlation)\n # TODO: the above stuff probably needs to be figured out\n if 
similarity < threshold:\n if front_id in breaks_after:\n breaks_after[front_id].append((f, s))\n else:\n breaks_after[front_id] = [(f, s)]\n\n # Now update the fronts matrix by breaking up any fronts at the points we just identified\n # and assign the newly created fronts new IDs\n taken_ids = sorted(np.unique(fronts))\n next_id = taken_ids[-1] + 1\n for id in breaks_after.keys():\n for f, s in breaks_after[id]:\n fidxs, sidxs = np.where(fronts == id)\n idxs_greater_than_f = [fidx for fidx in fidxs if fidx > f]\n start = len(sidxs) - len(idxs_greater_than_f)\n indexes = (idxs_greater_than_f, sidxs[start:])\n fronts[indexes] = next_id\n next_id += 1\n\n _remove_fronts_that_are_too_small(fronts, 3)", "response": "Break poorly matched fronts apart into separate fronts."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the segmentation mask if there is any overlap.", "response": "def _update_segmentation_mask_if_overlap(toupdate, other, id, otherid):\n \"\"\"\n Merges the segments specified by `id` (found in `toupdate`) and `otherid`\n (found in `other`) if they overlap at all. Updates `toupdate` accordingly.\n \"\"\"\n # If there is any overlap or touching, merge the two, otherwise just return\n yourmask = other == otherid\n mymask = toupdate == id\n overlap_exists = np.any(yourmask & mymask)\n if not overlap_exists:\n return\n\n yourfidxs, yoursidxs = np.where(other == otherid)\n toupdate[yourfidxs, yoursidxs] = id"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck if two segments are adjacent at any point.", "response": "def _segments_are_adjacent(seg1, seg2):\n \"\"\"\n Checks if seg1 and seg2 are adjacent at any point. 
Each is a tuple of the form\n    (fidxs, sidxs).\n    \"\"\"\n    # TODO: This is unacceptably slow\n    lsf1, lss1 = seg1\n    lsf2, lss2 = seg2\n    for i, f1 in enumerate(lsf1):\n        for j, f2 in enumerate(lsf2):\n            if f1 <= f2 + 1 and f1 >= f2 - 1:\n                # Frequencies are a match, are samples?\n                if lss1[i] <= lss2[j] + 1 and lss1[i] >= lss2[j] - 1:\n                    return True\n    return False"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _merge_adjacent_segments(mask):\n    mask_ids = [id for id in np.unique(mask) if id != 0]\n    for id in mask_ids:\n        myfidxs, mysidxs = np.where(mask == id)\n        for other in mask_ids:  # Ugh, brute force O(N^2) algorithm.. gross..\n            if id == other:\n                continue\n            else:\n                other_fidxs, other_sidxs = np.where(mask == other)\n                if _segments_are_adjacent((myfidxs, mysidxs), (other_fidxs, other_sidxs)):\n                    mask[other_fidxs, other_sidxs] = id", "response": "Merges all segments in mask which are touching."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _integrate_segmentation_masks(segmasks):\n    if len(segmasks) == 1:\n        return segmasks[0]\n\n    assert len(segmasks) > 0, \"Passed in empty list of segmentation masks\"\n    coarse_mask = np.copy(segmasks[0])\n    mask_ids = [id for id in np.unique(coarse_mask) if id != 0]\n    for id in mask_ids:\n        for mask in segmasks[1:]:\n            finer_ids = [i for i in np.unique(mask) if i != 0]\n            for finer_id in finer_ids:\n                _update_segmentation_mask_if_overlap(coarse_mask, mask, id, finer_id)\n\n    # Lastly, merge all adjacent blocks, but just kidding, since this algorithm is waaaay too slow\n    #_merge_adjacent_segments(coarse_mask)\n    return coarse_mask", "response": "Integrates the given list of segmentation masks together to form one segmentation mask."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a list of segmentation masks each of the same dimension as the input one and where each 
segment in each of the same dimension is zeroed.", "response": "def _separate_masks(mask, threshold=0.025):\n \"\"\"\n Returns a list of segmentation masks each of the same dimension as the input one,\n but where they each have exactly one segment in them and all other samples in them\n are zeroed.\n\n Only bothers to return segments that are larger in total area than `threshold * mask.size`.\n \"\"\"\n try:\n ncpus = multiprocessing.cpu_count()\n except NotImplementedError:\n ncpus = 2\n\n with multiprocessing.Pool(processes=ncpus) as pool:\n mask_ids = [id for id in np.unique(mask) if id != 0]\n thresholds = [threshold * mask.size for _ in range(len(mask_ids))]\n masks = [mask for _ in range(len(mask_ids))]\n ms = pool.starmap(_separate_masks_task, zip(mask_ids, thresholds, masks))\n return [m for m in ms if m is not None]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ntake the given `mask` and `stft`, which must be matrices of shape `frequencies, times` and downsamples one of them into the other one's times, so that the time dimensions are equal. Leaves the frequency dimension untouched.", "response": "def _downsample_one_or_the_other(mask, mask_indexes, stft, stft_indexes):\n \"\"\"\n Takes the given `mask` and `stft`, which must be matrices of shape `frequencies, times`\n and downsamples one of them into the other one's times, so that the time dimensions\n are equal. 
Leaves the frequency dimension untouched.\n \"\"\"\n assert len(mask.shape) == 2, \"Expected a two-dimensional `mask`, but got one of {} dimensions.\".format(len(mask.shape))\n assert len(stft.shape) == 2, \"Expected a two-dimensional `stft`, but got one of {} dimensions.\".format(len(stft.shape))\n\n if mask.shape[1] > stft.shape[1]:\n downsample_factor = mask.shape[1] / stft.shape[1]\n indexes = _get_downsampled_indexes(mask, downsample_factor)\n mask = mask[:, indexes]\n mask_indexes = np.array(indexes)\n elif mask.shape[1] < stft.shape[1]:\n downsample_factor = stft.shape[1] / mask.shape[1]\n indexes = _get_downsampled_indexes(stft, downsample_factor)\n stft = stft[:, indexes]\n stft_indexes = np.array(indexes)\n\n return mask, mask_indexes, stft, stft_indexes"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _map_segmentation_mask_to_stft_domain(mask, times, frequencies, stft_times, stft_frequencies):\n assert mask.shape == (frequencies.shape[0], times.shape[0]), \"Times is shape {} and frequencies is shape {}, but mask is shaped {}\".format(\n times.shape, frequencies.shape, mask.shape\n )\n result = np.zeros((stft_frequencies.shape[0], stft_times.shape[0]))\n\n if len(stft_times) > len(times):\n all_j = [j for j in range(len(stft_times))]\n idxs = [int(i) for i in np.linspace(0, len(times) - 1, num=len(stft_times))]\n all_i = [all_j[idx] for idx in idxs]\n else:\n all_i = [i for i in range(len(times))]\n idxs = [int(i) for i in np.linspace(0, len(stft_times) - 1, num=len(times))]\n all_j = [all_i[idx] for idx in idxs]\n\n for i, j in zip(all_i, all_j):\n result[:, j] = np.interp(stft_frequencies, frequencies, mask[:, i])\n\n return result", "response": "Maps the given mask which is in domain ( frequencies times stft_times and stft_frequencies and returns the result."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _asa_task(q, masks, stft, 
sample_width, frame_rate, nsamples_for_each_fft):\n    # Convert each mask to (1 or 0) rather than (ID or 0)\n    masks = [np.where(mask > 0, 1, 0) for mask in masks]\n\n    # Multiply the masks against STFTs\n    masks = [mask * stft for mask in masks]\n\n    nparrs = []\n    dtype_dict = {1: np.int8, 2: np.int16, 4: np.int32}\n    dtype = dtype_dict[sample_width]\n    for m in masks:\n        _times, nparr = signal.istft(m, frame_rate, nperseg=nsamples_for_each_fft)\n        nparr = nparr.astype(dtype)\n        nparrs.append(nparr)\n\n    for m in nparrs:\n        q.put(m)\n    q.put(\"DONE\")", "response": "A task that performs the ASA algorithm."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrun a Markov Decision Process over the given audio segment and returns a generator of integers.", "response": "def _get_filter_indices(seg, start_as_yes, prob_raw_yes, ms_per_input, model, transition_matrix, model_stats):\n    \"\"\"\n    Runs a Markov Decision Process over the given `seg` in chunks of `ms_per_input`, yielding `True` if\n    this `ms_per_input` chunk has been classified as positive (1) and `False` if this chunk has been\n    classified as negative (0).\n\n    :param seg: The AudioSegment to apply this algorithm to.\n    :param start_as_yes: If True, the first `ms_per_input` chunk will be classified as positive.\n    :param prob_raw_yes: The raw probability of finding the event in any given independently sampled `ms_per_input`.\n    :param ms_per_input: The number of ms of AudioSegment to be fed into the model at a time.\n    :param model: The model, which must have a predict() function, which takes an AudioSegment of `ms_per_input`\n                  number of ms and which outputs 1 if the audio event is detected in that input, 0 if not.\n    :param transition_matrix: An iterable of the form: [p(yes->no), p(no->yes)].\n    :param model_stats: An iterable of the form: [p(reality=1|output=1), p(reality=1|output=0)].\n    :yields: `True` if the event has been classified in this chunk, `False` otherwise.\n    \"\"\"\n    filter_triggered = 1 
if start_as_yes else 0\n    prob_raw_no = 1.0 - prob_raw_yes\n    for segment, _timestamp in seg.generate_frames_as_segments(ms_per_input):\n        yield filter_triggered\n        observation = int(round(model.predict(segment)))\n        assert observation == 1 or observation == 0, \"The given model did not output a 1 or a 0, output: \"\\\n               + str(observation)\n        prob_hyp_yes_given_last_hyp = 1.0 - transition_matrix[0] if filter_triggered else transition_matrix[1]\n        prob_hyp_no_given_last_hyp = transition_matrix[0] if filter_triggered else 1.0 - transition_matrix[1]\n        prob_hyp_yes_given_data = model_stats[0] if observation == 1 else model_stats[1]\n        prob_hyp_no_given_data = 1.0 - model_stats[0] if observation == 1 else 1.0 - model_stats[1]\n        hypothesis_yes = prob_raw_yes * prob_hyp_yes_given_last_hyp * prob_hyp_yes_given_data\n        hypothesis_no = prob_raw_no * prob_hyp_no_given_last_hyp * prob_hyp_no_given_data\n        # make a list of ints - each is 0 or 1. The number of 1s is hypothesis_yes * 100\n        # the number of 0s is hypothesis_no * 100\n        distribution = [1 for i in range(int(round(hypothesis_yes * 100)))]\n        distribution.extend([0 for i in range(int(round(hypothesis_no * 100)))])\n        # shuffle\n        random.shuffle(distribution)\n        filter_triggered = random.choice(distribution)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntakes a list of 1s and 0s and returns a list of tuples of the form y n", "response": "def _group_filter_values(seg, filter_indices, ms_per_input):\n    \"\"\"\n    Takes a list of 1s and 0s and returns a list of tuples of the form:\n    ['y/n', timestamp].\n    \"\"\"\n    ret = []\n    for filter_value, (_segment, timestamp) in zip(filter_indices, seg.generate_frames_as_segments(ms_per_input)):\n        if filter_value == 1:\n            if len(ret) > 0 and ret[-1][0] == 'n':\n                ret.append(['y', timestamp])  # The last one was different, so we create a new one\n            elif len(ret) > 0 and ret[-1][0] == 'y':\n                ret[-1][1] = timestamp  # The last one was the same as this one, so just 
update the timestamp\n            else:\n                ret.append(['y', timestamp])  # This is the first one\n        else:\n            if len(ret) > 0 and ret[-1][0] == 'n':\n                ret[-1][1] = timestamp\n            elif len(ret) > 0 and ret[-1][0] == 'y':\n                ret.append(['n', timestamp])\n            else:\n                ret.append(['n', timestamp])\n    return ret"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntakes `ls` (a list of 1s and 0s) and smoothes it so that adjacent values are more likely to be the same. :param ls: A list of 1s and 0s to smooth. :param window_size: How large the smoothing kernel is. :returns: A list of 1s and 0s, but smoother.", "response": "def _homogeneity_filter(ls, window_size):\n    \"\"\"\n    Takes `ls` (a list of 1s and 0s) and smoothes it so that adjacent values are more likely\n    to be the same.\n\n    :param ls: A list of 1s and 0s to smooth.\n    :param window_size: How large the smoothing kernel is.\n    :returns: A list of 1s and 0s, but smoother.\n    \"\"\"\n    # TODO: This is a fine way to do this, but it seems like it might be faster and better to do a Gaussian convolution followed by rounding\n    k = window_size\n    i = k\n    while i <= len(ls) - k:\n        # Get a window of k items\n        window = [ls[i + j] for j in range(k)]\n        # Change the items in the window to be more like the mode of that window\n        mode = 1 if sum(window) >= k / 2 else 0\n        for j in range(k):\n            ls[i+j] = mode\n        i += k\n    return ls"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef bandpass_filter(data, low, high, fs, order=5):\n    nyq = 0.5 * fs\n    low = low / nyq\n    high = high / nyq\n    b, a = signal.butter(order, [low, high], btype='band')\n    y = signal.lfilter(b, a, data)\n    return y", "response": "Does a bandpass filter over the given data."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndo a lowpass filter over the given data.", "response": "def lowpass_filter(data, cutoff, fs, order=5):\n    \"\"\"\n    Does a lowpass filter over the given data.\n\n    :param data: The 
data (numpy array) to be filtered.\n :param cutoff: The high cutoff in Hz.\n :param fs: The sample rate in Hz of the data.\n :param order: The order of the filter. The higher the order, the tighter the roll-off.\n :returns: Filtered data (numpy array).\n \"\"\"\n nyq = 0.5 * fs\n normal_cutoff = cutoff / nyq\n b, a = signal.butter(order, normal_cutoff, btype='low', analog=False)\n y = signal.lfilter(b, a, data)\n return y"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef list_to_tf_input(data, response_index, num_outcomes):\n matrix = np.matrix([row[:response_index] + row[response_index+1:] for row in data])\n outcomes = np.asarray([row[response_index] for row in data], dtype=np.uint8)\n outcomes_onehot = (np.arange(num_outcomes) == outcomes[:, None]).astype(np.float32)\n\n return matrix, outcomes_onehot", "response": "Converts a list of data into a tf. Input matrix and onehot vectors."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nexpanding continuous features and standardizes categorical features and expands continuous features.", "response": "def expand_and_standardize_dataset(response_index, response_header, data_set, col_vals, headers, standardizers, feats_to_ignore, columns_to_expand, outcome_trans_dict):\n \"\"\"\n Standardizes continuous features and expands categorical features.\n \"\"\"\n # expand and standardize\n modified_set = []\n for row_index, row in enumerate(data_set):\n new_row = []\n for col_index, val in enumerate(row):\n header = headers[col_index]\n\n # Outcome feature -> index outcome\n if col_index == response_index:\n new_outcome = outcome_trans_dict[val]\n new_row.append(new_outcome)\n\n # Ignored feature -> pass\n elif header in feats_to_ignore:\n pass\n \n # Categorical feature -> create new binary column for each possible value of the column\n elif header in columns_to_expand:\n for poss_val in col_vals[header]:\n if val == poss_val:\n 
new_cat_val = 1.0\n else:\n new_cat_val = -1.0\n new_row.append(new_cat_val)\n\n # Continuous feature -> standardize value with respect to its column\n else:\n new_cont_val = float((val - standardizers[header]['mean']) / standardizers[header]['std_dev'])\n new_row.append(new_cont_val)\n\n modified_set.append(new_row)\n\n # update headers to reflect column expansion\n expanded_headers = []\n for header in headers:\n if header in feats_to_ignore:\n pass\n elif (header in columns_to_expand) and (header is not response_header):\n for poss_val in col_vals[header]:\n new_header = '{}_{}'.format(header,poss_val)\n expanded_headers.append(new_header)\n else:\n expanded_headers.append(header)\n\n return modified_set, expanded_headers"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef equal_ignore_order(a, b):\n unmatched = list(b)\n for element in a:\n try:\n unmatched.remove(element)\n except ValueError:\n return False\n return not unmatched", "response": "Returns True if the two edge lists have the same elements."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef repair(self, data_to_repair):\n num_cols = len(data_to_repair[0])\n col_ids = range(num_cols)\n \n # Get column type information\n col_types = [\"Y\"]*len(col_ids)\n for i, col in enumerate(col_ids):\n if i in self.features_to_ignore:\n col_types[i] = \"I\"\n elif i == self.feature_to_repair:\n col_types[i] = \"X\"\n\n col_type_dict = {col_id: col_type for col_id, col_type in zip(col_ids, col_types)}\n\n not_I_col_ids = filter(lambda x: col_type_dict[x] != \"I\", col_ids)\n \n if self.kdd:\n cols_to_repair = filter(lambda x: col_type_dict[x] == \"Y\", col_ids) \n else:\n cols_to_repair = filter(lambda x: col_type_dict[x] in \"YX\", col_ids)\n \n # To prevent potential perils with user-provided column names, map them to safe column names\n safe_stratify_cols = [self.feature_to_repair]\n\n # Extract column 
values for each attribute in data\n    # Begin by initializing keys and values in dictionary\n    data_dict = {col_id: [] for col_id in col_ids}\n\n    # Populate each attribute with its column values\n    for row in data_to_repair:\n      for i in col_ids:\n        data_dict[i].append(row[i])\n\n\n    repair_types = {}\n    for col_id, values in data_dict.items():\n      if all(isinstance(value, float) for value in values):\n        repair_types[col_id] = float\n      elif all(isinstance(value, int) for value in values):\n        repair_types[col_id] = int\n      else:\n        repair_types[col_id] = str\n\n    \"\"\"\n    Create unique value structures: When performing repairs, we choose median values. If repair is partial, then values will be modified to some intermediate value between the original and the median value. However, the partially repaired value will only be chosen out of values that exist in the data set. This prevents choosing values that might not make any sense in the data's context. To do this, for each column, we need to sort all unique values and create two data structures: a list of values, and a dict mapping values to their positions in that list. Example: There are unique_col_vals[col] = [1, 2, 5, 7, 10, 14, 20] in the column. A value 2 must be repaired to 14, but the user requests that data only be repaired by 50%. We do this by finding the value at the right index:\n    index_lookup[col][2] = 1\n    index_lookup[col][14] = 5\n    this tells us that unique_col_vals[col][3] = 7 is 50% of the way from 2 to 14.\n    \"\"\"\n    unique_col_vals = {}\n    index_lookup = {}\n    for col_id in not_I_col_ids:\n      col_values = data_dict[col_id]\n      # extract unique values from column and sort\n      col_values = sorted(list(set(col_values)))\n      unique_col_vals[col_id] = col_values\n      # look up a value, get its position\n      index_lookup[col_id] = {col_values[i]: i for i in range(len(col_values))}\n\n    \"\"\"\n    Make a list of unique values per each stratified column. 
Then make a list of combinations of stratified groups. Example: race and gender cols are stratified: [(white, female), (white, male), (black, female), (black, male)] The combinations are tuples because they can be hashed and used as dictionary keys. From these, find the sizes of these groups.\n \"\"\"\n unique_stratify_values = [unique_col_vals[i] for i in safe_stratify_cols]\n all_stratified_groups = list(product(*unique_stratify_values))\n # look up a stratified group, and get a list of indices corresponding to that group in the data\n stratified_group_indices = defaultdict(list)\n\n # Find the number of unique values for each strat-group, organized per column.\n val_sets = {group: {col_id:set() for col_id in cols_to_repair}\n for group in all_stratified_groups}\n for i, row in enumerate(data_to_repair):\n group = tuple(row[col] for col in safe_stratify_cols)\n for col_id in cols_to_repair:\n val_sets[group][col_id].add(row[col_id])\n\n # Also remember that this row pertains to this strat-group.\n stratified_group_indices[group].append(i)\n\n\n \"\"\"\n Separate data by stratified group to perform repair on each Y column's values given that their corresponding protected attribute is a particular stratified group. We need to keep track of each Y column's values corresponding to each particular stratified group, as well as each value's index, so that when we repair the data, we can modify the correct value in the original data. 
Example: Supposing there is a Y column, \"Score1\", in which the 3rd and 5th scores, 70 and 90 respectively, belonged to black women, the data structure would look like: {(\"Black\", \"Woman\"): {Score1: [(70,2),(90,4)]}}\n \"\"\"\n stratified_group_data = {group: {} for group in all_stratified_groups}\n for group in all_stratified_groups:\n for col_id, col_dict in data_dict.items():\n # Get the indices at which each value occurs.\n indices = {}\n for i in stratified_group_indices[group]:\n value = col_dict[i]\n if value not in indices:\n indices[value] = []\n indices[value].append(i)\n\n stratified_col_values = [(occurs, val) for val, occurs in indices.items()]\n stratified_col_values.sort(key=lambda tup: tup[1])\n stratified_group_data[group][col_id] = stratified_col_values\n\n mode_feature_to_repair = get_mode(data_dict[self.feature_to_repair])\n\n # Repair Data and retrieve the results\n for col_id in cols_to_repair:\n # which bucket value we're repairing\n group_offsets = {group: 0 for group in all_stratified_groups}\n col = data_dict[col_id]\n\n num_quantiles = min(len(val_sets[group][col_id]) for group in all_stratified_groups)\n quantile_unit = 1.0/num_quantiles\n\n if repair_types[col_id] in {int, float}:\n for quantile in range(num_quantiles):\n median_at_quantiles = []\n indices_per_group = {}\n\n for group in all_stratified_groups:\n group_data_at_col = stratified_group_data[group][col_id]\n num_vals = len(group_data_at_col)\n offset = int(round(group_offsets[group]*num_vals))\n number_to_get = int(round((group_offsets[group] + quantile_unit)*num_vals) - offset)\n group_offsets[group] += quantile_unit\n\n if number_to_get > 0:\n\n # Get data at this quantile from this Y column such that stratified X = group\n offset_data = group_data_at_col[offset:offset+number_to_get]\n indices_per_group[group] = [i for val_indices, _ in offset_data for i in val_indices]\n values = sorted([float(val) for _, val in offset_data])\n\n # Find this group's median value at 
this quantile\n median_at_quantiles.append( get_median(values, self.kdd) )\n\n # Find the median value of all groups at this quantile (chosen from each group's medians)\n median = get_median(median_at_quantiles, self.kdd)\n median_val_pos = index_lookup[col_id][median]\n\n # Update values to repair the dataset.\n for group in all_stratified_groups:\n for index in indices_per_group[group]:\n original_value = col[index]\n\n current_val_pos = index_lookup[col_id][original_value]\n distance = median_val_pos - current_val_pos # distance between indices\n distance_to_repair = int(round(distance * self.repair_level))\n index_of_repair_value = current_val_pos + distance_to_repair\n repaired_value = unique_col_vals[col_id][index_of_repair_value]\n\n # Update data to repaired valued\n data_dict[col_id][index] = repaired_value\n\n #Categorical Repair is done below\n elif repair_types[col_id] in {str}:\n feature = CategoricalFeature(col)\n categories = feature.bin_index_dict.keys()\n\n group_features = get_group_data(all_stratified_groups, stratified_group_data, col_id)\n\n categories_count = get_categories_count(categories, all_stratified_groups, group_features)\n\n categories_count_norm = get_categories_count_norm(categories, all_stratified_groups, categories_count, group_features)\n\n median = get_median_per_category(categories, categories_count_norm)\n\n # Partially fill-out the generator functions to simplify later calls.\n dist_generator = lambda group_index, category : gen_desired_dist(group_index, category, col_id, median, self.repair_level, categories_count_norm, self.feature_to_repair, mode_feature_to_repair)\n\n count_generator = lambda group_index, group, category : gen_desired_count(group_index, group, category, median, group_features, self.repair_level, categories_count)\n\n group_features, overflow = flow_on_group_features(all_stratified_groups, group_features, count_generator)\n\n group_features, assigned_overflow, distribution = 
assign_overflow(all_stratified_groups, categories, overflow, group_features, dist_generator)\n\n        # Return our repaired feature in the form of our original dataset\n        for group in all_stratified_groups:\n          indices = stratified_group_indices[group]\n          for i, index in enumerate(indices):\n            repaired_value = group_features[group].data[i]\n            data_dict[col_id][index] = repaired_value\n\n    # Replace stratified groups with their mode value, to remove its information\n    repaired_data = []\n    for i, orig_row in enumerate(data_to_repair):\n      new_row = [orig_row[j] if j not in cols_to_repair else data_dict[j][i] for j in col_ids]\n      repaired_data.append(new_row)\n    return repaired_data", "response": "Repair the data in the archive."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef group_audit_ranks(filenames, measurer, similarity_bound=0.05):\n\n  def _partition_groups(feature_scores):\n    groups = []\n    for feature, score in feature_scores:\n      added_to_group = False\n\n      # Check to see if the feature belongs in a group with any other features.\n      for i, group in enumerate(groups):\n        mean_score, group_feature_scores = group\n        if abs(mean_score - score) < similarity_bound:\n          groups[i][1].append( (feature, score) )\n\n          # Recalculate the representative mean.\n          groups[i][0] = sum([s for _, s in group_feature_scores])/len(group_feature_scores)\n          added_to_group = True\n          break\n\n      # If this feature did not match with the current groups, create another group.\n      if not added_to_group:\n        groups.append( [score, [(feature,score)]] )\n\n    # Return just the features.\n    return [[feature for feature, score in group] for _, group in groups]\n\n\n  score_dict = {}\n  features = []\n  for filename in filenames:\n    with open(filename) as audit_file:\n      header_line = audit_file.readline()[:-1]  # Remove the trailing endline.\n      feature = header_line[header_line.index(\":\")+1:]\n      features.append(feature)\n\n      confusion_matrices = 
load_audit_confusion_matrices(filename)\n      for rep_level, matrix in confusion_matrices:\n        score = measurer(matrix)\n        if rep_level not in score_dict:\n          score_dict[rep_level] = {}\n        score_dict[rep_level][feature] = score\n\n  # Sort by increasing repair level.\n  score_keys = sorted(score_dict.keys())\n\n  groups = [features]\n  while score_keys:\n    key = score_keys.pop()\n    new_groups = []\n    for group in groups:\n      group_features = [(f, score_dict[key][f]) for f in group]\n      sub_groups = _partition_groups(group_features)\n      new_groups.extend(sub_groups)\n    groups = new_groups\n\n  return groups", "response": "Given a list of audit files and a measurer function and a similarity bound return the rank of each group."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef accuracy(conf_matrix):\n    total, correct = 0.0, 0.0\n    for true_response, guess_dict in conf_matrix.items():\n      for guess, count in guess_dict.items():\n        if true_response == guess:\n          correct += count\n        total += count\n    return correct/total", "response": "Calculates the accuracy of a confusion matrix."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef BCR(conf_matrix):\n    parts = []\n    for true_response, guess_dict in conf_matrix.items():\n      error = 0.0\n      total = 0.0\n      for guess, count in guess_dict.items():\n        if true_response != guess:\n          error += count\n        total += count\n      parts.append(error/total)\n    BER = sum(parts)/len(parts)\n    return 1 - BER", "response": "Returns Balanced Error Rate."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ngive an unsorted list of numeric values return the median value.", "response": "def get_median(values, kdd):\n    \"\"\"\n    Given an unsorted list of numeric values, return median value (as a float).\n    Note that in the case of even-length lists of values, we apply the value to\n    the left of the center to be the median (such that the median 
can only be\n a value from the list of values).\n Eg: get_median([1,2,3,4]) == 2, not 2.5.\n \"\"\"\n\n if not values:\n raise Exception(\"Cannot calculate median of list with no values!\")\n\n sorted_values = deepcopy(values)\n sorted_values.sort() # Not calling `sorted` b/c `sorted_values` may not be list.\n\n if kdd:\n return sorted_values[len(values)//2]\n else:\n if len(values) % 2 == 0:\n return sorted_values[len(values)//2-1]\n else:\n return sorted_values[len(values)//2]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef expand_to_one_hot(data,expand = True,use_alternative=False):\n header_dict = {'ALCABUS':0,'PRIRCAT':1,'TMSRVC':2,'SEX1':3,'RACE':4,'RELTYP':5,'age_1st_arrest':6,'DRUGAB':7,'Class':8,'RLAGE':9,'NFRCTNS':10}\n\n new_data = []\n for entry in data:\n\ttemp = {}\n\tif expand == True:\n\t if entry[header_dict[\"SEX1\"]] == \"FEMALE\":\n\t\ttemp['female'] = 1\n\t else:\n\t\ttemp['female'] = 0\n\n\t if entry[header_dict[\"ALCABUS\"]] == 'INMATE IS AN ALCOHOL ABUSER':\n\t\ttemp['prior_alcohol_abuse'] = 1\n\t else:\n\t\ttemp['prior_alcohol_abuse'] = 0\n\n\t if entry[header_dict['DRUGAB']] == 'INMATE IS A DRUG ABUSER':\n\t\ttemp['prior_drug_abuse'] = 1\n\t else:\n\t\ttemp['prior_drug_abuse'] = 0\n\n\t if entry[header_dict['NFRCTNS']] == 'INMATE HAS RECORD':\n\t\ttemp['infraction_in_prison'] = 1\n\t else:\n\t\ttemp['infraction_in_prison'] = 0\n\n\t race_cats = ['WHITE','BLACK','AMERICAN INDIAN/ALEUTIAN','ASIAN/PACIFIC ISLANDER','OTHER','UNKNOWN']\n\n\t for cat in race_cats:\n\t\tif entry[header_dict['RACE']] == cat:\n\t\t temp['race_'+cat] = 1\n\t\telse:\n\t\t temp['race_'+cat] = 0\n\n\t release_age_cats = ['14 TO 17 YEARS OLD','18 TO 24 YEARS OLD', '25 TO 29 YEARS OLD', \\\n\t '30 TO 34 YEARS OLD','35 TO 39 YEARS OLD','40 TO 44 YEARS OLD','45 YEARS OLD AND OLDER']\n\t for cat in release_age_cats:\n\t\tif entry[header_dict['RLAGE']] == cat:\n\t\t temp['release_age_'+cat] = 1\n\t\telse:\n\t\t 
temp['release_age_'+cat] = 0\n\n\t time_served_cats = ['None','1 TO 6 MONTHS','13 TO 18 MONTHS','19 TO 24 MONTHS','25 TO 30 MONTHS', \\\n\t\t\t'31 TO 36 MONTHS','37 TO 60 MONTHS','61 MONTHS AND HIGHER','7 TO 12 MONTHS']\n\t for cat in time_served_cats:\n\t\tif entry[header_dict['TMSRVC']] == cat:\n\t\t temp['time_served_'+cat] = 1\n\t\telse:\n\t\t temp['time_served_'+cat] = 0\n\n\t prior_arrest_cats = ['None','1 PRIOR ARREST','11 TO 15 PRIOR ARRESTS','16 TO HI PRIOR ARRESTS','2 PRIOR ARRESTS', \\\n\t\t'3 PRIOR ARRESTS','4 PRIOR ARRESTS','5 PRIOR ARRESTS','6 PRIOR ARRESTS','7 TO 10 PRIOR ARRESTS']\n\t for cat in prior_arrest_cats:\n\t\tif entry[header_dict['PRIRCAT']] == cat:\n\t\t temp['prior_arrest_'+cat] = 1\n\t\telse:\n\t\t temp['prior_arrest_'+cat] = 0\n\n\t conditional_release =['PAROLE BOARD DECISION-SERVED NO MINIMUM','MANDATORY PAROLE RELEASE', 'PROBATION RELEASE-SHOCK PROBATION', \\\n\t\t\t'OTHER CONDITIONAL RELEASE']\n\t unconditional_release = ['EXPIRATION OF SENTENCE','COMMUTATION-PARDON','RELEASE TO CUSTODY, DETAINER, OR WARRANT', \\\n\t\t\t'OTHER UNCONDITIONAL RELEASE']\n\t other_release = ['NATURAL CAUSES','SUICIDE','HOMICIDE BY ANOTHER INMATE','OTHER HOMICIDE','EXECUTION','OTHER TYPE OF DEATH', \\\n\t\t 'TRANSFER','RELEASE ON APPEAL OR BOND','OTHER TYPE OF RELEASE','ESCAPE','ACCIDENTAL INJURY TO SELF','UNKNOWN']\n\t if entry[header_dict['RELTYP']] in conditional_release:\n\t\ttemp['released_conditional'] = 1\n\t\ttemp['released_unconditional'] = 0\n\t\ttemp['released_other'] = 0\n\t elif entry[header_dict['RELTYP']] in unconditional_release:\n\t\ttemp['released_conditional'] = 0\n\t\ttemp['released_unconditional'] = 1\n\t\ttemp['released_other'] = 0\n\t else:\n\t\ttemp['released_conditional'] = 0\n\t\ttemp['released_unconditional'] = 0\n\t\ttemp['released_other'] = 1\n\n\t first_arrest_cats = ['UNDER 17','BETWEEN 18 AND 24','BETWEEN 25 AND 29','BETWEEN 30 AND 39','OVER 40']\n\t for cat in first_arrest_cats:\n\t\tif 
entry[header_dict['age_1st_arrest']] == cat:\n\t\t temp['age_first_arrest_'+cat] = 1\n\t\telse:\n\t\t temp['age_first_arrest_'+cat] = 0\n\telse:\n\t temp['SEX1'] = entry['SEX1']\n\t temp['RELTYP'] = entry['RELTYP']\n\t temp['PRIRCAT'] = entry['PRIRCAT']\n\t temp['ALCABUS'] = entry['ALCABUS']\n\t temp['DRUGAB'] = entry['DRUGAB']\n\t temp['RLAGE'] = entry['RLAGE']\n\t temp['TMSRVC'] = entry['TMSRVC']\n\t temp['NFRCTNS'] = entry['NFRCTNS']\n\t temp['RACE'] = entry['RACE']\n\t try:\n\t\tbdate = datetime.date(int(entry['YEAROB2']),int(entry['MNTHOB2']), int(entry['DAYOB2']))\n\t\tfirst_arrest = datetime.date(int(entry['A001YR']),int(entry['A001MO']),int(entry['A001DA']))\n\t\tfirst_arrest_age = first_arrest - bdate\n\t\ttemp['age_1st_arrest'] = first_arrest_age.days\n\t except:\n\t\ttemp['age_1st_arrest'] = 0\n\n\tnew_data.append(temp)\n\n\n # convert from dictionary to list of lists\n fin = [[int(entry[key]) for key in entry.keys()] for entry in new_data]\n \"\"\"\n with open(\"brandon_testing/test_\"+str(time.clock())+\".csv\",\"w\") as f:\n\twriter = csv.writer(f,delimiter=\",\")\n\tfor row in fin:\n\t writer.writerow(row)\n \"\"\"\n\n return fin", "response": "Expand the data to one hot."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads a confusion matrix from an audit file.", "response": "def load_audit_confusion_matrices(filename):\n \"\"\"\n Loads a confusion matrix in a two-level dictionary format.\n\n For example, the confusion matrix of a 75%-accurate model\n that predicted 15 values (and mis-classified 5) may look like:\n {\"A\": {\"A\":10, \"B\": 5}, \"B\": {\"B\":5}}\n\n Note that raw boolean values are translated into strings, such that\n a value that was the boolean True will be returned as the string \"True\".\n \"\"\"\n\n with open(filename) as audit_file:\n audit_file.next() # Skip the first line.\n\n # Extract the confusion matrices and repair levels from the audit file.\n confusion_matrices = []\n for 
line in audit_file:\n separator = \":\"\n separator_index = line.index(separator)\n\n comma_index = line.index(',')\n repair_level = float(line[separator_index+2:comma_index])\n raw_confusion_matrix = line[comma_index+2:-2]\n confusion_matrix = json.loads( raw_confusion_matrix.replace(\"'\",\"\\\"\") )\n confusion_matrices.append( (repair_level, confusion_matrix) )\n\n # Sort the repair levels in case they are out of order for whatever reason.\n confusion_matrices.sort(key = lambda pair: pair[0])\n return confusion_matrices"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert a list of data into a tf. Input matrix and the list of outcomes.", "response": "def list_to_tf_input(data, response_index, num_outcomes):\n \"\"\"\n Separates the outcome feature from the data.\n \"\"\"\n matrix = np.matrix([row[:response_index] + row[response_index+1:] for row in data])\n outcomes = np.asarray([row[response_index] for row in data], dtype=np.uint8)\n\n return matrix, outcomes"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _update_index_url_from_configs(self):\n\n if 'VIRTUAL_ENV' in os.environ:\n self.pip_config_locations.append(os.path.join(os.environ['VIRTUAL_ENV'], 'pip.conf'))\n self.pip_config_locations.append(os.path.join(os.environ['VIRTUAL_ENV'], 'pip.ini'))\n\n if site_config_files:\n self.pip_config_locations.extend(site_config_files)\n\n index_url = None\n custom_config = None\n\n if 'PIP_INDEX_URL' in os.environ and os.environ['PIP_INDEX_URL']:\n # environ variable takes priority\n index_url = os.environ['PIP_INDEX_URL']\n custom_config = 'PIP_INDEX_URL environment variable'\n else:\n for pip_config_filename in self.pip_config_locations:\n if pip_config_filename.startswith('~'):\n pip_config_filename = os.path.expanduser(pip_config_filename)\n\n if os.path.isfile(pip_config_filename):\n config = ConfigParser()\n config.read([pip_config_filename])\n try:\n index_url = 
config.get('global', 'index-url')\n custom_config = pip_config_filename\n break # stop on first detected, because config locations have a priority\n except (NoOptionError, NoSectionError): # pragma: nocover\n pass\n\n if index_url:\n self.PYPI_API_URL = self._prepare_api_url(index_url)\n print(Color('Setting API url to {{autoyellow}}{}{{/autoyellow}} as found in {{autoyellow}}{}{{/autoyellow}}'\n '. Use --default-index-url to use pypi default index'.format(self.PYPI_API_URL, custom_config)))", "response": "Updates index-url from pip.conf and pip.ini files."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _fetch_index_package_info(self, package_name, current_version):\n\n try:\n package_canonical_name = package_name\n if self.PYPI_API_TYPE == 'simple_html':\n package_canonical_name = canonicalize_name(package_name)\n response = requests.get(self.PYPI_API_URL.format(package=package_canonical_name), timeout=15)\n except HTTPError as e: # pragma: nocover\n return False, str(e)\n\n if not response.ok: # pragma: nocover\n return False, 'API error: {}'.format(response.reason)\n\n if self.PYPI_API_TYPE == 'pypi_json':\n return self._parse_pypi_json_package_info(package_name, current_version, response)\n elif self.PYPI_API_TYPE == 'simple_html':\n return self._parse_simple_html_package_info(package_name, current_version, response)\n else: # pragma: nocover\n raise NotImplementedError('This type of PYPI_API_TYPE type is not supported')", "response": "Fetch index information for a package."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nparse the JSON response from PyPI to get the latest version of the package.", "response": "def _parse_pypi_json_package_info(self, package_name, current_version, response):\n \"\"\"\n :type package_name: str\n :type current_version: version.Version\n :type response: requests.models.Response\n \"\"\"\n\n data = response.json()\n all_versions = 
[version.parse(vers) for vers in data['releases'].keys()]\n filtered_versions = [vers for vers in all_versions if not vers.is_prerelease and not vers.is_postrelease]\n\n if not filtered_versions: # pragma: nocover\n return False, 'error while parsing version'\n\n latest_version = max(filtered_versions)\n\n # even if user did not choose prerelease, if the package from requirements is pre/post release, use it\n if self._prerelease or current_version.is_postrelease or current_version.is_prerelease:\n prerelease_versions = [vers for vers in all_versions if vers.is_prerelease or vers.is_postrelease]\n if prerelease_versions:\n latest_version = max(prerelease_versions)\n try:\n try:\n latest_version_info = data['releases'][str(latest_version)][0]\n except KeyError: # pragma: nocover\n # non-RFC versions, get the latest from pypi response\n latest_version = version.parse(data['info']['version'])\n latest_version_info = data['releases'][str(latest_version)][0]\n except Exception: # pragma: nocover\n return False, 'error while parsing version'\n\n upload_time = latest_version_info['upload_time'].replace('T', ' ')\n\n return {\n 'name': package_name,\n 'current_version': current_version,\n 'latest_version': latest_version,\n 'upgrade_available': current_version < latest_version,\n 'upload_time': upload_time\n }, 'success'"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _parse_simple_html_package_info(self, package_name, current_version, response):\n pattern = r'.*{name}-([A-z0-9\\.-]*)(?:-py|\\.tar).*<\\/a>'.format(name=re.escape(package_name))\n versions_match = re.findall(pattern, response.content.decode('utf-8'), flags=re.IGNORECASE)\n\n all_versions = [version.parse(vers) for vers in versions_match]\n filtered_versions = [vers for vers in all_versions if not vers.is_prerelease and not vers.is_postrelease]\n\n if not filtered_versions: # pragma: nocover\n return False, 'error while parsing 
version'\n\n latest_version = max(filtered_versions)\n\n # even if user did not choose prerelease, if the package from requirements is pre/post release, use it\n if self._prerelease or current_version.is_postrelease or current_version.is_prerelease:\n prerelease_versions = [vers for vers in all_versions if vers.is_prerelease or vers.is_postrelease]\n if prerelease_versions:\n latest_version = max(prerelease_versions)\n\n return {\n 'name': package_name,\n 'current_version': current_version,\n 'latest_version': latest_version,\n 'upgrade_available': current_version < latest_version,\n 'upload_time': '-'\n }, 'success'", "response": "Parse the simple html package info."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main():\n options = get_options()\n Windows.enable(auto_colors=True, reset_atexit=True)\n\n try:\n # maybe check if virtualenv is not activated\n check_for_virtualenv(options)\n\n # 1. detect requirements files\n filenames = RequirementsDetector(options.get('')).get_filenames()\n if filenames:\n print(Color('{{autoyellow}}Found valid requirements file(s):{{/autoyellow}} '\n '{{autocyan}}\\n{}{{/autocyan}}'.format('\\n'.join(filenames))))\n else: # pragma: nocover\n print(Color('{autoyellow}No requirements files found in current directory. CD into your project '\n 'or manually specify requirements files as arguments.{/autoyellow}'))\n return\n # 2. detect all packages inside requirements\n packages = PackagesDetector(filenames).get_packages()\n\n # 3. query pypi API, see which package has a newer version vs the one in requirements (or current env)\n packages_status_map = PackagesStatusDetector(\n packages, options.get('--use-default-index')).detect_available_upgrades(options)\n\n # 4. [optionally], show interactive screen when user can choose which packages to upgrade\n selected_packages = PackageInteractiveSelector(packages_status_map, options).get_packages()\n\n # 5. 
having the list of packages, do the actual upgrade and replace the version inside all filenames\n upgraded_packages = PackagesUpgrader(selected_packages, filenames, options).do_upgrade()\n\n print(Color('{{autogreen}}Successfully upgraded (and updated requirements) for the following packages: '\n '{}{{/autogreen}}'.format(','.join([package['name'] for package in upgraded_packages]))))\n if options['--dry-run']:\n print(Color('{automagenta}Actually, no, because this was a simulation using --dry-run{/automagenta}'))\n\n except KeyboardInterrupt: # pragma: nocover\n print(Color('\\n{autored}Upgrade interrupted.{/autored}'))", "response": "This is the main function of the upgrade command."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _update_package(self, package):\n try:\n if not self.dry_run and not self.skip_package_installation: # pragma: nocover\n subprocess.check_call(['pip', 'install', '{}=={}'.format(package['name'], package['latest_version'])])\n else:\n print('[Dry Run]: skipping package installation:', package['name'])\n # update only if installation success\n self._update_requirements_package(package)\n\n except CalledProcessError: # pragma: nocover\n print(Color('{{autored}}Failed to install package \"{}\"{{/autored}}'.format(package['name'])))", "response": "Update the package in current environment and if success update requirements file"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nattempt to detect requirements files in the current working directory", "response": "def autodetect_files(self):\n \"\"\" Attempt to detect requirements files in the current working directory \"\"\"\n if self._is_valid_requirements_file('requirements.txt'):\n self.filenames.append('requirements.txt')\n\n if self._is_valid_requirements_file('requirements.pip'): # pragma: nocover\n self.filenames.append('requirements.pip')\n\n if os.path.isdir('requirements'):\n for 
filename in os.listdir('requirements'):\n file_path = os.path.join('requirements', filename)\n if self._is_valid_requirements_file(file_path):\n self.filenames.append(file_path)\n self._check_inclusions_recursively()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nresolves all streams on the network.", "response": "def resolve_streams(wait_time=1.0):\n \"\"\"Resolve all streams on the network.\n\n This function returns all currently available streams from any outlet on \n the network. The network is usually the subnet specified at the local \n router, but may also include a group of machines visible to each other via \n multicast packets (given that the network supports it), or list of \n hostnames. These details may optionally be customized by the experimenter \n in a configuration file (see Network Connectivity in the LSL wiki). \n \n Keyword arguments:\n wait_time -- The waiting time for the operation, in seconds, to search for \n streams. Warning: If this is too short (<0.5s) only a subset \n (or none) of the outlets that are present on the network may \n be returned. (default 1.0)\n \n Returns a list of StreamInfo objects (with empty desc field), any of which \n can subsequently be used to open an inlet. 
The full description can be\n retrieved from the inlet.\n\n \"\"\"\n # noinspection PyCallingNonCallable\n buffer = (c_void_p*1024)()\n num_found = lib.lsl_resolve_all(byref(buffer), 1024, c_double(wait_time))\n return [StreamInfo(handle=buffer[k]) for k in range(num_found)]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nresolve all streams with a specific value for a given property.", "response": "def resolve_byprop(prop, value, minimum=1, timeout=FOREVER):\n \"\"\"Resolve all streams with a specific value for a given property.\n\n If the goal is to resolve a specific stream, this method is preferred over \n resolving all streams and then selecting the desired one.\n \n Keyword arguments:\n prop -- The StreamInfo property that should have a specific value (e.g., \n \"name\", \"type\", \"source_id\", or \"desc/manufacturer\").\n value -- The string value that the property should have (e.g., \"EEG\" as \n the type property).\n minimum -- Return at least this many streams. (default 1)\n timeout -- Optionally a timeout of the operation, in seconds. If the \n timeout expires, less than the desired number of streams \n (possibly none) will be returned. 
(default FOREVER)\n \n Returns a list of matching StreamInfo objects (with empty desc field), any \n of which can subsequently be used to open an inlet.\n \n Example: results = resolve_Stream_byprop(\"type\",\"EEG\")\n\n \"\"\"\n # noinspection PyCallingNonCallable\n buffer = (c_void_p*1024)()\n num_found = lib.lsl_resolve_byprop(byref(buffer), 1024,\n c_char_p(str.encode(prop)),\n c_char_p(str.encode(value)),\n minimum,\n c_double(timeout))\n return [StreamInfo(handle=buffer[k]) for k in range(num_found)]"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nresolving all streams that match a given predicate.", "response": "def resolve_bypred(predicate, minimum=1, timeout=FOREVER):\n \"\"\"Resolve all streams that match a given predicate.\n\n Advanced query that allows to impose more conditions on the retrieved \n streams; the given string is an XPath 1.0 predicate for the \n node (omitting the surrounding []'s), see also\n http://en.wikipedia.org/w/index.php?title=XPath_1.0&oldid=474981951.\n \n Keyword arguments:\n predicate -- The predicate string, e.g. \"name='BioSemi'\" or \n \"type='EEG' and starts-with(name,'BioSemi') and \n count(description/desc/channels/channel)=32\"\n minimum -- Return at least this many streams. (default 1)\n timeout -- Optionally a timeout of the operation, in seconds. If the \n timeout expires, less than the desired number of streams \n (possibly none) will be returned. 
(default FOREVER)\n \n Returns a list of matching StreamInfo objects (with empty desc field), any \n of which can subsequently be used to open an inlet.\n\n \"\"\"\n # noinspection PyCallingNonCallable\n buffer = (c_void_p*1024)()\n num_found = lib.lsl_resolve_bypred(byref(buffer), 1024,\n c_char_p(str.encode(predicate)),\n minimum,\n c_double(timeout))\n return [StreamInfo(handle=buffer[k]) for k in range(num_found)]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntranslating an error code into an exception.", "response": "def handle_error(errcode):\n \"\"\"Error handler function. Translates an error code into an exception.\"\"\"\n if type(errcode) is c_int:\n errcode = errcode.value\n if errcode == 0:\n pass # no error\n elif errcode == -1:\n raise TimeoutError(\"the operation failed due to a timeout.\")\n elif errcode == -2:\n raise LostError(\"the stream has been lost.\")\n elif errcode == -3:\n raise InvalidArgumentError(\"an argument was incorrectly specified.\")\n elif errcode == -4:\n raise InternalError(\"an internal error has occurred.\")\n elif errcode < 0: \n raise RuntimeError(\"an unknown error has occurred.\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef push_sample(self, x, timestamp=0.0, pushthrough=True):\n if len(x) == self.channel_count:\n if self.channel_format == cf_string:\n x = [v.encode('utf-8') for v in x]\n handle_error(self.do_push_sample(self.obj, self.sample_type(*x),\n c_double(timestamp),\n c_int(pushthrough)))\n else:\n raise ValueError(\"length of the data must correspond to the \"\n \"stream's channel count.\")", "response": "Push a sample into the outlet."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef push_chunk(self, x, timestamp=0.0, pushthrough=True):\n try:\n n_values = self.channel_count * len(x)\n data_buff = (self.value_type * n_values).from_buffer(x)\n 
handle_error(self.do_push_chunk(self.obj, data_buff,\n c_long(n_values),\n c_double(timestamp),\n c_int(pushthrough)))\n except TypeError:\n if len(x):\n if type(x[0]) is list:\n x = [v for sample in x for v in sample]\n if self.channel_format == cf_string:\n x = [v.encode('utf-8') for v in x]\n if len(x) % self.channel_count == 0:\n constructor = self.value_type*len(x)\n # noinspection PyCallingNonCallable\n handle_error(self.do_push_chunk(self.obj, constructor(*x),\n c_long(len(x)),\n c_double(timestamp),\n c_int(pushthrough)))\n else:\n raise ValueError(\"each sample must have the same number of \"\n \"channels.\")", "response": "Push a list of samples into the outlet."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nwaits until some consumers are available. Returns True if the wait was successful False otherwise.", "response": "def wait_for_consumers(self, timeout):\n \"\"\"Wait until some consumer shows up (without wasting resources).\n\n Returns True if the wait was successful, False if the timeout expired.\n\n \"\"\"\n return bool(lib.lsl_wait_for_consumers(self.obj, c_double(timeout)))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nretrieve the complete information of the given stream.", "response": "def info(self, timeout=FOREVER):\n \"\"\"Retrieve the complete information of the given stream.\n\n This includes the extended description. Can be invoked at any time of\n the stream's lifetime.\n \n Keyword arguments:\n timeout -- Timeout of the operation. 
(default FOREVER)\n \n Throws a TimeoutError (if the timeout expires), or LostError (if the \n stream source has been lost).\n\n \"\"\"\n errcode = c_int()\n result = lib.lsl_get_fullinfo(self.obj, c_double(timeout),\n byref(errcode))\n handle_error(errcode)\n return StreamInfo(handle=result)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef open_stream(self, timeout=FOREVER):\n errcode = c_int()\n lib.lsl_open_stream(self.obj, c_double(timeout), byref(errcode))\n handle_error(errcode)", "response": "Subscribe to the data stream."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget an estimated time correction offset for the given stream.", "response": "def time_correction(self, timeout=FOREVER):\n \"\"\"Retrieve an estimated time correction offset for the given stream.\n\n The first call to this function takes several milliseconds until a \n reliable first estimate is obtained. Subsequent calls are instantaneous \n (and rely on periodic background updates). The precision of these \n estimates should be below 1 ms (empirically within +/-0.2 ms).\n \n Keyword arguments: \n timeout -- Timeout to acquire the first time-correction estimate \n (default FOREVER).\n \n Returns the current time correction estimate. 
This is the number that \n needs to be added to a time stamp that was remotely generated via \n local_clock() to map it into the local clock domain of this \n machine.\n\n Throws a TimeoutError (if the timeout expires), or LostError (if the \n stream source has been lost).\n\n \"\"\"\n errcode = c_int()\n result = lib.lsl_time_correction(self.obj, c_double(timeout),\n byref(errcode))\n handle_error(errcode)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef pull_sample(self, timeout=FOREVER, sample=None):\n \n # support for the legacy API\n if type(timeout) is list:\n assign_to = timeout\n timeout = sample if type(sample) is float else 0.0\n else:\n assign_to = None\n \n errcode = c_int()\n timestamp = self.do_pull_sample(self.obj, byref(self.sample),\n self.channel_count, c_double(timeout),\n byref(errcode))\n handle_error(errcode)\n if timestamp:\n sample = [v for v in self.sample]\n if self.channel_format == cf_string:\n sample = [v.decode('utf-8') for v in sample]\n if assign_to is not None:\n assign_to[:] = sample\n return sample, timestamp\n else:\n return None, None", "response": "Pull a sample from the inlet and return it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npull a chunk of samples from the inlet.", "response": "def pull_chunk(self, timeout=0.0, max_samples=1024, dest_obj=None):\n \"\"\"Pull a chunk of samples from the inlet.\n \n Keyword arguments:\n timeout -- The timeout of the operation; if passed as 0.0, then only \n samples available for immediate pickup will be returned. \n (default 0.0)\n max_samples -- Maximum number of samples to return. 
(default \n 1024)\n dest_obj -- A Python object that supports the buffer interface.\n If this is provided then the dest_obj will be updated in place\n and the samples list returned by this method will be empty.\n It is up to the caller to trim the buffer to the appropriate\n number of samples.\n A numpy buffer must be order='C'\n (default None)\n \n Returns a tuple (samples,timestamps) where samples is a list of samples \n (each itself a list of values), and timestamps is a list of time-stamps.\n\n Throws a LostError if the stream source has been lost.\n\n \"\"\"\n # look up a pre-allocated buffer of appropriate length \n num_channels = self.channel_count\n max_values = max_samples*num_channels\n\n if max_samples not in self.buffers:\n # noinspection PyCallingNonCallable\n self.buffers[max_samples] = ((self.value_type*max_values)(),\n (c_double*max_samples)())\n if dest_obj is not None:\n data_buff = (self.value_type * max_values).from_buffer(dest_obj)\n else:\n data_buff = self.buffers[max_samples][0]\n ts_buff = self.buffers[max_samples][1]\n\n # read data into it\n errcode = c_int()\n # noinspection PyCallingNonCallable\n num_elements = self.do_pull_chunk(self.obj, byref(data_buff),\n byref(ts_buff), max_values,\n max_samples, c_double(timeout),\n byref(errcode))\n handle_error(errcode)\n # return results (note: could offer a more efficient format in the \n # future, e.g., a numpy array)\n num_samples = num_elements/num_channels\n if dest_obj is None:\n samples = [[data_buff[s*num_channels+c] for c in range(num_channels)]\n for s in range(int(num_samples))]\n if self.channel_format == cf_string:\n samples = [[v.decode('utf-8') for v in s] for s in samples]\n free_char_p_array_memory(data_buff, max_values)\n else:\n samples = None\n timestamps = [ts_buff[s] for s in range(int(num_samples))]\n return samples, timestamps"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef child(self, name):\n return 
XMLElement(lib.lsl_child(self.e, str.encode(name)))", "response": "Get a child with a specified name."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef next_sibling(self, name=None):\n if name is None:\n return XMLElement(lib.lsl_next_sibling(self.e))\n else:\n return XMLElement(lib.lsl_next_sibling_n(self.e, str.encode(name)))", "response": "Get the next sibling in the children list of the parent node. If a name is provided, the next sibling with the given name is returned."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the previous sibling in the children list of the parent node.", "response": "def previous_sibling(self, name=None):\n \"\"\"Get the previous sibling in the children list of the parent node.\n\n If a name is provided, the previous sibling with the given name is\n returned.\n\n \"\"\"\n if name is None:\n return XMLElement(lib.lsl_previous_sibling(self.e))\n else:\n return XMLElement(lib.lsl_previous_sibling_n(self.e,\n str.encode(name)))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef child_value(self, name=None):\n if name is None:\n res = lib.lsl_child_value(self.e)\n else:\n res = lib.lsl_child_value_n(self.e, str.encode(name))\n return res.decode('utf-8')", "response": "Get the child value of the first child that is text."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef append_child_value(self, name, value):\n return XMLElement(lib.lsl_append_child_value(self.e,\n str.encode(name),\n str.encode(value)))", "response": "Append a child node with a given name which has a plain-text child with the given text value."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_child_value(self, name, value):\n return XMLElement(lib.lsl_set_child_value(self.e,\n str.encode(name),\n str.encode(value)))", "response": "Set the text value of the (nameless) plain-text child of a named child node."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef set_name(self, name):\n return bool(lib.lsl_set_name(self.e, str.encode(name)))", "response": "Set the element's name. Returns False if the element is empty."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef set_value(self, value):\n return bool(lib.lsl_set_value(self.e, str.encode(value)))", "response": "Set the element's value. Returns False if the element is empty."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef append_child(self, name):\n return XMLElement(lib.lsl_append_child(self.e, str.encode(name)))", "response": "Append a child element with the specified name."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef prepend_child(self, name):\n return XMLElement(lib.lsl_prepend_child(self.e, str.encode(name)))", "response": "Prepend a child element with the specified name."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nappending a copy of the specified element as a child.", "response": "def append_copy(self, elem):\n \"\"\"Append a copy of the specified element as a child.\"\"\"\n return XMLElement(lib.lsl_append_copy(self.e, elem.e))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef prepend_copy(self, elem):\n return XMLElement(lib.lsl_prepend_copy(self.e, elem.e))", "response": "Prepend a copy of the specified element as a child."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nremoving a given child element specified by name or as element.", "response": "def remove_child(self, rhs):\n \"\"\"Remove a given child element, specified by name or as element.\"\"\"\n if type(rhs) is XMLElement:\n 
lib.lsl_remove_child(self.e, rhs.e)\n else:\n lib.lsl_remove_child_n(self.e, rhs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef results(self):\n # noinspection PyCallingNonCallable\n buffer = (c_void_p*1024)()\n num_found = lib.lsl_resolver_results(self.obj, byref(buffer), 1024)\n return [StreamInfo(handle=buffer[k]) for k in range(num_found)]", "response": "Obtain the set of currently present streams on the network."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsees all token associated with a given word.", "response": "def pair(cmd, word):\n \"\"\"See all token associated with a given token.\n PAIR lilas\"\"\"\n word = list(preprocess_query(word))[0]\n key = pair_key(word)\n tokens = [t.decode() for t in DB.smembers(key)]\n tokens.sort()\n print(white(tokens))\n print(magenta('(Total: {})'.format(len(tokens))))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nshowing autocomplete results for a given token.", "response": "def do_AUTOCOMPLETE(cmd, s):\n \"\"\"Shows autocomplete results for a given token.\"\"\"\n s = list(preprocess_query(s))[0]\n keys = [k.decode() for k in DB.smembers(edge_ngram_key(s))]\n print(white(keys))\n print(magenta('({} elements)'.format(len(keys))))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncomputes edge ngram of token from min. Does not include token itself.", "response": "def compute_edge_ngrams(token, min=None):\n \"\"\"Compute edge ngram of token from min. 
Does not include token itself.\"\"\"\n if min is None:\n min = config.MIN_EDGE_NGRAMS\n token = token[:config.MAX_EDGE_NGRAMS + 1]\n return [token[:i] for i in range(min, len(token))]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef iter_pipe(pipe, processors):\n if isinstance(pipe, str):\n pipe = [pipe]\n for it in processors:\n pipe = it(pipe)\n yield from pipe", "response": "Allow for iterators to return either an item or an iterator of items."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nimports functions or class by their path.", "response": "def import_by_path(path):\n \"\"\"\n Import functions or class by their path. Should be of the form:\n path.to.module.func\n \"\"\"\n if not isinstance(path, str):\n return path\n module_path, *name = path.rsplit('.', 1)\n func = import_module(module_path)\n if name:\n func = getattr(func, name[0])\n return func"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncalculates the great circle distance between two points on the earth.", "response": "def haversine_distance(point1, point2):\n \"\"\"\n Calculate the great circle distance between two points\n on the earth (specified in decimal degrees).\n \"\"\"\n lat1, lon1 = point1\n lat2, lon2 = point2\n\n # Convert decimal degrees to radians.\n lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])\n\n # Haversine formula.\n dlon = lon2 - lon1\n dlat = lat2 - lat1\n a = sin(dlat/2)**2 + cos(lat1) * cos(lat2) * sin(dlon/2)**2\n c = 2 * asin(sqrt(a))\n\n # 6367 km is the radius of the Earth.\n km = 6367 * c\n return km"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef imap_unordered(self, func, iterable, chunksize):\n assert self._state == RUN\n task_batches = Pool._get_tasks(func, iterable, chunksize)\n result = IMapUnorderedIterator(self._cache)\n tasks = ((result._job, i, func, 
chunk, {})\n for i, (_, chunk) in enumerate(task_batches))\n self._taskqueue.put((tasks, result._set_length))\n return result", "response": "Customized version of imap_unordered."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncomputing fuzzy extensions of word. FUZZY lilas", "response": "def do_fuzzy(self, word):\n \"\"\"Compute fuzzy extensions of word.\n FUZZY lilas\"\"\"\n word = list(preprocess_query(word))[0]\n print(white(make_fuzzy(word)))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncomputes fuzzy extensions of word that exist in index.", "response": "def do_fuzzyindex(self, word):\n \"\"\"Compute fuzzy extensions of word that exist in index.\n FUZZYINDEX lilas\"\"\"\n word = list(preprocess_query(word))[0]\n token = Token(word)\n neighbors = make_fuzzy(token)\n neighbors = [(n, DB.zcard(dbkeys.token_key(n))) for n in neighbors]\n neighbors.sort(key=lambda n: n[1], reverse=True)\n for token, freq in neighbors:\n if freq == 0:\n break\n print(white(token), blue(freq))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef extend_results_extrapoling_relations(helper):\n if not helper.bucket_dry:\n return # No need.\n tokens = set(helper.meaningful + helper.common)\n for relation in _extract_manytomany_relations(tokens):\n helper.add_to_bucket([t.db_key for t in relation])\n if helper.bucket_overflow:\n break\n else:\n helper.debug('No relation extrapolated.')", "response": "Try to extract the bigger group of interlinked tokens."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef do_help(self, command):\n if command:\n doc = getattr(self, 'do_' + command).__doc__\n print(cyan(doc.replace(' ' * 8, '')))\n else:\n print(magenta('Available commands:'))\n print(magenta('Type \"HELP \" to get more info.'))\n names = self.get_names()\n names.sort()\n for name in names:\n if name[:3] != 'do_':\n 
continue\n doc = getattr(self, name).__doc__\n doc = doc.split('\\n')[0]\n print('{} {}'.format(yellow(name[3:]),\n cyan(doc.replace(' ' * 8, ' ')\n .replace('\\n', ''))))", "response": "Display this help message."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef do_BENCH(self, query):\n try:\n count = int(re.match(r'^(\\d+).*', query).group(1))\n except AttributeError:\n count = 100\n self._search(query, count=count)", "response": "Run a search many times to benchmark it.\n BENCH [ 100 ] rue des Lilas"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef do_INTERSECT(self, words):\n start = time.time()\n limit = 100\n if 'LIMIT' in words:\n words, limit = words.split('LIMIT')\n limit = int(limit)\n tokens = [keys.token_key(w) for w in preprocess_query(words)]\n DB.zinterstore(words, tokens)\n results = DB.zrevrange(words, 0, limit, withscores=True)\n DB.delete(words)\n for id_, score in results:\n r = Result(id_)\n print('{} {} {}'.format(white(r), blue(r._id), cyan(score)))\n duration = round((time.time() - start) * 1000, 1)\n print(magenta(\"({} in {} ms)\".format(len(results), duration)))", "response": "Do a raw intersect between tokens"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef do_DBINFO(self, *args):\n info = DB.info()\n keys = [\n 'keyspace_misses', 'keyspace_hits', 'used_memory_human',\n 'total_commands_processed', 'total_connections_received',\n 'connected_clients']\n for key in keys:\n print('{}: {}'.format(white(key), blue(info[key])))\n nb_of_redis_db = int(DB.config_get('databases')['databases'])\n for db_index in range(nb_of_redis_db - 1):\n db_name = 'db{}'.format(db_index)\n if db_name in info:\n label = white('nb keys (db {})'.format(db_index))\n print('{}: {}'.format(label, blue(info[db_name]['keys'])))", "response": "Print some useful infos from Redis DB."} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprinting raw content of a DB key.", "response": "def do_DBKEY(self, key):\n \"\"\"Print raw content of a DB key.\n DBKEY g|u09tyzfe\"\"\"\n type_ = DB.type(key).decode()\n if type_ == 'set':\n out = DB.smembers(key)\n elif type_ == 'string':\n out = DB.get(key)\n else:\n out = 'Unsupported type {}'.format(type_)\n print('type:', magenta(type_))\n print('value:', white(out))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncompute geodistance from a result to a point.", "response": "def do_GEODISTANCE(self, s):\n \"\"\"Compute geodistance from a result to a point.\n GEODISTANCE 772210180J 48.1234 2.9876\"\"\"\n try:\n _id, lat, lon = s.split()\n except:\n return self.error('Malformed query. Use: ID lat lon')\n try:\n result = Result(keys.document_key(_id))\n except ValueError as e:\n return self.error(e)\n center = (float(lat), float(lon))\n km = haversine_distance((float(result.lat), float(result.lon)), center)\n score = km_to_score(km)\n print('km: {} |\u00a0score: {}'.format(white(km), blue(score)))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef do_GEOHASHTOGEOJSON(self, geoh):\n geoh, with_neighbors = self._match_option('NEIGHBORS', geoh)\n bbox = geohash.bbox(geoh)\n try:\n with_neighbors = int(with_neighbors)\n except TypeError:\n with_neighbors = 0\n\n def expand(bbox, geoh, depth):\n neighbors = geohash.neighbors(geoh)\n for neighbor in neighbors:\n other = geohash.bbox(neighbor)\n if with_neighbors > depth:\n expand(bbox, neighbor, depth + 1)\n else:\n if other['n'] > bbox['n']:\n bbox['n'] = other['n']\n if other['s'] < bbox['s']:\n bbox['s'] = other['s']\n if other['e'] > bbox['e']:\n bbox['e'] = other['e']\n if other['w'] < bbox['w']:\n bbox['w'] = other['w']\n\n if with_neighbors > 0:\n expand(bbox, geoh, 0)\n\n geojson = {\n \"type\": \"Polygon\",\n \"coordinates\": [[\n [bbox['w'], bbox['n']],\n 
[bbox['e'], bbox['n']],\n [bbox['e'], bbox['s']],\n [bbox['w'], bbox['s']],\n [bbox['w'], bbox['n']]\n ]]\n }\n print(white(json.dumps(geojson)))", "response": "Build a GeoJSON representation of a COORDINATES GEOHASH."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef do_GEOHASH(self, latlon):\n try:\n lat, lon = map(float, latlon.split())\n except ValueError:\n print(red('Invalid lat and lon {}'.format(latlon)))\n else:\n print(white(geohash.encode(lat, lon, config.GEOHASH_PRECISION)))", "response": "Compute a geohash from latitude and longitude."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning members of a geohash and its neighbors.", "response": "def do_GEOHASHMEMBERS(self, geoh):\n \"\"\"Return members of a geohash and its neighbors.\n GEOHASHMEMBERS u09vej04 [NEIGHBORS 0]\"\"\"\n geoh, with_neighbors = self._match_option('NEIGHBORS', geoh)\n key = compute_geohash_key(geoh, with_neighbors != '0')\n if key:\n for id_ in DB.smembers(key):\n r = Result(id_)\n print('{} {}'.format(white(r), blue(r._id)))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets a single object from index with its id.", "response": "def do_GET(self, _id):\n \"\"\"Get document from index with its id.\n GET 772210180J\"\"\"\n doc = doc_by_id(_id)\n if not doc:\n return self.error('id \"{}\" not found'.format(_id))\n for key, value in doc.items():\n if key == config.HOUSENUMBERS_FIELD:\n continue\n print('{} {}'.format(white(key), magenta(value)))\n if doc.get('housenumbers'):\n def sorter(v):\n try:\n return int(re.match(r'^\\d+', v['raw']).group())\n except AttributeError:\n return -1\n housenumbers = sorted(doc['housenumbers'].values(), key=sorter)\n print(white('housenumbers'),\n magenta(', '.join(v['raw'] for v in housenumbers)))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget index details for a document by its 
id.", "response": "def do_INDEX(self, _id):\n \"\"\"Get index details for a document by its id.\n INDEX 772210180J\"\"\"\n doc = doc_by_id(_id)\n if not doc:\n return self.error('id \"{}\" not found'.format(_id))\n for field in config.FIELDS:\n key = field['key']\n if key in doc:\n self._print_field_index_details(doc[key], _id)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning document linked to word with higher score.", "response": "def do_BESTSCORE(self, word):\n \"\"\"Return document linked to word with higher score.\n BESTSCORE lilas\"\"\"\n key = keys.token_key(indexed_string(word)[0])\n for _id, score in DB.zrevrange(key, 0, 20, withscores=True):\n result = Result(_id)\n print(white(result), blue(score), green(result._id))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef do_REVERSE(self, latlon):\n lat, lon = latlon.split()\n for r in reverse(float(lat), float(lon)):\n print('{} ({} | {} km | {})'.format(white(r), blue(r.score),\n blue(r.distance), blue(r._id)))", "response": "Do a reverse search."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nprint the distance score between two strings.", "response": "def do_STRDISTANCE(self, s):\n \"\"\"Print the distance score between two strings. Use |\u00a0as separator.\n STRDISTANCE rue des lilas|porte des lilas\"\"\"\n s = s.split('|')\n if not len(s) == 2:\n print(red('Malformed string. Use | between the two strings.'))\n return\n one, two = s\n print(white(compare_str(one, two)))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef do_CONFIG(self, name):\n if not name:\n for name in self.complete_CONFIG():\n self.do_CONFIG(name)\n return\n value = getattr(config, name.upper(), 'Not found.')\n print(blue(name), white(format_config(value)))", "response": "Inspect loaded Addok config. 
Output all config without argument."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nruns a Lua script. Takes the raw Redis arguments.", "response": "def do_SCRIPT(self, args):\n \"\"\"Run a Lua script. Takes the raw Redis arguments.\n SCRIPT script_name number_of_keys key1 key2\u2026 arg1 arg2\n \"\"\"\n try:\n name, keys_count, *args = args.split()\n except ValueError:\n print(red('Not enough arguments'))\n return\n try:\n keys_count = int(keys_count)\n except ValueError:\n print(red('You must pass the number of keys as first argument'))\n self.do_HELP('SCRIPT')\n return\n keys = args[:keys_count]\n args = args[keys_count:]\n try:\n output = getattr(scripts, name)(keys=keys, args=args)\n except AttributeError:\n print(red('No script named {}'.format(name)))\n return\n except DB.Error as e:\n print(red(e))\n return\n if not isinstance(output, list):\n # Script may return just an integer.\n output = [output]\n for line in output:\n print(white(line))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef imap(requests, stream=True, pool=None, size=2, exception_handler=None):\n\n def send(r):\n return r.send(stream=stream)\n\n pool = pool if pool else Pool(size)\n\n for request in pool.imap(send, requests):\n if request.response is not None:\n yield request.response\n elif exception_handler:\n exception_handler(request, request.exception)\n\n if not pool:\n pool.close()", "response": "Concurrently converts a generator object of Requests to\n a generator of Responses."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef imap_unordered(requests, stream=True, pool=None, size=2, exception_handler=None):\n\n def send(r):\n return r.send(stream=stream)\n\n pool = pool if pool else Pool(size)\n\n with contextlib.closing(Pool(size)) as pool:\n for request in pool.imap_unordered(send, requests):\n if request.response 
is not None:\n yield request.response\n elif exception_handler:\n exception_handler(request, request.exception)\n\n if not pool:\n pool.close()", "response": "Concurrently converts a generator object of Requests to\n a generator of Responses."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getBits_from_array(array, wordWidth, start, end,\n reinterpretElmToType=None):\n \"\"\"\n Gets value of bits between selected range from memory\n\n :param start: bit address of start of bit of bits\n :param end: bit address of first bit behind bits\n :return: instance of BitsVal (derived from SimBits type) which contains\n copy of selected bits\n \"\"\"\n inPartOffset = 0\n value = Bits(end - start, None).fromPy(None)\n\n while start != end:\n assert start < end, (start, end)\n\n dataWordIndex = start // wordWidth\n\n v = array[dataWordIndex]\n if reinterpretElmToType is not None:\n v = v._reinterpret_cast(reinterpretElmToType)\n\n endOfWord = (dataWordIndex + 1) * wordWidth\n width = min(end, endOfWord) - start\n offset = start % wordWidth\n\n val = selectBitRange(v.val, offset, width)\n vldMask = selectBitRange(v.vldMask, offset, width)\n updateTime = v.updateTime\n\n m = mask(width)\n value.val |= (val & m) << inPartOffset\n value.vldMask |= (vldMask & m) << inPartOffset\n value.updateMask = max(value.updateTime, updateTime)\n\n inPartOffset += width\n start += width\n\n return value", "response": "Gets value of bits between selected range from memory"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncasts HArray signal or value to bits", "response": "def reinterptet_harray_to_bits(typeFrom, sigOrVal, bitsT):\n \"\"\"\n Cast HArray signal or value to signal or value of type Bits\n \"\"\"\n size = int(typeFrom.size)\n widthOfElm = typeFrom.elmType.bit_length()\n w = bitsT.bit_length()\n if size * widthOfElm != w:\n raise TypeConversionErr(\n \"Size of types is different\", size * widthOfElm, w)\n\n 
partT = Bits(widthOfElm)\n parts = [p._reinterpret_cast(partT) for p in sigOrVal]\n\n return Concat(*reversed(parts))._reinterpret_cast(bitsT)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef slice_to_SLICE(sliceVals, width):\n if sliceVals.step is not None:\n raise NotImplementedError()\n\n start = sliceVals.start\n stop = sliceVals.stop\n\n if sliceVals.start is None:\n start = INT.fromPy(width)\n else:\n start = toHVal(sliceVals.start)\n\n if sliceVals.stop is None:\n stop = INT.fromPy(0)\n else:\n stop = toHVal(sliceVals.stop)\n\n startIsVal = isinstance(start, Value)\n stopIsVal = isinstance(stop, Value)\n\n indexesAreValues = startIsVal and stopIsVal\n if indexesAreValues:\n updateTime = max(start.updateTime, stop.updateTime)\n else:\n updateTime = -1\n\n return Slice.getValueCls()((start, stop), SLICE, 1, updateTime)", "response": "convert python slice to value of SLICE hdl type"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the bit range which contains data of this part on bus data signal", "response": "def getBusWordBitRange(self) -> Tuple[int, int]:\n \"\"\"\n :return: bit range which contains data of this part on bus data signal\n \"\"\"\n offset = self.startOfPart % self.parent.wordWidth\n return (offset + self.bit_length(), offset)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getFieldBitRange(self) -> Tuple[int, int]:\n offset = self.inFieldOffset\n return (self.bit_length() + offset, offset)", "response": "Returns the bit range which contains this part on interface\n of field\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfill all unused code branches with assignments from value specified by enclosure.", "response": "def fill_stm_list_with_enclosure(parentStm: Optional[HdlStatement],\n current_enclosure: Set[RtlSignalBase],\n statements: 
List[\"HdlStatement\"],\n do_enclose_for: List[RtlSignalBase],\n enclosure: Dict[RtlSignalBase, Union[Value, RtlSignalBase]])\\\n -> None:\n \"\"\"\n Apply enclosure on list of statements\n (fill all unused code branches with assignments from value specified by enclosure)\n\n :param parentStm: optional parent statement where this list is some branch\n :param current_enclosure: list of signals for which this statement list is enclosed\n :param statements: list of statements\n :param do_enclose_for: selected signals for which enclosure should be used\n :param enclosure: enclosure values for signals\n\n :attention: original statements parameter can be modified\n :return: new statements\n \"\"\"\n if statements is None:\n statements = []\n\n for e_sig in do_enclose_for:\n if e_sig in current_enclosure:\n continue\n enclosed = False\n for stm in statements:\n if e_sig in stm._outputs:\n if e_sig not in stm._enclosed_for:\n stm._fill_enclosure(enclosure)\n enclosed = True\n break\n # any statement was not related with this signal,\n if not enclosed:\n e = enclosure[e_sig]\n a = Assignment(e, e_sig)\n statements.append(a)\n\n if parentStm is not None:\n a._set_parent_stm(parentStm)\n\n return statements"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef find_files(directory, pattern, recursive=True):\n if not os.path.isdir(directory):\n if os.path.exists(directory):\n raise IOError(directory + ' is not directory')\n else:\n raise IOError(directory + \" does not exists\")\n if recursive:\n for root, _, files in os.walk(directory):\n for basename in files:\n if fnmatch.fnmatch(basename, pattern):\n filename = os.path.join(root, basename)\n yield filename\n else:\n root = directory\n for basename in os.listdir(root):\n if fnmatch.fnmatch(basename, pattern):\n filename = os.path.join(root, basename)\n if os.path.isfile(filename):\n yield filename", "response": "Find files by pattern in directory"} {"SOURCE": "codesearchnet", 
"instruction": "Can you implement a function in Python 3 that\ngenerates if tree for cases like", "response": "def SwitchLogic(cases, default=None):\n \"\"\"\n Generate if tree for cases like (syntax shugar for large elifs)\n\n ..code-block:: python\n if cond0:\n statements0\n elif cond1:\n statements1\n else:\n default\n\n :param case: iterable of tuples (condition, statements)\n :param default: default statements\n \"\"\"\n if default is not None:\n assigTop = default\n else:\n assigTop = []\n\n for cond, statements in reversed(cases):\n assigTop = If(cond,\n statements\n ).Else(\n assigTop\n )\n\n return assigTop"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngenerating for loop for static items in a single unit.", "response": "def StaticForEach(parentUnit, items, bodyFn, name=\"\"):\n \"\"\"\n Generate for loop for static items\n\n :param parentUnit: unit where this code should be instantiated\n :param items: items which this \"for\" itering on\n :param bodyFn: function which fn(item, index) or fn(item)\n returns (statementList, ack).\n It's content is performed in every iteration.\n When ack is high loop will fall to next iteration\n \"\"\"\n\n items = list(items)\n itemsCnt = len(items)\n if itemsCnt == 0:\n # if there are no items there is nothing to generate\n return []\n elif itemsCnt == 1:\n # if there is only one item do not generate counter logic generate\n return bodyFn(items[0], 0)\n else:\n # if there is multiple items we have to generate counter logic\n index = parentUnit._reg(name + \"for_index\",\n Bits(log2ceil(itemsCnt + 1), signed=False),\n defVal=0)\n ackSig = parentUnit._sig(name + \"for_ack\")\n\n statementLists = []\n for i, (statementList, ack) in [(i, bodyFn(item, i))\n for i, item in enumerate(items)]:\n statementLists.append(statementList + [(ackSig(ack)), ])\n\n If(ackSig,\n If(index._eq(itemsCnt - 1),\n index(0)\n ).Else(\n index(index + 1)\n )\n )\n\n return Switch(index)\\\n .addCases(\n 
enumerate(statementLists)\n ).Default(\n bodyFn(items[0], 0)[0],\n ackSig(True)\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef connect(src, *destinations, exclude: set=None, fit=False):\n assignemnts = []\n\n if isinstance(src, HObjList):\n for dst in destinations:\n assert len(src) == len(dst), (src, dst)\n _destinations = [iter(d) for d in destinations]\n for _src in src:\n dsts = [next(d) for d in _destinations]\n assignemnts.append(connect(_src, *dsts, exclude=exclude, fit=fit))\n else:\n for dst in destinations:\n assignemnts.append(_connect(src, dst, exclude, fit))\n\n return assignemnts", "response": "Connect src to all destinations"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef log2ceil(x):\n\n if not isinstance(x, (int, float)):\n x = int(x)\n\n if x == 0 or x == 1:\n res = 1\n else:\n res = math.ceil(math.log2(x))\n\n return hInt(res)", "response": "Returns the number of bits required to store x - 1; for example x = 8 returns 3."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef isPow2(num) -> bool:\n if not isinstance(num, int):\n num = int(num)\n return num != 0 and ((num & (num - 1)) == 0)", "response": "Check if number or constant is power of two."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef addCases(self, tupesValStmnts):\n s = self\n for val, statements in tupesValStmnts:\n s = s.Case(val, statements)\n return s", "response": "Adds multiple cases from an iterable of (value, statements) tuples."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef Case(self, caseVal, *statements):\n \"c-like case of switch statement\"\n assert self.parentStm is None\n caseVal = toHVal(caseVal, self.switchOn._dtype)\n\n assert isinstance(caseVal, Value), caseVal\n assert caseVal._isFullVld(), \"Cmp with invalid 
value\"\n assert caseVal not in self._case_value_index, (\n \"Switch statement already has case for value \", caseVal)\n\n self.rank += 1\n case = []\n self._case_value_index[caseVal] = len(self.cases)\n self.cases.append((caseVal, case))\n\n cond = self.switchOn._eq(caseVal)\n self._inputs.append(cond)\n cond.endpoints.append(self)\n\n self._register_stements(statements, case)\n\n return self", "response": "C-like case of a switch statement."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef Trans(self, stateFrom, *condAndNextState):\n top = []\n last = True\n\n for cAndS in reversed(condAndNextState):\n if last is True:\n last = False\n # if this is last trans. it does not have to condition\n try:\n condition, newvalue = cAndS\n except TypeError:\n top = self.stateReg(cAndS)\n continue\n top = []\n\n else:\n condition, newvalue = cAndS\n\n # building decision tree\n top = \\\n If(condition,\n self.stateReg(newvalue)\n ).Else(\n top\n )\n if stateFrom is None:\n return Switch.Default(self, top)\n else:\n return Switch.Case(self, stateFrom, top)", "response": "Creates a new transition for the FSM."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef vcdTypeInfoForHType(t) -> Tuple[str, int, Callable[[RtlSignalBase, Value], str]]:\n if isinstance(t, (SimBitsT, Bits, HBool)):\n return (VCD_SIG_TYPE.WIRE, t.bit_length(), vcdBitsFormatter)\n elif isinstance(t, HEnum):\n return (VCD_SIG_TYPE.REAL, 1, vcdEnumFormatter)\n else:\n raise ValueError(t)", "response": "Returns the VCD signal type name, bit width and value formatter for the given HDL type."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nregistering signals from interfaces for Interface or Unit instances and returns the VCDVarWritingScope that is associated with the object.", "response": "def vcdRegisterInterfaces(self, obj: Union[Interface, Unit],\n parent: 
Optional[VcdVarWritingScope]):\n \"\"\"\n Register signals from interfaces for Interface or Unit instances\n \"\"\"\n if hasattr(obj, \"_interfaces\") and obj._interfaces:\n name = obj._name\n parent_ = self.vcdWriter if parent is None else parent\n\n subScope = parent_.varScope(name)\n self._obj2scope[obj] = subScope\n\n with subScope:\n # register all subinterfaces\n for chIntf in obj._interfaces:\n self.vcdRegisterInterfaces(chIntf, subScope)\n\n if isinstance(obj, (Unit, SimModel)):\n # register interfaces from all subunits\n for u in obj._units:\n self.vcdRegisterInterfaces(u, subScope)\n\n return subScope\n else:\n t = obj._dtype\n if isinstance(t, self.supported_type_classes):\n tName, width, formatter = vcdTypeInfoForHType(t)\n try:\n parent.addVar(obj, getSignalName(obj),\n tName, width, formatter)\n except VarAlreadyRegistered:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef beforeSim(self, simulator, synthesisedUnit):\n vcd = self.vcdWriter\n vcd.date(datetime.now())\n vcd.timescale(1)\n\n self.vcdRegisterInterfaces(synthesisedUnit, None)\n self.vcdRegisterRemainingSignals(synthesisedUnit)\n\n vcd.enddefinitions()", "response": "This method is called before simulation is started."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nserialize HWProcess instance :param scope: name scope to prevent name collisions", "response": "def HWProcess(cls, proc, ctx):\n \"\"\"\n Serialize HWProcess instance\n\n :param scope: name scope to prevent name collisions\n \"\"\"\n body = proc.statements\n childCtx = ctx.withIndent()\n statemets = [cls.asHdl(s, childCtx) for s in body]\n proc.name = ctx.scope.checkedName(proc.name, proc)\n\n return cls.methodTmpl.render(\n indent=getIndent(ctx.indent),\n name=proc.name,\n statements=statemets\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding new level to scope.", "response": 
"def setLevel(self, lvl):\n \"\"\"\n Trim or extend scope\n lvl = 1 -> only one scope (global)\n \"\"\"\n while len(self) != lvl:\n if len(self) > lvl:\n self.pop()\n else:\n self.append(NameScopeItem(len(self)))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _size(self):\n assert isinstance(self, Value)\n return int(self.val[0]) - int(self.val[1])", "response": "Returns the number of bits in the current key set"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nwalk all interfaces on unit and instantiate agent for every interface.", "response": "def autoAddAgents(unit):\n \"\"\"\n Walk all interfaces on unit and instantiate agent for every interface.\n\n :return: all monitor/driver functions which should be added to simulation\n as processes\n \"\"\"\n proc = []\n for intf in unit._interfaces:\n if not intf._isExtern:\n continue\n\n intf._initSimAgent()\n assert intf._ag is not None, intf\n agents = [intf._ag, ]\n\n if intf._direction == INTF_DIRECTION.MASTER:\n agProcs = list(map(lambda a: a.getMonitors(), agents))\n elif intf._direction == INTF_DIRECTION.SLAVE:\n agProcs = list(map(lambda a: a.getDrivers(), agents))\n else:\n raise NotImplementedError(\"intf._direction %r for %r\" % (\n intf._direction, intf))\n\n for p in agProcs:\n proc.extend(p)\n\n return proc"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\niterates over the given iterable of values to ints.", "response": "def valuesToInts(values):\n \"\"\"\n Iterable of values to ints (nonvalid = None)\n \"\"\"\n res = []\n append = res.append\n for d in values:\n if isinstance(d, int):\n append(d)\n else:\n append(valToInt(d))\n return res"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the associated rst if any otherwise returns the parent rst", "response": "def _getAssociatedRst(self):\n \"\"\"\n If 
interface has associated rst(_n) return it otherwise\n try to find rst(_n) on parent recursively\n \"\"\"\n a = self._associatedRst\n\n if a is not None:\n return a\n\n p = self._parent\n assert p is not None\n\n if isinstance(p, UnitBase):\n return getRst(p)\n else:\n return p._getAssociatedRst()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the associated clk or None if no clk is associated with this interface.", "response": "def _getAssociatedClk(self):\n \"\"\"\n If interface has associated clk return it otherwise\n try to find clk on parent recursively\n \"\"\"\n a = self._associatedClk\n\n if a is not None:\n return a\n\n p = self._parent\n assert p is not None\n\n if isinstance(p, UnitBase):\n return getClk(p)\n else:\n return p._getAssociatedClk()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a variable to the list of extra discovered processes.", "response": "def Architecture_var(cls, v, serializerVars, extraTypes,\n extraTypes_serialized, ctx, childCtx):\n \"\"\"\n :return: list of extra discovered processes\n \"\"\"\n v.name = ctx.scope.checkedName(v.name, v)\n serializedVar = cls.SignalItem(v, childCtx, declaration=True)\n serializerVars.append(serializedVar)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef single(iterable, fn):\n found = False\n ret = None\n\n for i in iterable:\n if fn(i):\n if found:\n raise DuplicitValueExc(i)\n found = True\n ret = i\n\n if not found:\n raise NoValueExc()\n\n return ret", "response": "Get value from iterable where fn(item ) and check\n is True"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntake n items from iterrable", "response": "def take(iterrable, howMay):\n \"\"\"\n :return: generator of first n items from iterrable\n \"\"\"\n assert howMay >= 0\n\n if not howMay:\n return\n\n last = howMay - 1\n 
for i, item in enumerate(iterrable):\n yield item\n if i == last:\n return"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef iter_with_last(iterable):\n # Ensure it's an iterator and get the first field\n iterable = iter(iterable)\n prev = next(iterable)\n for item in iterable:\n # Lag by one item so I know I'm not at the end\n yield False, prev\n prev = item\n # Last item\n yield True, prev", "response": "Iterate over items, yielding (is_last, item) pairs so the final item can be detected."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef groupedby(collection, fn):\n d = {}\n for item in collection:\n k = fn(item)\n try:\n arr = d[k]\n except KeyError:\n arr = []\n d[k] = arr\n arr.append(item)\n\n yield from d.items()", "response": "Group items of the collection by fn(item), yielding (key, items) pairs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef flatten(iterables, level=inf):\n if level >= 0 and isinstance(iterables, (list, tuple, GeneratorType,\n map, zip)):\n level -= 1\n for i in iterables:\n yield from flatten(i, level=level)\n else:\n yield iterables", "response": "Flatten nested lists, tuples, generators and maps."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _cut_off_drivers_of(self, sig: RtlSignalBase):\n if len(self._outputs) == 1 and sig in self._outputs:\n self.parentStm = None\n return self\n\n # try to cut off all statements which are drivers of specified signal\n # in all branches\n child_keep_mask = []\n\n newIfTrue = []\n all_cut_off = True\n all_cut_off &= self._cut_off_drivers_of_list(\n sig, self.ifTrue, child_keep_mask, newIfTrue)\n self.ifTrue = list(compress(self.ifTrue, child_keep_mask))\n\n newElifs = []\n anyElifHit = False\n for cond, stms in self.elIfs:\n newCase = []\n child_keep_mask.clear()\n all_cut_off &= 
self._cut_off_drivers_of_list(\n sig, stms, child_keep_mask, newCase)\n\n _stms = list(compress(stms, child_keep_mask))\n stms.clear()\n stms.extend(_stms)\n\n if newCase:\n anyElifHit = True\n newElifs.append((cond, newCase))\n\n newIfFalse = None\n if self.ifFalse:\n newIfFalse = []\n child_keep_mask.clear()\n all_cut_off &= self._cut_off_drivers_of_list(\n sig, self.ifFalse, child_keep_mask, newIfFalse)\n self.ifFalse = list(compress(self.ifFalse, child_keep_mask))\n\n assert not all_cut_off, \"everything was cut of but this should be already known at start\"\n\n if newIfTrue or newIfFalse or anyElifHit or newIfFalse:\n # parts were cut off\n # generate new statement for them\n cond_sig = self.cond\n n = self.__class__(cond_sig, newIfTrue)\n for c, stms in newElifs:\n assert len(c) == 1\n c_sig = c[0]\n n.Elif(c_sig, stms)\n if newIfFalse is not None:\n n.Else(newIfFalse)\n\n if self.parentStm is None:\n ctx = n._get_rtl_context()\n ctx.statements.add(n)\n\n # update io of this\n self._inputs.clear()\n self._inputs.append(cond_sig)\n for c, _ in self.elIfs:\n self._inputs.extend(c)\n\n self._inputs.append(cond_sig)\n self._outputs.clear()\n\n out_add = self._outputs.append\n in_add = self._inputs.append\n\n for stm in self._iter_stms():\n for inp in stm._inputs:\n in_add(inp)\n\n for outp in stm._outputs:\n out_add(outp)\n\n if self._sensitivity is not None or self._enclosed_for is not None:\n raise NotImplementedError(\n \"Sensitivity and enclosure has to be cleaned first\")\n\n return n", "response": "Method to cut off all drivers of a signal and return the new statement."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndiscovers the sensitivity of the current state of the statement.", "response": "def _discover_sensitivity(self, seen: set) -> None:\n \"\"\"\n Doc on parent class :meth:`HdlStatement._discover_sensitivity`\n \"\"\"\n assert self._sensitivity is None, self\n ctx = self._sensitivity = 
SensitivityCtx()\n\n self._discover_sensitivity_sig(self.cond, seen, ctx)\n if ctx.contains_ev_dependency:\n return\n\n for stm in self.ifTrue:\n stm._discover_sensitivity(seen)\n ctx.extend(stm._sensitivity)\n\n # elifs\n for cond, stms in self.elIfs:\n if ctx.contains_ev_dependency:\n break\n\n self._discover_sensitivity_sig(cond, seen, ctx)\n if ctx.contains_ev_dependency:\n break\n\n for stm in stms:\n if ctx.contains_ev_dependency:\n break\n\n stm._discover_sensitivity(seen)\n ctx.extend(stm._sensitivity)\n\n if not ctx.contains_ev_dependency and self.ifFalse:\n # else\n for stm in self.ifFalse:\n stm._discover_sensitivity(seen)\n ctx.extend(stm._sensitivity)\n\n else:\n assert not self.ifFalse, \"can not negate event\""} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _iter_stms(self):\n yield from self.ifTrue\n for _, stms in self.elIfs:\n yield from stms\n if self.ifFalse is not None:\n yield from self.ifFalse", "response": "Iterate over the set of elements in this HdlStatement."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _try_reduce(self) -> Tuple[bool, List[HdlStatement]]:\n # flag if IO of statement has changed\n io_change = False\n\n self.ifTrue, rank_decrease, _io_change = self._try_reduce_list(\n self.ifTrue)\n self.rank -= rank_decrease\n io_change |= _io_change\n\n new_elifs = []\n for cond, statements in self.elIfs:\n _statements, rank_decrease, _io_change = self._try_reduce_list(\n statements)\n self.rank -= rank_decrease\n io_change |= _io_change\n new_elifs.append((cond, _statements))\n\n if self.ifFalse is not None:\n self.ifFalse, rank_decrease, _io_update_required = self._try_reduce_list(\n self.ifFalse)\n self.rank -= rank_decrease\n io_change |= _io_change\n\n reduce_self = not self.condHasEffect(\n self.ifTrue, self.ifFalse, self.elIfs)\n\n if reduce_self:\n res = self.ifTrue\n else:\n res = [self, ]\n\n self._on_reduce(reduce_self, 
io_change, res)\n\n # try merge nested ifs as elifs\n if self.ifFalse is not None and len(self.ifFalse) == 1:\n child = self.ifFalse[0]\n if isinstance(child, IfContainer):\n self._merge_nested_if_from_else(child)\n\n return res, io_change", "response": "Try to reduce the list of if conditions."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmerges a nested IfContainer from the else branch into this IfContainer", "response": "def _merge_nested_if_from_else(self, ifStm: \"IfContainer\"):\n \"\"\"\n Merge nested IfContainer from else branch to this IfContainer\n as elif and else branches\n \"\"\"\n self.elIfs.append((ifStm.cond, ifStm.ifTrue))\n self.elIfs.extend(ifStm.elIfs)\n\n self.ifFalse = ifStm.ifFalse"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _merge_with_other_stm(self, other: \"IfContainer\") -> None:\n merge = self._merge_statement_lists\n self.ifTrue = merge(self.ifTrue, other.ifTrue)\n\n new_elifs = []\n for ((c, elifA), (_, elifB)) in zip(self.elIfs, other.elIfs):\n new_elifs.append((c, merge(elifA, elifB)))\n self.elIfs = new_elifs\n\n self.ifFalse = merge(self.ifFalse, other.ifFalse)\n\n other.ifTrue = []\n other.elIfs = []\n other.ifFalse = None\n\n self._on_merge(other)", "response": "Merge two IfContainer objects."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef isSame(self, other: HdlStatement) -> bool:\n if self is other:\n return True\n\n if self.rank != other.rank:\n return False\n\n if isinstance(other, IfContainer):\n if self.cond is other.cond:\n if len(self.ifTrue) == len(other.ifTrue) \\\n and len(self.ifFalse) == len(other.ifFalse) \\\n and len(self.elIfs) == len(other.elIfs):\n if not isSameStatementList(self.ifTrue,\n other.ifTrue) \\\n or not isSameStatementList(self.ifFalse,\n other.ifFalse):\n return False\n for (ac, astms), (bc, bstms) in zip(self.elIfs,\n 
other.elIfs):\n if not (ac == bc) or\\\n not isSameStatementList(astms, bstms):\n return False\n return True\n return False", "response": "Returns True if this statement is the same meaning as the other statement."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef removeUnconnectedSignals(netlist):\n\n toDelete = set()\n toSearch = netlist.signals\n\n while toSearch:\n _toSearch = set()\n\n for sig in toSearch:\n if not sig.endpoints:\n try:\n if sig._interface is not None:\n # skip interfaces before we want to check them,\n # they should not be optimized out from design\n continue\n except AttributeError:\n pass\n\n for e in sig.drivers:\n # drivers of this signal are useless rm them\n if isinstance(e, Operator):\n inputs = e.operands\n if e.result is sig:\n e.result = None\n else:\n inputs = e._inputs\n netlist.statements.discard(e)\n\n for op in inputs:\n if not isinstance(op, Value):\n try:\n op.endpoints.remove(e)\n except KeyError:\n # this operator has 2x+ same operand\n continue\n\n _toSearch.add(op)\n\n toDelete.add(sig)\n\n if toDelete:\n for sig in toDelete:\n if sig.ctx == netlist:\n netlist.signals.remove(sig)\n _toSearch.discard(sig)\n toDelete = set()\n toSearch = _toSearch", "response": "Removes all unconnected signals from the given netlist."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking if process is just unconditional assignments", "response": "def checkIfIsTooSimple(proc):\n \"\"\"check if process is just unconditional assignments\n and it is useless to merge them\"\"\"\n try:\n a, = proc.statements\n if isinstance(a, Assignment):\n return True\n except ValueError:\n pass\n return False"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef tryToMerge(procA: HWProcess, procB: HWProcess):\n if (checkIfIsTooSimple(procA) or\n checkIfIsTooSimple(procB) or\n areSetsIntersets(procA.outputs, procB.sensitivityList) or\n 
areSetsIntersets(procB.outputs, procA.sensitivityList) or\n not HdlStatement._is_mergable_statement_list(procA.statements, procB.statements)):\n raise IncompatibleStructure()\n\n procA.statements = HdlStatement._merge_statement_lists(\n procA.statements, procB.statements)\n\n procA.outputs.extend(procB.outputs)\n procA.inputs.extend(procB.inputs)\n procA.sensitivityList.extend(procB.sensitivityList)\n\n return procA", "response": "Try to merge two processes into procA."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntries to merge processes as much is possible and yield processes instances that are not in the same structure of statements.", "response": "def reduceProcesses(processes):\n \"\"\"\n Try to merge processes as much is possible\n\n :param processes: list of processes instances\n \"\"\"\n # sort to make order of merging same deterministic\n processes.sort(key=lambda x: (x.name, maxStmId(x)), reverse=True)\n # now try to reduce processes with nearly same structure of statements into one\n # to minimize number of processes\n for _, procs in groupedby(processes, lambda p: p.rank):\n for iA, pA in enumerate(procs):\n if pA is None:\n continue\n for iB, pB in enumerate(islice(procs, iA + 1, None)):\n if pB is None:\n continue\n\n try:\n pA = tryToMerge(pA, pB)\n except IncompatibleStructure:\n continue\n procs[iA + 1 + iB] = None\n # procs[iA] = pA\n\n for p in procs:\n if p is not None:\n yield p"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncalls when a write request is received.", "response": "def onWriteReq(self, sim, addr, data):\n \"\"\"\n on writeReqRecieved in monitor mode\n \"\"\"\n self.requests.append((WRITE, addr, data))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting object to HDL string.", "response": "def asHdl(cls, obj, ctx: HwtSerializerCtx):\n \"\"\"\n Convert object to HDL string\n\n :param obj: object to serialize\n 
:param ctx: HwtSerializerCtx instance\n \"\"\"\n if isinstance(obj, RtlSignalBase):\n return cls.SignalItem(obj, ctx)\n elif isinstance(obj, Value):\n return cls.Value(obj, ctx)\n else:\n try:\n serFn = obj.asHwt\n except AttributeError:\n serFn = None\n if serFn is not None:\n return serFn(cls, ctx)\n\n try:\n serFn = getattr(cls, obj.__class__.__name__)\n except AttributeError:\n serFn = None\n\n if serFn is not None:\n return serFn(obj, ctx)\n\n raise SerializerException(\"Not implemented for %r\" % (obj))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a string representation of the given entity.", "response": "def Entity(cls, ent: Entity, ctx: HwtSerializerCtx):\n \"\"\"\n Entity is just forward declaration of Architecture, it is not used\n in most HDL languages as there is no recursion in hierarchy\n \"\"\"\n\n cls.Entity_prepare(ent, ctx, serialize=False)\n ent.name = ctx.scope.checkedName(ent.name, ent, isGlobal=True)\n ports = list(\n map(lambda p: (p.name, cls.HdlType(p._dtype, ctx)),\n ent.ports))\n return unitHeadTmpl.render(\n name=ent.name,\n ports=ports,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef toRtl(unitOrCls: Unit, name: str=None,\n serializer: GenericSerializer=VhdlSerializer,\n targetPlatform=DummyPlatform(), saveTo: str=None):\n \"\"\"\n Convert unit to RTL using specified serializer\n\n :param unitOrCls: unit instance or class, which should be converted\n :param name: name override of top unit (if is None name is derived\n from class name)\n :param serializer: serializer which should be used for to RTL conversion\n :param targetPlatform: meta-information about target platform, distributed\n on every unit under _targetPlatform attribute\n before Unit._impl() is called\n :param saveTo: directory where files should be stored\n If None RTL is returned as string.\n :return: if saveTo is None returns RTL string else returns list of file 
names\n which were created\n \"\"\"\n if not isinstance(unitOrCls, Unit):\n u = unitOrCls()\n else:\n u = unitOrCls\n\n u._loadDeclarations()\n if name is not None:\n assert isinstance(name, str)\n u._name = name\n\n globScope = serializer.getBaseNameScope()\n mouduleScopes = {}\n\n # unitCls : unitobj\n serializedClasses = {}\n\n # (unitCls, paramsValues) : unitObj\n # where paramsValues are dict name:value\n serializedConfiguredUnits = {}\n\n doSerialize = True\n\n createFiles = saveTo is not None\n if createFiles:\n os.makedirs(saveTo, exist_ok=True)\n files = UniqList()\n else:\n codeBuff = []\n\n for obj in u._toRtl(targetPlatform):\n doSerialize = serializer.serializationDecision(\n obj,\n serializedClasses,\n serializedConfiguredUnits)\n if doSerialize:\n if isinstance(obj, Entity):\n s = globScope.fork(1)\n s.setLevel(2)\n ctx = serializer.getBaseContext()\n ctx.scope = s\n mouduleScopes[obj] = ctx\n ctx.currentUnit = obj.origin\n\n sc = serializer.Entity(obj, ctx)\n if createFiles:\n fName = obj.name + serializer.fileExtension\n fileMode = 'w'\n\n elif isinstance(obj, Architecture):\n try:\n ctx = mouduleScopes[obj.entity]\n except KeyError:\n raise SerializerException(\n \"Entity should be serialized\"\n \" before architecture of %s\"\n % (obj.getEntityName()))\n\n sc = serializer.Architecture(obj, ctx)\n if createFiles:\n fName = obj.getEntityName() + serializer.fileExtension\n fileMode = 'a'\n else:\n if hasattr(obj, \"_hdlSources\"):\n for fn in obj._hdlSources:\n if isinstance(fn, str):\n shutil.copy2(fn, saveTo)\n files.append(fn)\n continue\n else:\n sc = serializer.asHdl(obj)\n\n if sc:\n if createFiles:\n fp = os.path.join(saveTo, fName)\n files.append(fp)\n with open(fp, fileMode) as f:\n if fileMode == 'a':\n f.write(\"\\n\")\n\n f.write(\n serializer.formatter(sc)\n )\n else:\n codeBuff.append(sc)\n\n elif not createFiles:\n try:\n name = '\"%s\"' % obj.name\n except AttributeError:\n name = \"\"\n codeBuff.append(serializer.comment(\n \"Object 
of class %s, %s was not serialized as specified\" % (\n obj.__class__.__name__, name)))\n\n if createFiles:\n return files\n else:\n return serializer.formatter(\n \"\\n\".join(codeBuff)\n )", "response": "Convert a unit instance or class to RTL."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef name_for_process_and_mark_outputs(statements: List[HdlStatement])\\\n -> str:\n \"\"\"\n Resolve name for process and mark outputs of statemens as not hidden\n \"\"\"\n out_names = []\n for stm in statements:\n for sig in stm._outputs:\n if not sig.hasGenericName:\n out_names.append(sig.name)\n\n if out_names:\n return min(out_names)\n else:\n return \"\"", "response": "Resolve name for process and mark outputs of statemens as not hidden"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef cut_off_drivers_of(dstSignal, statements):\n separated = []\n stm_filter = []\n for stm in statements:\n stm._clean_signal_meta()\n d = stm._cut_off_drivers_of(dstSignal)\n if d is not None:\n separated.append(d)\n\n f = d is not stm\n stm_filter.append(f)\n\n return list(compress(statements, stm_filter)), separated", "response": "Cut off drivers of a list of statements."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts a list of HdlStatements into a list of HWProcesses.", "response": "def statements_to_HWProcesses(statements: List[HdlStatement])\\\n -> Generator[HWProcess, None, None]:\n \"\"\"\n Pack statements into HWProcess instances,\n * for each out signal resolve it's drivers and collect them\n * split statements if there is and combinational loop\n * merge statements if it is possible\n * resolve sensitivitilists\n * wrap into HWProcess instance\n * for every IO of process generate name if signal has not any\n \"\"\"\n # create copy because this set will be reduced\n statements = copy(statements)\n\n # process ranks = how many 
assignments is probably in process\n # used to minimize number of merge tries\n processes = []\n while statements:\n stm = statements.pop()\n proc_statements = [stm, ]\n ps = _statements_to_HWProcesses(proc_statements, True)\n processes.extend(ps)\n\n yield from reduceProcesses(processes)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nmark all signals that are driven by something and have hidden = False if they are connecting statements or if they are external interface", "response": "def markVisibilityOfSignals(ctx, ctxName, signals, interfaceSignals):\n \"\"\"\n * check if all signals are driven by something\n * mark signals with hidden = False if they are connecting statements\n or if they are external interface\n \"\"\"\n for sig in signals:\n driver_cnt = len(sig.drivers)\n has_comb_driver = False\n if driver_cnt > 1:\n sig.hidden = False\n for d in sig.drivers:\n if not isinstance(d, Operator):\n sig.hidden = False\n\n is_comb_driver = False\n\n if isinstance(d, PortItem):\n is_comb_driver = True\n elif not d._now_is_event_dependent:\n for a in walk_assignments(d, sig):\n if not a.indexes\\\n and not a._is_completly_event_dependent:\n is_comb_driver = True\n break\n\n if has_comb_driver and is_comb_driver:\n raise MultipleDriversErr(\n \"%s: Signal %r has multiple combinational drivers\" %\n (ctx.getDebugScopeName(), sig))\n\n has_comb_driver |= is_comb_driver\n elif driver_cnt == 1:\n if not isinstance(sig.drivers[0], Operator):\n sig.hidden = False\n else:\n sig.hidden = False\n if sig not in interfaceSignals:\n if not sig.defVal._isFullVld():\n raise NoDriverErr(\n sig, \"Signal without any driver or valid value in \", ctxName)\n sig._const = True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef sig(self, name, dtype=BIT, clk=None, syncRst=None, defVal=None):\n if isinstance(defVal, RtlSignal):\n assert defVal._const, \\\n \"Initial value of register has 
to be constant\"\n _defVal = defVal._auto_cast(dtype)\n elif isinstance(defVal, Value):\n _defVal = defVal._auto_cast(dtype)\n elif isinstance(defVal, InterfaceBase):\n _defVal = defVal._sig\n else:\n _defVal = dtype.fromPy(defVal)\n\n if clk is not None:\n s = RtlSyncSignal(self, name, dtype, _defVal)\n if syncRst is not None and defVal is None:\n raise SigLvlConfErr(\n \"Probably forgotten default value on sync signal %s\", name)\n if syncRst is not None:\n r = If(syncRst._isOn(),\n RtlSignal.__call__(s, _defVal)\n ).Else(\n RtlSignal.__call__(s, s.next)\n )\n else:\n r = [RtlSignal.__call__(s, s.next)]\n\n If(clk._onRisingEdge(),\n r\n )\n else:\n if syncRst:\n raise SigLvlConfErr(\n \"Signal %s has reset but has no clk\" % name)\n s = RtlSignal(self, name, dtype, defVal=_defVal)\n\n self.signals.add(s)\n\n return s", "response": "Create a new signal in this context."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nbuild Entity and Architecture instance out of netlist representation.", "response": "def synthesize(self, name, interfaces, targetPlatform):\n \"\"\"\n Build Entity and Architecture instance out of netlist representation\n \"\"\"\n ent = Entity(name)\n ent._name = name + \"_inst\" # instance name\n\n # create generics\n for _, v in self.params.items():\n ent.generics.append(v)\n\n # interface set for faster lookup\n if isinstance(interfaces, set):\n intfSet = interfaces\n else:\n intfSet = set(interfaces)\n\n # create ports\n for s in interfaces:\n pi = portItemfromSignal(s, ent)\n pi.registerInternSig(s)\n ent.ports.append(pi)\n s.hidden = False\n\n removeUnconnectedSignals(self)\n markVisibilityOfSignals(self, name, self.signals, intfSet)\n\n for proc in targetPlatform.beforeHdlArchGeneration:\n proc(self)\n\n arch = Architecture(ent)\n for p in statements_to_HWProcesses(self.statements):\n arch.processes.append(p)\n\n # add signals, variables etc. 
in architecture\n for s in self.signals:\n if s not in intfSet and not s.hidden:\n arch.variables.append(s)\n\n # instantiate subUnits in architecture\n for u in self.subUnits:\n arch.componentInstances.append(u)\n\n # add components in architecture\n for su in distinctBy(self.subUnits, lambda x: x.name):\n arch.components.append(su)\n\n self.synthesised = True\n\n return [ent, arch]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert python or hdl value or signal object to hdl value or signal object", "response": "def toHVal(op: Any, suggestedType: Optional[HdlType]=None):\n \"\"\"Convert python or hdl value/signal object to hdl value/signal object\"\"\"\n if isinstance(op, Value) or isinstance(op, SignalItem):\n return op\n elif isinstance(op, InterfaceBase):\n return op._sig\n else:\n if isinstance(op, int):\n if suggestedType is not None:\n return suggestedType.fromPy(op)\n\n if op >= 1 << 31:\n raise TypeError(\n \"Number %d is too big to fit in 32 bit integer of HDL\"\n \" use Bits type instead\" % op)\n elif op < -(1 << 31):\n raise TypeError(\n \"Number %d is too small to fit in 32 bit integer\"\n \" of HDL use Bits type instead\" % op)\n try:\n hType = defaultPyConversions[type(op)]\n except KeyError:\n hType = None\n\n if hType is None:\n raise TypeError(\"Unknown hardware type for %s\" % (op.__class__))\n\n return hType.fromPy(op)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef Value(cls, val, ctx: SerializerCtx):\n t = val._dtype\n\n if isinstance(val, RtlSignalBase):\n return cls.SignalItem(val, ctx)\n\n c = cls.Value_try_extract_as_const(val, ctx)\n if c:\n return c\n\n if isinstance(t, Slice):\n return cls.Slice_valAsHdl(t, val, ctx)\n elif isinstance(t, HArray):\n return cls.HArrayValAsHdl(t, val, ctx)\n elif isinstance(t, Bits):\n return cls.Bits_valAsHdl(t, val, ctx)\n elif isinstance(t, HBool):\n return cls.Bool_valAsHdl(t, val, ctx)\n elif 
isinstance(t, HEnum):\n return cls.HEnumValAsHdl(t, val, ctx)\n elif isinstance(t, Integer):\n return cls.Integer_valAsHdl(t, val, ctx)\n elif isinstance(t, String):\n return cls.String_valAsHdl(t, val, ctx)\n else:\n raise SerializerException(\n \"can not resolve value serialization for %r\"\n % (val))", "response": "returns the value in the specified context"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getMaxStmIdForStm(stm):\n maxId = 0\n if isinstance(stm, Assignment):\n return stm._instId\n elif isinstance(stm, WaitStm):\n return maxId\n else:\n for _stm in stm._iter_stms():\n maxId = max(maxId, getMaxStmIdForStm(_stm))\n return maxId", "response": "Get the maximum _instId for a statement"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef maxStmId(proc):\n maxId = 0\n for stm in proc.statements:\n maxId = max(maxId, getMaxStmIdForStm(stm))\n return maxId", "response": "get max statement id used for sorting of processes in architecture\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncollect data from interface", "response": "def monitor(self, sim):\n \"\"\"Collect data from interface\"\"\"\n if self.notReset(sim) and self._enabled:\n self.wrRd(sim.write, 1)\n\n yield sim.waitOnCombUpdate()\n\n d = self.doRead(sim)\n self.data.append(d)\n else:\n self.wrRd(sim.write, 0)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nwrites data to interface", "response": "def doWrite(self, sim, data):\n \"\"\"write data to interface\"\"\"\n sim.write(data, self.intf.data)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npushing data to the interface", "response": "def driver(self, sim):\n \"\"\"Push data to interface\"\"\"\n r = sim.read\n if self.actualData is NOP and self.data:\n self.actualData = self.data.popleft()\n\n do = 
self.actualData is not NOP\n\n if do:\n self.doWrite(sim, self.actualData)\n else:\n self.doWrite(sim, None)\n\n en = self.notReset(sim) and self._enabled\n if not (en and do):\n return\n\n yield sim.waitOnCombUpdate()\n\n rd = self.isRd(r)\n if en:\n assert rd.vldMask, (\n (\"%r: ready signal for interface %r is in invalid state,\"\n \" this would cause desynchronization\") %\n (sim.now, self.intf))\n if rd.val:\n if self._debugOutput is not None:\n self._debugOutput.write(\"%s, wrote, %d: %r\\n\" % (\n self.intf._getFullName(),\n sim.now, self.actualData))\n if self.data:\n self.actualData = self.data.popleft()\n else:\n self.actualData = NOP"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new instance of the class from a python value.", "response": "def fromPy(cls, val, typeObj, vldMask=None):\n \"\"\"\n :param val: value of python type int or None\n :param typeObj: instance of Integer\n :param vldMask: None vldMask is resolved from val,\n if is 0 value is invalidated\n if is 1 value has to be valid\n \"\"\"\n assert isinstance(typeObj, Integer)\n vld = int(val is not None)\n if not vld:\n assert vldMask is None or vldMask == 0\n val = 0\n else:\n if vldMask == 0:\n val = False\n vld = 0\n else:\n val = int(val)\n\n return cls(val, typeObj, vld)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nchange the direction of the master interface", "response": "def _m(self):\n \"\"\"\n Note that this interface will be master\n\n :return: self\n \"\"\"\n assert not hasattr(self, \"_interfaces\") or not self._interfaces, \\\n \"Too late to change direction of interface\"\n self._direction = DIRECTION.asIntfDirection(DIRECTION.opposite(self._masterDir))\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload declaratoins from _declr method", "response": "def _loadDeclarations(self):\n \"\"\"\n load declaratoins from _declr method\n This function is 
called first for parent and then for children\n \"\"\"\n if not hasattr(self, \"_interfaces\"):\n self._interfaces = []\n self._setAttrListener = self._declrCollector\n self._declr()\n self._setAttrListener = None\n\n for i in self._interfaces:\n i._isExtern = self._isExtern\n i._loadDeclarations()\n\n for p in self._params:\n p.setReadOnly()\n \n if self._isExtern:\n # direction from inside of unit (reverset compared to outside direction)\n if self._direction == INTF_DIRECTION.UNKNOWN:\n self._direction = INTF_DIRECTION.MASTER\n self._setDirectionsLikeIn(self._direction)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncleaning all signals from this unit.", "response": "def _clean(self, rmConnetions=True, lockNonExternal=True):\n \"\"\"\n Remove all signals from this interface (used after unit is synthesized\n and its parent is connecting its interface to this unit)\n \"\"\"\n\n if self._interfaces:\n for i in self._interfaces:\n i._clean(rmConnetions=rmConnetions,\n lockNonExternal=lockNonExternal)\n else:\n self._sigInside = self._sig\n del self._sig\n\n if lockNonExternal and not self._isExtern:\n self._isAccessible = False"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates _sig for each interface which has no subinterface return it instead", "response": "def _signalsForInterface(self, context, prefix='', typeTransform=None):\n \"\"\"\n generate _sig for each interface which has no subinterface\n if already has _sig return it instead\n\n :param context: instance of RtlNetlist where signals should be created\n :param prefix: name prefix for created signals\n :param typeTransform: optional function (type) returns modified type\n for signal\n \"\"\"\n sigs = []\n if self._interfaces:\n for intf in self._interfaces:\n sigs.extend(\n intf._signalsForInterface(context, prefix,\n typeTransform=typeTransform))\n else:\n if hasattr(self, '_sig'):\n sigs = [self._sig]\n else:\n t = 
self._dtype\n if typeTransform is not None:\n t = typeTransform(t)\n\n s = context.sig(prefix + self._getPhysicalName(), t)\n s._interface = self\n self._sig = s\n\n if hasattr(self, '_boundedEntityPort'):\n self._boundedEntityPort.connectSig(self._sig)\n sigs = [s]\n\n return sigs"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _getPhysicalName(self):\n if hasattr(self, \"_boundedEntityPort\"):\n return self._boundedEntityPort.name\n else:\n return self._getFullName().replace('.', self._NAME_SEPARATOR)", "response": "Get name in HDL"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _replaceParam(self, p, newP):\n i = self._params.index(p)\n pName = p._scopes[self][1]\n assert i > -1\n self._params[i] = newP\n del p._scopes[self] # remove reference from old param\n newP._registerScope(pName, self)\n object.__setattr__(self, pName, newP)", "response": "Replace the parameter p with newP."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _updateParamsFrom(self, otherObj, updater=_default_param_updater,\n exclude=None, prefix=\"\"):\n \"\"\"\n :note: doc in :func:`~hwt.synthesizer.interfaceLevel.propDeclCollector._updateParamsFrom`\n \"\"\"\n PropDeclrCollector._updateParamsFrom(self, otherObj, updater, exclude, prefix)", "response": "Update the parameters of this object with the values from another object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsum of all width of interfaces in this interface", "response": "def _bit_length(self):\n \"\"\"Sum of all width of interfaces in this interface\"\"\"\n try:\n interfaces = self._interfaces\n except AttributeError:\n interfaces = None\n\n if interfaces is None:\n # not loaded interface\n _intf = self._clone()\n _intf._loadDeclarations()\n interfaces = _intf._interfaces\n\n if 
interfaces:\n w = 0\n for i in interfaces:\n w += i._bit_length()\n return w\n else:\n return self._dtype.bit_length()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _connectTo(self, master, exclude=None, fit=False):\n return list(self._connectToIter(master, exclude, fit))", "response": "Connect to another interface"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef sensitivityByOp(op):\n if op == AllOps.RISING_EDGE:\n return SENSITIVITY.RISING\n elif op == AllOps.FALLING_EDGE:\n return SENSITIVITY.FALLING\n else:\n raise TypeError()", "response": "get sensitivity type for an operation"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef eval(self, operator, simulator=None):\n def getVal(v):\n while not isinstance(v, Value):\n v = v._val\n\n return v\n\n operands = list(map(getVal, operator.operands))\n\n if isEventDependentOp(operator.operator):\n operands.append(simulator.now)\n elif operator.operator == AllOps.IntToBits:\n operands.append(operator.result._dtype)\n\n return self._evalFn(*operands)", "response": "Load all operands and process them by self. 
_evalFn"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting signed - unsigned to int or bool", "response": "def convertBits(self, sigOrVal, toType):\n \"\"\"\n Cast signed-unsigned, to int or bool\n \"\"\"\n if isinstance(sigOrVal, Value):\n return convertBits__val(self, sigOrVal, toType)\n elif isinstance(toType, HBool):\n if self.bit_length() == 1:\n v = 0 if sigOrVal._dtype.negated else 1\n return sigOrVal._eq(self.getValueCls().fromPy(v, self))\n elif isinstance(toType, Bits):\n if self.bit_length() == toType.bit_length():\n return sigOrVal._convSign(toType.signed)\n elif toType == INT:\n return Operator.withRes(AllOps.BitsToInt, [sigOrVal], toType)\n\n return default_auto_cast_fn(self, sigOrVal, toType)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef reinterpret_bits_to_hstruct(sigOrVal, hStructT):\n container = hStructT.fromPy(None)\n offset = 0\n for f in hStructT.fields:\n t = f.dtype\n width = t.bit_length()\n if f.name is not None:\n s = sigOrVal[(width + offset):offset]\n s = s._reinterpret_cast(t)\n setattr(container, f.name, s)\n\n offset += width\n\n return container", "response": "Reinterpret bits to signal of type HStruct"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreinterpret a bit string between two types", "response": "def reinterpretBits(self, sigOrVal, toType):\n \"\"\"\n Cast object of same bit size between to other type\n (f.e. 
bits to struct, union or array)\n \"\"\"\n if isinstance(sigOrVal, Value):\n return reinterpretBits__val(self, sigOrVal, toType)\n elif isinstance(toType, Bits):\n return fitTo_t(sigOrVal, toType)\n elif sigOrVal._dtype.bit_length() == toType.bit_length():\n if isinstance(toType, HStruct):\n return reinterpret_bits_to_hstruct(sigOrVal, toType)\n elif isinstance(toType, HUnion):\n raise NotImplementedError()\n elif isinstance(toType, HArray):\n return reinterpret_bits_to_harray(sigOrVal, toType)\n\n return default_auto_cast_fn(self, sigOrVal, toType)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsorting items from several iterators by always selecting the item with the lowest value first.", "response": "def iterSort(iterators, cmpFn):\n \"\"\"\n Sort items from iterators (generators) by always selecting the item\n with lowest value (min first)\n\n :return: generator of tuples (origin index, item) where origin index\n is index of iterator in \"iterators\" from where item comes from\n \"\"\"\n actual = []\n _iterators = []\n for i, it in enumerate(iterators):\n try:\n a = next(it)\n _iterators.append((i, it))\n actual.append(a)\n except StopIteration:\n continue\n\n while True:\n if not _iterators:\n return\n elif len(_iterators) == 1:\n originIndex, it = _iterators[0]\n yield originIndex, actual[0]\n for item in it:\n yield originIndex, item\n return\n\n # select minimum and iterator from where it comes from\n minimum = None\n minimumIndex = None\n secondMin = None\n for i, val in enumerate(actual):\n skipSecMinCheck = False\n if minimum is None:\n minimum = val\n minimumIndex = i\n elif cmpFn(val, minimum):\n secondMin = minimum\n minimum = val\n minimumIndex = i\n skipSecMinCheck = True\n elif not skipSecMinCheck and (\n secondMin is None or cmpFn(val, secondMin)):\n secondMin = val\n\n actualI, actualIt = _iterators[minimumIndex]\n while not cmpFn(secondMin, minimum):\n yield (actualI, minimum)\n try:\n minimum = next(actualIt)\n except StopIteration:\n minimum 
= None\n break\n\n # consume from actual iterator while\n if minimum is None:\n del _iterators[minimumIndex]\n del actual[minimumIndex]\n else:\n # minimum is not minimum anymore\n actual[minimumIndex] = minimum"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef fullWordCnt(self, start: int, end: int):\n assert end >= start, (start, end)\n gap = max(0, (end - start) - (start % self.wordWidth))\n return gap // self.wordWidth", "response": "Count the number of complete words between two addresses\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef groupByWordIndex(self, transaction: 'TransTmpl', offset: int):\n actualW = None\n partsInWord = []\n wordWidth = self.wordWidth\n for item in self.splitOnWords(transaction, offset):\n _actualW = item.startOfPart // wordWidth\n if actualW is None:\n actualW = _actualW\n partsInWord.append(item)\n elif _actualW > actualW:\n yield (actualW, partsInWord)\n actualW = _actualW\n partsInWord = [item, ]\n else:\n partsInWord.append(item)\n\n if partsInWord:\n yield (actualW, partsInWord)", "response": "Group transaction parts split on words to words\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef splitOnWords(self, transaction, addrOffset=0):\n wordWidth = self.wordWidth\n end = addrOffset\n for tmp in transaction.walkFlatten(offset=addrOffset):\n if isinstance(tmp, OneOfTransaction):\n split = [self.splitOnWords(ch, end)\n for ch in tmp.possibleTransactions]\n yield from groupIntoChoices(split, wordWidth, tmp)\n end = addrOffset + tmp.possibleTransactions[0].bitAddrEnd\n elif isinstance(tmp, StreamTransaction):\n ch_len = tmp.child.bit_length()\n if end % self.wordWidth != 0 or ch_len != self.wordWidth: \n # assert start, end is aligned\n raise NotImplementedError(tmp)\n else:\n s = StreamOfFramePars(end, tmp)\n s.extend(self.splitOnWords(tmp.child, 
end))\n s.setIsLast(True)\n s.resolveEnd()\n yield s\n end = addrOffset + tmp.child.bitAddrEnd\n else:\n (base, end), tmpl = tmp\n startOfPart = base\n while startOfPart != end:\n wordIndex = startOfPart // wordWidth\n endOfWord = (wordIndex + 1) * wordWidth\n endOfPart = min(endOfWord, end)\n inFieldOffset = startOfPart - base\n yield TransPart(self, tmpl, startOfPart, endOfPart,\n inFieldOffset)\n startOfPart = endOfPart", "response": "yields TransPart instances for all possible words in the transaction. addrOffset is the offset of the first word in the transaction."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates an instance of HdlType from python type bool or None.", "response": "def fromPy(cls, val, typeObj, vldMask=None):\n \"\"\"\n :param val: value of python type bool or None\n :param typeObj: instance of HdlType\n :param vldMask: None vldMask is resolved from val,\n if is 0 value is invalidated\n if is 1 value has to be valid\n \"\"\"\n vld = int(val is not None)\n if not vld:\n assert vldMask is None or vldMask == 0\n val = False\n else:\n if vldMask == 0:\n val = False\n vld = 0\n else:\n val = bool(val)\n\n return cls(val, typeObj, vld)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef pprintInterface(intf, prefix=\"\", indent=0, file=sys.stdout):\n try:\n s = intf._sig\n except AttributeError:\n s = \"\"\n if s != \"\":\n s = \" \" + repr(s)\n\n file.write(\"\".join([getIndent(indent), prefix, repr(intf._getFullName()),\n s]))\n file.write(\"\\n\")\n \n if isinstance(intf, HObjList):\n for i, p in enumerate(intf):\n # interfaces already have the name of this array and the index in its name\n pprintInterface(p, prefix=prefix, indent=indent + 1, file=file)\n else:\n for i in intf._interfaces:\n pprintInterface(i, indent=indent + 1, file=file)", "response": "Pretty print an interface."} {"SOURCE": "codesearchnet", "instruction": 
"Write a Python 3 script to\nconvert a TransTmpl object into a sequence of FrameTmpl objects.", "response": "def framesFromTransTmpl(transaction: 'TransTmpl',\n wordWidth: int,\n maxFrameLen: Union[int, float]=inf,\n maxPaddingWords: Union[int, float]=inf,\n trimPaddingWordsOnStart: bool=False,\n trimPaddingWordsOnEnd: bool=False) -> Generator[\n 'FrameTmpl', None, None]:\n \"\"\"\n Convert transaction template into FrameTmpls\n\n :param transaction: transaction template from which the FrameTmpls\n are created\n :param wordWidth: width of data signal in target interface\n where frames will be used\n :param maxFrameLen: maximum length of frame in bits,\n if exceeded another frame will be created\n :param maxPaddingWords: maximum of continual padding words in frame,\n if exceeded frame is split and words are cut off\n :attention: if maxPaddingWords < inf, trimPaddingWordsOnStart or\n trimPaddingWordsOnEnd has to be True\n \"\"\"\n assert maxFrameLen > 0\n assert maxPaddingWords >= 0\n if maxPaddingWords < inf:\n assert trimPaddingWordsOnStart or trimPaddingWordsOnEnd, \\\n \"Padding has to be cut off somewhere\"\n\n it = TransTmplWordIterator(wordWidth)\n lastWordI = 0\n endOfThisFrame = maxFrameLen\n parts = []\n startOfThisFrame = 0\n isFirstInFrame = True\n partsPending = False\n for wordI, word in it.groupByWordIndex(transaction, 0):\n if wordI * wordWidth >= endOfThisFrame:\n # now in first+ word behind the frame\n # cut off padding at end of frame\n paddingWords = wordI - lastWordI\n if trimPaddingWordsOnEnd and paddingWords > maxPaddingWords:\n # cut off padding and align end of frame to word\n _endOfThisFrame = (lastWordI + 1) * wordWidth\n else:\n _endOfThisFrame = wordI * wordWidth\n\n yield FrameTmpl(transaction,\n wordWidth,\n startOfThisFrame,\n _endOfThisFrame,\n parts)\n\n # prepare for start of new frame\n parts = []\n isFirstInFrame = True\n partsPending = False\n # start on new word\n startOfThisFrame = _endOfThisFrame\n endOfThisFrame = startOfThisFrame + maxFrameLen\n lastWordI = wordI\n\n # check if padding at potential end of frame can be cut off\n if (not isFirstInFrame\n and trimPaddingWordsOnEnd\n 
and wordI - lastWordI > 1):\n # there is too much continual padding,\n # cut it out and start new frame\n _endOfThisFrame = (lastWordI + 1) * wordWidth\n yield FrameTmpl(transaction,\n wordWidth,\n startOfThisFrame,\n _endOfThisFrame,\n parts)\n\n # prepare for start of new frame\n parts = []\n isFirstInFrame = True\n partsPending = False\n # start on new word\n startOfThisFrame = _endOfThisFrame\n endOfThisFrame = startOfThisFrame + maxFrameLen\n lastWordI = wordI - 1\n\n if isFirstInFrame:\n partsPending = True\n isFirstInFrame = False\n # cut off padding at start of frame\n paddingWords = wordI - lastWordI\n if trimPaddingWordsOnStart and paddingWords > maxPaddingWords:\n startOfThisFrame += paddingWords * wordWidth\n\n endOfThisFrame = startOfThisFrame + maxFrameLen\n\n # resolve end of this part\n parts.extend(word)\n lastWordI = wordI\n\n # reminder in \"parts\" after last iteration\n endOfThisFrame = transaction.bitAddrEnd\n withPadding = not (trimPaddingWordsOnEnd or trimPaddingWordsOnStart)\n if partsPending or (withPadding\n and endOfThisFrame != startOfThisFrame):\n # cut off padding at end of frame\n endOfLastWord = (lastWordI + 1) * wordWidth\n if endOfThisFrame < endOfLastWord:\n endOfThisFrame = endOfLastWord\n else:\n paddingWords = it.fullWordCnt(endOfLastWord, endOfThisFrame)\n if trimPaddingWordsOnEnd and paddingWords > maxPaddingWords:\n endOfThisFrame -= paddingWords * wordWidth\n # align end of frame to word\n endOfThisFrame = min(startOfThisFrame +\n maxFrameLen, endOfThisFrame)\n\n yield FrameTmpl(transaction,\n wordWidth,\n startOfThisFrame,\n endOfThisFrame,\n parts)\n parts = []\n startOfThisFrame = endOfThisFrame\n\n # final padding on the end\n while withPadding and startOfThisFrame < transaction.bitAddrEnd:\n endOfThisFrame = min(startOfThisFrame +\n maxFrameLen, transaction.bitAddrEnd)\n\n yield FrameTmpl(transaction,\n wordWidth,\n startOfThisFrame,\n endOfThisFrame,\n [])\n\n startOfThisFrame = endOfThisFrame"} {"SOURCE": 
"codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef walkWords(self, showPadding: bool=False):\n wIndex = 0\n lastEnd = self.startBitAddr\n parts = []\n for p in self.parts:\n end = p.startOfPart\n if showPadding and end != lastEnd:\n # insert padding\n while end != lastEnd:\n assert end >= lastEnd, (end, lastEnd)\n endOfWord = ceil(\n (lastEnd + 1) / self.wordWidth) * self.wordWidth\n endOfPadding = min(endOfWord, end)\n _p = TransPart(self, None, lastEnd, endOfPadding, 0)\n parts.append(_p)\n\n if endOfPadding >= endOfWord:\n yield (wIndex, parts)\n wIndex += 1\n parts = []\n\n lastEnd = endOfPadding\n\n if self._wordIndx(lastEnd) != self._wordIndx(p.startOfPart):\n yield (wIndex, parts)\n\n wIndex += 1\n parts = []\n lastEnd = p.endOfPart\n\n parts.append(p)\n lastEnd = p.endOfPart\n if lastEnd % self.wordWidth == 0:\n yield (wIndex, parts)\n\n wIndex += 1\n parts = []\n\n if showPadding and (parts\n or lastEnd != self.endBitAddr\n or lastEnd % self.wordWidth != 0):\n # align end to end of last word\n end = ceil(self.endBitAddr / self.wordWidth) * self.wordWidth\n while end != lastEnd:\n assert end >= lastEnd, (end, lastEnd)\n endOfWord = ((lastEnd // self.wordWidth) + 1) * self.wordWidth\n endOfPadding = min(endOfWord, end)\n _p = TransPart(self, None, lastEnd, endOfPadding, 0)\n _p.parent = self\n parts.append(_p)\n\n if endOfPadding >= endOfWord:\n yield (wIndex, parts)\n wIndex += 1\n parts = []\n\n lastEnd = endOfPadding\n\n if parts:\n # in the case end of frame is not aligned to end of word\n yield (wIndex, parts)", "response": "Walk enumerated words in this frame and yield TransParts of the words that are part of the next word."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fieldToDataDict(dtype, data, res):\n # assert data is None or isinstance(data, dict)\n for f in dtype.fields:\n try:\n fVal = data[f.name]\n except KeyError:\n fVal 
= None\n\n if isinstance(f.dtype, Bits):\n if fVal is not None:\n assert isinstance(fVal, int)\n res[f] = fVal\n elif isinstance(f.dtype, HStruct):\n if fVal:\n FrameTmpl.fieldToDataDict(f.dtype, fVal, res)\n elif isinstance(f.dtype, HArray):\n if fVal:\n # assert isinstance(fVal, class_or_tuple)\n res[f] = fVal\n\n return res", "response": "Construct dictionary for faster lookup of values\n for fields\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef packData(self, data):\n typeOfWord = simBitsT(self.wordWidth, None)\n fieldToVal = self._fieldToTPart\n if fieldToVal is None:\n fieldToVal = self._fieldToTPart = self.fieldToDataDict(\n self.origin.dtype,\n data,\n {})\n\n for _, transParts in self.walkWords(showPadding=True):\n actualVldMask = 0\n actualVal = 0\n for tPart in transParts:\n high, low = tPart.getBusWordBitRange()\n fhigh, flow = tPart.getFieldBitRange()\n if not tPart.isPadding:\n val = fieldToVal.get(tPart.tmpl.origin, None)\n else:\n val = None\n\n if val is None:\n newBits = 0\n vld = 0\n else:\n newBits = selectBitRange(val, flow, fhigh - flow)\n vld = mask(high - low) << low\n\n actualVal = setBitRange(actualVal, low, high - low, newBits)\n actualVldMask = setBitRange(actualVldMask, low, high - low, vld)\n\n yield typeOfWord.getValueCls()(actualVal, typeOfWord,\n actualVldMask, -1)", "response": "Packs data into list of BitsVal of specified dataWidth"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning True if two Value instances are same HVal", "response": "def isSameHVal(a: Value, b: Value) -> bool:\n \"\"\"\n :return: True if two Value instances are same\n :note: not just equal\n \"\"\"\n return a is b or (isinstance(a, Value)\n and isinstance(b, Value)\n and a.val == b.val\n and a.vldMask == b.vldMask)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef areSameHVals(a: Union[None, 
List[Value]],\n b: Union[None, List[Value]]) -> bool:\n \"\"\"\n :return: True if two vectors of Value instances are same\n :note: not just equal\n \"\"\"\n if a is b:\n return True\n if a is None or b is None:\n return False\n if len(a) == len(b):\n for a_, b_ in zip(a, b):\n if not isSameHVal(a_, b_):\n return False\n return True\n else:\n return False", "response": "Returns True if two vectors of Value instances are same HVal."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning True if two lists of HdlStatement instances are same", "response": "def isSameStatementList(stmListA: List[HdlStatement],\n stmListB: List[HdlStatement]) -> bool:\n \"\"\"\n :return: True if two lists of HdlStatement instances are same\n \"\"\"\n if stmListA is stmListB:\n return True\n if stmListA is None or stmListB is None:\n return False\n\n for a, b in zip(stmListA, stmListB):\n if not a.isSame(b):\n return False\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning True if all statements are same", "response": "def statementsAreSame(statements: List[HdlStatement]) -> bool:\n \"\"\"\n :return: True if all statements are same\n \"\"\"\n iterator = iter(statements)\n try:\n first = next(iterator)\n except StopIteration:\n return True\n\n return all(first.isSame(rest) for rest in iterator)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the last statement in the iterator that has rank > 0 or None if no branches are found.", "response": "def _get_stm_with_branches(stm_it):\n \"\"\"\n :return: first statement with rank > 0 or None if iterator empty\n \"\"\"\n last = None\n while last is None or last.rank == 0:\n try:\n last = next(stm_it)\n except StopIteration:\n last = None\n break\n\n return last"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _clean_signal_meta(self):\n self._enclosed_for = 
None\n self._sensitivity = None\n for stm in self._iter_stms():\n stm._clean_signal_meta()", "response": "Clean information about enclosure for outputs and sensitivity for this statement"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncollects inputs and outputs from all child statements to self._inputs and self._outputs.", "response": "def _collect_io(self) -> None:\n \"\"\"\n Collect inputs/outputs from all child statements\n to :py:attr:`~_input` / :py:attr:`_output` attributes on this object\n \"\"\"\n in_add = self._inputs.extend\n out_add = self._outputs.extend\n\n for stm in self._iter_stms():\n in_add(stm._inputs)\n out_add(stm._outputs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _discover_enclosure_for_statements(statements: List['HdlStatement'],\n outputs: List['HdlStatement']):\n \"\"\"\n Discover enclosure for list of statements\n\n :param statements: list of statements in one code branch\n :param outputs: list of outputs which should be driven from this statement list\n :return: set of signals for which this statement list always has some driver\n (is enclosed)\n \"\"\"\n result = set()\n if not statements:\n return result\n\n for stm in statements:\n stm._discover_enclosure()\n\n for o in outputs:\n has_driver = False\n\n for stm in statements:\n if o in stm._outputs:\n assert not has_driver\n has_driver = True\n if o in stm._enclosed_for:\n result.add(o)\n else:\n pass\n\n return result", "response": "Discover enclosure for list of statements in one code branch."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _discover_sensitivity_seq(self,\n signals: List[RtlSignalBase],\n seen: set, ctx: SensitivityCtx)\\\n -> None:\n \"\"\"\n Discover sensitivity for list of signals\n\n \"\"\"\n casualSensitivity = set()\n for s in signals:\n 
s._walk_sensitivity(casualSensitivity, seen, ctx)\n if ctx.contains_ev_dependency:\n break\n\n # if event dependent sensitivity found do not add other sensitivity\n if not ctx.contains_ev_dependency:\n ctx.extend(casualSensitivity)", "response": "Discovers the sensitivity sequence for the given list of signals and adds them to the given set."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets RtlNetlist context from signals and outputs.", "response": "def _get_rtl_context(self):\n \"\"\"\n get RtlNetlist context from signals\n \"\"\"\n for sig in chain(self._inputs, self._outputs):\n if sig.ctx:\n return sig.ctx\n else:\n # Param instances do not have a context\n continue\n raise HwtSyntaxError(\n \"Statement does not have any signal in any context\", self)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _on_reduce(self, self_reduced: bool, io_changed: bool,\n result_statements: List[\"HdlStatement\"]) -> None:\n \"\"\"\n Update signal IO after reduce attempt\n\n :param self_reduced: if True this object was reduced\n :param io_changed: if True IO of this object may have changed\n and has to be updated\n :param result_statements: list of statements which are result\n of reduce operation on this statement\n \"\"\"\n\n parentStm = self.parentStm\n if self_reduced:\n was_top = parentStm is None\n # update signal drivers/endpoints\n if was_top:\n # disconnect self from signals\n ctx = self._get_rtl_context()\n ctx.statements.remove(self)\n ctx.statements.update(result_statements)\n\n for i in self._inputs:\n i.endpoints.discard(self)\n for o in self._outputs:\n o.drivers.remove(self)\n\n for stm in result_statements:\n stm.parentStm = parentStm\n if parentStm is None:\n # connect signals to child statements\n for inp in stm._inputs:\n inp.endpoints.append(stm)\n for outp in stm._outputs:\n outp.drivers.append(stm)\n else:\n # parent has to update its inputs/outputs\n if io_changed:\n 
self._inputs = UniqList()\n self._outputs = UniqList()\n self._collect_io()", "response": "Update the signal IO after reduce operation on this object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _on_merge(self, other):\n self._inputs.extend(other._inputs)\n self._outputs.extend(other._outputs)\n\n if self._sensitivity is not None:\n self._sensitivity.extend(other._sensitivity)\n else:\n assert other._sensitivity is None\n\n if self._enclosed_for is not None:\n self._enclosed_for.update(other._enclosed_for)\n else:\n assert other._enclosed_for is None\n\n other_was_top = other.parentStm is None\n if other_was_top:\n other._get_rtl_context().statements.remove(other)\n for s in other._inputs:\n s.endpoints.discard(other)\n s.endpoints.append(self)\n\n for s in other._outputs:\n s.drivers.discard(other)\n s.drivers.append(self)", "response": "Update the internal state of this object with the contents of the other object."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck if the lists of statements can be merged into one statement list.", "response": "def _is_mergable_statement_list(cls, stmsA, stmsB):\n \"\"\"\n Walk statements and compare if they can be merged into one statement list\n \"\"\"\n if stmsA is None and stmsB is None:\n return True\n\n elif stmsA is None or stmsB is None:\n return False\n\n a_it = iter(stmsA)\n b_it = iter(stmsB)\n\n a = _get_stm_with_branches(a_it)\n b = _get_stm_with_branches(b_it)\n while a is not None or b is not None:\n if a is None or b is None or not a._is_mergable(b):\n return False\n\n a = _get_stm_with_branches(a_it)\n b = _get_stm_with_branches(b_it)\n\n # lists are empty\n return True"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _merge_statements(statements: List[\"HdlStatement\"])\\\n -> Tuple[List[\"HdlStatement\"], int]:\n \"\"\"\n Merge statements in 
list to remove duplicated if-then-else trees\n\n :return: tuple (list of merged statements, rank decrease due to merging)\n :note: rank decrease is sum of ranks of reduced statements\n :attention: statement list has to be mergeable\n \"\"\"\n order = {}\n for i, stm in enumerate(statements):\n order[stm] = i\n\n new_statements = []\n rank_decrease = 0\n\n for rank, stms in groupedby(statements, lambda s: s.rank):\n if rank == 0:\n new_statements.extend(stms)\n else:\n if len(stms) == 1:\n new_statements.extend(stms)\n continue\n\n # try to merge statements if they are same condition tree\n for iA, stmA in enumerate(stms):\n if stmA is None:\n continue\n\n for iB, stmB in enumerate(islice(stms, iA + 1, None)):\n if stmB is None:\n continue\n\n if stmA._is_mergable(stmB):\n rank_decrease += stmB.rank\n stmA._merge_with_other_stm(stmB)\n stms[iA + 1 + iB] = None\n new_statements.append(stmA)\n else:\n new_statements.append(stmA)\n new_statements.append(stmB)\n\n new_statements.sort(key=lambda stm: order[stm])\n return new_statements, rank_decrease", "response": "Merge statements in list to remove duplicated if-then-else trees\n and return a tuple of the merged statements and the rank decrease due to merging"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmerging two lists of statements into one.", "response": "def _merge_statement_lists(stmsA: List[\"HdlStatement\"], stmsB: List[\"HdlStatement\"])\\\n -> List[\"HdlStatement\"]:\n \"\"\"\n Merge two lists of statements into one\n\n :return: list of merged statements\n \"\"\"\n if stmsA is None and stmsB is None:\n return None\n\n tmp = []\n\n a_it = iter(stmsA)\n b_it = iter(stmsB)\n\n a = None\n b = None\n a_empty = False\n b_empty = False\n\n while not a_empty and not b_empty:\n while not a_empty:\n a = next(a_it, None)\n if a is None:\n a_empty = True\n break\n elif a.rank == 0:\n # simple statement does not require merging\n tmp.append(a)\n a = None\n else:\n break\n\n while not b_empty:\n b 
= next(b_it, None)\n if b is None:\n b_empty = True\n break\n elif b.rank == 0:\n # simple statement does not require merging\n tmp.append(b)\n b = None\n else:\n break\n\n if a is not None or b is not None:\n a._merge_with_other_stm(b)\n tmp.append(a)\n a = None\n b = None\n\n return tmp"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsimplify statements in the list and return a tuple of the new statements, the total rank decrease, and an IO change flag.", "response": "def _try_reduce_list(statements: List[\"HdlStatement\"]):\n \"\"\"\n Simplify statements in the list\n \"\"\"\n io_change = False\n new_statements = []\n\n for stm in statements:\n reduced, _io_change = stm._try_reduce()\n new_statements.extend(reduced)\n io_change |= _io_change\n\n new_statements, rank_decrease = HdlStatement._merge_statements(\n new_statements)\n\n return new_statements, rank_decrease, io_change"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npropagating event dependency flag to all child statements.", "response": "def _on_parent_event_dependent(self):\n \"\"\"\n After parent statement becomes event dependent\n propagate event dependency flag to child statements\n \"\"\"\n if not self._is_completly_event_dependent:\n self._is_completly_event_dependent = True\n for stm in self._iter_stms():\n stm._on_parent_event_dependent()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _set_parent_stm(self, parentStm: \"HdlStatement\"):\n was_top = self.parentStm is None\n self.parentStm = parentStm\n if not self._now_is_event_dependent\\\n and parentStm._now_is_event_dependent:\n self._on_parent_event_dependent()\n\n topStatement = parentStm\n while topStatement.parentStm is not None:\n topStatement = topStatement.parentStm\n\n parent_out_add = topStatement._outputs.append\n parent_in_add = topStatement._inputs.append\n\n if was_top:\n for inp in self._inputs:\n 
inp.endpoints.discard(self)\n inp.endpoints.append(topStatement)\n parent_in_add(inp)\n\n for outp in self._outputs:\n outp.drivers.discard(self)\n outp.drivers.append(topStatement)\n parent_out_add(outp)\n\n ctx = self._get_rtl_context()\n ctx.statements.discard(self)\n\n parentStm.rank += self.rank", "response": "Assign parent statement and propagate dependency flags if necessary"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd statements to this container under conditions specified by condSet .", "response": "def _register_stements(self, statements: List[\"HdlStatement\"],\n target: List[\"HdlStatement\"]):\n \"\"\"\n Append statements to this container under conditions specified\n by condSet\n \"\"\"\n for stm in flatten(statements):\n assert stm.parentStm is None, stm\n stm._set_parent_stm(self)\n target.append(stm)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _destroy(self):\n ctx = self._get_rtl_context()\n for i in self._inputs:\n i.endpoints.discard(self)\n\n for o in self._outputs:\n o.drivers.remove(self)\n\n ctx.statements.remove(self)", "response": "Disconnect this statement from signals and delete it from RtlNetlist context"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef fromPy(cls, val, typeObj, vldMask=None):\n assert isinstance(val, str) or val is None\n vld = 0 if val is None else 1\n if not vld:\n assert vldMask is None or vldMask == 0\n val = \"\"\n else:\n if vldMask == 0:\n val = \"\"\n vld = 0\n\n return cls(val, typeObj, vld)", "response": "Create a new HdlType from a python string or None."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating register in this unit", "response": "def _reg(self, name, dtype=BIT, defVal=None, clk=None, rst=None):\n \"\"\"\n Create register in this unit\n\n :param defVal: default value of this register,\n if this value is specified 
reset of this component is used\n (unit has to have a single interface of class Rst or Rst_n)\n :param clk: optional clock signal specification\n :param rst: optional reset signal specification\n :note: rst/rst_n resolution is done from signal type,\n if it is a negated type it is rst_n\n :note: if clk or rst is not specified default signal\n from parent unit will be used\n \"\"\"\n if clk is None:\n clk = getClk(self)\n\n if defVal is None:\n # if no value is specified reset is not required\n rst = None\n else:\n rst = getRst(self)._sig\n\n if isinstance(dtype, HStruct):\n if defVal is not None:\n raise NotImplementedError()\n container = dtype.fromPy(None)\n for f in dtype.fields:\n if f.name is not None:\n r = self._reg(\"%s_%s\" % (name, f.name), f.dtype)\n setattr(container, f.name, r)\n\n return container\n\n return self._ctx.sig(name,\n dtype=dtype,\n clk=clk._sig,\n syncRst=rst,\n defVal=defVal)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a signal in this unit", "response": "def _sig(self, name, dtype=BIT, defVal=None):\n \"\"\"\n Create signal in this unit\n \"\"\"\n if isinstance(dtype, HStruct):\n if defVal is not None:\n raise NotImplementedError()\n container = dtype.fromPy(None)\n for f in dtype.fields:\n if f.name is not None:\n r = self._sig(\"%s_%s\" % (name, f.name), f.dtype)\n setattr(container, f.name, r)\n\n return container\n\n return self._ctx.sig(name, dtype=dtype, defVal=defVal)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndisconnecting internal signals so unit can be reused by parent unit", "response": "def _cleanAsSubunit(self):\n \"\"\"Disconnect internal signals so unit can be reused by parent unit\"\"\"\n for pi in self._entity.ports:\n pi.connectInternSig()\n for i in chain(self._interfaces, self._private_interfaces):\n i._clean()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nselect fields from structure", "response": "def 
HStruct_selectFields(structT, fieldsToUse):\n \"\"\"\n Select fields from structure (rest will become spacing)\n\n :param structT: HStruct type instance\n :param fieldsToUse: dict {name:{...}} or set of names to select,\n dictionary is used to select nested fields\n in HStruct or HUnion fields\n (f.e. {\"struct1\": {\"field1\", \"field2\"}, \"field3\":{}}\n will select field1 and 2 from struct1 and field3 from root)\n \"\"\"\n\n template = []\n fieldsToUse = fieldsToUse\n foundNames = set()\n\n for f in structT.fields:\n name = None\n subfields = []\n\n if f.name is not None:\n try:\n if isinstance(fieldsToUse, dict):\n subfields = fieldsToUse[f.name]\n name = f.name\n else:\n if f.name in fieldsToUse:\n name = f.name\n except KeyError:\n name = None\n\n if name is not None and subfields:\n fields = HStruct_selectFields(f.dtype, subfields)\n template.append(HStructField(fields, name))\n else:\n template.append(HStructField(f.dtype, name))\n\n if f.name is not None:\n foundNames.add(f.name)\n\n if isinstance(fieldsToUse, dict):\n fieldsToUse = set(fieldsToUse.keys())\n assert fieldsToUse.issubset(foundNames)\n\n return HStruct(*template)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef walkFlattenFields(sigOrVal, skipPadding=True):\n t = sigOrVal._dtype\n if isinstance(t, Bits):\n yield sigOrVal\n elif isinstance(t, HUnion):\n yield from walkFlattenFields(sigOrVal._val, skipPadding=skipPadding)\n elif isinstance(t, HStruct):\n for f in t.fields:\n isPadding = f.name is None\n if not isPadding or not skipPadding:\n if isPadding:\n v = f.dtype.fromPy(None)\n else:\n v = getattr(sigOrVal, f.name)\n\n yield from walkFlattenFields(v)\n\n elif isinstance(t, HArray):\n for item in sigOrVal:\n yield from walkFlattenFields(item)\n else:\n raise NotImplementedError(t)", "response": "Walk all simple values in HStruct or HArray or HUnion and yield all simple values in HStruct or HArray."} {"SOURCE": 
"codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef HStruct_unpack(structT, data, getDataFn=None, dataWidth=None):\n if getDataFn is None:\n assert dataWidth is not None\n\n def _getDataFn(x):\n return toHVal(x)._auto_cast(Bits(dataWidth))\n\n getDataFn = _getDataFn\n\n val = structT.fromPy(None)\n\n fData = iter(data)\n\n # actual is storage variable for items from frameData\n actualOffset = 0\n actual = None\n\n for v in walkFlattenFields(val, skipPadding=False):\n # walk flatten fields and take values from fData and parse them to\n # field\n required = v._dtype.bit_length()\n\n if actual is None:\n actualOffset = 0\n try:\n actual = getDataFn(next(fData))\n except StopIteration:\n raise Exception(\"Input data too short\")\n\n if dataWidth is None:\n dataWidth = actual._dtype.bit_length()\n actuallyHave = dataWidth\n else:\n actuallyHave = actual._dtype.bit_length() - actualOffset\n\n while actuallyHave < required:\n # collect data for this field\n try:\n d = getDataFn(next(fData))\n except StopIteration:\n raise Exception(\"Input data too short\")\n\n actual = d._concat(actual)\n actuallyHave += dataWidth\n\n if actuallyHave >= required:\n # parse value of actual to field\n # skip padding\n _v = actual[(required + actualOffset):actualOffset]\n _v = _v._auto_cast(v._dtype)\n v.val = _v.val\n v.vldMask = _v.vldMask\n v.updateTime = _v.updateTime\n\n # update slice out what was taken\n actuallyHave -= required\n actualOffset += required\n\n if actuallyHave == 0:\n actual = None\n\n if actual is not None:\n assert actual._dtype.bit_length(\n ) - actualOffset < dataWidth, \"It should be just a padding at the end of frame\"\n\n return val", "response": "Unpack a stream of data words into a value of the given HStruct type."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _convSign(self, signed):\n if isinstance(self, Value):\n return self._convSign__val(signed)\n else:\n if self._dtype.signed 
== signed:\n return self\n t = copy(self._dtype)\n t.signed = signed\n if signed is None:\n cnv = AllOps.BitsAsVec\n elif signed:\n cnv = AllOps.BitsAsSigned\n else:\n cnv = AllOps.BitsAsUnsigned\n\n return Operator.withRes(cnv, [self], t)", "response": "Convert signum no bit manipulation just data are represented\n differently"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconstruct a value from a pythonic value.", "response": "def fromPy(cls, val, typeObj, vldMask=None):\n \"\"\"\n Construct value from pythonic value (int, bytes, enum.Enum member)\n \"\"\"\n assert not isinstance(val, Value)\n if val is None:\n vld = 0\n val = 0\n assert vldMask is None or vldMask == 0\n else:\n allMask = typeObj.all_mask()\n w = typeObj.bit_length()\n if isinstance(val, bytes):\n val = int.from_bytes(\n val, byteorder=\"little\", signed=bool(typeObj.signed))\n else:\n try:\n val = int(val)\n except TypeError as e:\n if isinstance(val, enum.Enum):\n val = int(val.value)\n else:\n raise e\n\n if vldMask is None:\n vld = allMask\n else:\n assert vldMask <= allMask and vldMask >= 0\n vld = vldMask\n\n if val < 0:\n assert typeObj.signed\n assert signFix(val & allMask, w) == val, (\n val, signFix(val & allMask, w))\n val = signFix(val & vld, w)\n else:\n if typeObj.signed:\n msb = 1 << (w - 1)\n if msb & val:\n assert val < 0, val\n\n if val & allMask != val:\n raise ValueError(\n \"Not enought bits to represent value\",\n val, val & allMask)\n val = val & vld\n\n return cls(val, typeObj, vld)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconcatenates this with other to one wider value or signal", "response": "def _concat(self, other):\n \"\"\"\n Concatenate this with other to one wider value/signal\n \"\"\"\n w = self._dtype.bit_length()\n try:\n other_bit_length = other._dtype.bit_length\n except AttributeError:\n raise TypeError(\"Can not concat bits and\", other._dtype)\n\n other_w = 
other_bit_length()\n resWidth = w + other_w\n resT = Bits(resWidth)\n\n if areValues(self, other):\n return self._concat__val(other)\n else:\n w = self._dtype.bit_length()\n other_w = other._dtype.bit_length()\n resWidth = w + other_w\n resT = Bits(resWidth)\n # is instance of signal\n if isinstance(other, InterfaceBase):\n other = other._sig\n if isinstance(other._dtype, Bits):\n if other._dtype.signed is not None:\n other = other._vec()\n elif other._dtype == BOOL:\n other = other._auto_cast(BIT)\n else:\n raise TypeError(other._dtype)\n\n if self._dtype.signed is not None:\n self = self._vec()\n\n return Operator.withRes(AllOps.CONCAT, [self, other], resT)\\\n ._auto_cast(Bits(resWidth,\n signed=self._dtype.signed))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef sensitivity(proc: HWProcess, *sensitiveTo):\n for s in sensitiveTo:\n if isinstance(s, tuple):\n sen, s = s\n if sen == SENSITIVITY.ANY:\n s.simSensProcs.add(proc)\n elif sen == SENSITIVITY.RISING:\n s.simRisingSensProcs.add(proc)\n elif sen == SENSITIVITY.FALLING:\n s.simFallingSensProcs.add(proc)\n else:\n raise AssertionError(sen)\n else:\n s.simSensProcs.add(proc)", "response": "register sensitivity for process\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nevaluating list of values as condition", "response": "def simEvalCond(simulator, *conds):\n \"\"\"\n Evaluate list of values as condition\n \"\"\"\n _cond = True\n _vld = True\n for v in conds:\n val = bool(v.val)\n fullVld = v.vldMask == 1\n if fullVld:\n if not val:\n return False, True\n else:\n return False, False\n\n _cond = _cond and val\n _vld = _vld and fullVld\n\n return _cond, _vld"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconnects a simulation port to another model by name.", "response": "def connectSimPort(simUnit, subSimUnit, srcName, dstName, direction):\n \"\"\"\n Connect ports of 
simulation models by name\n \"\"\"\n if direction == DIRECTION.OUT:\n origPort = getattr(subSimUnit, srcName)\n newPort = getattr(simUnit, dstName)\n setattr(subSimUnit, srcName, newPort)\n else:\n origPort = getattr(subSimUnit, dstName)\n newPort = getattr(simUnit, srcName)\n setattr(subSimUnit, dstName, newPort)\n\n subSimUnit._ctx.signals.remove(origPort)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates value updater for simulation", "response": "def mkUpdater(nextVal: Value, invalidate: bool):\n \"\"\"\n Create value updater for simulation\n\n :param nextVal: instance of Value which will be asssiggned to signal\n :param invalidate: flag which tells if value has been compromised\n and if it should be invaidated\n :return: function(value) -> tuple(valueHasChangedFlag, nextVal)\n \"\"\"\n\n def updater(currentVal):\n _nextVal = nextVal.clone()\n if invalidate:\n _nextVal.vldMask = 0\n return (valueHasChanged(currentVal, _nextVal), _nextVal)\n return updater"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate value updater for simulation for value of array type Taxonomy", "response": "def mkArrayUpdater(nextItemVal: Value, indexes: Tuple[Value],\n invalidate: bool):\n \"\"\"\n Create value updater for simulation for value of array type\n\n :param nextVal: instance of Value which will be asssiggned to signal\n :param indexes: tuple on indexes where value should be updated\n in target array\n\n :return: function(value) -> tuple(valueHasChangedFlag, nextVal)\n \"\"\"\n def updater(currentVal):\n if len(indexes) > 1:\n raise NotImplementedError(\"[TODO] implement for more indexes\")\n\n _nextItemVal = nextItemVal.clone()\n if invalidate:\n _nextItemVal.vldMask = 0\n\n index = indexes[0]\n change = valueHasChanged(currentVal._getitem__val(index), _nextItemVal)\n currentVal._setitem__val(index, _nextItemVal)\n return (change, currentVal)\n\n return updater"} {"SOURCE": 
"codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new instance of the class from a Python dictionary.", "response": "def fromPy(cls, val, typeObj, vldMask=None):\n \"\"\"\n :param val: None or dictionary {index:value} or iterrable of values\n :param vldMask: if is None validity is resolved from val\n if is 0 value is invalidated\n if is 1 value has to be valid\n \"\"\"\n size = evalParam(typeObj.size)\n if isinstance(size, Value):\n size = int(size)\n\n elements = {}\n if vldMask == 0:\n val = None\n\n if val is None:\n pass\n elif isinstance(val, dict):\n for k, v in val.items():\n if not isinstance(k, int):\n k = int(k)\n elements[k] = typeObj.elmType.fromPy(v)\n else:\n for k, v in enumerate(val):\n if isinstance(v, RtlSignalBase): # is signal\n assert v._dtype == typeObj.elmType\n e = v\n else:\n e = typeObj.elmType.fromPy(v)\n elements[k] = e\n\n _mask = int(bool(val))\n if vldMask is None:\n vldMask = _mask\n else:\n assert (vldMask == _mask)\n\n return cls(elements, typeObj, vldMask)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _getitem__val(self, key):\n try:\n kv = key.val\n if not key._isFullVld():\n raise KeyError()\n else:\n if kv >= self._dtype.size:\n raise KeyError()\n\n return self.val[kv].clone()\n except KeyError:\n return self._dtype.elmType.fromPy(None)", "response": "get item from array"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate hdl vector value", "response": "def vec(val, width, signed=None):\n \"\"\"create hdl vector value\"\"\"\n return Bits(width, signed, forceVector=True).fromPy(val)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef monitor(self, sim):\n r = sim.read\n if self.notReset(sim):\n # update rd signal only if required\n if self._lastRd is not 1:\n self.wrRd(sim.write, 1)\n self._lastRd = 1\n\n # try to run onMonitorReady if there is 
any\n try:\n onMonitorReady = self.onMonitorReady\n except AttributeError:\n onMonitorReady = None\n\n if onMonitorReady is not None:\n onMonitorReady(sim)\n\n # wait for response of master\n yield sim.waitOnCombUpdate()\n vld = self.isVld(r)\n assert vld.vldMask, (sim.now, self.intf,\n \"vld signal is in invalid state\")\n\n if vld.val:\n # master responded with positive ack, do read data\n d = self.doRead(sim)\n if self._debugOutput is not None:\n self._debugOutput.write(\n \"%s, read, %d: %r\\n\" % (\n self.intf._getFullName(),\n sim.now, d))\n self.data.append(d)\n if self._afterRead is not None:\n self._afterRead(sim)\n else:\n if self._lastRd is not 0:\n # can not receive, say it to masters\n self.wrRd(sim.write, 0)\n self._lastRd = 0", "response": "Collect data from master and master"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef bit_length(self):\n try:\n itemSize = self.elmType.bit_length\n except AttributeError:\n itemSize = None\n if itemSize is None:\n raise TypeError(\n \"Can not determine size of array because item has\"\n \" not determinable size\")\n\n s = self.size\n if isinstance(s, RtlSignalBase):\n s = int(s.staticEval())\n return s * itemSize()", "response": "Returns the bit length of the element in the array."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nevaluating a param object", "response": "def evalParam(p):\n \"\"\"\n Get value of parameter\n \"\"\"\n while isinstance(p, Param):\n p = p.get()\n\n if isinstance(p, RtlSignalBase):\n return p.staticEval()\n # use rather param inheritance instead of param as param value\n return toHVal(p)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef set(self, val):\n assert not self.__isReadOnly, \\\n (\"This parameter(%s) was locked\"\n \" and now it can not be changed\" % self.name)\n assert self.replacedWith is None, \\\n (\"This param was 
replaced with new one and this \"\n \"should not exists\")\n\n val = toHVal(val)\n self.defVal = val\n self._val = val.staticEval()\n self._dtype = self._val._dtype", "response": "set the value of the parameter"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef HTypeFromIntfMap(interfaceMap):\n structFields = []\n\n for m in interfaceMap:\n f = HTypeFromIntfMapItem(m)\n structFields.append(f)\n\n return HStruct(*structFields)", "response": "Generate flattened register map for HStruct\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef registerMUX(self, stm: Union[HdlStatement, Operator], sig: RtlSignal,\n inputs_cnt: int):\n \"\"\"\n mux record is in format (self.MUX, n, m)\n where n is number of bits of this mux\n and m is number of possible inputs\n \"\"\"\n assert inputs_cnt > 1\n res = self.resources\n w = sig._dtype.bit_length()\n k = (ResourceMUX, w, inputs_cnt)\n res[k] = res.get(k, 0) + 1\n\n self.resource_for_object[(stm, sig)] = k", "response": "Register a mux record for the given statement and signal."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef finalize(self):\n ff_to_remove = 0\n res = self.resources\n for m, addrDict in self.memories.items():\n rwSyncPorts, rSyncPorts, wSyncPorts = 0, 0, 0\n rwAsyncPorts, rAsyncPorts, wAsyncPorts = 0, 0, 0\n rSync_wAsyncPorts, rAsync_wSyncPorts = 0, 0\n\n for _, (rSync, wSync, rAsync, wAsync) in addrDict.items():\n if rSync:\n ff_to_remove += rSync * m._dtype.elmType.bit_length()\n\n # resolve port count for this addr signal\n rwSync = min(rSync, wSync)\n rSync -= rwSync\n wSync -= rwSync\n\n rwAsync = min(rAsync, wAsync)\n rAsync -= rwAsync\n wAsync -= rwAsync\n\n rSync_wAsync = min(rSync, wAsync)\n rSync -= rSync_wAsync\n wAsync -= rSync_wAsync\n\n rAsync_wSync = min(rAsync, wSync)\n rAsync -= rAsync_wSync\n wSync -= rAsync_wSync\n\n # update port counts for 
mem\n rwSyncPorts += rwSync\n rSyncPorts += rSync\n wSyncPorts += wSync\n rwAsyncPorts += rwAsync\n rAsyncPorts += rAsync\n wAsyncPorts += wAsync\n\n rSync_wAsyncPorts += rSync_wAsync\n rAsync_wSyncPorts += rAsync_wSync\n k = ResourceRAM(m._dtype.elmType.bit_length(),\n int(m._dtype.size),\n rwSyncPorts, rSyncPorts, wSyncPorts,\n rSync_wAsyncPorts,\n rwAsyncPorts, rAsyncPorts, wAsyncPorts,\n rAsync_wSyncPorts)\n res[k] = res.get(k, 0) + 1\n\n self.memories.clear()\n\n # remove register on read ports which will be merged into ram\n if ff_to_remove:\n ff_cnt = res[ResourceFF]\n ff_cnt -= ff_to_remove\n if ff_cnt:\n res[ResourceFF] = ff_cnt\n else:\n del res[ResourceFF]", "response": "Finalize the internal state of the object."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntries lookup operator with this parameters in _usedOps if not found create new one and soter it in _usedOps :param operator: instance of OpDefinition :param opCreateDelegate: function (*ops) to create operator :param otherOps: other operands (ops = self + otherOps) :return: RtlSignal which is result of newly created operator", "response": "def naryOp(self, operator, opCreateDelegate, *otherOps) -> RtlSignalBase:\n \"\"\"\n Try lookup operator with this parameters in _usedOps\n if not found create new one and soter it in _usedOps\n\n :param operator: instance of OpDefinition\n :param opCreateDelegate: function (*ops) to create operator\n :param otherOps: other operands (ops = self + otherOps)\n\n :return: RtlSignal which is result of newly created operator\n \"\"\"\n k = (operator, *otherOps)\n used = self._usedOps\n try:\n return used[k]\n except KeyError:\n pass\n\n o = opCreateDelegate(self, *otherOps)\n\n # input operads may be type converted,\n # search if this happend, and return always same result signal\n try:\n op_instanciated = (o.origin.operator == operator\n and o.origin.operands[0] is self)\n except AttributeError:\n op_instanciated = 
False\n\n if op_instanciated:\n k_real = (operator, *o.origin.operands[1:])\n real_o = used.get(k_real, None)\n if real_o is not None:\n # destroy newly created operator and result, because it is same\n # as\n ctx = self.ctx\n if ctx is not None:\n ctx.signals.remove(o)\n\n op = o.origin\n o.origin = None\n o.drivers.clear()\n for inp in op.operands:\n if isinstance(inp, RtlSignalBase):\n inp.endpoints.remove(op)\n\n o = real_o\n else:\n used[k_real] = o\n\n used[k] = o\n\n return o"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _eq(self, other):\n return self.naryOp(AllOps.EQ, tv(self)._eq, other)", "response": "Compare two sets of keys and return the first one."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfinding out if this signal is something indexed", "response": "def _getIndexCascade(self):\n \"\"\"\n Find out if this signal is something indexed\n \"\"\"\n try:\n # now I am result of the index xxx[xx] <= source\n # get index op\n d = self.singleDriver()\n try:\n op = d.operator\n except AttributeError:\n return\n\n if op == AllOps.INDEX:\n # get signal on which is index applied\n indexedOn = d.operands[0]\n if isinstance(indexedOn, RtlSignalBase):\n # [TODO] multidimensional indexing\n return indexedOn, [d.operands[1]]\n else:\n raise Exception(\n \"can not drive static value %r\" % indexedOn)\n\n except (MultipleDriversErr, NoDriverErr):\n pass"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconstruct a new value from a Python object.", "response": "def fromPy(self, v, vldMask=None):\n \"\"\"\n Construct value of this type.\n Delegated on value class for this type\n \"\"\"\n return self.getValueCls().fromPy(v, self, vldMask=vldMask)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncasting value or signal of this type to another compatible type.", "response": "def auto_cast(self, sigOrVal, 
toType):\n \"\"\"\n Cast value or signal of this type to another compatible type.\n\n :param sigOrVal: instance of signal or value to cast\n :param toType: instance of HdlType to cast into\n \"\"\"\n if sigOrVal._dtype == toType:\n return sigOrVal\n\n try:\n c = self._auto_cast_fn\n except AttributeError:\n c = self.get_auto_cast_fn()\n self._auto_cast_fn = c\n\n return c(self, sigOrVal, toType)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncasting value or signal of this type to another type of same size.", "response": "def reinterpret_cast(self, sigOrVal, toType):\n \"\"\"\n Cast value or signal of this type to another type of same size.\n\n :param sigOrVal: instance of signal or value to cast\n :param toType: instance of HdlType to cast into\n \"\"\"\n try:\n return self.auto_cast(sigOrVal, toType)\n except TypeConversionErr:\n pass\n\n try:\n r = self._reinterpret_cast_fn\n except AttributeError:\n r = self.get_reinterpret_cast_fn()\n self._reinterpret_cast_fn = r\n\n return r(self, sigOrVal, toType)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef walkParams(intf, discovered):\n for si in intf._interfaces:\n yield from walkParams(si, discovered)\n\n for p in intf._params:\n if p not in discovered:\n discovered.add(p)\n yield p", "response": "Walk the parameters of the given interface and yield the parameter instances that are not already in the given set."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef connectPacked(srcPacked, dstInterface, exclude=None):\n offset = 0\n connections = []\n for i in reversed(list(walkPhysInterfaces(dstInterface))):\n if exclude is not None and i in exclude:\n continue\n sig = i._sig\n t = sig._dtype\n if t == BIT:\n s = srcPacked[offset]\n offset += 1\n else:\n w = t.bit_length()\n s = srcPacked[(w + offset): offset]\n offset += w\n connections.append(sig(s))\n\n 
return connections", "response": "Connect a packed src vector signal to another structuralized interface."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\npack all signals to one big signal recursively.", "response": "def packIntf(intf, masterDirEqTo=DIRECTION.OUT, exclude=None):\n \"\"\"\n Concatenate all signals to one big signal, recursively\n\n :param masterDirEqTo: only signals with this direction are packed\n :param exclude: sequence of signals/interfaces to exclude\n \"\"\"\n if not intf._interfaces:\n if intf._masterDir == masterDirEqTo:\n return intf._sig\n return None\n\n res = None\n for i in intf._interfaces:\n if exclude is not None and i in exclude:\n continue\n\n if i._interfaces:\n if i._masterDir == DIRECTION.IN:\n d = DIRECTION.opposite(masterDirEqTo)\n else:\n d = masterDirEqTo\n s = i._pack(d, exclude=exclude)\n else:\n if i._masterDir == masterDirEqTo:\n s = i._sig\n else:\n s = None\n\n if s is not None:\n if res is None:\n res = s\n else:\n res = res._concat(s)\n\n return res"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nyields all the related objects for this unit in the given targetPlatform.", "response": "def _toRtl(self, targetPlatform: DummyPlatform):\n \"\"\"\n synthesize all subunits, make connections between them,\n build entity and component for this unit\n \"\"\"\n assert not self._wasSynthetised()\n\n self._targetPlatform = targetPlatform\n if not hasattr(self, \"_name\"):\n self._name = self._getDefaultName()\n\n for proc in targetPlatform.beforeToRtl:\n proc(self)\n\n self._ctx.params = self._buildParams()\n self._externInterf = []\n\n # prepare subunits\n for u in self._units:\n yield from u._toRtl(targetPlatform)\n\n for u in self._units:\n subUnitName = u._name\n u._signalsForMyEntity(self._ctx, \"sig_\" + subUnitName)\n\n # prepare signals for interfaces\n for i in self._interfaces:\n signals = i._signalsForInterface(self._ctx)\n if i._isExtern:\n 
self._externInterf.extend(signals)\n\n for proc in targetPlatform.beforeToRtlImpl:\n proc(self)\n self._loadMyImplementations()\n yield from self._lazyLoaded\n\n if not self._externInterf:\n raise IntfLvlConfErr(\n \"Can not find any external interface for unit %s\"\n \"- unit without interfaces are not allowed\"\n % self._name)\n\n for proc in targetPlatform.afterToRtlImpl:\n proc(self)\n\n yield from self._synthetiseContext(self._externInterf)\n self._checkArchCompInstances()\n\n for proc in targetPlatform.afterToRtl:\n proc(self)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads all declarations from _decl method recursively Load all declarations from _decl() method recursively Load all declarations from _declr collector", "response": "def _loadDeclarations(self):\n \"\"\"\n Load all declarations from _decl() method, recursively\n for all interfaces/units.\n \"\"\"\n if not hasattr(self, \"_interfaces\"):\n self._interfaces = []\n if not hasattr(self, \"_private_interfaces\"):\n self._private_interfaces = []\n if not hasattr(self, \"_units\"):\n self._units = []\n self._setAttrListener = self._declrCollector\n self._declr()\n self._setAttrListener = None\n for i in self._interfaces:\n self._loadInterface(i, True)\n\n # if I am a unit load subunits\n for u in self._units:\n u._loadDeclarations()\n for p in self._params:\n p.setReadOnly()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nregisters an interface in the implementation phase", "response": "def _registerIntfInImpl(self, iName, intf):\n \"\"\"\n Register interface in implementation phase\n \"\"\"\n self._registerInterface(iName, intf, isPrivate=True)\n self._loadInterface(intf, False)\n intf._signalsForInterface(self._ctx)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreverses byte order of signal or value.", "response": "def reverseByteOrder(signalOrVal):\n \"\"\"\n 
Reverse byteorder (littleendian/bigendian) of signal or value\n \"\"\"\n w = signalOrVal._dtype.bit_length()\n i = w\n items = []\n\n while i > 0:\n # take last 8 bytes or rest\n lower = max(i - 8, 0)\n items.append(signalOrVal[i:lower])\n i -= 8\n\n return Concat(*items)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef tryReduceAnd(sig, val):\n m = sig._dtype.all_mask()\n if val._isFullVld():\n v = val.val\n if v == m:\n return sig\n elif v == 0:\n return val", "response": "Try to reduce a single or multiple expression into a single or multiple expression."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef tryReduceXor(sig, val):\n m = sig._dtype.all_mask()\n if not val.vldMask:\n return val\n\n if val._isFullVld():\n v = val.val\n if v == m:\n return ~sig\n elif v == 0:\n return sig", "response": "Try to reduce a single term and a single value."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the base name scope of a class.", "response": "def getBaseNameScope(cls):\n \"\"\"\n Get root of name space\n \"\"\"\n s = NameScope(False)\n s.setLevel(1)\n s[0].update(cls._keywords_dict)\n return s"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef asHdl(cls, obj, ctx: SerializerCtx):\n if isinstance(obj, RtlSignalBase):\n return cls.SignalItem(obj, ctx)\n elif isinstance(obj, Value):\n return cls.Value(obj, ctx)\n else:\n try:\n serFn = getattr(cls, obj.__class__.__name__)\n except AttributeError:\n raise SerializerException(\"Not implemented for\", obj)\n return serFn(obj, ctx)", "response": "Convert object to HDL string"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef Entity(cls, ent: Entity, ctx: SerializerCtx):\n\n ent.name = ctx.scope.checkedName(ent.name, ent, 
isGlobal=True)\n return \"\"", "response": "This is just forward declaration of Architecture and it is not used\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef serializationDecision(cls, obj, serializedClasses,\n serializedConfiguredUnits):\n \"\"\"\n Decide if this unit should be serialized or not eventually fix name\n to fit same already serialized unit\n\n :param obj: object to serialize\n :param serializedClasses: dict {unitCls : unitobj}\n :param serializedConfiguredUnits: (unitCls, paramsValues) : unitObj\n where paramsValues are named tuple name:value\n \"\"\"\n isDeclaration = isinstance(obj, Entity)\n isDefinition = isinstance(obj, Architecture)\n if isDeclaration:\n unit = obj.origin\n elif isDefinition:\n unit = obj.entity.origin\n else:\n return True\n\n assert isinstance(unit, Unit)\n sd = unit._serializeDecision\n if sd is None:\n return True\n else:\n prevPriv = serializedClasses.get(unit.__class__, None)\n seriazlize, nextPriv = sd(unit, obj, isDeclaration, prevPriv)\n serializedClasses[unit.__class__] = nextPriv\n return seriazlize", "response": "decides if this unit should be serialized or not eventually fix name\n to fit same already serialized unit"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a string representation of a IfContainer.", "response": "def IfContainer(cls, ifc: IfContainer, ctx: SerializerCtx):\n \"\"\"\n Srialize IfContainer instance\n \"\"\"\n childCtx = ctx.withIndent()\n\n def asHdl(statements):\n return [cls.asHdl(s, childCtx) for s in statements]\n\n try:\n cond = cls.condAsHdl(ifc.cond, True, ctx)\n except UnsupportedEventOpErr as e:\n cond = None\n\n if cond is None:\n assert not ifc.elIfs\n assert not ifc.ifFalse\n stmBuff = [cls.asHdl(s, ctx) for s in ifc.ifTrue]\n return \"\\n\".join(stmBuff)\n\n elIfs = []\n ifTrue = ifc.ifTrue\n ifFalse = ifc.ifFalse\n if ifFalse is None:\n ifFalse = []\n\n for c, statements in 
ifc.elIfs:\n try:\n elIfs.append((cls.condAsHdl(c, True, ctx), asHdl(statements)))\n except UnsupportedEventOpErr as e:\n if len(ifc.elIfs) == 1 and not ifFalse:\n # register expression is in valid format and this\n # is just register with asynchronous reset or etc...\n ifFalse = statements\n else:\n raise e\n\n return cls.ifTmpl.render(\n indent=getIndent(ctx.indent),\n cond=cond,\n ifTrue=asHdl(ifTrue),\n elIfs=elIfs,\n ifFalse=asHdl(ifFalse))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef pullDownAfter(sig, initDelay=6 * Time.ns):\n def _pullDownAfter(s):\n s.write(True, sig)\n yield s.wait(initDelay)\n s.write(False, sig)\n\n return _pullDownAfter", "response": "A simulation driver which keeps the signal value high for initDelay\n then it sets value to 0"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the original cond and negated flag", "response": "def getBaseCond(c):\n \"\"\"\n if is negated return original cond and negated flag\n \"\"\"\n isNegated = False\n try:\n drivers = c.drivers\n except AttributeError:\n return (c, isNegated)\n\n if len(drivers) == 1:\n d = list(c.drivers)[0]\n if isinstance(d, Operator) and d.operator == AllOps.NOT:\n c = d.operands[0]\n isNegated = True\n\n return (c, isNegated)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef simBitsT(width: int, signed: Union[bool, None]):\n k = (width, signed)\n try:\n return __simBitsTCache[k]\n except KeyError:\n t = SimBitsT(width, signed)\n __simBitsTCache[k] = t\n return t", "response": "Construct a SimBitsT with the specified width and signed flag."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fromPy(cls, val, typeObj, vldMask=None):\n if vldMask == 0:\n val = None\n return cls(val, typeObj)", "response": "Create a new HdlType from a Python value."} {"SOURCE": "codesearchnet", 
"instruction": "Here you have a function in Python 3, explain what it does\ndef VectSignal(width,\n signed=None,\n masterDir=D.OUT,\n loadConfig=True):\n \"\"\"\n Create basic :class:`.Signal` interface where type is vector\n \"\"\"\n return Signal(masterDir,\n Bits(width, signed, forceVector=True),\n loadConfig)", "response": "Create a vector sequence."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets constant name for value", "response": "def getConstName(self, val):\n \"\"\"\n Get constant name for value\n name of constant is reused if same value was used before\n \"\"\"\n try:\n return self._cache[val]\n except KeyError:\n if isinstance(val.val, int):\n name = \"const_%d_\" % val.val\n else:\n name = \"const_\"\n\n c = self.nameCheckFn(name, val)\n self._cache[val] = c\n return c"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the set of cut off statements which are driver of specified signal", "response": "def _cut_off_drivers_of(self, sig: RtlSignalBase):\n \"\"\"\n Cut off statements which are driver of specified signal\n \"\"\"\n if self.dst is sig:\n self.parentStm = None\n return self\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _loadFromArray(self, dtype: HdlType, bitAddr: int) -> int:\n self.itemCnt = evalParam(dtype.size).val\n self.children = TransTmpl(\n dtype.elmType, 0, parent=self, origin=self.origin)\n return bitAddr + self.itemCnt * self.children.bitAddrEnd", "response": "Parse an array of items and create a TransTmpl instance."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse HStruct type to this transaction template instance and add it to the list of TransTmpl objects.", "response": "def _loadFromHStruct(self, dtype: HdlType, bitAddr: int):\n \"\"\"\n Parse HStruct type to this transaction template instance\n\n :return: address of it's end\n 
\"\"\"\n for f in dtype.fields:\n t = f.dtype\n origin = f\n isPadding = f.name is None\n\n if isPadding:\n width = t.bit_length()\n bitAddr += width\n else:\n fi = TransTmpl(t, bitAddr, parent=self, origin=origin)\n self.children.append(fi)\n bitAddr = fi.bitAddrEnd\n\n return bitAddr"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing HUnion type to this transaction template instance and add it to the list of children", "response": "def _loadFromUnion(self, dtype: HdlType, bitAddr: int) -> int:\n \"\"\"\n Parse HUnion type to this transaction template instance\n\n :return: address of it's end\n \"\"\"\n for field in dtype.fields.values():\n ch = TransTmpl(field.dtype, 0, parent=self, origin=field)\n self.children.append(ch)\n return bitAddr + dtype.bit_length()"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nparsing HUnion type to this transaction template instance instance", "response": "def _loadFromHStream(self, dtype: HStream, bitAddr: int) -> int:\n \"\"\"\n Parse HUnion type to this transaction template instance\n\n :return: address of it's end\n \"\"\"\n ch = TransTmpl(dtype.elmType, 0, parent=self, origin=self.origin)\n self.children.append(ch)\n return bitAddr + dtype.elmType.bit_length()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _loadFromHType(self, dtype: HdlType, bitAddr: int) -> None:\n self.bitAddr = bitAddr\n childrenAreChoice = False\n if isinstance(dtype, Bits):\n ld = self._loadFromBits\n elif isinstance(dtype, HStruct):\n ld = self._loadFromHStruct\n elif isinstance(dtype, HArray):\n ld = self._loadFromArray\n elif isinstance(dtype, HStream):\n ld = self._loadFromHStream\n elif isinstance(dtype, HUnion):\n ld = self._loadFromUnion\n childrenAreChoice = True\n else:\n raise TypeError(\"expected instance of HdlType\", dtype)\n\n self.bitAddrEnd = ld(dtype, bitAddr)\n self.childrenAreChoice = childrenAreChoice", 
"response": "Parse any HDL type to this transaction template instance and load it into memory."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the width of the item in the original array.", "response": "def getItemWidth(self) -> int:\n \"\"\"\n Only for transactions derived from HArray\n\n :return: width of item in original array\n \"\"\"\n if not isinstance(self.dtype, HArray):\n raise TypeError()\n return (self.bitAddrEnd - self.bitAddr) // self.itemCnt"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef walkFlatten(self, offset: int=0,\n shouldEnterFn=_default_shouldEnterFn,\n otherObjItCtx: ObjIteratorCtx =_DummyIteratorCtx()\n ) -> Generator[\n Union[Tuple[Tuple[int, int], 'TransTmpl'], 'OneOfTransaction'],\n None, None]:\n \"\"\"\n Walk fields in instance of TransTmpl\n\n :param offset: optional offset for all children in this TransTmpl\n :param shouldEnterFn: function (transTmpl) which returns True\n when field should be split on it's children\n :param shouldEnterFn: function(transTmpl) which should return\n (shouldEnter, shouldUse) where shouldEnter is flag that means\n iterator should look inside of this actual object\n and shouldUse flag means that this field should be used\n (=generator should yield it)\n :return: generator of tuples ((startBitAddress, endBitAddress),\n TransTmpl instance)\n \"\"\"\n\n t = self.dtype\n base = self.bitAddr + offset\n end = self.bitAddrEnd + offset\n\n shouldEnter, shouldYield = shouldEnterFn(self)\n if shouldYield:\n yield ((base, end), self)\n\n if shouldEnter:\n if isinstance(t, Bits):\n pass\n elif isinstance(t, HStruct):\n for ch in self.children:\n with otherObjItCtx(ch.origin.name):\n yield from ch.walkFlatten(\n offset,\n shouldEnterFn,\n otherObjItCtx)\n elif isinstance(t, HArray):\n itemSize = (self.bitAddrEnd - self.bitAddr) // self.itemCnt\n for i in range(self.itemCnt):\n with otherObjItCtx(i):\n yield from 
self.children.walkFlatten(\n base + i * itemSize,\n shouldEnterFn,\n otherObjItCtx)\n elif isinstance(t, HUnion):\n yield OneOfTransaction(self, offset, shouldEnterFn,\n self.children)\n elif isinstance(t, HStream):\n assert len(self.children) == 1\n yield StreamTransaction(self, offset, shouldEnterFn,\n self.children[0])\n else:\n raise TypeError(t)", "response": "Walks through all children of this TransTmpl and yields a generator of tuples."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef walkFlattenChilds(self) -> Generator[\n Union[Tuple[Tuple[int, int], TransTmpl], 'OneOfTransaction'],\n None, None]:\n \"\"\"\n :return: generator of generators of tuples\n ((startBitAddress, endBitAddress), TransTmpl instance)\n for each possiblility in this transaction\n \"\"\"\n for p in self.possibleTransactions:\n yield p.walkFlatten(offset=self.offset,\n shouldEnterFn=self.shouldEnterFn)", "response": "Returns a generator that yields all possible transactions in this transaction."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconverting negative int to positive int which has same bits set", "response": "def signFix(val, width):\n \"\"\"\n Convert negative int to positive int which has same bits set\n \"\"\"\n if val > 0:\n msb = 1 << (width - 1)\n if val & msb:\n val -= mask(width) + 1\n return val"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncomparing two sets of bits.", "response": "def bitsCmp(self, other, op, evalFn=None):\n \"\"\"\n :attention: If other is Bool signal convert this to bool (not ideal,\n due VHDL event operator)\n \"\"\"\n other = toHVal(other)\n t = self._dtype\n ot = other._dtype\n\n iamVal = isinstance(self, Value)\n otherIsVal = isinstance(other, Value)\n\n if evalFn is None:\n evalFn = op._evalFn\n\n if iamVal and otherIsVal:\n if ot == BOOL:\n self = self._auto_cast(BOOL)\n elif t == ot:\n pass\n elif isinstance(ot, Integer):\n other = 
other._auto_cast(t)\n else:\n raise TypeError(\"Values of types (%r, %r) are not comparable\" % (\n self._dtype, other._dtype))\n\n return bitsCmp__val(self, other, op, evalFn)\n else:\n if ot == BOOL:\n self = self._auto_cast(BOOL)\n elif t == ot:\n pass\n elif isinstance(ot, Integer):\n other = other._auto_cast(self._dtype)\n else:\n raise TypeError(\"Values of types (%r, %r) are not comparable\" % (\n self._dtype, other._dtype))\n\n # try to reduce useless cmp\n res = None\n if otherIsVal and other._isFullVld():\n res = bitsCmp_detect_useless_cmp(self, other, op)\n elif iamVal and self._isFullVld():\n res = bitsCmp_detect_useless_cmp(other, self, CMP_OP_REVERSE[op])\n\n if res is None:\n pass\n elif isinstance(res, Value):\n return res\n else:\n assert res == AllOps.EQ, res\n op = res\n\n return Operator.withRes(op, [self, other], BOOL)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef bitsBitOp(self, other, op, getVldFn, reduceCheckFn):\n other = toHVal(other)\n\n iamVal = isinstance(self, Value)\n otherIsVal = isinstance(other, Value)\n\n if iamVal and otherIsVal:\n other = other._auto_cast(self._dtype)\n return bitsBitOp__val(self, other, op, getVldFn)\n else:\n if other._dtype == BOOL:\n self = self._auto_cast(BOOL)\n return op._evalFn(self, other)\n elif self._dtype == other._dtype:\n pass\n else:\n raise TypeError(\"Can not apply operator %r (%r, %r)\" %\n (op, self._dtype, other._dtype))\n\n if otherIsVal:\n r = reduceCheckFn(self, other)\n if r is not None:\n return r\n\n elif iamVal:\n r = reduceCheckFn(other, self)\n if r is not None:\n return r\n\n return Operator.withRes(op, [self, other], self._dtype)", "response": "Apply a bit - bit operation to the current and other objects."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef fromPy(cls, val, typeObj, vldMask=None):\n if val is None:\n assert vldMask is None or vldMask == 0\n valid 
= False\n val = typeObj._allValues[0]\n else:\n if vldMask is None or vldMask == 1:\n assert isinstance(val, str)\n valid = True\n else:\n valid = False\n val = None\n\n return cls(val, typeObj, valid)", "response": "Create an HEnum object from a python value."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndiscover the sensitivity of the current state of the current state and adds it to the set of known evolution types.", "response": "def _discover_sensitivity(self, seen) -> None:\n \"\"\"\n Doc on parent class :meth:`HdlStatement._discover_sensitivity`\n \"\"\"\n assert self._sensitivity is None, self\n ctx = self._sensitivity = SensitivityCtx()\n\n casual_sensitivity = set()\n self.switchOn._walk_sensitivity(casual_sensitivity, seen, ctx)\n if ctx.contains_ev_dependency:\n raise HwtSyntaxError(\n \"Can not switch on event operator result\", self.switchOn)\n ctx.extend(casual_sensitivity)\n\n for stm in self._iter_stms():\n stm._discover_sensitivity(seen)\n ctx.extend(stm._sensitivity)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfills the enclosure with the ones that are not already in the enclosure.", "response": "def _fill_enclosure(self, enclosure: Dict[RtlSignalBase, HdlStatement]) -> None:\n \"\"\"\n :attention: enclosure has to be discoverd first use _discover_enclosure() method\n \"\"\"\n select = []\n outputs = self._outputs\n for e in enclosure.keys():\n if e in outputs:\n select.append(e)\n\n for (_, stms), e in zip(self.cases, self._case_enclosed_for):\n fill_stm_list_with_enclosure(self, e, stms, select, enclosure)\n e.update(select)\n\n t = self.switchOn._dtype\n default_required = len(self.cases) < t.domain_size()\n\n if self.default is not None or default_required:\n self.default = fill_stm_list_with_enclosure(\n self, self._default_enclosed_for, self.default, select, enclosure)\n self._default_enclosed_for.update(select)\n\n self._enclosed_for.update(select)"} 
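The `fromPy` record above encodes a common validity-mask convention: a `None` payload (or a zero mask) produces a placeholder value flagged invalid, while a present payload with mask `1` produces a valid value. A minimal self-contained sketch of that convention, assuming illustrative names (`EnumVal`, `from_py`, `all_values` are not the hwt API):

```python
class EnumVal:
    """Minimal stand-in for an enum value carrying a validity flag."""

    def __init__(self, val, all_values, valid):
        self.val = val              # payload (one of all_values, or None)
        self.all_values = all_values
        self.valid = valid          # True only when payload is defined

    @classmethod
    def from_py(cls, val, all_values, vld_mask=None):
        # None payload (or explicit zero mask) -> invalid placeholder
        if val is None:
            assert vld_mask is None or vld_mask == 0
            return cls(all_values[0], all_values, valid=False)
        # present payload with default/full mask -> valid value
        if vld_mask is None or vld_mask == 1:
            assert isinstance(val, str)
            return cls(val, all_values, valid=True)
        # present payload but zero mask -> value is discarded as invalid
        return cls(None, all_values, valid=False)
```

The single-bit mask mirrors the source: an enum behaves like one atomic field, so validity is all-or-nothing rather than per-bit.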
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _iter_stms(self):\n for _, stms in self.cases:\n yield from stms\n\n if self.default is not None:\n yield from self.default", "response": "Iterate over the statements in all cases and in the default branch."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _is_mergable(self, other) -> bool:\n if not isinstance(other, SwitchContainer):\n return False\n\n if not (self.switchOn is other.switchOn and\n len(self.cases) == len(other.cases) and\n self._is_mergable_statement_list(self.default, other.default)):\n return False\n\n for (vA, caseA), (vB, caseB) in zip(self.cases, other.cases):\n if vA != vB or not self._is_mergable_statement_list(caseA, caseB):\n return False\n\n return True", "response": "Returns True if this SwitchContainer can be merged into other SwitchContainer."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmerge other statement into this one.", "response": "def _merge_with_other_stm(self, other: \"IfContainer\") -> None:\n \"\"\"\n Merge other statement into this statement\n \"\"\"\n merge = self._merge_statement_lists\n newCases = []\n for (c, caseA), (_, caseB) in zip(self.cases, other.cases):\n newCases.append((c, merge(caseA, caseB)))\n\n self.cases = newCases\n\n if self.default is not None:\n self.default = merge(self.default, other.default)\n\n self._on_merge(other)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ntrying to reduce this statement and its child statement lists.", "response": "def _try_reduce(self) -> Tuple[List[\"HdlStatement\"], bool]:\n \"\"\"\n Doc on parent class :meth:`HdlStatement._try_reduce`\n \"\"\"\n io_change = False\n\n new_cases = []\n for val, statements in self.cases:\n _statements, rank_decrease, _io_change = self._try_reduce_list(\n statements)\n io_change |= _io_change\n self.rank -= rank_decrease\n 
new_cases.append((val, _statements))\n self.cases = new_cases\n\n if self.default is not None:\n self.default, rank_decrease, _io_change = self._try_reduce_list(\n self.default)\n self.rank -= rank_decrease\n io_change |= _io_change\n\n reduce_self = not self._condHasEffect()\n if reduce_self:\n if self.cases:\n res = self.cases[0][1]\n elif self.default is not None:\n res = self.default\n else:\n res = []\n else:\n res = [self, ]\n\n self._on_reduce(reduce_self, io_change, res)\n\n if not self.default:\n t = self.switchOn._dtype\n if isinstance(t, HEnum):\n dom_size = t.domain_size()\n val_cnt = len(t._allValues)\n if len(self.cases) == val_cnt and val_cnt < dom_size:\n # bit representation is not fully matching enum description\n # need to set last case as default to prevent latches\n _, stms = self.cases.pop()\n self.default = stms\n\n return res, io_change"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _condHasEffect(self) -> bool:\n if not self.cases:\n return False\n\n # [TODO]\n type_domain_covered = bool(self.default) or len(\n self.cases) == self.switchOn._dtype.domain_size()\n\n stmCnt = len(self.cases[0])\n if type_domain_covered and reduce(\n and_,\n [len(stm) == stmCnt\n for _, stm in self.cases],\n True) and (self.default is None\n or len(self.default) == stmCnt):\n stms = list(self._iter_stms())\n if statementsAreSame(stms):\n return False\n else:\n return True\n return True", "response": "Returns True if statements in branches has different effect."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn True if self and other are the same as other.", "response": "def isSame(self, other: HdlStatement) -> bool:\n \"\"\"\n Doc on parent class :meth:`HdlStatement.isSame`\n \"\"\"\n if self is other:\n return True\n\n if self.rank != other.rank:\n return False\n\n if isinstance(other, SwitchContainer) \\\n and isSameHVal(self.switchOn, other.switchOn)\\\n and 
len(self.cases) == len(other.cases)\\\n and isSameStatementList(self.default, other.default):\n for (ac, astm), (bc, bstm) in zip(self.cases, other.cases):\n if not isSameHVal(ac, bc)\\\n or not isSameStatementList(astm, bstm):\n return False\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef discoverEventDependency(sig):\n\n try:\n drivers = sig.drivers\n except AttributeError:\n return\n\n if len(drivers) == 1:\n d = drivers[0]\n if isinstance(d, Operator):\n if isEventDependentOp(d.operator):\n yield (d.operator, d.operands[0])\n else:\n for op in d.operands:\n yield from discoverEventDependency(op)", "response": "returns generator of tuples event operator and signal"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getIndent(indentNum):\n try:\n return _indentCache[indentNum]\n except KeyError:\n i = \"\".join([_indent for _ in range(indentNum)])\n _indentCache[indentNum] = i\n return i", "response": "Get the indent string for the specified number of spaces."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef verilogTypeOfSig(signalItem):\n driver_cnt = len(signalItem.drivers)\n if signalItem._const or driver_cnt > 1 or\\\n arr_any(signalItem.drivers, _isEventDependentDriver):\n return SIGNAL_TYPE.REG\n else:\n if driver_cnt == 1:\n d = signalItem.drivers[0]\n if not isinstance(d, (Assignment, PortItem)):\n return SIGNAL_TYPE.REG\n\n return SIGNAL_TYPE.WIRE", "response": "Check if the signalItem is register or wire"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef toHdlConversion(self, top, topName: str, saveTo: str) -> List[str]:\n\n return toRtl(top,\n saveTo=saveTo,\n name=topName,\n serializer=self.serializer,\n targetPlatform=self.targetPlatform)", "response": "This 
function converts the unit to HDL and returns a list of the generated files in compile order."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef serializeType(self, hdlType: HdlType) -> str:\n\n def createTmpVar(suggestedName, dtype):\n raise NotImplementedError(\n (\"Can not serialize hdl type %r into \"\n \"ipcore format\") % (hdlType,))\n\n return VhdlSerializer.HdlType(hdlType, VhdlSerializer.getBaseContext())", "response": "Serialize a HdlType into the ipcore format."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef getVectorFromType(self, dtype) -> Union[bool, None, Tuple[int, int]]:\n if dtype == BIT:\n return False\n elif isinstance(dtype, Bits):\n return [evalParam(dtype.width) - 1, hInt(0)]", "response": "Returns the [MSB, 0] bounds for a Bits dtype, False for a single bit and None otherwise."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the expression value of the passed in value", "response": "def getExprVal(self, val, do_eval=False):\n \"\"\"\n :see: doc of method on parent class\n \"\"\"\n ctx = VhdlSerializer.getBaseContext()\n\n def createTmpVar(suggestedName, dtype):\n raise NotImplementedError(\n \"Width value can not be converted to ipcore format (%r)\",\n val)\n\n ctx.createTmpVarFn = createTmpVar\n if do_eval:\n val = val.staticEval()\n val = VivadoTclExpressionSerializer.asHdl(val, ctx)\n return val"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef getTypeWidth(self, dtype: HdlType, do_eval=False) -> Tuple[int, Union[int, RtlSignal], bool]:\n width = dtype.width\n if isinstance(width, int):\n widthStr = str(width)\n else:\n widthStr = self.getExprVal(width, do_eval=do_eval)\n\n return width, widthStr, False", "response": "Returns the width and the width expression of the type."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getObjDebugName(self, obj: 
Union[Interface, Unit, Param]) -> str:\n return obj._getFullName()", "response": "Get the debug name of the object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef serialzeValueToTCL(self, val, do_eval=False) -> Tuple[str, str, bool]:\n if isinstance(val, int):\n val = hInt(val)\n if do_eval:\n val = val.staticEval()\n\n if isinstance(val, RtlSignalBase):\n ctx = VivadoTclExpressionSerializer.getBaseContext()\n tclVal = VivadoTclExpressionSerializer.asHdl(val, ctx)\n tclValVal = VivadoTclExpressionSerializer.asHdl(\n val.staticEval())\n return tclVal, tclValVal, False\n else:\n\n tclVal = VivadoTclExpressionSerializer.asHdl(val, None)\n return tclVal, tclVal, True", "response": "Serialize a value into its TCL representation."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef nameAvailabilityCheck(obj, propName, prop):\n if getattr(obj, propName, None) is not None:\n raise IntfLvlConfErr(\"%r already has property %s old:%s new:%s\" % \n (obj, propName, repr(getattr(obj, propName)), prop))", "response": "Raise an error if the object already has a property with the given name."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nregister a parameter on the object.", "response": "def _registerParameter(self, pName, parameter) -> None:\n \"\"\"\n Register Param object on interface level object\n \"\"\"\n nameAvailabilityCheck(self, pName, parameter)\n # resolve name in this scope\n try:\n hasName = parameter._name is not None\n except AttributeError:\n hasName = False\n if not hasName:\n parameter._name = pName\n # add name in this scope\n parameter._registerScope(pName, self)\n\n if parameter.hasGenericName:\n parameter.name = pName\n\n if parameter._parent is None:\n parameter._parent = self\n\n self._params.append(parameter)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nassociate this object with specified 
clk and rst.", "response": "def _make_association(self, clk=None, rst=None) -> None:\n \"\"\"\n Associate this object with specified clk/rst\n \"\"\"\n if clk is not None:\n assert self._associatedClk is None\n self._associatedClk = clk\n\n if rst is not None:\n assert self._associatedRst is None\n self._associatedRst = rst"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _updateParamsFrom(self, otherObj:\"PropDeclrCollector\", updater, exclude:set, prefix:str) -> None:\n excluded = set()\n if exclude is not None:\n exclude = set(exclude)\n\n for myP in self._params:\n pPName = prefix + myP._scopes[self][1]\n try:\n otherP = getattr(otherObj, pPName)\n if not isinstance(otherP, Param):\n continue\n except AttributeError:\n continue\n\n if exclude and otherP in exclude:\n excluded.add(otherP)\n continue\n updater(self, myP, otherP)\n \n if exclude is not None:\n # assert that what should be excluded really exists\n assert excluded == exclude", "response": "Update all parameters which are defined on self from otherObj."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nregisters a unit object on interface level object", "response": "def _registerUnit(self, uName, unit):\n \"\"\"\n Register unit object on interface level object\n \"\"\"\n nameAvailabilityCheck(self, uName, unit)\n assert unit._parent is None\n unit._parent = self\n unit._name = uName\n self._units.append(unit)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nregistering an interface object on the level object.", "response": "def _registerInterface(self, iName, intf, isPrivate=False):\n \"\"\"\n Register interface object on interface level object\n \"\"\"\n nameAvailabilityCheck(self, iName, intf)\n assert intf._parent is None\n intf._parent = self\n intf._name = iName\n intf._ctx = self._ctx\n\n if isPrivate:\n self._private_interfaces.append(intf)\n 
intf._isExtern = False\n else:\n self._interfaces.append(intf)\n intf._isExtern = True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _registerArray(self, name, items):\n items._parent = self\n items._name = name\n for i, item in enumerate(items):\n setattr(self, \"%s_%d\" % (name, i), item)", "response": "Register array of items on interface level object"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _registerUnitInImpl(self, uName, u):\n self._registerUnit(uName, u)\n u._loadDeclarations()\n self._lazyLoaded.extend(u._toRtl(self._targetPlatform))\n u._signalsForMyEntity(self._ctx, \"sig_\" + uName)", "response": "register a unit in the internal list"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef singleDriver(self):\n # [TODO] no driver exception\n drv_cnt = len(self.drivers)\n if not drv_cnt:\n raise NoDriverErr(self)\n elif drv_cnt != 1:\n raise MultipleDriversErr(self)\n\n return self.drivers[0]", "response": "Returns a single driver if signal has only one driver."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nregisters potential signals to drivers and endpoints.", "response": "def registerSignals(self, outputs=[]):\n \"\"\"\n Register potential signals to drivers/endpoints\n \"\"\"\n for o in self.operands:\n if isinstance(o, RtlSignalBase):\n if o in outputs:\n o.drivers.append(self)\n else:\n o.endpoints.append(self)\n elif isinstance(o, Value):\n pass\n else:\n raise NotImplementedError(\n \"Operator operands can be\"\n \" only signal or values got:%r\" % (o))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates an operator with result", "response": "def withRes(opDef, operands, resT, outputs=[]):\n \"\"\"\n Create operator with result signal\n\n :ivar resT: 
data type of result signal\n :ivar outputs: iterable of singnals which are outputs\n from this operator\n \"\"\"\n op = Operator(opDef, operands)\n out = RtlSignal(getCtxFromOps(operands), None, resT)\n out._const = arr_all(op.operands, isConst)\n out.drivers.append(op)\n out.origin = op\n op.result = out\n op.registerSignals(outputs)\n if out._const:\n out.staticEval()\n return out"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef withIndent(self, indent=1):\n ctx = copy(self)\n ctx.indent += indent\n return ctx", "response": "Create a copy of this context with increased indent."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ntries to connect src to interface of specified name on unit.", "response": "def _tryConnect(src, unit, intfName):\n \"\"\"\n Try connect src to interface of specified name on unit.\n Ignore if interface is not present or if it already has driver.\n \"\"\"\n try:\n dst = getattr(unit, intfName)\n except AttributeError:\n return\n if not dst._sig.drivers:\n connect(src, dst)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef propagateClk(obj):\n clk = obj.clk\n for u in obj._units:\n _tryConnect(clk, u, 'clk')", "response": "Propagate clk signal to all units in the hierarchy"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npropagating clk clock and negative reset rst_n signal to all subcomponents of obj.", "response": "def propagateClkRstn(obj):\n \"\"\"\n Propagate \"clk\" clock and negative reset \"rst_n\" signal\n to all subcomponents\n \"\"\"\n clk = obj.clk\n rst_n = obj.rst_n\n\n for u in obj._units:\n _tryConnect(clk, u, 'clk')\n _tryConnect(rst_n, u, 'rst_n')\n _tryConnect(~rst_n, u, 'rst')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef propagateClkRst(obj):\n clk = obj.clk\n rst = obj.rst\n\n for u in 
obj._units:\n _tryConnect(clk, u, 'clk')\n _tryConnect(~rst, u, 'rst_n')\n _tryConnect(rst, u, 'rst')", "response": "Propagate clk and reset rst signal to all subcomponents\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef propagateRstn(obj):\n rst_n = obj.rst_n\n\n for u in obj._units:\n _tryConnect(rst_n, u, 'rst_n')\n _tryConnect(~rst_n, u, 'rst')", "response": "Propagate negative reset rst_n signal\n to all subcomponents\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef propagateRst(obj):\n rst = obj.rst\n\n for u in obj._units:\n _tryConnect(~rst, u, 'rst_n')\n _tryConnect(rst, u, 'rst')", "response": "Propagate reset rst signal\n to all subcomponents of obj. rst"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfit the given signal to the given length.", "response": "def fitTo_t(what: Union[RtlSignal, Value], where_t: HdlType,\n extend: bool=True, shrink: bool=True):\n \"\"\"\n Slice signal \"what\" to fit in \"where\"\n or\n arithmetically (for signed by MSB / unsigned, vector with 0) extend\n \"what\" to same width as \"where\"\n\n little-endian impl.\n \"\"\"\n\n whatWidth = what._dtype.bit_length()\n toWidth = where_t.bit_length()\n if toWidth == whatWidth:\n return what\n elif toWidth < whatWidth:\n # slice\n if not shrink:\n raise BitWidthErr()\n\n return what[toWidth:]\n else:\n if not extend:\n raise BitWidthErr()\n\n w = toWidth - whatWidth\n\n if what._dtype.signed:\n # signed extension\n msb = what[whatWidth - 1]\n ext = reduce(lambda a, b: a._concat(b), [msb for _ in range(w)])\n else:\n # 0 extend\n ext = vec(0, w)\n\n return ext._concat(what)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\niterating over bits in sequence", "response": "def iterBits(sigOrVal: Union[RtlSignal, Value], bitsInOne: int=1,\n skipPadding: bool=True, fillup: bool=False):\n 
\"\"\"\n Iterate over bits in vector\n\n :param sigOrVal: signal or value to iterate over\n :param bitsInOne: number of bits in one part\n :param skipPadding: if true padding is skipped in dense types\n \"\"\"\n bw = BitWalker(sigOrVal, skipPadding, fillup)\n for _ in range(ceil(sigOrVal._dtype.bit_length() / bitsInOne)):\n yield bw.get(bitsInOne)\n\n bw.assertIsOnEnd()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get(self, numberOfBits: int, doCollect: bool):\n if not isinstance(numberOfBits, int):\n numberOfBits = int(numberOfBits)\n\n while self.actuallyHave < numberOfBits:\n # accumulate while not has enought\n try:\n f = next(self.it)\n except StopIteration:\n if self.fillup and self.actual is not None:\n break\n else:\n raise NotEnoughtBitsErr()\n\n thisFieldLen = f._dtype.bit_length()\n if self.actual is None:\n if not doCollect and thisFieldLen <= numberOfBits:\n numberOfBits -= thisFieldLen\n else:\n self.actual = f\n self.actuallyHave = thisFieldLen\n else:\n if not doCollect and self.actuallyHave < numberOfBits:\n self.actuallyHave = thisFieldLen\n self.actual = f\n else:\n self.actuallyHave += thisFieldLen\n self.actual = f._concat(self.actual)\n\n # slice out from actual\n actual = self.actual\n actualOffset = self.actualOffset\n\n if self.actuallyHave < numberOfBits:\n assert self.fillup\n if doCollect:\n t = self.actual._dtype\n fillupW = numberOfBits - self.actuallyHave\n padding_t = Bits(fillupW, signed=t.signed, negated=t.negated)\n padding = padding_t.fromPy(None)\n actual = padding._concat(actual)\n self.actuallyHave = 0\n\n # update about what was taken\n self.actuallyHave -= numberOfBits\n self.actualOffset += numberOfBits\n if self.actuallyHave == 0:\n self.actual = None\n self.actualOffset = 0\n\n if doCollect:\n if numberOfBits == 1:\n return actual[actualOffset]\n else:\n return actual[(actualOffset + numberOfBits):actualOffset]", "response": "get from the actual 
position."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget a chunk of bits from the current position.", "response": "def get(self, numberOfBits: int) -> Union[RtlSignal, Value]:\n \"\"\"\n :param numberOfBits: number of bits to get from the actual position\n :return: chunk of bits of specified size (instance of Value or RtlSignal)\n \"\"\"\n return self._get(numberOfBits, True)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _serializeOnce_eval(parentUnit, obj, isDeclaration, priv):\n clsName = parentUnit.__class__.__name__\n\n if isDeclaration:\n obj.name = clsName\n\n if priv is None:\n priv = parentUnit\n elif isDeclaration:\n # prepare entity which will not be serialized\n prepareEntity(obj, clsName, parentUnit)\n\n serialize = priv is parentUnit\n\n return serialize, priv", "response": "Internal function to serialize only the first obj of its class."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _serializeParamsUniq_eval(parentUnit, obj, isDeclaration, priv):\n\n params = paramsToValTuple(parentUnit)\n\n if priv is None:\n priv = {}\n\n if isDeclaration:\n try:\n prevUnit = priv[params]\n except KeyError:\n priv[params] = parentUnit\n return True, priv\n\n prepareEntity(obj, prevUnit._entity.name, prevUnit)\n return False, priv\n\n return priv[params] is parentUnit, priv", "response": "Internal function to serialize only objs with unique parameters and class."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _getFullName(self):\n name = \"\"\n tmp = self\n while isinstance(tmp, (InterfaceBase, HObjList)):\n if hasattr(tmp, \"_name\"):\n n = tmp._name\n else:\n n = ''\n if name == '':\n name = n\n else:\n name = n + '.' 
+ name\n if hasattr(tmp, \"_parent\"):\n tmp = tmp._parent\n else:\n tmp = None\n return name", "response": "get all name hierarchy separated by."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndelegate _make_association on items", "response": "def _make_association(self, *args, **kwargs):\n \"\"\"\n Delegate _make_association on items\n\n :note: doc in :func:`~hwt.synthesizer.interfaceLevel.propDeclCollector._make_association`\n \"\"\"\n for o in self:\n o._make_association(*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _updateParamsFrom(self, *args, **kwargs):\n for o in self:\n o._updateParamsFrom(*args, **kwargs)", "response": "update params from all the objects in this list"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating simulation model and connect it with interfaces of original unit and decorate it with agents of original unit", "response": "def simPrepare(unit: Unit, modelCls: Optional[SimModel]=None,\n targetPlatform=DummyPlatform(),\n dumpModelIn: str=None, onAfterToRtl=None):\n \"\"\"\n Create simulation model and connect it with interfaces of original unit\n and decorate it with agents\n\n :param unit: interface level unit which you wont prepare for simulation\n :param modelCls: class of rtl simulation model to run simulation on,\n if is None rtl sim model will be generated from unit\n :param targetPlatform: target platform for this synthes\n :param dumpModelIn: folder to where put sim model files\n (if is None sim model will be constructed only in memory)\n :param onAfterToRtl: callback fn(unit, modelCls) which will be called\n after unit will be synthesised to rtl\n\n :return: tuple (fully loaded unit with connected sim model,\n connected simulation model,\n simulation processes of agents\n )\n \"\"\"\n if modelCls is None:\n modelCls = toSimModel(\n unit, targetPlatform=targetPlatform, 
dumpModelIn=dumpModelIn)\n else:\n # to instantiate hierarchy of unit\n toSimModel(unit)\n\n if onAfterToRtl:\n onAfterToRtl(unit, modelCls)\n\n reconnectUnitSignalsToModel(unit, modelCls)\n model = modelCls()\n procs = autoAddAgents(unit)\n return unit, model, procs"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef toSimModel(unit, targetPlatform=DummyPlatform(), dumpModelIn=None):\n sim_code = toRtl(unit,\n targetPlatform=targetPlatform,\n saveTo=dumpModelIn,\n serializer=SimModelSerializer)\n if dumpModelIn is not None:\n d = os.path.join(os.getcwd(), dumpModelIn)\n dInPath = d in sys.path\n if not dInPath:\n sys.path.insert(0, d)\n if unit._name in sys.modules:\n del sys.modules[unit._name]\n simModule = importlib.import_module(unit._name)\n\n if not dInPath:\n sys.path.remove(d)\n else:\n simModule = ModuleType('simModule')\n # python supports only ~100 opened brackets\n # it exceded it throws MemoryError: s_push: parser stack overflow\n exec(sim_code, simModule.__dict__)\n\n return simModule.__dict__[unit._name]", "response": "Create a simulation model for a given unit"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef reconnectUnitSignalsToModel(synthesisedUnitOrIntf, modelCls):\n obj = synthesisedUnitOrIntf\n subInterfaces = obj._interfaces\n\n\n if subInterfaces:\n for intf in subInterfaces:\n # proxies are destroyed on original interfaces and only proxies on\n # array items will remain\n reconnectUnitSignalsToModel(intf, modelCls)\n else:\n # reconnect signal from model\n s = synthesisedUnitOrIntf\n s._sigInside = getattr(modelCls, s._sigInside.name)", "response": "Re - connects model signals to unit"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef simUnitVcd(simModel, stimulFunctions, outputFile=sys.stdout,\n until=100 * Time.ns):\n \"\"\"\n Syntax sugar\n If outputFile is string try to 
open it as file\n\n :return: hdl simulator object\n \"\"\"\n assert isinstance(simModel, SimModel), \\\n \"Class of SimModel is required (got %r)\" % (simModel)\n if isinstance(outputFile, str):\n d = os.path.dirname(outputFile)\n if d:\n os.makedirs(d, exist_ok=True)\n with open(outputFile, 'w') as f:\n return _simUnitVcd(simModel, stimulFunctions,\n f, until)\n else:\n return _simUnitVcd(simModel, stimulFunctions,\n outputFile, until)", "response": "Returns a simulation object for the given model and stimul functions."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsimulate a single unit of the vcd", "response": "def _simUnitVcd(simModel, stimulFunctions, outputFile, until):\n \"\"\"\n :param unit: interface level unit to simulate\n :param stimulFunctions: iterable of function(env)\n (simpy environment) which are driving the simulation\n :param outputFile: file where vcd will be dumped\n :param time: endtime of simulation, time units are defined in HdlSimulator\n :return: hdl simulator object\n \"\"\"\n sim = HdlSimulator()\n\n # configure simulator to log in vcd\n sim.config = VcdHdlSimConfig(outputFile)\n\n # run simulation, stimul processes are register after initial\n # initialization\n sim.simUnit(simModel, until=until, extraProcesses=stimulFunctions)\n return sim"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef onTWriteCallback__init(self, sim):\n yield from self.onTWriteCallback(sim)\n self.intf.t._sigInside.registerWriteCallback(\n self.onTWriteCallback,\n self.getEnable)\n self.intf.o._sigInside.registerWriteCallback(\n self.onTWriteCallback,\n self.getEnable)", "response": "This method is called by the onTWriteCallback method in the class."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconnecting to the port item on the given signal.", "response": "def connectSig(self, signal):\n \"\"\"\n Connect to port item on 
subunit\n \"\"\"\n if self.direction == DIRECTION.IN:\n if self.src is not None:\n raise HwtSyntaxError(\n \"Port %s is already associated with %r\"\n % (self.name, self.src))\n self.src = signal\n signal.endpoints.append(self)\n\n elif self.direction == DIRECTION.OUT:\n if self.dst is not None:\n raise HwtSyntaxError(\n \"Port %s is already associated with %r\"\n % (self.name, self.dst))\n self.dst = signal\n signal.drivers.append(self)\n\n else:\n raise NotImplementedError(self)\n\n signal.hidden = False\n signal.ctx.subUnits.add(self.unit)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef registerInternSig(self, signal):\n if self.direction == DIRECTION.OUT:\n if self.src is not None:\n raise HwtSyntaxError(\n \"Port %s is already associated with %s\"\n % (self.name, str(self.src)))\n self.src = signal\n\n elif self.direction == DIRECTION.IN:\n if self.dst is not None:\n raise HwtSyntaxError(\n \"Port %s is already associated with %s\"\n % (self.name, str(self.dst)))\n self.dst = signal\n\n else:\n raise NotImplementedError(self.direction)", "response": "Registers an internal signal to this port item."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nconnects this instance to the internal side of this instance.", "response": "def connectInternSig(self):\n \"\"\"\n connet signal from internal side of of this component to this port\n \"\"\"\n d = self.direction\n if d == DIRECTION.OUT:\n self.src.endpoints.append(self)\n elif d == DIRECTION.IN or d == DIRECTION.INOUT:\n self.dst.drivers.append(self)\n else:\n raise NotImplementedError(d)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the canonical signal of the current object.", "response": "def getInternSig(self):\n \"\"\"\n return signal inside unit which has this port\n \"\"\"\n d = self.direction\n if d == DIRECTION.IN:\n return self.dst\n elif d == DIRECTION.OUT:\n return 
self.src\n else:\n raise NotImplementedError(d)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef isEvDependentOn(sig, process) -> bool:\n if sig is None:\n return False\n\n return process in sig.simFallingSensProcs\\\n or process in sig.simRisingSensProcs", "response": "Check if hdl process has event depenency on signal\n "} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a process to the scheduler.", "response": "def _add_process(self, proc, priority) -> None:\n \"\"\"\n Schedule process on actual time with specified priority\n \"\"\"\n self._events.push(self.now, priority, proc)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a hdl process to the execution queue.", "response": "def _addHdlProcToRun(self, trigger: SimSignal, proc) -> None:\n \"\"\"\n Add hdl process to execution queue\n\n :param trigger: instance of SimSignal\n :param proc: python generator function representing HDL process\n \"\"\"\n # first process in time has to plan executing of apply values on the\n # end of this time\n if not self._applyValPlaned:\n # (apply on end of this time to minimalize process reevaluation)\n self._scheduleApplyValues()\n\n if isEvDependentOn(trigger, proc):\n if self.now == 0:\n return # pass event dependent on startup\n self._seqProcsToRun.append(proc)\n else:\n self._combProcsToRun.append(proc)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ninitializing all the signals and processIOs for the given unit.", "response": "def _initUnitSignals(self, unit: Unit) -> None:\n \"\"\"\n * Inject default values to simulation\n\n * Instantiate IOs for every process\n \"\"\"\n # set initial value to all signals and propagate it\n for s in unit._ctx.signals:\n if s.defVal.vldMask:\n v = s.defVal.clone()\n s.simUpdateVal(self, mkUpdater(v, False))\n\n for u in unit._units:\n 
self._initUnitSignals(u)\n\n for p in unit._processes:\n self._addHdlProcToRun(None, p)\n\n for p, outputs in unit._outputs.items():\n # name has to be explicit because it is possible that the signal\n # with this name was replaced by a signal from parent/child\n containerNames = list(map(lambda x: x[0], outputs))\n\n class SpecificIoContainer(IoContainer):\n __slots__ = containerNames\n\n self._outputContainers[p] = SpecificIoContainer(outputs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nschedule a new event to be processed when the update is done.", "response": "def _scheduleCombUpdateDoneEv(self) -> Event:\n \"\"\"\n Schedule combUpdateDoneEv event to let agents know that current\n delta step is ending and values from combinational logic are stable\n \"\"\"\n assert not self._combUpdateDonePlaned, self.now\n cud = Event(self)\n cud.process_to_wake.append(self.__deleteCombUpdateDoneEv())\n self._add_process(cud, PRIORITY_AGENTS_UPDATE_DONE)\n self._combUpdateDonePlaned = True\n self.combUpdateDoneEv = cud\n return cud"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nschedules the apply values to the signals.", "response": "def _scheduleApplyValues(self) -> None:\n \"\"\"\n Apply stashed values to signals\n \"\"\"\n assert not self._applyValPlaned, self.now\n self._add_process(self._applyValues(), PRIORITY_APPLY_COMB)\n self._applyValPlaned = True\n\n if self._runSeqProcessesPlaned:\n # if runSeqProcesses is already scheduled\n return\n\n assert not self._seqProcsToRun and not self._runSeqProcessesPlaned, self.now\n self._add_process(self._runSeqProcesses(), PRIORITY_APPLY_SEQ)\n self._runSeqProcessesPlaned = True"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _runCombProcesses(self) -> None:\n for proc in self._combProcsToRun:\n cont = self._outputContainers[proc]\n proc(self, cont)\n for sigName, sig in cont._all_signals:\n newVal = 
getattr(cont, sigName)\n if newVal is not None:\n res = self._conflictResolveStrategy(newVal)\n # prepare update\n updater, isEvDependent = res\n self._valuesToApply.append(\n (sig, updater, isEvDependent, proc))\n setattr(cont, sigName, None)\n # else value is latched\n\n self._combProcsToRun = UniqList()", "response": "Run all combinational processes."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _runSeqProcesses(self) -> Generator[None, None, None]:\n updates = []\n for proc in self._seqProcsToRun:\n try:\n outContainer = self._outputContainers[proc]\n except KeyError:\n # a process does not have to have outputs\n outContainer = None\n\n proc(self, outContainer)\n\n if outContainer is not None:\n updates.append(outContainer)\n\n self._seqProcsToRun = UniqList()\n self._runSeqProcessesPlaned = False\n\n for cont in updates:\n for sigName, sig in cont._all_signals:\n newVal = getattr(cont, sigName)\n if newVal is not None:\n v = self._conflictResolveStrategy(newVal)\n updater, _ = v\n sig.simUpdateVal(self, updater)\n setattr(cont, sigName, None)\n return\n yield", "response": "Run all event dependent (sequential) processes and apply their output values to signals."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _applyValues(self) -> Generator[None, None, None]:\n va = self._valuesToApply\n self._applyValPlaned = False\n\n # log if there are items to log\n lav = self.config.logApplyingValues\n if va and lav:\n lav(self, va)\n self._valuesToApply = []\n\n # apply values to signals, values can overwrite each other\n # but each signal should be driven by only one process and\n # it should resolve value collision\n addSp = self._seqProcsToRun.append\n for s, vUpdater, isEventDependent, comesFrom in va:\n if isEventDependent:\n # now=0 and this was process initialization or async reg\n addSp(comesFrom)\n else:\n # regular 
combinational process\n s.simUpdateVal(self, vUpdater)\n\n self._runCombProcesses()\n\n # processes triggered from simUpdateVal can add new values\n if self._valuesToApply and not self._applyValPlaned:\n self._scheduleApplyValues()\n\n return\n yield", "response": "Perform delta step by writing stashed values to signals\n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreads value from signal or interface.", "response": "def read(self, sig) -> Value:\n \"\"\"\n Read value from signal or interface\n \"\"\"\n try:\n v = sig._val\n except AttributeError:\n v = sig._sigInside._val\n\n return v.clone()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwrites the given value to the specified signal or interface.", "response": "def write(self, val, sig: SimSignal) -> None:\n \"\"\"\n Write value to signal or interface.\n \"\"\"\n # get target RtlSignal\n try:\n simSensProcs = sig.simSensProcs\n except AttributeError:\n sig = sig._sigInside\n simSensProcs = sig.simSensProcs\n\n # type cast of input value\n t = sig._dtype\n\n if isinstance(val, Value):\n v = val.clone()\n v = v._auto_cast(t)\n else:\n v = t.fromPy(val)\n\n # can not update value in signal directly due to signal proxies\n sig.simUpdateVal(self, lambda curentV: (\n valueHasChanged(curentV, v), v))\n\n if not self._applyValPlaned:\n if not (simSensProcs or\n sig.simRisingSensProcs or\n sig.simFallingSensProcs):\n # signal value was changed but there are no processes sensitive\n # to it, because of this _applyValues is never planned\n # and it should be\n self._scheduleApplyValues()\n elif (sig._writeCallbacks or\n sig._writeCallbacksToEn):\n # signal write did not cause any change on any other signal\n # but there are still simulation agents waiting on\n # updateComplete event\n self._scheduleApplyValues()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nrunning the simulation until a specified time.", 
"response": "def run(self, until: float) -> None:\n \"\"\"\n Run simulation until the specified time\n :note: can be used to resume the simulation from the time where it previously ended\n \"\"\"\n assert until > self.now\n events = self._events\n schedule = events.push\n next_event = events.pop\n\n # add handle to stop simulation\n schedule(until, PRIORITY_URGENT, raise_StopSimulation(self))\n\n try:\n # for all events\n while True:\n nextTime, priority, process = next_event()\n self.now = nextTime\n # process is python generator or Event\n if isinstance(process, Event):\n process = iter(process)\n\n # run process or activate processes dependent on Event\n while True:\n try:\n ev = next(process)\n except StopIteration:\n break\n\n # if process requires waiting put it back in queue\n if isinstance(ev, Wait):\n # nextTime is self.now\n schedule(nextTime + ev.time, priority, process)\n break\n elif isinstance(ev, Event):\n # process going to wait for event\n # if ev.process_to_wake is None event was already\n # destroyed\n ev.process_to_wake.append(process)\n break\n else:\n # this process spawned a new process\n # and it has to be put in the queue\n schedule(nextTime, priority, ev)\n\n except StopSimumulation:\n return"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a process to the set of processes.", "response": "def add_process(self, proc) -> None:\n \"\"\"\n Add process to events with default priority at the current time\n \"\"\"\n self._events.push(self.now, PRIORITY_NORMAL, proc)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrun simulation for a single unit instance", "response": "def simUnit(self, synthesisedUnit: Unit, until: float, extraProcesses=[]):\n \"\"\"\n Run simulation for Unit instance\n \"\"\"\n beforeSim = self.config.beforeSim\n if beforeSim is not None:\n beforeSim(self, synthesisedUnit)\n\n add_proc = self.add_process\n for p in extraProcesses:\n add_proc(p(self))\n\n 
self._initUnitSignals(synthesisedUnit)\n self.run(until)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _mkOp(fn):\n def op(*operands, key=None) -> RtlSignalBase:\n \"\"\"\n :param operands: variadic parameter of input operands\n :param key: optional function applied on every operand\n before processing\n \"\"\"\n assert operands, operands\n top = None\n if key is not None:\n operands = map(key, operands)\n\n for s in operands:\n if top is None:\n top = s\n else:\n top = fn(top, s)\n return top\n\n return op", "response": "Create a variadic operator function that folds fn over the input operands and returns the result."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking if the signalItem is register or wire", "response": "def systemCTypeOfSig(signalItem):\n \"\"\"\n Check if signal is register or wire\n \"\"\"\n if signalItem._const or\\\n arr_any(signalItem.drivers,\n lambda d: isinstance(d, HdlStatement)\n and d._now_is_event_dependent):\n\n return SIGNAL_TYPE.REG\n else:\n return SIGNAL_TYPE.WIRE"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert all ternary operators to IfContainers", "response": "def ternaryOpsToIf(statements):\n \"\"\"Convert all ternary operators to IfContainers\"\"\"\n stms = []\n\n for st in statements:\n if isinstance(st, Assignment):\n try:\n if not isinstance(st.src, RtlSignalBase):\n raise DoesNotContainsTernary()\n d = st.src.singleDriver()\n if not isinstance(d, Operator) or d.operator != AllOps.TERNARY:\n raise DoesNotContainsTernary()\n else:\n ops = d.operands\n ifc = IfContainer(ops[0],\n [Assignment(ops[1], st.dst)],\n [Assignment(ops[2], st.dst)]\n )\n stms.append(ifc)\n continue\n\n except (MultipleDriversErr, DoesNotContainsTernary):\n pass\n except NoDriverErr:\n assert (hasattr(st.src, \"_interface\")\n and st.src._interface is not None)\\\n or 
st.src.defVal.vldMask, st.src\n\n stms.append(st)\n return stms"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nserializing HWProcess objects as VHDL", "response": "def HWProcess(cls, proc, ctx):\n \"\"\"\n Serialize HWProcess objects as VHDL\n\n :param scope: name scope to prevent name collisions\n \"\"\"\n body = proc.statements\n extraVars = []\n extraVarsSerialized = []\n\n hasToBeVhdlProcess = arr_any(body,\n lambda x: isinstance(x,\n (IfContainer,\n SwitchContainer,\n WhileContainer,\n WaitStm)))\n\n sensitivityList = sorted(\n map(lambda s: cls.sensitivityListItem(s, ctx),\n proc.sensitivityList))\n\n if hasToBeVhdlProcess:\n childCtx = ctx.withIndent()\n else:\n childCtx = copy(ctx)\n\n def createTmpVarFn(suggestedName, dtype):\n s = RtlSignal(None, None, dtype, virtualOnly=True)\n s.name = ctx.scope.checkedName(suggestedName, s)\n s.hidden = False\n serializedS = cls.SignalItem(s, childCtx, declaration=True)\n extraVars.append(s)\n extraVarsSerialized.append(serializedS)\n return s\n\n childCtx.createTmpVarFn = createTmpVarFn\n\n statemets = [cls.asHdl(s, childCtx) for s in body]\n proc.name = ctx.scope.checkedName(proc.name, proc)\n\n extraVarsInit = []\n for s in extraVars:\n if isinstance(s.defVal, RtlSignalBase) or s.defVal.vldMask:\n a = Assignment(s.defVal, s, virtualOnly=True)\n extraVarsInit.append(cls.Assignment(a, childCtx))\n else:\n assert s.drivers, s\n for d in s.drivers:\n extraVarsInit.append(cls.asHdl(d, childCtx))\n\n _hasToBeVhdlProcess = hasToBeVhdlProcess\n hasToBeVhdlProcess = extraVars or hasToBeVhdlProcess\n\n if hasToBeVhdlProcess and not _hasToBeVhdlProcess:\n # add indent because we did not add it before, because we did not\n # know it would have to be a VHDL process\n oneIndent = getIndent(1)\n statemets = list(map(lambda x: oneIndent + x, statemets))\n\n return cls.processTmpl.render(\n indent=getIndent(ctx.indent),\n name=proc.name,\n hasToBeVhdlProcess=hasToBeVhdlProcess,\n extraVars=extraVarsSerialized,\n sensitivityList=\", 
\".join(sensitivityList),\n statements=extraVarsInit + statemets\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncomputing the hamming distance between two hashes", "response": "def hash_distance(left_hash, right_hash):\n \"\"\"Compute the hamming distance between two hashes\"\"\"\n if len(left_hash) != len(right_hash):\n raise ValueError('Hamming distance requires two strings of equal length')\n\n return sum(map(lambda x: 0 if x[0] == x[1] else 1, zip(left_hash, right_hash)))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncomputes the average hash of the given image.", "response": "def average_hash(image_path, hash_size=8):\n \"\"\" Compute the average hash of the given image. \"\"\"\n with open(image_path, 'rb') as f:\n # Open the image, resize it and convert it to black & white.\n image = Image.open(f).resize((hash_size, hash_size), Image.ANTIALIAS).convert('L')\n pixels = list(image.getdata())\n\n avg = sum(pixels) / len(pixels)\n\n # Compute the hash based on each pixels value compared to the average.\n bits = \"\".join(map(lambda pixel: '1' if pixel > avg else '0', pixels))\n hashformat = \"0{hashlength}x\".format(hashlength=hash_size ** 2 // 4)\n return int(bits, 2).__format__(hashformat)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncompute the hamming distance between two images", "response": "def distance(image_path, other_image_path):\n \"\"\" Compute the hamming distance between two images\"\"\"\n image_hash = average_hash(image_path)\n other_image_hash = average_hash(other_image_path)\n\n return hash_distance(image_hash, other_image_hash)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef setup_platform(hass, config, add_entities, discovery_info=None):\n host = config.get(CONF_HOST)\n token = config.get(CONF_ACCESS_TOKEN)\n name = config.get(CONF_NAME)\n volume_step = 
config.get(CONF_VOLUME_STEP)\n device_type = config.get(CONF_DEVICE_CLASS)\n device = VizioDevice(host, token, name, volume_step, device_type)\n if device.validate_setup() is False:\n _LOGGER.error(\"Failed to set up Vizio platform, \"\n \"please check if host and API key are correct\")\n return\n elif (token is None or token == \"\") and device_type == \"tv\":\n _LOGGER.error(\"Failed to set up Vizio platform, \"\n \"if device_class is 'tv' then an auth_token needs \"\n \"to be provided, otherwise if device_class is \"\n \"'soundbar' then add the right device_class to config\")\n return\n\n if config.get(CONF_SUPPRESS_WARNING):\n from requests.packages import urllib3\n _LOGGER.warning(\"InsecureRequestWarning is disabled \"\n \"because of Vizio platform configuration\")\n urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)\n add_entities([device], True)", "response": "Set up the Vizio media player platform."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nretrieves the latest state of the device.", "response": "def update(self):\n \"\"\"Retrieve latest state of the device.\"\"\"\n is_on = self._device.get_power_state()\n\n if is_on:\n self._state = STATE_ON\n\n volume = self._device.get_current_volume()\n if volume is not None:\n self._volume_level = float(volume) / self._max_volume\n\n input_ = self._device.get_current_input()\n if input_ is not None:\n self._current_input = input_.meta_name\n\n inputs = self._device.get_inputs()\n if inputs is not None:\n self._available_inputs = [input_.name for input_ in inputs]\n\n else:\n if is_on is None:\n self._state = None\n else:\n self._state = STATE_OFF\n\n self._volume_level = None\n self._current_input = None\n self._available_inputs = None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nincreasing the volume of the device.", "response": "def volume_up(self):\n \"\"\"Increasing volume of the device.\"\"\"\n 
self._volume_level += self._volume_step / self._max_volume\n self._device.vol_up(num=self._volume_step)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef volume_down(self):\n self._volume_level -= self._volume_step / self._max_volume\n self._device.vol_down(num=self._volume_step)", "response": "Decreasing volume of the device."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef reset(self):\n '''Restores the starting position.'''\n self.piece_bb = [\n BB_VOID, # NONE\n BB_RANK_C | BB_RANK_G, # PAWN\n BB_A1 | BB_I1 | BB_A9 | BB_I9, # LANCE\n BB_A2 | BB_A8 | BB_I2 | BB_I8, # KNIGHT\n BB_A3 | BB_A7 | BB_I3 | BB_I7, # SILVER\n BB_A4 | BB_A6 | BB_I4 | BB_I6, # GOLD\n BB_B2 | BB_H8, # BISHOP\n BB_B8 | BB_H2, # ROOK\n BB_A5 | BB_I5, # KING\n BB_VOID, # PROM_PAWN\n BB_VOID, # PROM_LANCE\n BB_VOID, # PROM_KNIGHT\n BB_VOID, # PROM_SILVER\n BB_VOID, # PROM_BISHOP\n BB_VOID, # PROM_ROOK\n ]\n\n self.pieces_in_hand = [collections.Counter(), collections.Counter()]\n\n self.occupied = Occupied(BB_RANK_G | BB_H2 | BB_H8 | BB_RANK_I, BB_RANK_A | BB_B2 | BB_B8 | BB_RANK_C)\n\n self.king_squares = [I5, A5]\n self.pieces = [NONE for i in SQUARES]\n\n for i in SQUARES:\n mask = BB_SQUARES[i]\n for piece_type in PIECE_TYPES:\n if mask & self.piece_bb[piece_type]:\n self.pieces[i] = piece_type\n\n self.turn = BLACK\n self.move_number = 1\n self.captured_piece_stack = collections.deque()\n self.move_stack = collections.deque()\n self.incremental_zobrist_hash = self.board_zobrist_hash(DEFAULT_RANDOM_ARRAY)\n self.transpositions = collections.Counter((self.zobrist_hash(), ))", "response": "Restores the starting position."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the piece at the given square.", "response": "def piece_at(self, square):\n '''Gets the piece at the given square.'''\n mask = BB_SQUARES[square]\n color = int(bool(self.occupied[WHITE] & 
mask))\n\n piece_type = self.piece_type_at(square)\n if piece_type:\n return Piece(piece_type, color)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef remove_piece_at(self, square, into_hand=False):\n '''Removes a piece from the given square if present.'''\n piece_type = self.piece_type_at(square)\n\n if piece_type == NONE:\n return\n\n if into_hand:\n self.add_piece_into_hand(piece_type, self.turn)\n\n mask = BB_SQUARES[square]\n\n self.piece_bb[piece_type] ^= mask\n\n color = int(bool(self.occupied[WHITE] & mask))\n\n self.pieces[square] = NONE\n self.occupied.ixor(mask, color, square)\n\n # Update incremental zobrist hash.\n if color == BLACK:\n piece_index = (piece_type - 1) * 2\n else:\n piece_index = (piece_type - 1) * 2 + 1\n self.incremental_zobrist_hash ^= DEFAULT_RANDOM_ARRAY[81 * piece_index + 9 * rank_index(square) + file_index(square)]", "response": "Removes a piece from the given square if present."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_piece_at(self, square, piece, from_hand=False, into_hand=False):\n '''Sets a piece at the given square. An existing piece is replaced.'''\n if from_hand:\n self.remove_piece_from_hand(piece.piece_type, self.turn)\n\n self.remove_piece_at(square, into_hand)\n\n self.pieces[square] = piece.piece_type\n\n mask = BB_SQUARES[square]\n\n piece_type = piece.piece_type\n\n self.piece_bb[piece_type] |= mask\n\n if piece_type == KING:\n self.king_squares[piece.color] = square\n\n self.occupied.ixor(mask, piece.color, square)\n\n # Update incremental zobrist hash.\n if piece.color == BLACK:\n piece_index = (piece.piece_type - 1) * 2\n else:\n piece_index = (piece.piece_type - 1) * 2 + 1\n self.incremental_zobrist_hash ^= DEFAULT_RANDOM_ARRAY[81 * piece_index + 9 * rank_index(square) + file_index(square)]", "response": "Sets a piece at the given square. 
An existing piece is replaced."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking if the given move would leave the king in check or put it into check.", "response": "def is_suicide_or_check_by_dropping_pawn(self, move):\n '''\n Checks if the given move would leave the king in check or\n put it into check.\n '''\n\n self.push(move)\n is_suicide = self.was_suicide()\n is_check_by_dropping_pawn = self.was_check_by_dropping_pawn(move)\n self.pop()\n return is_suicide or is_check_by_dropping_pawn"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck if the king of the other side is attacked.", "response": "def was_suicide(self):\n '''\n Checks if the king of the other side is attacked. Such a position is not\n valid and could only be reached by an illegal move.\n '''\n return self.is_attacked_by(self.turn, self.king_squares[self.turn ^ 1])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking if the game is over due to checkmate stalemate or fourfold repetition.", "response": "def is_game_over(self):\n '''\n Checks if the game is over due to checkmate, stalemate or\n fourfold repetition.\n '''\n\n # Stalemate or checkmate.\n try:\n next(self.generate_legal_moves().__iter__())\n except StopIteration:\n return True\n\n # Fourfold repetition.\n if self.is_fourfold_repetition():\n return True\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef is_checkmate(self):\n '''Checks if the current position is a checkmate.'''\n if not self.is_check():\n return False\n\n try:\n next(self.generate_legal_moves().__iter__())\n return False\n except StopIteration:\n return True", "response": "Checks if the current position is a checkmate."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns True if the game is ended if a position occurs for the fourth time on 
consecutive alternating moves.", "response": "def is_fourfold_repetition(self):\n '''\n A game is ended if a position occurs for the fourth time\n on consecutive alternating moves.\n '''\n zobrist_hash = self.zobrist_hash()\n\n # A minimum amount of moves must have been played and the position\n # in question must have appeared at least four times.\n if self.transpositions[zobrist_hash] < 4:\n return False\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef push(self, move):\n '''\n Updates the position with the given move and puts it onto a stack.\n Null moves just increment the move counters, switch turns and forfeit\n en passant capturing.\n No validation is performed. For performance moves are assumed to be at\n least pseudo legal. Otherwise there is no guarantee that the previous\n board state can be restored. To check it yourself you can use:\n >>> move in board.pseudo_legal_moves\n True\n '''\n # Increment move number.\n self.move_number += 1\n\n # Remember game state.\n captured_piece = self.piece_type_at(move.to_square) if move else NONE\n self.captured_piece_stack.append(captured_piece)\n self.move_stack.append(move)\n\n # On a null move simply swap turns.\n if not move:\n self.turn ^= 1\n return\n\n if move.drop_piece_type:\n # Drops.\n piece_type = move.drop_piece_type\n from_hand = True\n else:\n # Promotion.\n piece_type = self.piece_type_at(move.from_square)\n from_hand = False\n\n if move.promotion:\n piece_type = PIECE_PROMOTED[piece_type]\n\n # Remove piece from source square.\n self.remove_piece_at(move.from_square, False)\n\n # Put piece on target square.\n self.set_piece_at(move.to_square, Piece(piece_type, self.turn), from_hand, True)\n\n # Swap turn.\n self.turn ^= 1\n\n # Update transposition table.\n self.transpositions.update((self.zobrist_hash(), ))", "response": "Updates the position with the given move and puts it onto the move stack."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, 
explain what it does\ndef pop(self):\n '''\n Restores the previous position and returns the last move from the stack.\n '''\n move = self.move_stack.pop()\n\n # Update transposition table.\n self.transpositions.subtract((self.zobrist_hash(), ))\n\n # Decrement move number.\n self.move_number -= 1\n\n # Restore state.\n captured_piece_type = self.captured_piece_stack.pop()\n captured_piece_color = self.turn\n\n # On a null move simply swap the turn.\n if not move:\n self.turn ^= 1\n return move\n\n # Restore the source square.\n piece_type = self.piece_type_at(move.to_square)\n if move.promotion:\n piece_type = PIECE_PROMOTED.index(piece_type)\n\n if move.from_square is None:\n self.add_piece_into_hand(piece_type, self.turn ^ 1)\n else:\n self.set_piece_at(move.from_square, Piece(piece_type, self.turn ^ 1))\n\n # Restore target square.\n if captured_piece_type:\n self.remove_piece_from_hand(captured_piece_type, captured_piece_color ^ 1)\n self.set_piece_at(move.to_square, Piece(captured_piece_type, captured_piece_color))\n else:\n self.remove_piece_at(move.to_square)\n\n # Swap turn.\n self.turn ^= 1\n\n return move", "response": "Restores the previous position and returns the last move from the stack."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget an SFEN representation of the current position.", "response": "def sfen(self):\n '''\n Gets an SFEN representation of the current position.\n '''\n sfen = []\n empty = 0\n\n # Position part.\n for square in SQUARES:\n piece = self.piece_at(square)\n\n if not piece:\n empty += 1\n else:\n if empty:\n sfen.append(str(empty))\n empty = 0\n sfen.append(piece.symbol())\n\n if BB_SQUARES[square] & BB_FILE_1:\n if empty:\n sfen.append(str(empty))\n empty = 0\n\n if square != I1:\n sfen.append('/')\n\n sfen.append(' ')\n\n # Side to move.\n if self.turn == WHITE:\n sfen.append('w')\n else:\n sfen.append('b')\n\n sfen.append(' ')\n\n # Pieces in hand\n pih_len = 0\n for color in COLORS:\n p = 
self.pieces_in_hand[color]\n pih_len += len(p)\n for piece_type in sorted(p.keys(), reverse=True):\n if p[piece_type] >= 1:\n if p[piece_type] > 1:\n sfen.append(str(p[piece_type]))\n piece = Piece(piece_type, color)\n sfen.append(piece.symbol())\n if pih_len == 0:\n sfen.append('-')\n\n sfen.append(' ')\n\n # Move count\n sfen.append(str(self.move_number))\n\n return ''.join(sfen)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_sfen(self, sfen):\n '''\n Parses a SFEN and sets the position from it.\n Raises `ValueError` if the SFEN string is invalid.\n '''\n # Ensure there are four parts.\n parts = sfen.split()\n if len(parts) != 4:\n raise ValueError('sfen string should consist of 4 parts: {0}'.format(repr(sfen)))\n\n # Ensure the board part is valid.\n rows = parts[0].split('/')\n if len(rows) != 9:\n raise ValueError('expected 9 rows in position part of sfen: {0}'.format(repr(sfen)))\n\n # Validate each row.\n for row in rows:\n field_sum = 0\n previous_was_digit = False\n previous_was_plus = False\n\n for c in row:\n if c in ['1', '2', '3', '4', '5', '6', '7', '8', '9']:\n if previous_was_digit:\n raise ValueError('two subsequent digits in position part of sfen: {0}'.format(repr(sfen)))\n if previous_was_plus:\n raise ValueError('Cannot promote squares in position part of sfen: {0}'.format(repr(sfen)))\n field_sum += int(c)\n previous_was_digit = True\n previous_was_plus = False\n elif c == '+':\n if previous_was_plus:\n raise ValueError('Double promotion prefixes in position part of sfen: {0}'.format(repr(sfen)))\n previous_was_digit = False\n previous_was_plus = True\n elif c.lower() in ['p', 'l', 'n', 's', 'g', 'b', 'r', 'k']:\n field_sum += 1\n if previous_was_plus and (c.lower() == 'g' or c.lower() == 'k'):\n raise ValueError('Gold and King cannot promote in position part of sfen: {0}'.format(repr(sfen)))\n previous_was_digit = False\n previous_was_plus = False\n else:\n raise ValueError('invalid 
character in position part of sfen: {0}'.format(repr(sfen)))\n\n if field_sum != 9:\n raise ValueError('expected 9 columns per row in position part of sfen: {0}'.format(repr(sfen)))\n\n # Check that the turn part is valid.\n if not parts[1] in ['b', 'w']:\n raise ValueError(\"expected 'b' or 'w' for turn part of sfen: {0}\".format(repr(sfen)))\n\n # Check pieces in hand is valid.\n # TODO: implement with checking parts[2]\n\n # Check that the fullmove number part is valid.\n # 0 is allowed for compatibility but later replaced with 1.\n if int(parts[3]) < 0:\n raise ValueError('fullmove number must not be negative: {0}'.format(repr(sfen)))\n\n # Clear board.\n self.clear()\n\n # Put pieces on the board.\n square_index = 0\n previous_was_plus = False\n for c in parts[0]:\n if c in ['1', '2', '3', '4', '5', '6', '7', '8', '9']:\n square_index += int(c)\n elif c == '+':\n previous_was_plus = True\n elif c == '/':\n pass\n else:\n piece_symbol = c\n if previous_was_plus:\n piece_symbol = '+' + piece_symbol\n self.set_piece_at(square_index, Piece.from_symbol(piece_symbol))\n square_index += 1\n previous_was_plus = False\n\n # Set the turn.\n if parts[1] == 'w':\n self.turn = WHITE\n else:\n self.turn = BLACK\n\n # Set the pieces in hand\n self.pieces_in_hand = [collections.Counter(), collections.Counter()]\n if parts[2] != '-':\n piece_count = 0\n for c in parts[2]:\n if c in ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']:\n piece_count *= 10\n piece_count += int(c)\n else:\n piece = Piece.from_symbol(c)\n if piece_count == 0:\n piece_count = 1\n self.add_piece_into_hand(piece.piece_type, piece.color, piece_count)\n piece_count = 0\n\n # Set the move counters.\n self.move_number = int(parts[3]) or 1\n\n # Reset the transposition table.\n self.transpositions = collections.Counter((self.zobrist_hash(), ))", "response": "Parses a SFEN string and sets the position from it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npush a new entry in 
the move stack and returns the new move.", "response": "def push_usi(self, usi):\n '''\n Parses a move in standard coordinate notation, makes the move and puts\n it on the move stack.\n Raises `ValueError` if neither legal nor a null move.\n Returns the move.\n '''\n move = Move.from_usi(usi)\n self.push(move)\n return move"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a Zobrist hash of the current position.", "response": "def zobrist_hash(self, array=None):\n '''\n Returns a Zobrist hash of the current position.\n '''\n # Hash in the board setup.\n zobrist_hash = self.board_zobrist_hash(array)\n\n if array is None:\n array = DEFAULT_RANDOM_ARRAY\n\n if self.turn == WHITE:\n zobrist_hash ^= array[2268]\n\n # pieces in hand pattern is\n # 19 * 5 * 5 * 5 * 5 * 3 * 3 = 106875 < pow(2, 17)\n # just checking black side is okay in normal state\n i = (\n self.pieces_in_hand[BLACK][ROOK] * 35625 +\n self.pieces_in_hand[BLACK][BISHOP] * 11875 +\n self.pieces_in_hand[BLACK][GOLD] * 2375 +\n self.pieces_in_hand[BLACK][SILVER] * 475 +\n self.pieces_in_hand[BLACK][KNIGHT] * 95 +\n self.pieces_in_hand[BLACK][LANCE] * 19 +\n self.pieces_in_hand[BLACK][PAWN])\n bit = bit_scan(i)\n while bit != -1 and bit is not None:\n zobrist_hash ^= array[2269 + bit]\n bit = bit_scan(i, bit + 1)\n\n return zobrist_hash"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the symbol of the current piece.", "response": "def symbol(self):\n '''\n Gets the symbol `p`, `l`, `n`, etc.\n '''\n if self.color == BLACK:\n return PIECE_SYMBOLS[self.piece_type].upper()\n else:\n return PIECE_SYMBOLS[self.piece_type]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef from_symbol(cls, symbol):\n '''\n Creates a piece instance from a piece symbol.\n Raises `ValueError` if the symbol is invalid.\n '''\n if symbol.lower() == symbol:\n return 
cls(PIECE_SYMBOLS.index(symbol), WHITE)\n else:\n return cls(PIECE_SYMBOLS.index(symbol.lower()), BLACK)", "response": "Creates a piece instance from a piece symbol."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef usi(self):\n '''\n Gets an USI string for the move.\n For example a move from 7A to 8A would be `7a8a` or `7a8a+` if it is\n a promotion.\n '''\n if self:\n if self.drop_piece_type:\n return '{0}*{1}'.format(PIECE_SYMBOLS[self.drop_piece_type].upper(), SQUARE_NAMES[self.to_square])\n else:\n return SQUARE_NAMES[self.from_square] + SQUARE_NAMES[self.to_square] + \\\n ('+' if self.promotion else '')\n else:\n return '0000'", "response": "Gets an USI string for the move."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nparse an USI string and return a Move instance.", "response": "def from_usi(cls, usi):\n '''\n Parses an USI string.\n Raises `ValueError` if the USI string is invalid.\n '''\n if usi == '0000':\n return cls.null()\n elif len(usi) == 4:\n if usi[1] == '*':\n piece = Piece.from_symbol(usi[0])\n return cls(None, SQUARE_NAMES.index(usi[2:4]), False, piece.piece_type)\n else:\n return cls(SQUARE_NAMES.index(usi[0:2]), SQUARE_NAMES.index(usi[2:4]))\n elif len(usi) == 5 and usi[4] == '+':\n return cls(SQUARE_NAMES.index(usi[0:2]), SQUARE_NAMES.index(usi[2:4]), True)\n else:\n raise ValueError('expected usi string to be of length 4 or 5')"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\naccepting a string and parse it into many commits. Parse and yield each commit-dictionary. 
This function is a generator.", "response": "def parse_commits(data):\n '''Accept a string and parse it into many commits.\n Parse and yield each commit-dictionary.\n This function is a generator.\n '''\n raw_commits = RE_COMMIT.finditer(data)\n for rc in raw_commits:\n full_commit = rc.groups()[0]\n parts = RE_COMMIT.match(full_commit).groupdict()\n parsed_commit = parse_commit(parts)\n yield parsed_commit"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_commit(parts):\n '''Accept a parsed single commit. Some of the named groups\n require further processing, so parse those groups.\n Return a dictionary representing the completely parsed\n commit.\n '''\n commit = {}\n commit['commit'] = parts['commit']\n commit['tree'] = parts['tree']\n parent_block = parts['parents']\n commit['parents'] = [\n parse_parent_line(parentline)\n for parentline in\n parent_block.splitlines()\n ]\n commit['author'] = parse_author_line(parts['author'])\n commit['committer'] = parse_committer_line(parts['committer'])\n message_lines = [\n parse_message_line(msgline)\n for msgline in\n parts['message'].split(\"\\n\")\n ]\n commit['message'] = \"\\n\".join(\n msgline\n for msgline in\n message_lines\n if msgline is not None\n )\n commit['changes'] = [\n parse_numstat_line(numstat)\n for numstat in\n parts['numstats'].splitlines()\n ]\n return commit", "response": "Accept a parsed single commit."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run_git_log(git_dir=None, git_since=None):\n '''run_git_log([git_dir]) -> File\n\n Run `git log --numstat --pretty=raw` on the specified\n git repository and return its stdout as a pseudo-File.'''\n import subprocess\n if git_dir:\n command = [\n 'git',\n '--git-dir=' + git_dir,\n 'log',\n '--numstat',\n '--pretty=raw'\n ]\n else:\n command = ['git', 'log', '--numstat', '--pretty=raw']\n if git_since is not None:\n 
command.append('--since=' + git_since)\n raw_git_log = subprocess.Popen(\n command,\n stdout=subprocess.PIPE\n )\n if sys.version_info < (3, 0):\n return raw_git_log.stdout\n else:\n return raw_git_log.stdout.read().decode('utf-8', 'ignore')", "response": "Run git log on the specified\ngit repository and return its stdout as a pseudo-File."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nloading config from CLI arguments and returning a list of config file names", "response": "def load_config_from_cli(config: GoodConf, argv: List[str]) -> List[str]:\n \"\"\"Loads config, checking CLI arguments for a config file\"\"\"\n\n    # Monkey patch Django's command parser\n from django.core.management.base import BaseCommand\n original_parser = BaseCommand.create_parser\n\n def patched_parser(self, prog_name, subcommand):\n parser = original_parser(self, prog_name, subcommand)\n argparser_add_argument(parser, config)\n return parser\n\n BaseCommand.create_parser = patched_parser\n\n try:\n parser = argparse.ArgumentParser(add_help=False)\n argparser_add_argument(parser, config)\n\n config_arg, default_args = parser.parse_known_args(argv)\n config.load(config_arg.config)\n yield default_args\n finally:\n # Put that create_parser back where it came from or so help me!\n BaseCommand.create_parser = original_parser"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef execute_from_command_line_with_config(config: GoodConf, argv: List[str]):\n with load_config_from_cli(config, argv) as args:\n from django.core.management import execute_from_command_line\n execute_from_command_line(args)", "response": "Loads config, then runs Django's execute_from_command_line"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef argparser_add_argument(parser: argparse.ArgumentParser, config: GoodConf):\n help = \"Config file.\"\n if 
config.file_env_var:\n help += (\" Can also be configured via the \"\n \"environment variable: {}\".format(config.file_env_var))\n if config.default_files:\n help += (\" Defaults to the first file that exists from \"\n \"[{}].\".format(', '.join(config.default_files)))\n parser.add_argument('-C', '--config', metavar='FILE', help=help)", "response": "Adds an argument for config to an argument parser"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload a configuration file into a dictionary.", "response": "def _load_config(path: str) -> dict:\n \"\"\"\n Given a file path, parse it based on its extension (YAML or JSON)\n and return the values as a Python dictionary. JSON is the default if an\n extension can't be determined.\n \"\"\"\n __, ext = os.path.splitext(path)\n if ext in ['.yaml', '.yml']:\n import ruamel.yaml\n loader = ruamel.yaml.safe_load\n else:\n loader = json.load\n with open(path) as f:\n config = loader(f)\n return config"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nloading config file and set values", "response": "def load(self, filename: str = None):\n \"\"\"Find config file and set values\"\"\"\n if filename:\n self.config_file = _find_file(filename)\n else:\n if self.file_env_var and self.file_env_var in os.environ:\n self.config_file = _find_file(os.environ[self.file_env_var])\n if not self.config_file:\n for filename in self.default_files:\n self.config_file = _find_file(filename, require=False)\n if self.config_file:\n break\n if self.config_file:\n config = _load_config(self.config_file)\n log.info(\"Loading config from %s\", self.config_file)\n else:\n config = {}\n log.info(\"No config file specified. 
\"\n \"Loading with environment variables.\")\n self.set_values(config)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef generate_yaml(cls, **override):\n import ruamel.yaml\n yaml = ruamel.yaml.YAML()\n yaml_str = StringIO()\n yaml.dump(cls.get_initial(**override), stream=yaml_str)\n yaml_str.seek(0)\n dict_from_yaml = yaml.load(yaml_str)\n if cls.__doc__:\n dict_from_yaml.yaml_set_start_comment(\n '\\n' + cls.__doc__ + '\\n\\n')\n for k in dict_from_yaml.keys():\n if cls._values[k].help:\n dict_from_yaml.yaml_set_comment_before_after_key(\n k, before='\\n' + cls._values[k].help)\n yaml_str = StringIO()\n yaml.dump(dict_from_yaml, yaml_str)\n yaml_str.seek(0)\n return yaml_str.read()", "response": "Generates a YAML string of the current configuration."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef generate_markdown(cls):\n lines = []\n if cls.__doc__:\n lines.extend(['# {}'.format(cls.__doc__), ''])\n for k, v in cls._values.items():\n lines.append('* **{}** '.format(k))\n if v.required:\n lines[-1] = lines[-1] + '_REQUIRED_ '\n if v.help:\n lines.append(' {} '.format(v.help))\n lines.append(' type: `{}` '.format(v.cast_as.__name__))\n if v.default is not None:\n lines.append(' default: `{}` '.format(v.default))\n return '\\n'.join(lines)", "response": "Generates a markdown string for the class."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef cast(self, val: str):\n try:\n return getattr(self, 'cast_as_{}'.format(\n self.cast_as.__name__.lower()))(val)\n except AttributeError:\n return self.cast_as(val)", "response": "converts string to type requested by cast_as"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn all dates from first to last included.", "response": "def list_dates_between(first_date, last_date):\n \"\"\"Returns all 
dates from first to last included.\"\"\"\n return [first_date + timedelta(days=n)\n for n in range(1 + (last_date - first_date).days)]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_date(s):\n try:\n return datetime.date(int(s[:4]), int(s[5:7]), int(s[8:10]))\n except ValueError: # other accepted format used in one-day data set\n return datetime.datetime.strptime(s, '%d %B %Y').date()", "response": "Fast %Y-%m-%d parsing."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nload the data from a file.", "response": "def load_file(self, currency_file):\n \"\"\"To be subclassed if alternate methods of loading data.\n \"\"\"\n if currency_file.startswith(('http://', 'https://')):\n content = urlopen(currency_file).read()\n else:\n with open(currency_file, 'rb') as f:\n content = f.read()\n\n if currency_file.endswith('.zip'):\n self.load_lines(get_lines_from_zip(content))\n else:\n self.load_lines(content.decode('utf-8').splitlines())"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfills missing rates of a currency with the closest available ones.", "response": "def _set_missing_to_none(self, currency):\n \"\"\"Fill missing rates of a currency with the closest available ones.\"\"\"\n rates = self._rates[currency]\n first_date, last_date = self.bounds[currency]\n\n for date in list_dates_between(first_date, last_date):\n if date not in rates:\n rates[date] = None\n\n if self.verbose:\n missing = len([r for r in itervalues(rates) if r is None])\n if missing:\n print('{0}: {1} missing rates from {2} to {3} ({4} days)'.format(\n currency, missing, first_date, last_date,\n 1 + (last_date - first_date).days))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncomputes the missing rates for a given currency.", "response": "def _compute_missing_rates(self, currency):\n \"\"\"Fill missing rates 
of a currency.\n\n This is done by linear interpolation of the two closest available rates.\n\n :param str currency: The currency to fill missing rates for.\n \"\"\"\n rates = self._rates[currency]\n\n # tmp will store the closest rates forward and backward\n tmp = defaultdict(lambda: [None, None])\n\n for date in sorted(rates):\n rate = rates[date]\n if rate is not None:\n closest_rate = rate\n dist = 0\n else:\n dist += 1\n tmp[date][0] = closest_rate, dist\n\n for date in sorted(rates, reverse=True):\n rate = rates[date]\n if rate is not None:\n closest_rate = rate\n dist = 0\n else:\n dist += 1\n tmp[date][1] = closest_rate, dist\n\n for date in sorted(tmp):\n (r0, d0), (r1, d1) = tmp[date]\n rates[date] = (r0 * d1 + r1 * d0) / (d0 + d1)\n if self.verbose:\n print(('{0}: filling {1} missing rate using {2} ({3}d old) and '\n '{4} ({5}d later)').format(currency, date, r0, d0, r1, d1))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a rate for a given currency and date.", "response": "def _get_rate(self, currency, date):\n \"\"\"Get a rate for a given currency and date.\n\n :type date: datetime.date\n\n >>> from datetime import date\n >>> c = CurrencyConverter()\n >>> c._get_rate('USD', date=date(2014, 3, 28))\n 1.375...\n >>> c._get_rate('BGN', date=date(2010, 11, 21))\n Traceback (most recent call last):\n RateNotFoundError: BGN has no rate for 2010-11-21\n \"\"\"\n if currency == self.ref_currency:\n return 1.0\n\n if date not in self._rates[currency]:\n first_date, last_date = self.bounds[currency]\n\n if not self.fallback_on_wrong_date:\n raise RateNotFoundError('{0} not in {1} bounds {2}/{3}'.format(\n date, currency, first_date, last_date))\n\n if date < first_date:\n fallback_date = first_date\n elif date > last_date:\n fallback_date = last_date\n else:\n raise AssertionError('Should never happen, bug in the code!')\n\n if self.verbose:\n print(r'/!\\ {0} not in {1} bounds {2}/{3}, falling back to {4}'.format(\n date, 
currency, first_date, last_date, fallback_date))\n\n date = fallback_date\n\n rate = self._rates[currency][date]\n if rate is None:\n raise RateNotFoundError('{0} has no rate for {1}'.format(currency, date))\n return rate"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef convert(self, amount, currency, new_currency='EUR', date=None):\n for c in currency, new_currency:\n if c not in self.currencies:\n raise ValueError('{0} is not a supported currency'.format(c))\n\n if date is None:\n date = self.bounds[currency].last_date\n else:\n try:\n date = date.date() # fallback if input was a datetime object\n except AttributeError:\n pass\n\n r0 = self._get_rate(currency, date)\n r1 = self._get_rate(new_currency, date)\n\n return float(amount) / r0 * r1", "response": "Convert amount from a currency to another currency."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngroup iterable by n elements.", "response": "def grouper(iterable, n, fillvalue=None):\n \"\"\"Group iterable by n elements.\n\n >>> for t in grouper('abcdefg', 3, fillvalue='x'):\n ... 
print(''.join(t))\n abc\n def\n gxx\n \"\"\"\n return list(zip_longest(*[iter(iterable)] * n, fillvalue=fillvalue))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef animate(frames, interval, name, iterations=2):\n for i in range(iterations):\n for frame in frames:\n frame = get_coded_text(frame)\n output = \"\\r{0} {1}\".format(frame, name)\n sys.stdout.write(output)\n sys.stdout.write(CLEAR_LINE)\n sys.stdout.flush()\n time.sleep(0.001 * interval)", "response": "Animate given frames for set number of iterations."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts the object to string", "response": "def tostring(self, cnf):\n \"\"\"Convert Cnf object to Dimacs cnf string\n \n cnf: Cnf object\n \n In the converted Cnf there will be only numbers for\n variable names. The conversion guarantees that the\n variables will be numbered alphabetically.\n \"\"\"\n self.varname_dict = {}\n self.varobj_dict = {}\n\n varis = set()\n for d in cnf.dis:\n for v in d:\n varis.add(v.name)\n\n ret = \"p cnf %d %d\" % (len(varis), len(cnf.dis))\n\n varis = dict(list(zip(sorted(list(varis)),list(map(str,list(range(1,len(varis)+1)))))))\n\n for v in varis:\n vo = Variable(v)\n self.varname_dict[vo] = varis[v]\n self.varobj_dict[varis[v]] = vo\n\n for d in cnf.dis:\n ret += \"\\n\"\n vnamelist = []\n for v in d:\n vnamelist.append((\"-\" if v.inverted else \"\") + varis[v.name])\n ret += \" \".join(vnamelist) + \" 0\"\n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nload the polynomial series for name and return it.", "response": "def load(self, name):\n \"\"\"[DEPRECATED] Load the polynomial series for `name` and return it.\"\"\"\n s = self.sets.get(name)\n if s is None:\n self.sets[name] = s = np.load(self.path('jpl-%s.npy' % name))\n return s"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function 
for\ncomputing the position of name at time tdb + tdb2.", "response": "def position(self, name, tdb, tdb2=0.0):\n \"\"\"[DEPRECATED] Compute the position of `name` at time ``tdb [+ tdb2]``.\n\n The position is returned as a NumPy array ``[x y z]``.\n\n The barycentric dynamical time `tdb` argument should be a float.\n If there are many dates you want computed, then make `tdb` an\n array, which is more efficient than calling this method multiple\n times; the return value will be a two-dimensional array giving a\n row of values for each coordinate.\n\n For extra precision, the time can be split into two floats; a\n popular choice is to use `tdb` for the integer or half-integer\n date, and `tdb2` to hold the remaining fraction.\n\n Consult the `names` attribute of this ephemeris for the values\n of `name` it supports, such as ``'mars'`` or ``'earthmoon'``.\n\n \"\"\"\n bundle = self.compute_bundle(name, tdb, tdb2)\n return self.position_from_bundle(bundle)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncompute the position and velocity of a name at a given dynamical time.", "response": "def position_and_velocity(self, name, tdb, tdb2=0.0):\n \"\"\"[DEPRECATED] Compute the position and velocity of `name` at ``tdb [+ tdb2]``.\n\n The position and velocity are returned in a 2-tuple::\n\n ([x y z], [xdot ydot zdot])\n\n The barycentric dynamical time `tdb` argument should be a float.\n If there are many dates you want computed, then make `tdb` an\n array, which is more efficient than calling this method multiple\n times; the return values will be two-dimensional arrays giving a\n row of values for each coordinate.\n\n For extra precision, the time can be split into two floats; a\n popular choice is to use `tdb` for the integer or half-integer\n date, and `tdb2` to hold the remaining fraction.\n\n Consult the `names` attribute of this ephemeris for the values\n of `name` it supports, such as ``'mars'`` or ``'earthmoon'``.\n\n 
\"\"\"\n bundle = self.compute_bundle(name, tdb, tdb2)\n position = self.position_from_bundle(bundle)\n velocity = self.velocity_from_bundle(bundle)\n return position, velocity"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncompute the Chebyshev intermediate values for a specific entry in the set of available Chebyshev entries.", "response": "def compute_bundle(self, name, tdb, tdb2=0.0):\n \"\"\"[DEPRECATED] Return a tuple of coefficients and parameters for `tdb`.\n\n The return value is a tuple that bundles together the\n coefficients and other Chebyshev intermediate values that are\n needed for the computation of either the position or velocity.\n The bundle can then be passed to either `position_from_bundle()`\n or `velocity_from_bundle()` to finish the computation. See the\n package-level documentation for details; most users will simply\n call `position()` or `position_and_velocity()` instead.\n\n The barycentric dynamical time `tdb` argument should be a float.\n If there are many dates you want computed, then make `tdb` an\n array, which is more efficient than calling this method multiple\n times; the return values will be arrays providing a value for\n each time in `tdb`.\n\n For extra precision, the time can be split into two floats; a\n popular choice is to use `tdb` for the integer or half-integer\n date, and `tdb2` to hold the remaining fraction.\n\n Consult the `names` attribute of this ephemeris for the values\n of `name` it supports, such as ``'mars'`` or ``'earthmoon'``.\n\n \"\"\"\n input_was_scalar = getattr(tdb, 'shape', ()) == ()\n if input_was_scalar:\n tdb = np.array((tdb,))\n # no need to deal with tdb2; numpy broadcast will add fine below.\n\n coefficient_sets = self.load(name)\n number_of_sets, axis_count, coefficient_count = coefficient_sets.shape\n\n jalpha, jomega = self.jalpha, self.jomega\n days_per_set = (jomega - jalpha) / number_of_sets\n # to keep precision, first subtract, then add\n index, 
offset = divmod((tdb - jalpha) + tdb2, days_per_set)\n index = index.astype(int)\n\n if (index < 0).any() or (number_of_sets < index).any():\n raise DateError('ephemeris %s only covers dates %.1f through %.1f'\n % (self.name, jalpha, jomega))\n\n omegas = (index == number_of_sets)\n index[omegas] -= 1\n offset[omegas] += days_per_set\n\n coefficients = np.rollaxis(coefficient_sets[index], 1)\n\n # Chebyshev recurrence:\n\n T = np.empty((coefficient_count, len(index)))\n T[0] = 1.0\n T[1] = t1 = 2.0 * offset / days_per_set - 1.0\n twot1 = t1 + t1\n for i in range(2, coefficient_count):\n T[i] = twot1 * T[i-1] - T[i-2]\n\n bundle = coefficients, days_per_set, T, twot1\n return bundle"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the position of the entry in the given bundle.", "response": "def position_from_bundle(self, bundle):\n \"\"\"[DEPRECATED] Return position, given the `coefficient_bundle()` return value.\"\"\"\n\n coefficients, days_per_set, T, twot1 = bundle\n return (T.T * coefficients).sum(axis=2)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning velocity given the coefficient_bundle return value.", "response": "def velocity_from_bundle(self, bundle):\n \"\"\"[DEPRECATED] Return velocity, given the `coefficient_bundle()` return value.\"\"\"\n\n coefficients, days_per_set, T, twot1 = bundle\n coefficient_count = coefficients.shape[2]\n\n # Chebyshev derivative:\n\n dT = np.empty_like(T)\n dT[0] = 0.0\n dT[1] = 1.0\n dT[2] = twot1 + twot1\n for i in range(3, coefficient_count):\n dT[i] = twot1 * dT[i-1] - dT[i-2] + T[i-1] + T[i-1]\n dT *= 2.0\n dT /= days_per_set\n\n return (dT.T * coefficients).sum(axis=2)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef read_record(self, n):\n self.file.seek(n * K - K)\n return self.file.read(K)", "response": "Return record n as 1,024 bytes."} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 function that can\nwrite data to file record n.", "response": "def write_record(self, n, data):\n \"\"\"Write `data` to file record `n`; records are indexed from 1.\"\"\"\n self.file.seek(n * K - K)\n return self.file.write(data)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef map_words(self, start, end):\n i, j = 8 * start - 8, 8 * end\n try:\n fileno = self.file.fileno()\n except (AttributeError, io.UnsupportedOperation):\n fileno = None\n if fileno is None:\n skip = 0\n self.file.seek(i)\n m = self.file.read(j - i)\n else:\n skip = i % mmap.ALLOCATIONGRANULARITY\n r = mmap.ACCESS_READ\n m = mmap.mmap(fileno, length=j-i+skip, access=r, offset=i-skip)\n if sys.version_info > (3,):\n m = memoryview(m) # so further slicing can return views\n return m, skip", "response": "Return a memory - map of the elements start through end."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef comments(self):\n record_numbers = range(2, self.fward)\n if not record_numbers:\n return ''\n data = b''.join(self.read_record(n)[0:1000] for n in record_numbers)\n try:\n return data[:data.find(b'\\4')].decode('ascii').replace('\\0', '\\n')\n except IndexError:\n raise ValueError('DAF file comment area is missing its EOT byte')\n except UnicodeDecodeError:\n raise ValueError('DAF file comment area is not ASCII text')", "response": "Return the text inside the comment area of the file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef read_array(self, start, end):\n f = self.file\n f.seek(8 * (start - 1))\n length = 1 + end - start\n data = f.read(8 * length)\n return ndarray(length, self.endian + 'd', data)", "response": "Return a numpy array from start to end inclusive indexed from 1."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following 
Python 3 function doing\ndef map_array(self, start, end):\n if self._array is None:\n self._map, skip = self.map_words(1, self.free - 1)\n assert skip == 0\n self._array = ndarray(self.free - 1, self.endian + 'd', self._map)\n return self._array[start - 1 : end]", "response": "Return a numpy array from start to end inclusive indexed from 1."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nyield the number of summaries and data for each record.", "response": "def summary_records(self):\n \"\"\"Yield (record_number, n_summaries, record_data) for each record.\n\n Readers will only use the second two values in each tuple.\n Writers can update the record using the `record_number`.\n\n \"\"\"\n record_number = self.fward\n unpack = self.summary_control_struct.unpack\n while record_number:\n data = self.read_record(record_number)\n next_number, previous_number, n_summaries = unpack(data[:24])\n yield record_number, n_summaries, data\n record_number = int(next_number)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nyielding name value pairs for each summary in the file.", "response": "def summaries(self):\n \"\"\"Yield (name, (value, value, ...)) for each summary in the file.\"\"\"\n length = self.summary_length\n step = self.summary_step\n for record_number, n_summaries, summary_data in self.summary_records():\n name_data = self.read_record(record_number + 1)\n for i in range(0, int(n_summaries) * step, step):\n j = self.summary_control_struct.size + i\n name = name_data[i:i+step].strip()\n data = summary_data[j:j+length]\n values = self.summary_struct.unpack(data)\n yield name, values"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd a new array to the DAF file.", "response": "def add_array(self, name, values, array):\n \"\"\"Add a new array to the DAF file.\n\n The summary will be initialized with the `name` and `values`,\n and will have its start word and end word fields set to 
point to\n where the `array` of floats has been appended to the file.\n\n \"\"\"\n f = self.file\n scs = self.summary_control_struct\n\n record_number = self.bward\n data = bytearray(self.read_record(record_number))\n next_record, previous_record, n_summaries = scs.unpack(data[:24])\n\n if n_summaries < self.summaries_per_record:\n summary_record = record_number\n name_record = summary_record + 1\n data[:24] = scs.pack(next_record, previous_record, n_summaries + 1)\n self.write_record(summary_record, data)\n else:\n summary_record = ((self.free - 1) * 8 + 1023) // 1024 + 1\n name_record = summary_record + 1\n free_record = summary_record + 2\n\n n_summaries = 0\n data[:24] = scs.pack(summary_record, previous_record, n_summaries)\n self.write_record(record_number, data)\n\n summaries = scs.pack(0, record_number, 1).ljust(1024, b'\\0')\n names = b'\\0' * 1024\n self.write_record(summary_record, summaries)\n self.write_record(name_record, names)\n\n self.bward = summary_record\n self.free = (free_record - 1) * 1024 // 8 + 1\n\n start_word = self.free\n f.seek((start_word - 1) * 8)\n array = numpy_array(array) # TODO: force correct endian\n f.write(array.view())\n end_word = f.tell() // 8\n\n self.free = end_word + 1\n self.write_file_record()\n\n values = values[:self.nd + self.ni - 2] + (start_word, end_word)\n\n base = 1024 * (summary_record - 1)\n offset = int(n_summaries) * self.summary_step\n f.seek(base + scs.size + offset)\n f.write(self.summary_struct.pack(*values))\n f.seek(base + 1024 + offset)\n f.write(name[:self.summary_length].ljust(self.summary_step, b' '))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncloses this SPK file.", "response": "def close(self):\n \"\"\"Close this SPK file.\"\"\"\n self.daf.file.close()\n for segment in self.segments:\n if hasattr(segment, '_data'):\n del segment._data\n self.daf._array = None\n self.daf._map = None"} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 function for\nreturning a textual description of the segment.", "response": "def describe(self, verbose=True):\n \"\"\"Return a textual description of the segment.\"\"\"\n center = titlecase(target_names.get(self.center, 'Unknown center'))\n target = titlecase(target_names.get(self.target, 'Unknown target'))\n text = ('{0.start_jd:.2f}..{0.end_jd:.2f} {1} ({0.center})'\n ' -> {2} ({0.target})'.format(self, center, target))\n if verbose:\n text += ('\\n frame={0.frame} data_type={0.data_type} source={1}'\n .format(self, self.source.decode('ascii')))\n return text"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncomputing the component values for the time tdb plus tdb2.", "response": "def compute(self, tdb, tdb2=0.0):\n \"\"\"Compute the component values for the time `tdb` plus `tdb2`.\"\"\"\n for position in self.generate(tdb, tdb2):\n return position"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef describe(self, verbose=True):\n body = titlecase(target_names.get(self.body, 'Unknown body'))\n text = ('{0.start_jd:.2f}..{0.end_jd:.2f} frame={0.frame}'\n ' {1} ({0.body})'.format(self, body))\n if verbose:\n text += ('\\n data_type={0.data_type} source={1}'\n .format(self, self.source.decode('ascii')))\n return text", "response": "Return a textual description of the segment."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _load(self):\n if self.data_type == 2:\n component_count = 3\n else:\n raise ValueError('only binary PCK data type 2 is supported')\n\n init, intlen, rsize, n = self.daf.read_array(self.end_i - 3, self.end_i)\n initial_epoch = jd(init)\n interval_length = intlen / S_PER_DAY\n coefficient_count = int(rsize - 2) // component_count\n coefficients = self.daf.map_array(self.start_i, self.end_i - 4)\n\n coefficients.shape = (int(n), int(rsize))\n coefficients = coefficients[:,2:] # 
ignore MID and RADIUS elements\n coefficients.shape = (int(n), component_count, coefficient_count)\n coefficients = rollaxis(coefficients, 1)\n return initial_epoch, interval_length, coefficients", "response": "Load the PCK data into memory using a NumPy array."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef compute(self, tdb, tdb2, derivative=True):\n scalar = not getattr(tdb, 'shape', 0) and not getattr(tdb2, 'shape', 0)\n if scalar:\n tdb = array((tdb,))\n\n data = self._data\n if data is None:\n self._data = data = self._load()\n\n initial_epoch, interval_length, coefficients = data\n component_count, n, coefficient_count = coefficients.shape\n\n # Subtracting tdb before adding tdb2 affords greater precision.\n index, offset = divmod((tdb - initial_epoch) + tdb2, interval_length)\n index = index.astype(int)\n\n if (index < 0).any() or (index > n).any():\n final_epoch = initial_epoch + interval_length * n\n raise ValueError('segment only covers dates %.1f through %.1f'\n % (initial_epoch, final_epoch))\n\n omegas = (index == n)\n index[omegas] -= 1\n offset[omegas] += interval_length\n\n coefficients = coefficients[:,index]\n\n # Chebyshev polynomial.\n\n T = empty((coefficient_count, len(index)))\n T[0] = 1.0\n T[1] = t1 = 2.0 * offset / interval_length - 1.0\n twot1 = t1 + t1\n for i in range(2, coefficient_count):\n T[i] = twot1 * T[i-1] - T[i-2]\n\n components = (T.T * coefficients).sum(axis=2)\n if scalar:\n components = components[:,0]\n\n if not derivative:\n return components\n\n # Chebyshev differentiation.\n\n dT = empty_like(T)\n dT[0] = 0.0\n dT[1] = 1.0\n if coefficient_count > 2:\n dT[2] = twot1 + twot1\n for i in range(3, coefficient_count):\n dT[i] = twot1 * dT[i-1] - dT[i-2] + T[i-1] + T[i-1]\n dT *= 2.0\n dT /= interval_length\n\n rates = (dT.T * coefficients).sum(axis=2)\n if scalar:\n rates = rates[:,0]\n\n return components, rates", "response": "Generate the angle and derivative of a Chebyshev 
time-database entry."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nShow system notification with duration t (ms)", "response": "def notify(msg, msg_type=0, t=None):\n \"Show system notification with duration t (ms)\"\n if platform.system() == 'Darwin':\n command = notify_command_osx(msg, msg_type, t)\n else:\n command = notify_command_linux(msg, t)\n os.system(command.encode('utf-8'))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef geturls_new_api(song_ids):\n br_to_quality = {128000: 'MD 128k', 320000: 'HD 320k'}\n alters = NetEase().songs_detail_new_api(song_ids)\n urls = [alter['url'] for alter in alters]\n return urls", "response": "Get urls for new api"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nvisit a function call.", "response": "def visit_Call(self, node):\n \"\"\"\n Visit a function call.\n\n We expect every logging statement and string format to be a function call.\n\n \"\"\"\n # CASE 1: We're in a logging statement\n if self.within_logging_statement():\n if self.within_logging_argument() and self.is_format_call(node):\n self.violations.append((node, STRING_FORMAT_VIOLATION))\n super(LoggingVisitor, self).generic_visit(node)\n return\n\n logging_level = self.detect_logging_level(node)\n\n if logging_level and self.current_logging_level is None:\n self.current_logging_level = logging_level\n\n # CASE 2: We're in some other statement\n if logging_level is None:\n super(LoggingVisitor, self).generic_visit(node)\n return\n\n # CASE 3: We're entering a new logging statement\n self.current_logging_call = node\n\n if logging_level == \"warn\":\n self.violations.append((node, WARN_VIOLATION))\n\n self.check_exc_info(node)\n\n for index, child in enumerate(iter_child_nodes(node)):\n if index == 1:\n self.current_logging_argument = child\n if index >= 1:\n self.check_exception_arg(child)\n if index > 1 and isinstance(child, 
keyword) and child.arg == \"extra\":\n self.current_extra_keyword = child\n\n super(LoggingVisitor, self).visit(child)\n\n self.current_logging_argument = None\n self.current_extra_keyword = None\n\n self.current_logging_call = None\n self.current_logging_level = None"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef visit_BinOp(self, node):\n if self.within_logging_statement() and self.within_logging_argument():\n # handle percent format\n if isinstance(node.op, Mod):\n self.violations.append((node, PERCENT_FORMAT_VIOLATION))\n # handle string concat\n if isinstance(node.op, Add):\n self.violations.append((node, STRING_CONCAT_VIOLATION))\n super(LoggingVisitor, self).generic_visit(node)", "response": "Process binary operations while processing the first logging argument."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef visit_JoinedStr(self, node):\n if version_info >= (3, 6):\n if self.within_logging_statement():\n if any(isinstance(i, FormattedValue) for i in node.values):\n if self.within_logging_argument():\n self.violations.append((node, FSTRING_VIOLATION))\n super(LoggingVisitor, self).generic_visit(node)", "response": "Process joined string arguments."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef detect_logging_level(self, node):\n try:\n if self.get_id_attr(node.func.value) == \"warnings\":\n return None\n # NB: We could also look at the argument signature or the target attribute\n if node.func.attr in LOGGING_LEVELS:\n return node.func.attr\n except AttributeError:\n pass\n return None", "response": "Detects whether an AST Call is a logging call."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_id_attr(self, value):\n if not hasattr(value, \"id\") and hasattr(value, \"value\"):\n value = value.value\n return 
value.id", "response": "Check if value has id attribute and return it."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks if the node is a bare exception name from an except block.", "response": "def is_bare_exception(self, node):\n \"\"\"\n Checks if the node is a bare exception name from an except block.\n\n \"\"\"\n return isinstance(node, Name) and node.id in self.current_except_names"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck if the node is the expression str or unicode.", "response": "def is_str_exception(self, node):\n \"\"\"\n Checks if the node is the expression str(e) or unicode(e), where e is an exception name from an except block\n\n \"\"\"\n return (\n isinstance(node, Call)\n and isinstance(node.func, Name)\n and node.func.id in ('str', 'unicode')\n and node.args\n and self.is_bare_exception(node.args[0])\n )"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef check_exc_info(self, node):\n if self.current_logging_level not in ('error', 'exception'):\n return\n\n for kw in node.keywords:\n if kw.arg == 'exc_info':\n if self.current_logging_level == 'error':\n violation = ERROR_EXC_INFO_VIOLATION\n else:\n violation = REDUNDANT_EXC_INFO_VIOLATION\n self.violations.append((node, violation))", "response": "Checks if the exc_info keyword is used with logging.error or logging.
exception."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndeletes file from database only if needed.", "response": "def delete_file_if_needed(instance, filefield_name):\n \"\"\"Delete file from database only if needed.\n\n When editing and the filefield is a new file,\n deletes the previous file (if any) from the database.\n Call this function immediately BEFORE saving the instance.\n \"\"\"\n if instance.pk:\n model_class = type(instance)\n\n # Check if there is a file for the instance in the database\n if model_class.objects.filter(pk=instance.pk).exclude(\n **{'%s__isnull' % filefield_name: True}\n ).exclude(\n **{'%s__exact' % filefield_name: ''}\n ).exists():\n old_file = getattr(\n model_class.objects.only(filefield_name).get(pk=instance.pk),\n filefield_name\n )\n else:\n old_file = None\n\n # If there is a file, delete it if needed\n if old_file:\n # When editing and NOT changing the file,\n # old_file.name == getattr(instance, filefield_name)\n # returns True. 
In this case, the file must NOT be deleted.\n # If the file IS being changed, the comparison returns False.\n # In this case, the old file MUST be deleted.\n if (old_file.name == getattr(instance, filefield_name)) is False:\n DatabaseFileStorage().delete(old_file.name)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndeleting the file from the database.", "response": "def delete_file(instance, filefield_name):\n \"\"\"Delete the file (if any) from the database.\n\n Call this function immediately AFTER deleting the instance.\n \"\"\"\n file_instance = getattr(instance, filefield_name)\n if file_instance:\n DatabaseFileStorage().delete(file_instance.name)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef db_file_widget(cls):\n\n def get_link_display(url):\n unquoted = unquote(url.split('%2F')[-1])\n if sys.version_info.major == 2: # python 2\n from django.utils.encoding import force_unicode\n unquoted = force_unicode(unquoted)\n return escape(unquoted)\n\n def get_template_substitution_values(self, value):\n # Used by Django < 1.11\n subst = super(cls, self).get_template_substitution_values(value)\n subst['initial'] = get_link_display(value.url)\n return subst\n setattr(cls,\n 'get_template_substitution_values',\n get_template_substitution_values)\n\n def get_context(self, name, value, attrs):\n context = super(cls, self).get_context(name, value, attrs)\n if value and hasattr(value, 'url'):\n context['widget']['display'] = get_link_display(value.url)\n return context\n setattr(cls, 'get_context', get_context)\n\n return cls", "response": "Edit the download-link inner text."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef rendered_content(self):\n cmd_options = self.cmd_options.copy()\n return render_pdf_from_template(\n self.resolve_template(self.template_name),\n
self.resolve_template(self.header_template),\n self.resolve_template(self.footer_template),\n context=self.resolve_context(self.context_data),\n request=self._request,\n cmd_options=cmd_options,\n cover_template=self.resolve_template(self.cover_template)\n\n )", "response": "Returns the rendered content for the template and context."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a PDF response with a template rendered with the given context.", "response": "def render_to_response(self, context, **response_kwargs):\n \"\"\"\n Returns a PDF response with a template rendered with the given context.\n \"\"\"\n filename = response_kwargs.pop('filename', None)\n cmd_options = response_kwargs.pop('cmd_options', None)\n\n if issubclass(self.response_class, PDFTemplateResponse):\n if filename is None:\n filename = self.get_filename()\n\n if cmd_options is None:\n cmd_options = self.get_cmd_options()\n\n return super(PDFTemplateView, self).render_to_response(\n context=context, filename=filename,\n show_content_in_browser=self.show_content_in_browser,\n header_template=self.header_template,\n footer_template=self.footer_template,\n cmd_options=cmd_options,\n cover_template=self.cover_template,\n **response_kwargs\n )\n else:\n return super(PDFTemplateView, self).render_to_response(\n context=context,\n **response_kwargs\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert a list of options into a list of command-line arguments.", "response": "def _options_to_args(**options):\n \"\"\"\n Converts ``options`` into a list of command-line arguments.\n Skip arguments where no value is provided\n For flag-type (No argument) variables, pass only the name and only then if the value is True\n \"\"\"\n flags = []\n for name in sorted(options):\n value = options[name]\n formatted_flag = '--%s' % name if len(name) > 1 else '-%s' % name\n formatted_flag = formatted_flag.replace('_', '-')\n accepts_no_arguments =
formatted_flag in NO_ARGUMENT_OPTIONS\n if value is None or (value is False and accepts_no_arguments):\n continue\n flags.append(formatted_flag)\n if accepts_no_arguments:\n continue\n flags.append(six.text_type(value))\n return flags"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting html to PDF using http://wkhtmltopdf.org.", "response": "def wkhtmltopdf(pages, output=None, **kwargs):\n \"\"\"\n Converts html to PDF using http://wkhtmltopdf.org/.\n\n pages: List of file paths or URLs of the html to be converted.\n output: Optional output file path. If None, the output is returned.\n **kwargs: Passed to wkhtmltopdf via _extra_args() (See\n https://github.com/antialize/wkhtmltopdf/blob/master/README_WKHTMLTOPDF\n for acceptable args.)\n Kwargs is passed through as arguments. e.g.:\n {'footer_html': 'http://example.com/foot.html'}\n becomes\n '--footer-html http://example.com/foot.html'\n\n Where there is no value passed, use True. e.g.:\n {'disable_javascript': True}\n becomes:\n '--disable-javascript'\n\n To disable a default option, use None.
e.g.:\n {'quiet': None}\n becomes:\n ''\n\n example usage:\n wkhtmltopdf(pages=['/tmp/example.html'],\n dpi=300,\n orientation='Landscape',\n disable_javascript=True)\n \"\"\"\n if isinstance(pages, six.string_types):\n # Support a single page.\n pages = [pages]\n\n if output is None:\n # Standard output.\n output = '-'\n has_cover = kwargs.pop('has_cover', False)\n\n # Default options:\n options = getattr(settings, 'WKHTMLTOPDF_CMD_OPTIONS', None)\n if options is None:\n options = {'quiet': True}\n else:\n options = copy(options)\n options.update(kwargs)\n\n # Force --encoding utf8 unless the user has explicitly overridden this.\n options.setdefault('encoding', 'utf8')\n\n env = getattr(settings, 'WKHTMLTOPDF_ENV', None)\n if env is not None:\n env = dict(os.environ, **env)\n\n cmd = 'WKHTMLTOPDF_CMD'\n cmd = getattr(settings, cmd, os.environ.get(cmd, 'wkhtmltopdf'))\n\n # Adding 'cover' option to add cover_file to the pdf to generate.\n if has_cover:\n pages.insert(0, 'cover')\n\n ck_args = list(chain(shlex.split(cmd),\n _options_to_args(**options),\n list(pages),\n [output]))\n ck_kwargs = {'env': env}\n # Handling of fileno() attr.
based on https://github.com/GrahamDumpleton/mod_wsgi/issues/85\n try:\n i = sys.stderr.fileno()\n ck_kwargs['stderr'] = sys.stderr\n except (AttributeError, IOError):\n # can't call fileno() on mod_wsgi stderr object\n pass\n\n return check_output(ck_args, **ck_kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef http_quote(string):\n if isinstance(string, six.text_type):\n try:\n import unidecode\n except ImportError:\n pass\n else:\n string = unidecode.unidecode(string)\n string = string.encode('ascii', 'replace')\n # Wrap in double-quotes for ; , and the like\n string = string.replace(b'\\\\', b'\\\\\\\\').replace(b'\"', b'\\\\\"')\n return '\"{0!s}\"'.format(string.decode())", "response": "Given a unicode string will do its dandiest to give you back a\n valid ascii charset string you can use in say http headers and the\n like."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_absolute_paths(content):\n overrides = [\n {\n 'root': settings.MEDIA_ROOT,\n 'url': settings.MEDIA_URL,\n },\n {\n 'root': settings.STATIC_ROOT,\n 'url': settings.STATIC_URL,\n }\n ]\n has_scheme = re.compile(r'^[^:/]+://')\n\n for x in overrides:\n if not x['url'] or has_scheme.match(x['url']):\n continue\n\n if not x['root'].endswith('/'):\n x['root'] += '/'\n\n occur_pattern = '''[\"|']({0}.*?)[\"|']'''\n occurences = re.findall(occur_pattern.format(x['url']), content)\n occurences = list(set(occurences)) # Remove dups\n for occur in occurences:\n content = content.replace(occur,\n pathname2fileurl(x['root']) +\n occur[len(x['url']):])\n\n return content", "response": "Convert all MEDIA files into a file://URL paths in order to\n correctly get it displayed in PDFs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef match(self, text):\n\n match_obj = None\n if self.fullmatch:\n match_obj = 
self.regex_obj.fullmatch(text)\n else:\n match_obj = self.regex_obj.search(text)\n\n if match_obj == None:\n return None\n matches = match_obj.groupdict()\n for key,match in matches.items():\n try:\n if self.type_mapper[key] == 'int':\n matches[key] = int(match)\n if self.type_mapper[key] == 'float':\n matches[key] = float(match)\n except (TypeError, KeyError) as e:\n pass\n return matches", "response": "Returns a dictionary of variable names specified in pattern and their corresponding values."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef configure(module=None, prefix='MONGODB_', **kwargs):\n if module is not None and isinstance(module, types.ModuleType):\n # Search module for MONGODB_* attributes and converting them\n # to _Options' values, ex: MONGODB_PORT ==> port.\n attrs = ((attr.replace(prefix, '').lower(), value)\n for attr, value in vars(module).items()\n if attr.startswith(prefix))\n\n _Options._configure(**dict(attrs))\n elif kwargs:\n _Options._configure(**kwargs)", "response": "Sets default values for class Meta declarations."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate class - level defaults for the given container.", "response": "def _configure(cls, **defaults):\n \"\"\"Updates class-level defaults for :class:`_Options` container.\"\"\"\n for attr in defaults:\n setattr(cls, attr, defaults[attr])"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconverting a given string from CamelCase to under_score.", "response": "def to_underscore(string):\n \"\"\"Converts a given string from CamelCase to under_score.\n\n >>> to_underscore('FooBar')\n 'foo_bar'\n \"\"\"\n new_string = re.sub(r'([A-Z]+)([A-Z][a-z])', r'\\1_\\2', string)\n new_string = re.sub(r'([a-z\\d])([A-Z])', r'\\1_\\2', new_string)\n return new_string.lower()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nbuild all 
indices listed in model's Meta class.", "response": "def auto_index(mcs):\n \"\"\"Builds all indices, listed in model's Meta class.\n\n >>> class SomeModel(Model)\n ... class Meta:\n ... indices = (\n ... Index('foo'),\n ... )\n\n .. note:: this will result in calls to\n :meth:`pymongo.collection.Collection.ensure_index`\n method at import time, so import all your models up\n front.\n \"\"\"\n for index in mcs._meta.indices:\n index.ensure(mcs.collection)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef find(self, *args, **kwargs):\n return Cursor(self, *args, wrap=self.document_class, **kwargs)", "response": "Same as pymongo.collection.Collection.find except it returns the right document class."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse_file(self, file_path, currency) -> List[PriceModel]:\n # load file\n # read csv into memory?\n contents = self.load_file(file_path)\n prices = []\n\n # parse price elements\n for line in contents:\n price = self.parse_line(line)\n assert isinstance(price, PriceModel)\n price.currency = currency\n prices.append(price)\n\n return prices", "response": "Load and parse a
csv file and return a list of PriceModels."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nload the content of the text file", "response": "def load_file(self, file_path) -> List[str]:\n \"\"\" Loads the content of the text file \"\"\"\n content = []\n content = read_lines_from_file(file_path)\n return content"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses a CSV line into a price element", "response": "def parse_line(self, line: str) -> PriceModel:\n \"\"\" Parse a CSV line into a price element \"\"\"\n line = line.rstrip()\n parts = line.split(',')\n\n result = PriceModel()\n\n # symbol\n result.symbol = self.translate_symbol(parts[0])\n\n # value\n result.value = Decimal(parts[1])\n\n # date\n date_str = parts[2]\n date_str = date_str.replace('\"', '')\n date_parts = date_str.split('/')\n\n year_str = date_parts[2]\n month_str = date_parts[1]\n day_str = date_parts[0]\n\n logging.debug(f\"parsing {date_parts} into date\")\n result.datetime = datetime(int(year_str), int(month_str), int(day_str))\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntranslating the incoming symbol into locally-used", "response": "def translate_symbol(self, in_symbol: str) -> str:\n \"\"\" translate the incoming symbol into locally-used \"\"\"\n # read all mappings from the db\n if not self.symbol_maps:\n self.__load_symbol_maps()\n # translate the incoming symbol\n result = self.symbol_maps[in_symbol] if in_symbol in self.symbol_maps else in_symbol\n\n return result"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef __load_symbol_maps(self):\n repo = SymbolMapRepository(self.__get_session())\n all_maps = repo.get_all()\n self.symbol_maps = {}\n for item in all_maps:\n self.symbol_maps[item.in_symbol] = item.out_symbol", "response": "Loads all symbol maps from db"} {"SOURCE": "codesearchnet", "instruction":
"Can you generate a brief explanation for the following Python 3 code\ndef __get_session(self):\n if not self.session:\n self.session = dal.get_default_session()\n return self.session", "response": "Get the db session"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef import_csv(filepath: str, currency: str):\n logger.debug(f\"currency = {currency}\")\n # auto-convert to uppercase.\n currency = currency.upper()\n\n app = PriceDbApplication()\n app.logger = logger\n app.import_prices(filepath, currency)", "response": "Imports prices from CSV file"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndisplaying last price for symbol if provided", "response": "def last(symbol: str):\n \"\"\" displays last price, for symbol if provided \"\"\"\n app = PriceDbApplication()\n\n # convert to uppercase\n if symbol:\n symbol = symbol.upper()\n # extract namespace\n sec_symbol = SecuritySymbol(\"\", \"\")\n sec_symbol.parse(symbol)\n\n latest = app.get_latest_price(sec_symbol)\n assert isinstance(latest, PriceModel)\n print(f\"{latest}\")\n else:\n # Show the latest prices available for all securities.\n latest = app.get_latest_prices()\n for price in latest:\n print(f\"{price}\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef list_prices(date, currency, last):\n app = PriceDbApplication()\n app.logger = logger\n\n if last:\n # fetch only the last prices\n prices = app.get_latest_prices()\n else:\n prices = app.get_prices(date, currency)\n for price in prices:\n print(price)\n\n print(f\"{len(prices)} records found.\")", "response": "Display all prices for a given date and currency."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef download(ctx, help: bool, symbol: str, namespace: str, agent: str, currency: str):\n if help:\n click.echo(ctx.get_help())\n ctx.exit()\n\n app = 
PriceDbApplication()\n app.logger = logger\n\n if currency:\n currency = currency.strip()\n currency = currency.upper()\n\n # Otherwise download the prices for securities listed in the database.\n app.download_prices(currency=currency, agent=agent, symbol=symbol, namespace=namespace)", "response": "Download latest prices for a given symbol."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef prune(symbol: str, all: str):\n app = PriceDbApplication()\n app.logger = logger\n count = 0\n\n if symbol is not None:\n sec_symbol = SecuritySymbol(\"\", \"\")\n sec_symbol.parse(symbol)\n\n deleted = app.prune(sec_symbol)\n if deleted:\n count = 1\n else:\n count = app.prune_all()\n\n print(f\"Removed {count} old price entries.\")", "response": "Delete old prices leaving just the last."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_default_session():\n from .config import Config, ConfigKeys\n\n db_path = Config().get(ConfigKeys.price_database)\n if not db_path:\n raise ValueError(\"Price database not set in the configuration file!\")\n return get_session(db_path)", "response": "Return the default session."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nadd a new mapping between two symbols.", "response": "def add_map(incoming, outgoing):\n \"\"\" Creates a symbol mapping \"\"\"\n db_path = Config().get(ConfigKeys.pricedb_path)\n session = get_session(db_path)\n\n new_map = SymbolMap()\n new_map.in_symbol = incoming\n new_map.out_symbol = outgoing\n\n session.add(new_map)\n session.commit()\n click.echo(\"Record saved.\")"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef list_maps():\n db_path = Config().get(ConfigKeys.price_database)\n session = get_session(db_path)\n\n maps = session.query(SymbolMap).all()\n for item in maps:\n click.echo(item)", 
"response": "Displays all symbol maps"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfinding the map by in-symbol", "response": "def get_by_id(self, symbol: str) -> SymbolMap:\n \"\"\" Finds the map by in-symbol \"\"\"\n return self.query.filter(SymbolMap.in_symbol == symbol).first()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nread text lines from a file", "response": "def read_lines_from_file(file_path: str) -> List[str]:\n \"\"\" Read text lines from a file \"\"\"\n # check if the file exists?\n with open(file_path) as csv_file:\n content = csv_file.readlines()\n return content"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef map_entity(self, entity: dal.Price) -> PriceModel:\n if not entity:\n return None\n\n result = PriceModel()\n result.currency = entity.currency\n\n # date/time\n dt_string = entity.date\n format_string = \"%Y-%m-%d\"\n if entity.time:\n dt_string += f\"T{entity.time}\"\n format_string += \"T%H:%M:%S\"\n price_datetime = datetime.strptime(dt_string, format_string)\n result.datum = Datum()\n result.datum.from_datetime(price_datetime)\n assert isinstance(result.datum, Datum)\n\n #result.namespace = entity.namespace\n #result.symbol = entity.symbol\n result.symbol = SecuritySymbol(entity.namespace, entity.symbol)\n\n # Value\n value = Decimal(entity.value) / Decimal(entity.denom)\n result.value = Decimal(value)\n\n return result", "response": "Map the price entity to a PriceModel object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef map_model(self, model: PriceModel) -> Price:\n # assert isinstance(model, PriceModel)\n assert isinstance(model.symbol, SecuritySymbol)\n assert isinstance(model.datum, Datum)\n\n entity = Price()\n\n # Format date as ISO string\n date_iso = f\"{model.datum.value.year}-{model.datum.value.month:02d}-{model.datum.value.day:02d}\"\n
entity.date = date_iso\n\n entity.time = f\"{model.datum.value.hour:02d}:{model.datum.value.minute:02d}:{model.datum.value.second:02d}\"\n\n # Symbol\n # properly mapped symbols have a namespace, except for the US markets\n # TODO check this with .csv import\n if model.symbol.namespace:\n entity.namespace = model.symbol.namespace.upper()\n entity.symbol = model.symbol.mnemonic.upper()\n\n assert isinstance(model.value, Decimal)\n # Find number of decimal places\n dec_places = abs(model.value.as_tuple().exponent)\n entity.denom = 10 ** dec_places\n # Price value\n entity.value = int(model.value * entity.denom)\n\n # Currency\n entity.currency = model.currency.upper()\n\n # self.logger.debug(f\"{entity}\")\n return entity", "response": "Parse a PriceModel into a Price entity."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef __read_config(self, file_path: str):\n if not os.path.exists(file_path):\n raise FileNotFoundError(f\"File path not found: {file_path}\")\n # check if file exists\n if not os.path.isfile(file_path):\n self.logger.error(f\"file not found: {file_path}\")\n raise FileNotFoundError(f\"configuration file not found {file_path}\")\n\n self.config.read(file_path)", "response": "Read the config file"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the default config path from resources", "response": "def __get_config_template_path(self) -> str:\n \"\"\" gets the default config path from resources \"\"\"\n filename = resource_filename(\n Requirement.parse(package_name),\n template_path + config_filename)\n return filename"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef __create_user_config(self):\n src_path = self.__get_config_template_path()\n src = os.path.abspath(src_path)\n\n if not os.path.exists(src):\n message = f\"Config template not found {src}\"\n 
self.logger.error(message)\n raise FileNotFoundError(message)\n\n dst = os.path.abspath(self.get_config_path())\n\n shutil.copyfile(src, dst)\n\n if not os.path.exists(dst):\n raise FileNotFoundError(\"Config file could not be copied to user dir!\")", "response": "Copy the config template into user's directory"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the path where the active config file is expected.", "response": "def get_config_path(self) -> str:\n \"\"\"\n Returns the path where the active config file is expected.\n This is the user's profile folder.\n \"\"\"\n dst_dir = self.__get_user_path()\n dst = dst_dir + \"/\" + config_filename\n return dst"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread the contents of the config file and return it.", "response": "def get_contents(self) -> str:\n \"\"\" Reads the contents of the config file \"\"\"\n content = None\n # with open(file_path) as cfg_file:\n # contents = cfg_file.read()\n\n # Dump the current contents into an in-memory file.\n in_memory = io.StringIO(\"\")\n self.config.write(in_memory)\n in_memory.seek(0)\n content = in_memory.read()\n # log(DEBUG, \"config content: %s\", content)\n in_memory.close()\n return content"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef set(self, option: ConfigKeys, value):\n assert isinstance(option, ConfigKeys)\n\n # As currently we only have 1 section.\n section = SECTION\n self.config.set(section, option.name, value)\n self.save()", "response": "Sets a value in config"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get(self, option: ConfigKeys):\n assert isinstance(option, ConfigKeys)\n\n # Currently only one section is used\n section = SECTION\n return self.config.get(section, option.name)", "response": "Retrieves a config value"} {"SOURCE": "codesearchnet",
"instruction": "Implement a function in Python 3 to\nsave the current configuration file", "response": "def save(self):\n \"\"\" Save the config file \"\"\"\n file_path = self.get_config_path()\n contents = self.get_contents()\n with open(file_path, mode='w') as cfg_file:\n cfg_file.write(contents)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse(self, symbol: str) -> (str, str):\n symbol_parts = symbol.split(\":\")\n namespace = None\n mnemonic = symbol\n\n if len(symbol_parts) > 1:\n namespace = symbol_parts[0]\n mnemonic = symbol_parts[1]\n\n self.namespace = namespace\n self.mnemonic = mnemonic\n\n return namespace, mnemonic", "response": "Splits the symbol into namespace and mnemonic, and returns both."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_price(self, price: PriceModel):\n # assert isinstance(price, PriceModel)\n\n if not price:\n raise ValueError(\"Cannot add price.
The received model is null!\")\n\n mapper = mappers.PriceMapper()\n entity = mapper.map_model(price)\n\n self.add_price_entity(entity)", "response": "Adds a price record to the current record."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding the price entity to the database.", "response": "def add_price_entity(self, price: dal.Price):\n \"\"\" Adds the price \"\"\"\n from decimal import Decimal\n\n # check if the price already exists in db.\n repo = self.get_price_repository()\n existing = (\n repo.query\n .filter(dal.Price.namespace == price.namespace)\n .filter(dal.Price.symbol == price.symbol)\n .filter(dal.Price.date == price.date)\n .filter(dal.Price.time == price.time)\n .first()\n )\n if existing:\n # Update existing price.\n new_value = Decimal(price.value) / Decimal(price.denom)\n self.logger.info(f\"Exists: {price}\")\n if price.currency != existing.currency:\n raise ValueError(\n f\"The currency is different for price {price}!\")\n if existing.value != price.value:\n existing.value = price.value\n self.logger.info(f\"Updating to {new_value}.\")\n if existing.denom != price.denom:\n existing.denom = price.denom\n else:\n # Insert new price\n self.session.add(price)\n self.logger.info(f\"Added {price}\")"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef download_price(self, symbol: str, currency: str, agent: str) -> PriceModel:\n price = self.__download_price(symbol, currency, agent)\n self.save()\n return price", "response": "Download and save price online"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef download_prices(self, **kwargs):\n currency: str = kwargs.get('currency', None)\n if currency:\n currency = currency.upper()\n agent: str = kwargs.get('agent', None)\n if agent:\n agent = agent.upper()\n symbol: str = kwargs.get('symbol', None)\n if symbol:\n symbol = symbol.upper()\n namespace: 
str = kwargs.get('namespace', None)\n if namespace:\n namespace = namespace.upper()\n securities = self.__get_securities(currency, agent, symbol, namespace)\n #self.logger.debug(securities)\n\n for sec in securities:\n symbol = f\"{sec.namespace}:{sec.symbol}\"\n currency = sec.currency\n agent = sec.updater\n #self.logger.debug(f\"Initiating download for {symbol} {currency} with {agent}...\")\n try:\n self.__download_price(symbol.strip(), currency, agent)\n except Exception as e:\n self.logger.error(str(e))\n self.save()", "response": "Downloads all the prices that are listed in the Security table."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nimports all prices from a CSV file into the database.", "response": "def import_prices(self, file_path: str, currency_symbol: str):\n \"\"\" Incomplete \"\"\"\n from .csv import CsvParser\n\n assert isinstance(file_path, str)\n assert isinstance(currency_symbol, str)\n\n self.logger.debug(f\"Importing {file_path}\")\n parser = CsvParser()\n prices = parser.parse_file(file_path, currency_symbol)\n\n counter = 0\n session = self.session\n # Create insert statements\n mapper = mappers.PriceMapper()\n for price in prices:\n new_price = mapper.map_model(price)\n self.add_price_entity(new_price)\n counter += 1\n # Save all to database\n session.commit()\n print(f\"{counter} records inserted.\")"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef session(self):\n if not self.__session:\n self.__session = dal.get_default_session()\n return self.__session", "response": "Returns the current db session"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_prices(self, date: str, currency: str) -> List[PriceModel]:\n from .repositories import PriceRepository\n\n session = self.session\n repo = PriceRepository(session)\n query = repo.query\n if date:\n query = query.filter(dal.Price.date == 
date)\n if currency:\n query = query.filter(dal.Price.currency == currency)\n # Sort by symbol.\n query = query.order_by(dal.Price.namespace, dal.Price.symbol)\n price_entities = query.all()\n\n mapper = mappers.PriceMapper()\n result = []\n for entity in price_entities:\n model = mapper.map_entity(entity)\n result.append(model)\n return result", "response": "Fetches all the prices for the given date and currency."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_prices_on(self, on_date: str, namespace: str, symbol: str):\n repo = self.get_price_repository()\n query = (\n repo.query.filter(dal.Price.namespace == namespace)\n .filter(dal.Price.symbol == symbol)\n .filter(dal.Price.date == on_date)\n .order_by(dal.Price.time.desc())\n )\n result = query.first()\n # logging.debug(result)\n return result", "response": "Returns the latest price on the date"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the price repository object.", "response": "def get_price_repository(self):\n \"\"\" Price repository \"\"\"\n from .repositories import PriceRepository\n\n if not self.price_repo:\n self.price_repo = PriceRepository(self.session)\n return self.price_repo"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_security_repository(self):\n from .repositories import SecurityRepository\n\n if not self.security_repo:\n self.security_repo = SecurityRepository(self.session)\n return self.security_repo", "response": "Returns the security repository object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef prune_all(self) -> int:\n from .repositories import PriceRepository\n\n # get all symbols that have prices\n repo = PriceRepository()\n items = repo.query.distinct(dal.Price.namespace, dal.Price.symbol).all()\n # self.logger.debug(items)\n 
count = 0\n\n for item in items:\n symbol = SecuritySymbol(item.namespace, item.symbol)\n deleted = self.prune(symbol)\n if deleted:\n count += 1\n\n return count", "response": "Prune all historical prices for all symbols leaving only the latest. Returns the number of items removed."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndelete all but the latest available price for the given symbol. Returns the number of items deleted.", "response": "def prune(self, symbol: SecuritySymbol):\n \"\"\"\n Delete all but the latest available price for the given symbol.\n Returns the number of items removed.\n \"\"\"\n from .repositories import PriceRepository\n\n assert isinstance(symbol, SecuritySymbol)\n\n self.logger.debug(f\"pruning prices for {symbol}\")\n\n repo = PriceRepository()\n query = (\n repo.query.filter(dal.Price.namespace == symbol.namespace)\n .filter(dal.Price.symbol == symbol.mnemonic)\n .order_by(dal.Price.date.desc())\n .order_by(dal.Price.time.desc())\n )\n all_prices = query.all()\n # self.logger.debug(f\"fetched {all_prices}\")\n\n deleted = False\n first = True\n for single in all_prices:\n if not first:\n repo.query.filter(dal.Price.id == single.id).delete()\n deleted = True\n self.logger.debug(f\"deleting {single.id}\")\n else:\n first = False\n\n repo.save()\n\n return deleted"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef save(self):\n if self.__session:\n self.session.commit()\n else:\n self.logger.warning(\"Save called but no session open.\")", "response": "Commit changes to the database."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndownloads and parses the price for the symbol.", "response": "def __download_price(self, symbol: str, currency: str, agent: str):\n \"\"\" Downloads and parses the price \"\"\"\n from finance_quote_python import Quote\n\n assert isinstance(symbol, str)\n 
assert isinstance(currency, str)\n assert isinstance(agent, str)\n\n if not symbol:\n return None\n\n #self.logger.info(f\"Downloading {symbol}... \")\n\n dl = Quote()\n dl.logger = self.logger\n\n dl.set_source(agent)\n dl.set_currency(currency)\n\n result = dl.fetch(agent, [symbol])\n\n if not result:\n raise ValueError(f\"Did not receive a response for {symbol}.\")\n\n price = result[0]\n\n if not price:\n raise ValueError(f\"Price not downloaded/parsed for {symbol}.\")\n else:\n # Create price data entity, to be inserted.\n self.add_price(price)\n\n return price"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfetching the securities that match the given filters", "response": "def __get_securities(self, currency: str, agent: str, symbol: str,\n namespace: str) -> List[dal.Security]:\n \"\"\" Fetches the securities that match the given filters \"\"\"\n repo = self.get_security_repository()\n query = repo.query\n\n if currency is not None:\n query = query.filter(dal.Security.currency == currency)\n\n if agent is not None:\n query = query.filter(dal.Security.updater == agent)\n\n if symbol is not None:\n query = query.filter(dal.Security.symbol == symbol)\n\n if namespace is not None:\n query = query.filter(dal.Security.namespace == namespace)\n\n # Sorting\n query = query.order_by(dal.Security.namespace, dal.Security.symbol)\n\n securities = query.all()\n return securities"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef partial(self):\n ba = self.data[\"bound_args\"]\n return state_partial(self.data[\"func\"], *ba.args[1:], **ba.kwargs)", "response": "Return partial of original function call"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nupdating the child nodes on original function call with their partials", "response": "def update_child_calls(self):\n \"\"\"Replace child nodes on original function call with their partials\"\"\"\n\n for 
node in filter(lambda n: len(n.arg_name), self.child_list):\n self.data[\"bound_args\"].arguments[node.arg_name] = node.partial()\n self.updated = True"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndescends depth first into all child nodes", "response": "def descend(self, include_me=True):\n \"\"\"Descend depth first into all child nodes\"\"\"\n if include_me:\n yield self\n\n for child in self.child_list:\n yield child\n yield from child.descend()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef multi_dec(f):\n\n @wraps(f)\n def wrapper(*args, **kwargs):\n args = (\n args[0] if len(args) == 1 and isinstance(args[0], (list, tuple)) else args\n )\n for arg in args:\n if isinstance(arg, Node) and arg.parent.name == \"root\":\n arg.parent.remove_child(arg)\n arg.update_child_calls()\n return f(*args, **kwargs)\n\n return wrapper", "response": "Decorator for multi to remove nodes for original test functions from root node"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nverifying that a part that is zoomed in on has equal length.", "response": "def has_equal_part_len(state, name, unequal_msg):\n \"\"\"Verify that a part that is zoomed in on has equal length.\n\n Typically used in the context of ``check_function_def()``\n\n Arguments:\n name (str): name of the part for which to check the length to the corresponding part in the solution.\n unequal_msg (str): Message in case the lengths do not match.\n state (State): state as passed by the SCT chain. 
Don't specify this explicitly.\n\n :Examples:\n\n Student and solution code::\n\n def shout(word):\n return word + '!!!'\n\n SCT that checks number of arguments::\n\n Ex().check_function_def('shout').has_equal_part_len('args', 'not enough args!')\n \"\"\"\n d = dict(\n stu_len=len(state.student_parts[name]), sol_len=len(state.solution_parts[name])\n )\n\n if d[\"stu_len\"] != d[\"sol_len\"]:\n _msg = state.build_message(unequal_msg, d)\n state.report(Feedback(_msg, state))\n\n return state"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntests whether abstract syntax trees match between the student and solution code.", "response": "def has_equal_ast(state, incorrect_msg=None, code=None, exact=True, append=None):\n \"\"\"Test whether abstract syntax trees match between the student and solution code.\n\n ``has_equal_ast()`` can be used in two ways:\n\n * As a robust version of ``has_code()``. By setting ``code``, you can look for the AST representation of ``code`` in the student's submission.\n But be aware that ``a`` and ``a = 1`` won't match, as reading and assigning are not the same in an AST.\n Use ``ast.dump(ast.parse(code))`` to see an AST representation of ``code``.\n * As an expression-based check when using more advanced SCT chain, e.g. to compare the equality of expressions to set function arguments.\n\n Args:\n incorrect_msg: message displayed when ASTs mismatch. When you specify ``code`` yourself, you have to specify this.\n code: optional code to use instead of the solution AST.\n exact: whether the representations must match exactly. 
If false, the solution AST\n only needs to be contained within the student AST (similar to using test student typed).\n Defaults to ``True``, unless the ``code`` argument has been specified.\n\n :Example:\n\n Student and Solution Code::\n\n dict(a = 'value').keys()\n\n SCT::\n\n # all pass\n Ex().has_equal_ast()\n Ex().has_equal_ast(code = \"dict(a = 'value').keys()\")\n Ex().has_equal_ast(code = \"dict(a = 'value')\", exact = False)\n\n Student and Solution Code::\n\n import numpy as np\n arr = np.array([1, 2, 3, 4, 5])\n np.mean(arr)\n\n SCT::\n\n # Check underlying value of argument a of np.mean:\n Ex().check_function('numpy.mean').check_args('a').has_equal_value()\n\n # Only check AST equality of expression used to specify argument a:\n Ex().check_function('numpy.mean').check_args('a').has_equal_ast()\n\n \"\"\"\n if utils.v2_only():\n state.assert_is_not([\"object_assignments\"], \"has_equal_ast\", [\"check_object\"])\n state.assert_is_not([\"function_calls\"], \"has_equal_ast\", [\"check_function\"])\n\n if code and incorrect_msg is None:\n raise InstructorError(\n \"If you manually specify the code to match inside has_equal_ast(), \"\n \"you have to explicitly set the `incorrect_msg` argument.\"\n )\n\n if (\n append is None\n ): # if not specified, set to False if incorrect_msg was manually specified\n append = incorrect_msg is None\n if incorrect_msg is None:\n incorrect_msg = \"Expected `{{sol_str}}`, but got `{{stu_str}}`.\"\n\n def parse_tree(tree):\n # get contents of module.body if only 1 element\n crnt = (\n tree.body[0]\n if isinstance(tree, ast.Module) and len(tree.body) == 1\n else tree\n )\n\n # remove Expr if it exists\n return ast.dump(crnt.value if isinstance(crnt, ast.Expr) else crnt)\n\n stu_rep = parse_tree(state.student_ast)\n sol_rep = parse_tree(state.solution_ast if not code else ast.parse(code))\n\n fmt_kwargs = {\n \"sol_str\": state.solution_code if not code else code,\n \"stu_str\": state.student_code,\n }\n\n _msg = 
state.build_message(incorrect_msg, fmt_kwargs, append=append)\n\n if exact and not code:\n state.do_test(EqualTest(stu_rep, sol_rep, Feedback(_msg, state)))\n elif not sol_rep in stu_rep:\n state.report(Feedback(_msg, state))\n\n return state"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntest the student code.", "response": "def has_code(state, text, pattern=True, not_typed_msg=None):\n \"\"\"Test the student code.\n\n Tests if the student typed a (pattern of) text. It is advised to use ``has_equal_ast()`` instead of ``has_code()``,\n as it is more robust to small syntactical differences that don't change the code's behavior.\n\n Args:\n text (str): the text that is searched for\n pattern (bool): if True (the default), the text is treated as a pattern. If False, it is treated as plain text.\n not_typed_msg (str): feedback message to be displayed if the student did not type the text.\n\n :Example:\n\n Student code and solution code::\n\n y = 1 + 2 + 3\n\n SCT::\n\n # Verify that student code contains pattern (not robust!!):\n Ex().has_code(r\"1\\\\s*\\\\+2\\\\s*\\\\+3\")\n\n \"\"\"\n if not not_typed_msg:\n if pattern:\n not_typed_msg = \"Could not find the correct pattern in your code.\"\n else:\n not_typed_msg = \"Could not find the following text in your code: %r\" % text\n\n student_code = state.student_code\n\n _msg = state.build_message(not_typed_msg)\n state.do_test(\n StringContainsTest(student_code, text, pattern, Feedback(_msg, state))\n )\n\n return state"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking whether a student has imported a particular package or function.", "response": "def has_import(\n state,\n name,\n same_as=False,\n not_imported_msg=\"Did you import `{{pkg}}`?\",\n incorrect_as_msg=\"Did you import `{{pkg}}` as `{{alias}}`?\",\n):\n \"\"\"Checks whether student imported a package or function correctly.\n\n Python features many ways to import packages.\n 
All of these different methods revolve around the ``import``, ``from`` and ``as`` keywords.\n ``has_import()`` provides a robust way to check whether a student correctly imported a certain package.\n\n By default, ``has_import()`` allows for different ways of aliasing the imported package or function.\n If you want to make sure the correct alias was used to refer to the package or function that was imported,\n set ``same_as=True``.\n\n Args:\n name (str): the name of the package that has to be checked.\n same_as (bool): if True, the alias of the package or function has to be the same. Defaults to False.\n not_imported_msg (str): feedback message when the package is not imported.\n incorrect_as_msg (str): feedback message if the alias is wrong.\n\n :Example:\n\n Example 1, where aliases don't matter (default): ::\n\n # solution\n import matplotlib.pyplot as plt\n\n # sct\n Ex().has_import(\"matplotlib.pyplot\")\n\n # passing submissions\n import matplotlib.pyplot as plt\n from matplotlib import pyplot as plt\n import matplotlib.pyplot as pltttt\n\n # failing submissions\n import matplotlib as mpl\n\n Example 2, where the SCT is coded so aliases do matter: ::\n\n # solution\n import matplotlib.pyplot as plt\n\n # sct\n Ex().has_import(\"matplotlib.pyplot\", same_as=True)\n\n # passing submissions\n import matplotlib.pyplot as plt\n from matplotlib import pyplot as plt\n\n # failing submissions\n import matplotlib.pyplot as pltttt\n\n \"\"\"\n student_imports = state.ast_dispatcher(\"imports\", state.student_ast)\n solution_imports = state.ast_dispatcher(\"imports\", state.solution_ast)\n\n if name not in solution_imports:\n raise InstructorError(\n \"`has_import()` couldn't find an import of the package %s in your solution code.\"\n % name\n )\n\n fmt_kwargs = {\"pkg\": name, \"alias\": solution_imports[name]}\n\n _msg = state.build_message(not_imported_msg, fmt_kwargs)\n state.do_test(DefinedCollTest(name, student_imports, _msg))\n\n if same_as:\n _msg = 
state.build_message(incorrect_as_msg, fmt_kwargs)\n state.do_test(EqualTest(solution_imports[name], student_imports[name], _msg))\n\n return state"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nsearch student output for a pattern.", "response": "def has_output(state, text, pattern=True, no_output_msg=None):\n \"\"\"Search student output for a pattern.\n\n Among the student and solution process, the student submission and solution code as a string,\n the ``Ex()`` state also contains the output that a student generated with his or her submission.\n\n With ``has_output()``, you can access this output and match it against a regular or fixed expression.\n\n Args:\n text (str): the text that is searched for\n pattern (bool): if True (default), the text is treated as a pattern. If False, it is treated as plain text.\n no_output_msg (str): feedback message to be displayed if the output is not found.\n\n :Example:\n\n As an example, suppose we want a student to print out a sentence: ::\n\n # Print the \"This is some ... stuff\"\n print(\"This is some weird stuff\")\n\n The following SCT tests whether the student prints out ``This is some weird stuff``: ::\n\n # Using exact string matching\n Ex().has_output(\"This is some weird stuff\", pattern = False)\n\n # Using a regular expression (more robust)\n # pattern = True is the default\n msg = \"Print out ``This is some ... 
stuff`` to the output, \" + \\\\\n \"fill in ``...`` with a word you like.\"\n Ex().has_output(r\"This is some \\w* stuff\", no_output_msg = msg)\n\n \"\"\"\n if not no_output_msg:\n no_output_msg = \"You did not output the correct things.\"\n\n _msg = state.build_message(no_output_msg)\n state.do_test(StringContainsTest(state.raw_student_output, text, pattern, _msg))\n\n return state"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncheck if the right printout in the current order of the student output.", "response": "def has_printout(\n state, index, not_printed_msg=None, pre_code=None, name=None, copy=False\n):\n \"\"\"Check if the right printouts happened.\n\n ``has_printout()`` will look for the printout in the solution code that you specified with ``index`` (0 in this case), rerun the ``print()`` call in\n the solution process, capture its output, and verify whether the output is present in the output of the student.\n\n This is more robust as ``Ex().check_function('print')`` initiated chains as students can use as many\n printouts as they want, as long as they do the correct one somewhere.\n\n Args:\n index (int): index of the ``print()`` call in the solution whose output you want to search for in the student output.\n not_printed_msg (str): if specified, this overrides the default message that is generated when the output\n is not found in the student output.\n pre_code (str): Python code as a string that is executed before running the targeted student call.\n This is the ideal place to set a random seed, for example.\n copy (bool): whether to try to deep copy objects in the environment, such as lists, that could\n accidentally be mutated. Disabled by default, which speeds up SCTs.\n state (State): state as passed by the SCT chain. 
Don't specify this explicitly.\n\n :Example:\n\n Suppose you want somebody to print out 4: ::\n\n print(1, 2, 3, 4)\n\n The following SCT would check that: ::\n\n Ex().has_printout(0)\n\n All of the following SCTs would pass: ::\n\n print(1, 2, 3, 4)\n print('1 2 3 4')\n print(1, 2, '3 4')\n print(\"random\"); print(1, 2, 3, 4)\n\n :Example:\n\n Watch out: ``has_printout()`` will effectively **rerun** the ``print()`` call in the solution process after the entire solution script was executed.\n If your solution script updates the value of `x` after executing it, ``has_printout()`` will not work.\n\n Suppose you have the following solution: ::\n\n x = 4\n print(x)\n x = 6\n\n The following SCT will not work: ::\n\n Ex().has_printout(0)\n\n Why? When the ``print(x)`` call is executed, the value of ``x`` will be 6, and pythonwhat will look for the output `'6'` in the output the student generated.\n In cases like these, ``has_printout()`` cannot be used.\n\n :Example:\n\n Inside a for loop ``has_printout()``\n\n Suppose you have the following solution: ::\n\n for i in range(5):\n print(i)\n\n The following SCT will not work: ::\n\n Ex().check_for_loop().check_body().has_printout(0)\n\n The reason is that ``has_printout()`` can only be called from the root state, ``Ex()``.\n If you want to check printouts done in e.g. a for loop, you have to use a `check_function('print')` chain instead: ::\n\n Ex().check_for_loop().check_body().\\\\\n set_context(0).check_function('print').\\\\\n check_args(0).has_equal_value()\n\n \"\"\"\n\n extra_msg = \"If you want to check printouts done in e.g. 
a for loop, you have to use a `check_function('print')` chain instead.\"\n state.assert_root(\"has_printout\", extra_msg=extra_msg)\n\n if not_printed_msg is None:\n not_printed_msg = (\n \"Have you used `{{sol_call}}` to do the appropriate printouts?\"\n )\n\n try:\n sol_call_ast = state.ast_dispatcher(\"function_calls\", state.solution_ast)[\n \"print\"\n ][index][\"node\"]\n except (KeyError, IndexError):\n raise InstructorError(\n \"`has_printout({})` couldn't find the {} print call in your solution.\".format(\n index, utils.get_ord(index + 1)\n )\n )\n\n out_sol, str_sol = getOutputInProcess(\n tree=sol_call_ast,\n process=state.solution_process,\n context=state.solution_context,\n env=state.solution_env,\n pre_code=pre_code,\n copy=copy,\n )\n\n sol_call_str = state.solution_ast_tokens.get_text(sol_call_ast)\n\n if isinstance(str_sol, Exception):\n raise InstructorError(\n \"Evaluating the solution expression {} raised error in solution process.\"\n \"Error: {} - {}\".format(sol_call_str, type(out_sol), str_sol)\n )\n\n _msg = state.build_message(not_printed_msg, {\"sol_call\": sol_call_str})\n\n has_output(state, out_sol.strip(), pattern=False, no_output_msg=_msg)\n\n return state"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef has_no_error(\n state,\n incorrect_msg=\"Have a look at the console: your code contains an error. Fix it and try again!\",\n):\n \"\"\"Check whether the submission did not generate a runtime error.\n\n If all SCTs for an exercise pass, before marking the submission as correct pythonwhat will automatically check whether\n the student submission generated an error. 
This means it is not needed to use ``has_no_error()`` explicitly.\n\n However, in some cases, using ``has_no_error()`` explicitly somewhere throughout your SCT execution can be helpful:\n\n - If you want to make sure people didn't write typos when writing a long function name.\n - If you want to first verify whether a function actually runs, before checking whether the arguments were specified correctly.\n - More generally, if, because of the content, it's instrumental that the script runs without\n errors before doing any other verifications.\n\n Args:\n incorrect_msg: if specified, this overrides the default message if the student code generated an error.\n\n :Example:\n\n Suppose you're verifying an exercise about model training and validation: ::\n\n # pre exercise code\n import numpy as np\n from sklearn.model_selection import train_test_split\n from sklearn import datasets\n from sklearn import svm\n\n iris = datasets.load_iris()\n iris.data.shape, iris.target.shape\n\n # solution\n X_train, X_test, y_train, y_test = train_test_split(\n iris.data, iris.target, test_size=0.4, random_state=0)\n\n If you want to make sure that ``train_test_split()`` ran without errors,\n which would check if the student typed the function without typos and used\n sensical arguments, you could use the following SCT: ::\n\n Ex().has_no_error()\n Ex().check_function('sklearn.model_selection.train_test_split').multi(\n check_args(['arrays', 0]).has_equal_value(),\n check_args(['arrays', 1]).has_equal_value(),\n check_args(['options', 'test_size']).has_equal_value(),\n check_args(['options', 'random_state']).has_equal_value()\n )\n\n If, on the other hand, you want to fall back onto pythonwhat's built-in behavior,\n that checks for an error before marking the exercise as correct, you can simply\n leave off the ``has_no_error()`` step.\n\n \"\"\"\n state.assert_root(\"has_no_error\")\n\n if state.reporter.errors:\n _msg = state.build_message(\n incorrect_msg, {\"error\": 
str(state.reporter.errors[0])}\n )\n state.report(Feedback(_msg, state))\n\n return state", "response": "Check whether the student submission generated an error."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntesting for a MultipleChoiceExercise.", "response": "def has_chosen(state, correct, msgs):\n \"\"\"Test multiple choice exercise.\n\n Test for a MultipleChoiceExercise. The correct answer (as an integer) and feedback messages\n are passed to this function.\n\n Args:\n correct (int): the index of the correct answer (should be an instruction). Starts at 1.\n msgs (list(str)): a list containing all feedback messages belonging to each choice of the\n student. The list should have the same length as the number of options.\n \"\"\"\n if not issubclass(type(correct), int):\n raise InstructorError(\n \"Inside `has_chosen()`, the argument `correct` should be an integer.\"\n )\n\n student_process = state.student_process\n if not isDefinedInProcess(MC_VAR_NAME, student_process):\n raise InstructorError(\"Option not available in the student process\")\n else:\n selected_option = getOptionFromProcess(student_process, MC_VAR_NAME)\n if not issubclass(type(selected_option), int):\n raise InstructorError(\"selected_option should be an integer\")\n\n if selected_option < 1 or correct < 1:\n raise InstructorError(\n \"selected_option and correct should be greater than zero\"\n )\n\n if selected_option > len(msgs) or correct > len(msgs):\n raise InstructorError(\"there are not enough feedback messages defined\")\n\n feedback_msg = msgs[selected_option - 1]\n\n state.reporter.success_msg = msgs[correct - 1]\n\n state.do_test(EqualTest(selected_option, correct, feedback_msg))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking whether a particular function is called by the student.", "response": "def check_function(\n state,\n name,\n index=0,\n missing_msg=None,\n params_not_matched_msg=None,\n expand_msg=None,\n 
signature=True,\n):\n \"\"\"Check whether a particular function is called.\n\n ``check_function()`` is typically followed by:\n \n - ``check_args()`` to check whether the arguments were specified.\n In turn, ``check_args()`` can be followed by ``has_equal_value()`` or ``has_equal_ast()``\n to assert that the arguments were correctly specified.\n - ``has_equal_value()`` to check whether rerunning the function call coded by the student\n gives the same result as calling the function call as in the solution.\n\n Checking function calls is a tricky topic. Please visit the\n `dedicated article `_ for more explanation,\n edge cases and best practices.\n\n Args:\n name (str): the name of the function to be tested. When checking functions in packages, always\n use the 'full path' of the function.\n index (int): index of the function call to be checked. Defaults to 0.\n missing_msg (str): If specified, this overrides an automatically generated feedback message in case\n the student did not call the function correctly.\n params_not_matched_msg (str): If specified, this overrides an automatically generated feedback message\n in case the function parameters were not successfully matched.\n expand_msg (str): If specified, this overrides any messages that are prepended by previous SCT chains.\n signature (Signature): Normally, check_function() can figure out what the function signature is,\n but it might be necessary to use ``sig_from_params()`` to manually build a signature and pass this along.\n state (State): State object that is passed from the SCT Chain (don't specify this).\n\n :Examples:\n\n Student code and solution code::\n\n import numpy as np\n arr = np.array([1, 2, 3, 4, 5])\n np.mean(arr)\n\n SCT::\n\n # Verify whether arr was correctly set in np.mean\n Ex().check_function('numpy.mean').check_args('a').has_equal_value()\n\n # Verify whether np.mean(arr) produced the same result\n Ex().check_function('numpy.mean').has_equal_value()\n \"\"\"\n\n append_missing = 
missing_msg is None\n append_params_not_matched = params_not_matched_msg is None\n if missing_msg is None:\n missing_msg = MISSING_MSG\n if expand_msg is None:\n expand_msg = PREPEND_MSG\n if params_not_matched_msg is None:\n params_not_matched_msg = SIG_ISSUE_MSG\n\n stu_out = state.ast_dispatcher(\"function_calls\", state.student_ast)\n sol_out = state.ast_dispatcher(\"function_calls\", state.solution_ast)\n\n student_mappings = state.ast_dispatcher(\"mappings\", state.student_ast)\n\n fmt_kwargs = {\n \"times\": get_times(index + 1),\n \"ord\": get_ord(index + 1),\n \"index\": index,\n \"mapped_name\": get_mapped_name(name, student_mappings),\n }\n\n # Get Parts ----\n # Copy, otherwise signature binding overwrites sol_out[name][index]['args']\n try:\n sol_parts = {**sol_out[name][index]}\n except KeyError:\n raise InstructorError(\n \"`check_function()` couldn't find a call of `%s()` in the solution code. Make sure you get the mapping right!\"\n % name\n )\n except IndexError:\n raise InstructorError(\n \"`check_function()` couldn't find %s calls of `%s()` in your solution code.\"\n % (index + 1, name)\n )\n\n try:\n # Copy, otherwise signature binding overwrites stu_out[name][index]['args']\n stu_parts = {**stu_out[name][index]}\n except (KeyError, IndexError):\n _msg = state.build_message(missing_msg, fmt_kwargs, append=append_missing)\n state.report(Feedback(_msg, state))\n\n # Signatures -----\n if signature:\n signature = None if isinstance(signature, bool) else signature\n get_sig = partial(\n getSignatureInProcess,\n name=name,\n signature=signature,\n manual_sigs=state.get_manual_sigs(),\n )\n\n try:\n sol_sig = get_sig(\n mapped_name=sol_parts[\"name\"], process=state.solution_process\n )\n sol_parts[\"args\"] = bind_args(sol_sig, sol_parts[\"args\"])\n except Exception as e:\n raise InstructorError(\n \"`check_function()` couldn't match the %s call of `%s` to its signature:\\n%s \"\n % (get_ord(index + 1), name, e)\n )\n\n try:\n stu_sig = get_sig(\n 
mapped_name=stu_parts[\"name\"], process=state.student_process\n )\n stu_parts[\"args\"] = bind_args(stu_sig, stu_parts[\"args\"])\n except Exception:\n _msg = state.build_message(\n params_not_matched_msg, fmt_kwargs, append=append_params_not_matched\n )\n state.report(\n Feedback(\n _msg, StubState(stu_parts[\"node\"], state.highlighting_disabled)\n )\n )\n\n # three types of parts: pos_args, keywords, args (e.g. these are bound to sig)\n append_message = {\"msg\": expand_msg, \"kwargs\": fmt_kwargs}\n child = part_to_child(\n stu_parts, sol_parts, append_message, state, node_name=\"function_calls\"\n )\n return child"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef process_task(f):\n sig = inspect.signature(f)\n\n @wraps(f)\n def wrapper(*args, **kwargs):\n # get bound arguments for call\n ba = sig.bind_partial(*args, **kwargs)\n # when process is specified, remove from args and use to execute\n process = ba.arguments.get(\"process\")\n if process:\n ba.arguments[\"process\"] = None\n # partial function since shell argument may have been left\n # unspecified, as it will be passed when the process executes\n pf = partial(wrapper, *ba.args, **ba.kwargs)\n return process.executeTask(pf)\n # otherwise, run original function\n return f(*ba.args, **ba.kwargs)\n\n return wrapper", "response": "Decorator to run a function in a process."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget a value from process return tuple of value res if succesful", "response": "def getResultFromProcess(res, tempname, process):\n \"\"\"Get a value from process, return tuple of value, res if succesful\"\"\"\n if not isinstance(res, (UndefinedValue, Exception)):\n value = getRepresentation(tempname, process)\n return value, res\n else:\n return res, str(res)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate code to assign node to expr", "response": 
"def assign_from_ast(node, expr):\n \"\"\"\n Creates code to assign name (or tuple of names) node from expr\n\n This is useful for recreating destructuring assignment behavior, like\n a, *b = [1,2,3].\n \"\"\"\n if isinstance(expr, str):\n expr = ast.Name(id=expr, ctx=ast.Load())\n mod = ast.Module([ast.Assign(targets=[node], value=expr)])\n ast.fix_missing_locations(mod)\n return compile(mod, \"\", \"exec\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef override(state, solution):\n\n # the old ast may be a number of node types, but generally either a\n # (1) ast.Module, or for single expressions...\n # (2) whatever was grabbed using module.body[0]\n # (3) module.body[0].value, when module.body[0] is an Expr node\n old_ast = state.solution_ast\n new_ast = ast.parse(solution)\n if not isinstance(old_ast, ast.Module) and len(new_ast.body) == 1:\n expr = new_ast.body[0]\n candidates = [expr, expr.value] if isinstance(expr, ast.Expr) else [expr]\n for node in candidates:\n if isinstance(node, old_ast.__class__):\n new_ast = node\n break\n\n kwargs = state.messages[-1] if state.messages else {}\n child = state.to_child(\n solution_ast=new_ast,\n student_ast=state.student_ast,\n highlight=state.highlight,\n append_message={\"msg\": \"\", \"kwargs\": kwargs},\n )\n\n return child", "response": "Override the student and solution code with something arbitrary."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_context(state, *args, **kwargs):\n\n stu_crnt = state.student_context.context\n sol_crnt = state.solution_context.context\n\n # for now, you can't specify both\n if len(args) > 0 and len(kwargs) > 0:\n raise InstructorError(\n \"In `set_context()`, specify arguments either by position, either by name.\"\n )\n\n # set args specified by pos -----------------------------------------------\n if args:\n # stop if too many pos args for solution\n if len(args) > 
len(sol_crnt):\n raise InstructorError(\n \"Too many positional args. There are {} context vals, but tried to set {}\".format(\n len(sol_crnt), len(args)\n )\n )\n # set pos args\n upd_sol = sol_crnt.update(dict(zip(sol_crnt.keys(), args)))\n upd_stu = stu_crnt.update(dict(zip(stu_crnt.keys(), args)))\n else:\n upd_sol = sol_crnt\n upd_stu = stu_crnt\n\n # set args specified by keyword -------------------------------------------\n if kwargs:\n # stop if keywords don't match with solution\n if set(kwargs) - set(upd_sol):\n raise InstructorError(\n \"`set_context()` failed: context val names are {}, but you tried to set {}.\".format(\n upd_sol or \"missing\", sorted(list(kwargs.keys()))\n )\n )\n out_sol = upd_sol.update(kwargs)\n # need to match keys in kwargs with corresponding keys in stu context\n # in case they used, e.g., different loop variable names\n match_keys = dict(zip(sol_crnt.keys(), stu_crnt.keys()))\n out_stu = upd_stu.update(\n {match_keys[k]: v for k, v in kwargs.items() if k in match_keys}\n )\n else:\n out_sol = upd_sol\n out_stu = upd_stu\n\n return state.to_child(\n student_context=out_stu, solution_context=out_sol, highlight=state.highlight\n )", "response": "Update the student and solution context for a student environment."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating or set environment variables for student and solution environments.", "response": "def set_env(state, **kwargs):\n \"\"\"Update/set environment variables for student and solution environments.\n\n When ``has_equal_x()`` is used after this, the variables specified through this function will\n be available in the student and solution process. 
Note that you will not see these variables\n in the student process of the state produced by this function: the values are saved on the state\n and are only added to the student and solution processes when ``has_equal_ast()`` is called.\n\n :Example:\n\n Student and Solution Code::\n\n a = 1\n if a > 4:\n print('pretty large')\n\n SCT::\n\n # check if condition works with different values of a\n Ex().check_if_else().check_test().multi(\n set_env(a = 3).has_equal_value(),\n set_env(a = 4).has_equal_value(),\n set_env(a = 5).has_equal_value()\n )\n\n # equivalent SCT, by setting extra_env in has_equal_value()\n Ex().check_if_else().check_test().\\\\\n multi([has_equal_value(extra_env={'a': i}) for i in range(3, 6)])\n \"\"\"\n\n stu_crnt = state.student_env.context\n sol_crnt = state.solution_env.context\n\n stu_new = stu_crnt.update(kwargs)\n sol_new = sol_crnt.update(kwargs)\n\n return state.to_child(\n student_env=stu_new, solution_env=sol_new, highlight=state.highlight\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck whether an object is defined in the student process and zooms in on its value.", "response": "def check_object(\n state, index, missing_msg=None, expand_msg=None, typestr=\"variable\"\n):\n \"\"\"Check object existence (and equality)\n\n Check whether an object is defined in the student's process, and zoom in on its value in both\n student and solution process to inspect quality (with ``has_equal_value()``).\n\n In ``pythonbackend``, both the student's submission as well as the solution code are executed, in separate processes.\n ``check_object()`` looks at these processes and checks if the referenced object is available in the student process.\n Next, you can use ``has_equal_value()`` to check whether the objects in the student and solution process correspond.\n\n Args:\n index (str): the name of the object whose value has to be checked.\n missing_msg (str): feedback message when the object is not defined in the student 
process.\n expand_msg (str): If specified, this overrides any messages that are prepended by previous SCT chains.\n\n :Example:\n \n Suppose you want the student to create a variable ``x``, equal to 15: ::\n\n x = 15\n\n The following SCT will verify this: ::\n\n Ex().check_object(\"x\").has_equal_value()\n\n - ``check_object()`` will check if the variable ``x`` is defined in the student process.\n - ``has_equal_value()`` will check whether the value of ``x`` in the solution process is the same as in the student process.\n \n Note that ``has_equal_value()`` only looks at the **end result** of a variable in the student process.\n In the example, how the object ``x`` came about in the student's submission, does not matter. \n This means that all of the following submissions will also pass the above SCT: ::\n\n x = 15\n x = 12 + 3\n x = 3; x += 12\n\n :Example:\n\n As the previous example mentioned, ``has_equal_value()`` only looks at the **end result**. If your exercise is\n first initializing an object and further down the script is updating the object, you can only look at the final value!\n\n Suppose you want the student to initialize and populate a list `my_list` as follows: ::\n\n my_list = []\n for i in range(20):\n if i % 3 == 0:\n my_list.append(i)\n\n There is no robust way to verify whether `my_list = [0]` was coded correctly in a separate way.\n The best SCT would look something like this: ::\n\n msg = \"Have you correctly initialized `my_list`?\"\n Ex().check_correct(\n check_object('my_list').has_equal_value(),\n multi(\n # check initialization: [] or list()\n check_or(\n has_equal_ast(code = \"[]\", incorrect_msg = msg),\n check_function('list')\n ),\n check_for_loop().multi(\n check_iter().has_equal_value(),\n check_body().check_if_else().multi(\n check_test().multi(\n set_context(2).has_equal_value(),\n set_context(3).has_equal_value()\n ),\n check_body().set_context(3).\\\\\n set_env(my_list = [0]).\\\\\n has_equal_value(name = 'my_list')\n )\n )\n )\n 
)\n \n - ``check_correct()`` is used to robustly check whether ``my_list`` was built correctly.\n - If ``my_list`` is not correct, **both** the initialization and the population code are checked.\n \n :Example:\n\n Because checking object correctness incorrectly is such a common misconception, we're adding another example: ::\n\n import pandas as pd\n df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})\n df['c'] = [7, 8, 9]\n \n The following SCT would be **wrong**, as it does not factor in the possibility that the 'add column ``c``' step could've been wrong: ::\n\n Ex().check_correct(\n check_object('df').has_equal_value(),\n check_function('pandas.DataFrame').check_args(0).has_equal_value()\n )\n\n The following SCT would be better, as it is specific to the steps: ::\n\n # verify the df = pd.DataFrame(...) step\n Ex().check_correct(\n check_df('df').multi(\n check_keys('a').has_equal_value(),\n check_keys('b').has_equal_value()\n ),\n check_function('pandas.DataFrame').check_args(0).has_equal_value()\n )\n\n # verify the df['c'] = [...] 
step\n Ex().check_df('df').check_keys('c').has_equal_value()\n\n :Example:\n\n pythonwhat compares the objects in the student and solution process with the ``==`` operator.\n For basic objects, this ``==`` operator is properly implemented, so that the objects can be effectively compared.\n For more complex objects that are produced by third-party packages, however, it's possible that this equality operator is not implemented in a way you'd expect.\n Often, for these object types the ``==`` will compare the actual object instances: ::\n\n # pre exercise code\n class Number():\n def __init__(self, n):\n self.n = n\n\n # solution\n x = Number(1)\n\n # sct that won't work\n Ex().check_object().has_equal_value()\n\n # sct\n Ex().check_object().has_equal_value(expr_code = 'x.n')\n\n # submissions that will pass this sct\n x = Number(1)\n x = Number(2 - 1)\n \n The basic SCT like in the previous example will not work here.\n Notice how we used the ``expr_code`` argument to _override_ which value `has_equal_value()` is checking.\n Instead of checking whether `x` corresponds between student and solution process, it's now executing the expression ``x.n``\n and seeing if the result of running this expression in both student and solution process match.\n\n \"\"\"\n\n # Only do the assertion if PYTHONWHAT_V2_ONLY is set to '1'\n if v2_only():\n extra_msg = \"If you want to check the value of an object in e.g. a for loop, use `has_equal_value(name = 'my_obj')` instead.\"\n state.assert_root(\"check_object\", extra_msg=extra_msg)\n\n if missing_msg is None:\n missing_msg = \"Did you define the {{typestr}} `{{index}}` without errors?\"\n\n if expand_msg is None:\n expand_msg = \"Did you correctly define the {{typestr}} `{{index}}`? 
\"\n\n if (\n not isDefinedInProcess(index, state.solution_process)\n and state.has_different_processes()\n ):\n raise InstructorError(\n \"`check_object()` couldn't find object `%s` in the solution process.\"\n % index\n )\n\n append_message = {\"msg\": expand_msg, \"kwargs\": {\"index\": index, \"typestr\": typestr}}\n\n # create child state, using either parser output, or create part from name\n fallback = lambda: ObjectAssignmentParser.get_part(index)\n stu_part = state.ast_dispatcher(\"object_assignments\", state.student_ast).get(index, fallback())\n sol_part = state.ast_dispatcher(\"object_assignments\", state.solution_ast).get(index, fallback())\n\n # test object exists\n _msg = state.build_message(missing_msg, append_message[\"kwargs\"])\n state.do_test(DefinedProcessTest(index, state.student_process, Feedback(_msg)))\n\n child = part_to_child(\n stu_part, sol_part, append_message, state, node_name=\"object_assignments\"\n )\n\n return child"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef is_instance(state, inst, not_instance_msg=None):\n\n state.assert_is([\"object_assignments\"], \"is_instance\", [\"check_object\"])\n\n sol_name = state.solution_parts.get(\"name\")\n stu_name = state.student_parts.get(\"name\")\n\n if not_instance_msg is None:\n not_instance_msg = \"Is it a {{inst.__name__}}?\"\n\n if not isInstanceInProcess(sol_name, inst, state.solution_process):\n raise InstructorError(\n \"`is_instance()` noticed that `%s` is not a `%s` in the solution process.\"\n % (sol_name, inst.__name__)\n )\n\n _msg = state.build_message(not_instance_msg, {\"inst\": inst})\n feedback = Feedback(_msg, state)\n state.do_test(InstanceProcessTest(stu_name, inst, state.student_process, feedback))\n\n return state", "response": "Check whether an object is an instance of a certain class."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking whether a DataFrame was 
defined and is the right type for the specified object.", "response": "def check_df(\n state, index, missing_msg=None, not_instance_msg=None, expand_msg=None\n):\n \"\"\"Check whether a DataFrame was defined and is the right type\n \n ``check_df()`` is a combo of ``check_object()`` and ``is_instance()`` that checks whether the specified object exists\n and whether the specified object is a pandas DataFrame.\n\n You can continue checking the data frame with the ``check_keys()`` function to 'zoom in' on a particular column in the pandas DataFrame:\n\n Args:\n index (str): Name of the data frame to zoom in on.\n missing_msg (str): See ``check_object()``.\n not_instance_msg (str): See ``is_instance()``.\n expand_msg (str): If specified, this overrides any messages that are prepended by previous SCT chains.\n\n :Example:\n\n Suppose you want the student to create a DataFrame ``my_df`` with two columns.\n The column ``a`` should contain the numbers 1 to 3,\n while the contents of column ``b`` can be anything: ::\n\n import pandas as pd\n my_df = pd.DataFrame({\"a\": [1, 2, 3], \"b\": [\"a\", \"n\", \"y\"]})\n\n The following SCT would robustly check that: ::\n\n Ex().check_df(\"my_df\").multi(\n check_keys(\"a\").has_equal_value(),\n check_keys(\"b\")\n )\n\n - ``check_df()`` checks if ``my_df`` exists (``check_object()`` behind the scenes) and is a DataFrame (``is_instance()``)\n - ``check_keys(\"a\")`` zooms in on the column ``a`` of the data frame, and ``has_equal_value()`` checks if the columns correspond between student and solution process.\n - ``check_keys(\"b\")`` zooms in on the column ``b`` of the data frame, but there's no 'equality checking' happening\n \n The following submissions would pass the SCT above: ::\n \n my_df = pd.DataFrame({\"a\": [1, 1 + 1, 3], \"b\": [\"a\", \"l\", \"l\"]})\n my_df = pd.DataFrame({\"a\": [1, 2, 3], \"b\": [4, 5, 6], \"c\": [7, 8, 9]})\n\n \"\"\"\n child = check_object(\n state,\n index,\n missing_msg=missing_msg,\n 
expand_msg=expand_msg,\n typestr=\"pandas DataFrame\",\n )\n is_instance(child, pd.DataFrame, not_instance_msg=not_instance_msg)\n return child"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check_keys(state, key, missing_msg=None, expand_msg=None):\n\n state.assert_is([\"object_assignments\"], \"is_instance\", [\"check_object\", \"check_df\"])\n\n if missing_msg is None:\n missing_msg = \"There is no {{ 'column' if 'DataFrame' in parent.typestr else 'key' }} `'{{key}}'`.\"\n if expand_msg is None:\n expand_msg = \"Did you correctly set the {{ 'column' if 'DataFrame' in parent.typestr else 'key' }} `'{{key}}'`? \"\n\n sol_name = state.solution_parts.get(\"name\")\n stu_name = state.student_parts.get(\"name\")\n\n if not isDefinedCollInProcess(sol_name, key, state.solution_process):\n raise InstructorError(\n \"`check_keys()` couldn't find key `%s` in object `%s` in the solution process.\"\n % (key, sol_name)\n )\n\n # check if key available\n _msg = state.build_message(missing_msg, {\"key\": key})\n state.do_test(\n DefinedCollProcessTest(\n stu_name, key, state.student_process, Feedback(_msg, state)\n )\n )\n\n def get_part(name, key, highlight):\n if isinstance(key, str):\n slice_val = ast.Str(s=key)\n else:\n slice_val = ast.parse(str(key)).body[0].value\n expr = ast.Subscript(\n value=ast.Name(id=name, ctx=ast.Load()),\n slice=ast.Index(value=slice_val),\n ctx=ast.Load(),\n )\n ast.fix_missing_locations(expr)\n return {\"node\": expr, \"highlight\": highlight}\n\n stu_part = get_part(stu_name, key, state.student_parts.get(\"highlight\"))\n sol_part = get_part(sol_name, key, state.solution_parts.get(\"highlight\"))\n append_message = {\"msg\": expand_msg, \"kwargs\": {\"key\": key}}\n child = part_to_child(stu_part, sol_part, append_message, state)\n return child", "response": "Checks whether an object with the given key exists in the current object."} {"SOURCE": "codesearchnet", 
"instruction": "Given the following Python 3 function, write the documentation\ndef defined_items(self):\n return self.__class__(\n [(k, v) for k, v in self.items() if v is not self.EMPTY], is_empty=False\n )", "response": "Return a copy of instance omitting entries that are EMPTY"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef to_child(self, append_message=\"\", node_name=\"\", **kwargs):\n base_kwargs = {\n attr: getattr(self, attr)\n for attr in self.params\n if attr not in [\"highlight\"]\n }\n\n if not isinstance(append_message, dict):\n append_message = {\"msg\": append_message, \"kwargs\": {}}\n\n kwargs[\"messages\"] = [*self.messages, append_message]\n kwargs[\"parent_state\"] = self\n\n for kwarg in [\"solution_context\", \"student_context\"]:\n if kwarg in kwargs and not kwargs[kwarg]:\n kwargs.pop(kwarg, None)\n\n def update_kwarg(name, func):\n kwargs[name] = func(kwargs[name])\n\n def update_context(name):\n update_kwarg(name, getattr(self, name).update_ctx)\n\n if isinstance(kwargs.get(\"student_ast\", None), list):\n update_kwarg(\"student_ast\", wrap_in_module)\n if isinstance(kwargs.get(\"solution_ast\", None), list):\n update_kwarg(\"solution_ast\", wrap_in_module)\n\n if \"student_ast\" in kwargs:\n kwargs[\"student_code\"] = self.student_ast_tokens.get_text(\n kwargs[\"student_ast\"]\n )\n if \"solution_ast\" in kwargs:\n kwargs[\"solution_code\"] = self.solution_ast_tokens.get_text(\n kwargs[\"solution_ast\"]\n )\n\n # get new contexts\n if \"solution_context\" in kwargs:\n update_context(\"solution_context\")\n if \"student_context\" in kwargs:\n update_context(\"student_context\")\n\n # get new envs\n if \"solution_env\" in kwargs:\n update_context(\"solution_env\")\n if \"student_env\" in kwargs:\n update_context(\"student_env\")\n\n klass = self.SUBCLASSES[node_name] if node_name else State\n child = klass(**{**base_kwargs, **kwargs})\n return child", "response": "Dive into nested tree."} 
{"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _getx(self, Parser, ext_attr, tree):\n # return cached output if possible\n cache_key = Parser.__name__ + str(hash(tree))\n if self._parser_cache.get(cache_key):\n p = self._parser_cache[cache_key]\n else:\n # otherwise, run parser over tree\n p = Parser()\n # set mappings for parsers that inspect attribute access\n if ext_attr != \"mappings\" and Parser in [\n FunctionParser,\n ObjectAccessParser,\n ]:\n p.mappings = self.context_mappings.copy()\n # run parser\n p.visit(tree)\n # cache\n self._parser_cache[cache_key] = p\n return getattr(p, ext_attr)", "response": "getter for Parser outputs"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef has_context_loop(state, incorrect_msg, exact_names):\n return _test(\n state,\n incorrect_msg or MSG_INCORRECT_LOOP,\n exact_names,\n tv_name=\"_target_vars\",\n highlight_name=\"target\",\n )", "response": "When dispatched on loops has_context the target vars are the attribute _target_vars."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef has_context_with(state, incorrect_msg, exact_names):\n\n for i in range(len(state.solution_parts[\"context\"])):\n ctxt_state = check_part_index(state, \"context\", i, \"{{ordinal}} context\")\n _has_context(ctxt_state, incorrect_msg or MSG_INCORRECT_WITH, exact_names)\n\n return state", "response": "Check if the state has any context with the given message."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef check_part(state, name, part_msg, missing_msg=None, expand_msg=None):\n\n if missing_msg is None:\n missing_msg = \"Are you sure you defined the {{part}}? \"\n if expand_msg is None:\n expand_msg = \"Did you correctly specify the {{part}}? 
\"\n\n if not part_msg:\n part_msg = name\n append_message = {\"msg\": expand_msg, \"kwargs\": {\"part\": part_msg}}\n\n has_part(state, name, missing_msg, append_message[\"kwargs\"])\n\n stu_part = state.student_parts[name]\n sol_part = state.solution_parts[name]\n\n assert_ast(state, sol_part, append_message[\"kwargs\"])\n\n return part_to_child(stu_part, sol_part, append_message, state)", "response": "Return child state with name part as its ast tree"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef check_part_index(state, name, index, part_msg, missing_msg=None, expand_msg=None):\n\n if missing_msg is None:\n missing_msg = \"Are you sure you defined the {{part}}? \"\n if expand_msg is None:\n expand_msg = \"Did you correctly specify the {{part}}? \"\n\n # create message\n ordinal = get_ord(index + 1) if isinstance(index, int) else \"\"\n fmt_kwargs = {\"index\": index, \"ordinal\": ordinal}\n fmt_kwargs.update(part=render(part_msg, fmt_kwargs))\n\n append_message = {\"msg\": expand_msg, \"kwargs\": fmt_kwargs}\n\n # check there are enough parts for index\n has_part(state, name, missing_msg, fmt_kwargs, index)\n\n # get part at index\n stu_part = state.student_parts[name]\n sol_part = state.solution_parts[name]\n\n if isinstance(index, list):\n for ind in index:\n stu_part = stu_part[ind]\n sol_part = sol_part[ind]\n else:\n stu_part = stu_part[index]\n sol_part = sol_part[index]\n\n assert_ast(state, sol_part, fmt_kwargs)\n\n # return child state from part\n return part_to_child(stu_part, sol_part, append_message, state)", "response": "Check if a student or solution part at a given index is present in the state."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck whether a function argument is specified. This function can follow ``check_function()`` in an SCT chain and verifies whether an argument is specified. 
If you want to go on and check whether the argument was correctly specified, you can continue chaining with ``has_equal_value()`` (value-based check) or ``has_equal_ast()`` (AST-based check) This function can also follow ``check_function_def()`` or ``check_lambda_function()`` to see if arguments have been specified. Args: name (str): the name of the argument for which you want to check it is specified. This can also be a number, in which case it refers to the positional arguments. Named arguments take precedence. missing_msg (str): If specified, this overrides an automatically generated feedback message in case the student did not specify the argument. state (State): State object that is passed from the SCT Chain (don't specify this). :Examples: Student and solution code:: import numpy as np arr = np.array([1, 2, 3, 4, 5]) np.mean(arr) SCT:: # Verify whether arr was correctly set in np.mean # has_equal_value() checks the value of arr, used to set argument a Ex().check_function('numpy.mean').check_args('a').has_equal_value() # Verify whether arr was correctly set in np.mean # has_equal_ast() checks the expression used to set argument a Ex().check_function('numpy.mean').check_args('a').has_equal_ast() Student and solution code:: def my_power(x): print(\"calculating sqrt...\") return(x * x) SCT:: Ex().check_function_def('my_power').multi( check_args('x'), # will fail if student used y as arg check_args(0) # will still pass if student used y as arg )", "response": "def check_args(state, name, missing_msg=None):\n \"\"\"Check whether a function argument is specified.\n\n This function can follow ``check_function()`` in an SCT chain and verifies whether an argument is specified.\n If you want to go on and check whether the argument was correctly specified, you can continue chaining with\n ``has_equal_value()`` (value-based check) or ``has_equal_ast()`` (AST-based check)\n\n This function can also follow ``check_function_def()`` or ``check_lambda_function()`` to see if 
arguments have been\n specified.\n\n Args:\n name (str): the name of the argument for which you want to check it is specified. This can also be\n a number, in which case it refers to the positional arguments. Named arguments take precedence.\n missing_msg (str): If specified, this overrides an automatically generated feedback message in case\n the student did not specify the argument.\n state (State): State object that is passed from the SCT Chain (don't specify this).\n\n :Examples:\n\n Student and solution code::\n\n import numpy as np\n arr = np.array([1, 2, 3, 4, 5])\n np.mean(arr)\n\n SCT::\n\n # Verify whether arr was correctly set in np.mean\n # has_equal_value() checks the value of arr, used to set argument a\n Ex().check_function('numpy.mean').check_args('a').has_equal_value()\n\n # Verify whether arr was correctly set in np.mean\n # has_equal_ast() checks the expression used to set argument a\n Ex().check_function('numpy.mean').check_args('a').has_equal_ast()\n\n Student and solution code::\n\n def my_power(x):\n print(\"calculating sqrt...\")\n return(x * x)\n\n SCT::\n\n Ex().check_function_def('my_power').multi(\n check_args('x'), # will fail if student used y as arg\n check_args(0) # will still pass if student used y as arg\n )\n\n \"\"\"\n if missing_msg is None:\n missing_msg = \"Did you specify the {{part}}?\"\n\n if name in [\"*args\", \"**kwargs\"]: # for check_function_def\n return check_part(state, name, name, missing_msg=missing_msg)\n else:\n if isinstance(name, list): # dealing with args or kwargs\n if name[0] == \"args\":\n arg_str = \"{} argument passed as a variable length argument\".format(\n get_ord(name[1] + 1)\n )\n else:\n arg_str = \"argument `{}`\".format(name[1])\n else:\n arg_str = (\n \"{} argument\".format(get_ord(name + 1))\n if isinstance(name, int)\n else \"argument `{}`\".format(name)\n )\n return check_part_index(state, \"args\", name, arg_str, missing_msg=missing_msg)"} {"SOURCE": "codesearchnet", "instruction": "Implement a 
function in Python 3 to\nreturn whether the compiler supports OpenMP parallelization?", "response": "def detect_openmp():\n\t\"\"\"Does this compiler support OpenMP parallelization?\"\"\"\n\tcompiler = new_compiler()\n\tprint(\"Checking for OpenMP support... \")\n\thasopenmp = hasfunction(compiler, 'omp_get_num_threads()')\n\tneeds_gomp = hasopenmp\n\tif not hasopenmp:\n\t\tcompiler.add_library('gomp')\n\thasopenmp = hasfunction(compiler, 'omp_get_num_threads()')\n\tneeds_gomp = hasopenmp\n\tif hasopenmp: print(\"Compiler supports OpenMP\")\n\telse: print( \"Did not detect OpenMP support.\")\n\treturn hasopenmp, needs_gomp"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nmakes the plot of the last two nonlinear and quadratic functions", "response": "def make_plots():\n\timport matplotlib.pyplot as plt\n\t\"\"\"zs = np.linspace(0., 1., 1000)\n\trp = 0.1\n\twrapped = wrapper(_quadratic_ld._quadratic_ld, zs, rp, 0.1, 0.3, 1)\n\tt = timeit.timeit(wrapped,number=10000)\n\tprint(\"time:\", t)\"\"\"\n\t\"\"\"zs = np.linspace(0., 1., 1000)\n\trp = 0.1\n\tu = [0., 0.7, 0.0, -0.3]\n\tf = _nonlinear_ld._nonlinear_ld(zs, rp, u[0], u[1], u[2], u[3], 1.0e-2, 4)\n\tfhi = _nonlinear_ld._nonlinear_ld(zs, rp, u[0], u[1], u[2], u[3], 1.0e-4, 4)\n\tfquad = occultquad.occultquad(zs, rp, 0.1, 0.3, 4)\n\t#for i in range(len(f)): print \"z, fnl, fquad\", zs[i], f[i], fquad[i]\n\n\tfor i in range(1,16):\n\t\twrapped = wrapper(occultquad.occultquad, zs, rp, 0.1, 0.3, i)\n\t\tt = timeit.timeit(wrapped,number=1)\n\t\tprint i, t\n\n\tplt.plot(zs, (f - fhi)*1.0e6)\n\tplt.plot(zs, (fhi - fquad)*1.0e6, color='r')\n\tplt.axvline(0.9)\n\tplt.show()\"\"\"\n\n\n\t#generates Figure FIXME: max err as a function of function call time\n\tzs = np.linspace(0., 1., 1000)\n\trp = 0.1\n\tu = [0., 0.7, 0.0, -0.3]\n\tn = 20\n\tts = []\n\terrs = []\n\tf_ref = _nonlinear_ld._nonlinear_ld(zs, rp, u[0], u[1], u[2], u[3], 1.0e-4, 4)\n\tfac = np.logspace(-3, -1, 
n)\n\tfor i in range(n):\n\t\twrapped = wrapper(_nonlinear_ld._nonlinear_ld, zs, rp, u[0], u[1], u[2], u[3], fac[i], 1)\n\t\tt = timeit.timeit(wrapped,number=10)/10.\n\t\tts.append(t)\n\t\tprint(t)\n\t\tf= _nonlinear_ld._nonlinear_ld(zs, rp, u[0], u[1], u[2], u[3], fac[i], 12)\n\t\terr = np.max(np.abs(f - f_ref))\n\t\terrs.append(err)\n\tplt.plot(np.array(ts), np.array(errs)*1.0e6, color='k')\n\tplt.xlim((1.0e-3, 1.0e-1))\n\tplt.yscale('log')\n\tplt.xscale('log')\n\tplt.xlabel(\"Time (s)\")\n\tplt.ylabel(\"Max Err (ppm)\")\n\tplt.show()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef light_curve(self, params):\n\t\t#recalculates rsky and fac if necessary\n\t\tif params.t0 != self.t0 or params.per != self.per or params.a != self.a or params.inc != self.inc or params.ecc != self.ecc or params.w != self.w or params.t_secondary != self.t_secondary:\n\t\t\tif self.transittype == 2 and params.t_secondary != self.t_secondary:\n\t\t\t\tparams.t0 = self.get_t_conjunction(params)\n\t\t\tself.ds= _rsky._rsky(self.t_supersample, params.t0, params.per, params.a, params.inc*pi/180., params.ecc, params.w*pi/180., self.transittype, self.nthreads)\n\t\tif params.limb_dark != self.limb_dark: self.fac = self._get_fac()\n\n\t\t#updates transit params\n\t\tself.t0 = params.t0\n\t\tself.per = params.per\n\t\tself.rp = params.rp\n\t\tself.a = params.a\n\t\tself.inc = params.inc\n\t\tself.ecc = params.ecc\n\t\tself.w = params.w\n\t\tself.u = params.u\n\t\tself.limb_dark = params.limb_dark\n\t\tself.fp = params.fp\n\t\tself.t_secondary = params.t_secondary\n\t\tself.inverse = False\n\n\t\t#handles the case of inverse transits (rp < 0)\n\t\tif self.rp < 0.: \n\t\t\tself.rp = -1.*self.rp\n\t\t\tparams.rp = -1.*params.rp\n\t\t\tself.inverse = True\n\t\t\n\t\tif self.transittype == 1:\n\t\t\tif params.limb_dark != self.limb_dark: raise Exception(\"Need to reinitialize model in order to change limb darkening option\")\n\t\t\tif 
self.limb_dark == \"quadratic\": lc = _quadratic_ld._quadratic_ld(self.ds, params.rp, params.u[0], params.u[1], self.nthreads)\n\t\t\telif self.limb_dark == \"linear\": lc = _quadratic_ld._quadratic_ld(self.ds, params.rp, params.u[0], 0., self.nthreads)\n\t\t\telif self.limb_dark == \"nonlinear\": lc = _nonlinear_ld._nonlinear_ld(self.ds, params.rp, params.u[0], params.u[1], params.u[2], params.u[3], self.fac, self.nthreads)\n\t\t\telif self.limb_dark == \"squareroot\": lc = _nonlinear_ld._nonlinear_ld(self.ds, params.rp, params.u[1], params.u[0], 0., 0., self.fac, self.nthreads)\n\t\t\telif self.limb_dark == \"uniform\": lc = _uniform_ld._uniform_ld(self.ds, params.rp, self.nthreads)\n\t\t\telif self.limb_dark == \"logarithmic\": lc = _logarithmic_ld._logarithmic_ld(self.ds, params.rp, params.u[0], params.u[1], self.fac, self.nthreads)\n\t\t\telif self.limb_dark == \"exponential\": lc = _exponential_ld._exponential_ld(self.ds, params.rp, params.u[0], params.u[1], self.fac, self.nthreads)\n\t\t\telif self.limb_dark == \"power2\": lc = _power2_ld._power2_ld(self.ds, params.rp, params.u[0], params.u[1], self.fac, self.nthreads)\n\t\t\telif self.limb_dark == \"custom\": lc = _custom_ld._custom_ld(self.ds, params.rp, params.u[0], params.u[1], params.u[2], params.u[3], params.u[4], params.u[5], self.fac, self.nthreads)\n\t\t\telse: raise Exception(\"Invalid limb darkening option\")\n\n\t\t\tif self.inverse == True: lc = 2. 
- lc\n\n\t\telse: lc = _eclipse._eclipse(self.ds, params.rp, params.fp, self.nthreads)\t\t\t\n\t\tif self.supersample_factor == 1: return lc\n\t\telse: return np.mean(lc.reshape(-1, self.supersample_factor), axis=1)", "response": "Calculate a model light curve."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the time of periastron passage (calculated using `params.t0`).", "response": "def get_t_periastron(self, params):\n\t\t\"\"\"\n\t\tReturn the time of periastron passage (calculated using `params.t0`).\n\t\t\"\"\"\n\t\tphase = self._get_phase(params, \"primary\")\n\t\treturn params.t0 - params.per*phase"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_t_secondary(self, params):\n\t\tphase = self._get_phase(params, \"primary\")\n\t\tphase2 = self._get_phase(params, \"secondary\")\n\t\treturn params.t0 + params.per*(phase2-phase)", "response": "get_t_secondary - Returns the time of secondary eclipse center calculated using params. t0"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_t_conjunction(self, params):\n\t\tphase = self._get_phase(params, \"primary\")\n\t\tphase2 = self._get_phase(params, \"secondary\")\n\t\treturn params.t_secondary + params.per*(phase-phase2)", "response": "get_t_conjunction - Returns the time of primary transit center ( calculated using params. 
t_secondary"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_true_anomaly(self):\n\t\tself.f = _rsky._getf(self.t_supersample, self.t0, self.per, self.a,\n\t\t\t\t\t\t\t self.inc*pi/180., self.ecc, self.w*pi/180.,\n\t\t\t\t\t\t\t self.transittype, self.nthreads)\n\t\treturn self.f", "response": "get the true anomaly at each time\n\tReturn the true anomaly at each time\n\t"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndetecting whether the current compiler supports OpenMP parallelization?", "response": "def detect():\n\t\"\"\"Does this compiler support OpenMP parallelization?\"\"\"\n\tcompiler = new_compiler()\n\thasopenmp = hasfunction(compiler, 'omp_get_num_threads()')\n\tneeds_gomp = hasopenmp\n\tif not hasopenmp:\n\t\tcompiler.add_library('gomp')\n\thasopenmp = hasfunction(compiler, 'omp_get_num_threads()')\n\tneeds_gomp = hasopenmp\n\treturn hasopenmp"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nvalidate the username and password data against LDAP directory", "response": "def validate_ldap(self):\n logging.debug('Validating LDAPLoginForm against LDAP')\n 'Validate the username/password data against ldap directory'\n ldap_mgr = current_app.ldap3_login_manager\n username = self.username.data\n password = self.password.data\n\n result = ldap_mgr.authenticate(username, password)\n\n if result.status == AuthenticationResponseStatus.success:\n self.user = ldap_mgr._save_user(\n result.user_dn,\n result.user_id,\n result.user_info,\n result.user_groups\n )\n return True\n\n else:\n self.user = None\n self.username.errors.append('Invalid Username/Password.')\n self.password.errors.append('Invalid Username/Password.')\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nvalidates the form by calling validate on each field passing any validators to the field validator.", 
"response": "def validate(self, *args, **kwargs):\n \"\"\"\n Validates the form by calling `validate` on each field, passing any\n extra `Form.validate_` validators to the field validator.\n\n also calls `validate_ldap`\n \"\"\"\n\n valid = FlaskForm.validate(self, *args, **kwargs)\n if not valid:\n logging.debug(\"Form validation failed before we had a chance to \"\n \"check ldap. Reasons: '{0}'\".format(self.errors))\n return valid\n\n return self.validate_ldap()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninitializes this extension with the given app. This is a convenience method that is called by the application s init_appcontext method.", "response": "def init_app(self, app):\n '''\n Configures this extension with the given app. This registers an\n ``teardown_appcontext`` call, and attaches this ``LDAP3LoginManager``\n to it as ``app.ldap3_login_manager``.\n\n Args:\n app (flask.Flask): The flask app to initialise with\n '''\n\n app.ldap3_login_manager = self\n\n servers = list(self._server_pool)\n for s in servers:\n self._server_pool.remove(s)\n\n self.init_config(app.config)\n\n if hasattr(app, 'teardown_appcontext'):\n app.teardown_appcontext(self.teardown)\n else: # pragma: no cover\n app.teardown_request(self.teardown)\n\n self.app = app"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef init_config(self, config):\n '''\n Configures this extension with a given configuration dictionary.\n This allows use of this extension without a flask app.\n\n Args:\n config (dict): A dictionary with configuration keys\n '''\n\n self.config.update(config)\n\n self.config.setdefault('LDAP_PORT', 389)\n self.config.setdefault('LDAP_HOST', None)\n self.config.setdefault('LDAP_USE_SSL', False)\n self.config.setdefault('LDAP_READONLY', True)\n self.config.setdefault('LDAP_CHECK_NAMES', True)\n self.config.setdefault('LDAP_BIND_DIRECT_CREDENTIALS', False)\n 
self.config.setdefault('LDAP_BIND_DIRECT_PREFIX', '')\n self.config.setdefault('LDAP_BIND_DIRECT_SUFFIX', '')\n self.config.setdefault('LDAP_BIND_DIRECT_GET_USER_INFO', True)\n self.config.setdefault('LDAP_ALWAYS_SEARCH_BIND', False)\n self.config.setdefault('LDAP_BASE_DN', '')\n self.config.setdefault('LDAP_BIND_USER_DN', None)\n self.config.setdefault('LDAP_BIND_USER_PASSWORD', None)\n self.config.setdefault('LDAP_SEARCH_FOR_GROUPS', True)\n self.config.setdefault('LDAP_FAIL_AUTH_ON_MULTIPLE_FOUND', False)\n\n # Prepended to the Base DN to limit scope when searching for\n # Users/Groups.\n self.config.setdefault('LDAP_USER_DN', '')\n self.config.setdefault('LDAP_GROUP_DN', '')\n\n self.config.setdefault('LDAP_BIND_AUTHENTICATION_TYPE', 'SIMPLE')\n\n # Ldap Filters\n self.config.setdefault('LDAP_USER_SEARCH_SCOPE',\n 'LEVEL')\n self.config.setdefault('LDAP_USER_OBJECT_FILTER',\n '(objectclass=person)')\n self.config.setdefault('LDAP_USER_LOGIN_ATTR', 'uid')\n self.config.setdefault('LDAP_USER_RDN_ATTR', 'uid')\n self.config.setdefault(\n 'LDAP_GET_USER_ATTRIBUTES', ldap3.ALL_ATTRIBUTES)\n\n self.config.setdefault('LDAP_GROUP_SEARCH_SCOPE',\n 'LEVEL')\n self.config.setdefault(\n 'LDAP_GROUP_OBJECT_FILTER', '(objectclass=group)')\n self.config.setdefault('LDAP_GROUP_MEMBERS_ATTR', 'uniqueMember')\n self.config.setdefault(\n 'LDAP_GET_GROUP_ATTRIBUTES', ldap3.ALL_ATTRIBUTES)\n self.config.setdefault('LDAP_ADD_SERVER', True)\n\n if self.config['LDAP_ADD_SERVER']:\n self.add_server(\n hostname=self.config['LDAP_HOST'],\n port=self.config['LDAP_PORT'],\n use_ssl=self.config['LDAP_USE_SSL']\n )", "response": "Initializes the configuration dictionary for the current LDAP server."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds an additional server to the server pool and returns the new server object.", "response": "def add_server(self, hostname, port, use_ssl, tls_ctx=None):\n \"\"\"\n Add an additional server to the server 
pool and return the\n freshly created server.\n\n Args:\n hostname (str): Hostname of the server\n port (int): Port of the server\n use_ssl (bool): True if SSL is to be used when connecting.\n tls_ctx (ldap3.Tls): An optional TLS context object to use\n when connecting.\n\n Returns:\n ldap3.Server: The freshly created server object.\n \"\"\"\n if not use_ssl and tls_ctx:\n raise ValueError(\"Cannot specify a TLS context and not use SSL!\")\n server = ldap3.Server(\n hostname,\n port=port,\n use_ssl=use_ssl,\n tls=tls_ctx\n )\n self._server_pool.add(server)\n return server"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding a connection to the ldap3_manager_connections list for the current application context.", "response": "def _contextualise_connection(self, connection):\n \"\"\"\n Add a connection to the appcontext so it can be freed/unbound at\n a later time if an exception occured and it was not freed.\n\n Args:\n connection (ldap3.Connection): Connection to add to the appcontext\n\n \"\"\"\n\n ctx = stack.top\n if ctx is not None:\n if not hasattr(ctx, 'ldap3_manager_connections'):\n ctx.ldap3_manager_connections = [connection]\n else:\n ctx.ldap3_manager_connections.append(connection)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nremoving a connection from the ldap3_manager_connections list", "response": "def _decontextualise_connection(self, connection):\n \"\"\"\n Remove a connection from the appcontext.\n\n Args:\n connection (ldap3.Connection): connection to remove from the\n appcontext\n\n \"\"\"\n\n ctx = stack.top\n if ctx is not None and connection in ctx.ldap3_manager_connections:\n ctx.ldap3_manager_connections.remove(connection)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef authenticate_direct_credentials(self, username, password):\n\n bind_user = '{}{}{}'.format(\n self.config.get('LDAP_BIND_DIRECT_PREFIX'),\n username,\n 
self.config.get('LDAP_BIND_DIRECT_SUFFIX')\n )\n connection = self._make_connection(\n bind_user=bind_user,\n bind_password=password,\n )\n\n response = AuthenticationResponse()\n try:\n connection.bind()\n response.status = AuthenticationResponseStatus.success\n response.user_id = username\n log.debug(\n \"Authentication was successful for user '{0}'\".format(username))\n\n if self.config.get('LDAP_BIND_DIRECT_GET_USER_INFO'):\n # User wants extra info about the bind\n user_filter = '({search_attr}={username})'.format(\n search_attr=self.config.get('LDAP_USER_LOGIN_ATTR'),\n username=username\n )\n search_filter = '(&{0}{1})'.format(\n self.config.get('LDAP_USER_OBJECT_FILTER'),\n user_filter,\n )\n\n connection.search(\n search_base=self.full_user_search_dn,\n search_filter=search_filter,\n search_scope=getattr(\n ldap3, self.config.get('LDAP_USER_SEARCH_SCOPE')),\n attributes=self.config.get('LDAP_GET_USER_ATTRIBUTES'),\n )\n\n if len(connection.response) == 0 or \\\n (self.config.get('LDAP_FAIL_AUTH_ON_MULTIPLE_FOUND') and\n len(connection.response) > 1):\n # Don't allow them to log in.\n log.error(\n \"Could not gather extra info for user '{0}'\".format(username))\n else:\n\n user = connection.response[0]\n user['attributes']['dn'] = user['dn']\n response.user_info = user['attributes']\n response.user_dn = user['dn']\n\n except ldap3.core.exceptions.LDAPInvalidCredentialsResult:\n log.debug(\n \"Authentication was not successful for user '{0}'\".format(username))\n response.status = AuthenticationResponseStatus.fail\n except Exception as e:\n log.error(e)\n response.status = AuthenticationResponseStatus.fail\n\n self.destroy_connection(connection)\n return response", "response": "Authenticate a user using direct credentials."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef authenticate_direct_bind(self, username, password):\n\n bind_user = '{rdn}={username},{user_search_dn}'.format(\n 
rdn=self.config.get('LDAP_USER_RDN_ATTR'),\n username=username,\n user_search_dn=self.full_user_search_dn,\n )\n\n connection = self._make_connection(\n bind_user=bind_user,\n bind_password=password,\n )\n\n response = AuthenticationResponse()\n\n try:\n connection.bind()\n log.debug(\n \"Authentication was successful for user '{0}'\".format(username))\n response.status = AuthenticationResponseStatus.success\n # Get user info here.\n\n user_info = self.get_user_info(\n dn=bind_user, _connection=connection)\n response.user_dn = bind_user\n response.user_id = username\n response.user_info = user_info\n if self.config.get('LDAP_SEARCH_FOR_GROUPS'):\n response.user_groups = self.get_user_groups(\n dn=bind_user, _connection=connection)\n\n except ldap3.core.exceptions.LDAPInvalidCredentialsResult:\n log.debug(\n \"Authentication was not successful for user '{0}'\".format(username))\n response.status = AuthenticationResponseStatus.fail\n except Exception as e:\n log.error(e)\n response.status = AuthenticationResponseStatus.fail\n\n self.destroy_connection(connection)\n return response", "response": "Authenticate a user with a direct bind."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nauthenticate a user using an LDAP search.", "response": "def authenticate_search_bind(self, username, password):\n \"\"\"\n Performs a search bind to authenticate a user. 
This is\n required when a the login attribute is not the same\n as the RDN, since we cannot string together their DN on\n the fly, instead we have to find it in the LDAP, then attempt\n to bind with their credentials.\n\n Args:\n username (str): Username of the user to bind (the field specified\n as LDAP_BIND_LOGIN_ATTR)\n password (str): User's password to bind with when we find their dn.\n\n Returns:\n AuthenticationResponse\n\n \"\"\"\n connection = self._make_connection(\n bind_user=self.config.get('LDAP_BIND_USER_DN'),\n bind_password=self.config.get('LDAP_BIND_USER_PASSWORD'),\n )\n\n try:\n connection.bind()\n log.debug(\"Successfully bound to LDAP as '{0}' for search_bind method\".format(\n self.config.get('LDAP_BIND_USER_DN') or 'Anonymous'\n ))\n except Exception as e:\n self.destroy_connection(connection)\n log.error(e)\n return AuthenticationResponse()\n\n # Find the user in the search path.\n user_filter = '({search_attr}={username})'.format(\n search_attr=self.config.get('LDAP_USER_LOGIN_ATTR'),\n username=username\n )\n search_filter = '(&{0}{1})'.format(\n self.config.get('LDAP_USER_OBJECT_FILTER'),\n user_filter,\n )\n\n log.debug(\n \"Performing an LDAP Search using filter '{0}', base '{1}', \"\n \"and scope '{2}'\".format(\n search_filter,\n self.full_user_search_dn,\n self.config.get('LDAP_USER_SEARCH_SCOPE')\n ))\n\n connection.search(\n search_base=self.full_user_search_dn,\n search_filter=search_filter,\n search_scope=getattr(\n ldap3, self.config.get('LDAP_USER_SEARCH_SCOPE')),\n attributes=self.config.get('LDAP_GET_USER_ATTRIBUTES')\n )\n\n response = AuthenticationResponse()\n\n if len(connection.response) == 0 or \\\n (self.config.get('LDAP_FAIL_AUTH_ON_MULTIPLE_FOUND') and\n len(connection.response) > 1):\n # Don't allow them to log in.\n log.debug(\n \"Authentication was not successful for user '{0}'\".format(username))\n\n else:\n for user in connection.response:\n # Attempt to bind with each user we find until we can find\n # one that 
works.\n\n if 'type' not in user or user.get('type') != 'searchResEntry':\n # Issue #13 - Don't return non-entry results.\n continue\n\n user_connection = self._make_connection(\n bind_user=user['dn'],\n bind_password=password\n )\n\n log.debug(\n \"Directly binding a connection to a server with \"\n \"user:'{0}'\".format(user['dn']))\n try:\n user_connection.bind()\n log.debug(\n \"Authentication was successful for user '{0}'\".format(username))\n response.status = AuthenticationResponseStatus.success\n\n # Populate User Data\n user['attributes']['dn'] = user['dn']\n response.user_info = user['attributes']\n response.user_id = username\n response.user_dn = user['dn']\n if self.config.get('LDAP_SEARCH_FOR_GROUPS'):\n response.user_groups = self.get_user_groups(\n dn=user['dn'], _connection=connection)\n self.destroy_connection(user_connection)\n break\n\n except ldap3.core.exceptions.LDAPInvalidCredentialsResult:\n log.debug(\n \"Authentication was not successful for \"\n \"user '{0}'\".format(username))\n response.status = AuthenticationResponseStatus.fail\n except Exception as e: # pragma: no cover\n # This should never happen, however in case ldap3 does ever\n # throw an error here, we catch it and log it\n log.error(e)\n response.status = AuthenticationResponseStatus.fail\n\n self.destroy_connection(user_connection)\n\n self.destroy_connection(connection)\n return response"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_user_groups(self, dn, group_search_dn=None, _connection=None):\n\n connection = _connection\n if not connection:\n connection = self._make_connection(\n bind_user=self.config.get('LDAP_BIND_USER_DN'),\n bind_password=self.config.get('LDAP_BIND_USER_PASSWORD')\n )\n connection.bind()\n\n safe_dn = ldap3.utils.conv.escape_filter_chars(dn)\n search_filter = '(&{group_filter}({members_attr}={user_dn}))'.format(\n group_filter=self.config.get('LDAP_GROUP_OBJECT_FILTER'),\n 
members_attr=self.config.get('LDAP_GROUP_MEMBERS_ATTR'),\n user_dn=safe_dn\n )\n\n log.debug(\n \"Searching for groups for specific user with filter '{0}' \"\n \", base '{1}' and scope '{2}'\".format(\n search_filter,\n group_search_dn or self.full_group_search_dn,\n self.config.get('LDAP_GROUP_SEARCH_SCOPE')\n ))\n\n connection.search(\n search_base=group_search_dn or self.full_group_search_dn,\n search_filter=search_filter,\n attributes=self.config.get('LDAP_GET_GROUP_ATTRIBUTES'),\n search_scope=getattr(\n ldap3, self.config.get('LDAP_GROUP_SEARCH_SCOPE'))\n )\n\n results = []\n for item in connection.response:\n if 'type' not in item or item.get('type') != 'searchResEntry':\n # Issue #13 - Don't return non-entry results.\n continue\n\n group_data = item['attributes']\n group_data['dn'] = item['dn']\n results.append(group_data)\n\n if not _connection:\n # We made a connection, so we need to kill it.\n self.destroy_connection(connection)\n\n return results", "response": "Returns a list of LDAP groups that a user is a member of."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_user_info(self, dn, _connection=None):\n return self.get_object(\n dn=dn,\n filter=self.config.get('LDAP_USER_OBJECT_FILTER'),\n attributes=self.config.get(\"LDAP_GET_USER_ATTRIBUTES\"),\n _connection=_connection,\n )", "response": "Gets info about a user specified at dn."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a dictionary of user info for a user.", "response": "def get_user_info_for_username(self, username, _connection=None):\n \"\"\"\n Gets info about a user at a specified username by searching the\n Users DN. Username attribute is the same as specified as\n LDAP_USER_LOGIN_ATTR.\n\n\n Args:\n username (str): Username of the user to search for.\n _connection (ldap3.Connection): A connection object to use when\n searching. 
If not given, a temporary connection will be\n created, and destroyed after use.\n Returns:\n dict: A dictionary of the user info from LDAP\n \"\"\"\n ldap_filter = '(&({0}={1}){2})'.format(\n self.config.get('LDAP_USER_LOGIN_ATTR'),\n username,\n self.config.get('LDAP_USER_OBJECT_FILTER')\n )\n\n return self.get_object(\n dn=self.full_user_search_dn,\n filter=ldap_filter,\n attributes=self.config.get(\"LDAP_GET_USER_ATTRIBUTES\"),\n _connection=_connection,\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_object(self, dn, filter, attributes, _connection=None):\n\n connection = _connection\n if not connection:\n connection = self._make_connection(\n bind_user=self.config.get('LDAP_BIND_USER_DN'),\n bind_password=self.config.get('LDAP_BIND_USER_PASSWORD')\n )\n connection.bind()\n\n connection.search(\n search_base=dn,\n search_filter=filter,\n attributes=attributes,\n )\n\n data = None\n if len(connection.response) > 0:\n data = connection.response[0]['attributes']\n data['dn'] = connection.response[0]['dn']\n\n if not _connection:\n # We made a connection, so we need to kill it.\n self.destroy_connection(connection)\n\n return data", "response": "Gets an object at the specified dn and returns it."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a ldap3. Connection object for the current LDAP version of the current application context.", "response": "def connection(self):\n \"\"\"\n Convenience property for externally accessing an authenticated\n connection to the server. This connection is automatically\n handled by the appcontext, so you do not have to perform an unbind.\n\n Returns:\n ldap3.Connection: A bound ldap3.Connection\n Raises:\n ldap3.core.exceptions.LDAPException: Since this method is performing\n a bind on behalf of the caller. 
You should handle this case\n occuring, such as invalid service credentials.\n \"\"\"\n ctx = stack.top\n if ctx is None:\n raise Exception(\"Working outside of the Flask application \"\n \"context. If you wish to make a connection outside of a flask\"\n \" application context, please handle your connections \"\n \"and use manager.make_connection()\")\n\n if hasattr(ctx, 'ldap3_manager_main_connection'):\n return ctx.ldap3_manager_main_connection\n else:\n connection = self._make_connection(\n bind_user=self.config.get('LDAP_BIND_USER_DN'),\n bind_password=self.config.get('LDAP_BIND_USER_PASSWORD'),\n contextualise=False\n )\n connection.bind()\n if ctx is not None:\n ctx.ldap3_manager_main_connection = connection\n return connection"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a connection to the LDAP Directory.", "response": "def make_connection(self, bind_user=None, bind_password=None, **kwargs):\n \"\"\"\n Make a connection to the LDAP Directory.\n\n Args:\n bind_user (str): User to bind with. If `None`, AUTH_ANONYMOUS is\n used, otherwise authentication specified with\n config['LDAP_BIND_AUTHENTICATION_TYPE'] is used.\n bind_password (str): Password to bind to the directory with\n **kwargs (dict): Additional arguments to pass to the\n ``ldap3.Connection``\n\n Returns:\n ldap3.Connection: An unbound ldap3.Connection. You should handle exceptions\n upon bind if you use this internal method.\n \"\"\"\n\n return self._make_connection(bind_user, bind_password,\n contextualise=False, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _make_connection(self, bind_user=None, bind_password=None,\n contextualise=True, **kwargs):\n \"\"\"\n Make a connection.\n\n Args:\n bind_user (str): User to bind with. 
If `None`, AUTH_ANONYMOUS is\n used, otherwise authentication specified with\n config['LDAP_BIND_AUTHENTICATION_TYPE'] is used.\n bind_password (str): Password to bind to the directory with\n contextualise (bool): If true (default), will add this connection to the\n appcontext so it can be unbound upon app_teardown.\n\n Returns:\n ldap3.Connection: An unbound ldap3.Connection. You should handle exceptions\n upon bind if you use this internal method.\n \"\"\"\n\n authentication = ldap3.ANONYMOUS\n if bind_user:\n authentication = getattr(ldap3, self.config.get(\n 'LDAP_BIND_AUTHENTICATION_TYPE'))\n\n log.debug(\"Opening connection with bind user '{0}'\".format(\n bind_user or 'Anonymous'))\n connection = ldap3.Connection(\n server=self._server_pool,\n read_only=self.config.get('LDAP_READONLY'),\n user=bind_user,\n password=bind_password,\n client_strategy=ldap3.SYNC,\n authentication=authentication,\n check_names=self.config['LDAP_CHECK_NAMES'],\n raise_exceptions=True,\n **kwargs\n )\n\n if contextualise:\n self._contextualise_connection(connection)\n return connection", "response": "Creates a connection to the LDAP server."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef destroy_connection(self, connection):\n\n log.debug(\"Destroying connection at <{0}>\".format(hex(id(connection))))\n self._decontextualise_connection(connection)\n connection.unbind()", "response": "Destroys a connection. 
Removes the connection from the appcontext and unbinds it."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef compiled_sub_dn(self, prepend):\n prepend = prepend.strip()\n if prepend == '':\n return self.config.get('LDAP_BASE_DN')\n return '{prepend},{base}'.format(\n prepend=prepend,\n base=self.config.get('LDAP_BASE_DN')\n )", "response": "Returns a compiled DN with the DN Base appended to the end."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef search_all(self, quiet=False):\n '''a \"show all\" search that doesn't require a query\n \n Parameters\n ==========\n quiet: if quiet is True, we only are using the function to return\n rows of results.\n '''\n\n results = []\n\n for obj in self.bucket.objects.all():\n subsrc = obj.Object()\n\n # Metadata bug will capitalize all fields, workaround is to lowercase\n # https://github.com/boto/boto3/issues/1709\n metadata = dict((k.lower(), v) for k, v in subsrc.metadata.items())\n size = ''\n\n # MM-DD-YYYY\n datestr = \"%s-%s-%s\" %(obj.last_modified.month,\n obj.last_modified.day,\n obj.last_modified.year)\n\n if 'sizemb' in metadata:\n size = '%sMB' % metadata['sizemb']\n\n results.append([obj.key, datestr, size ])\n \n if len(results) == 0:\n bot.info(\"No container collections found.\")\n sys.exit(1)\n\n if not quiet:\n bot.info(\"Containers\")\n bot.table(results)\n return results", "response": "a show all search that doesn t require a query\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef container_search(self, query, across_collections=False):\n '''search for a specific container. If across collections is False,\n the query is parsed as a full container name and a specific container\n is returned. If across_collections is True, the container is searched\n for across collections. 
If across collections is True, details are\n not shown'''\n\n\n results = self._search_all(quiet=True)\n matches = []\n for result in results:\n\n # This is the container name\n if query in result[0]:\n matches.append(result)\n\n\n if len(matches) > 0:\n bot.info(\"Containers %s\" %query)\n bot.table(matches)\n else:\n bot.info('No matches for %s found.' % name)\n\n return matches", "response": "search for a specific container"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsearching a Singularity registry for a list of images.", "response": "def search(self, query=None, args=None):\n '''query a Singularity registry for a list of images. \n If query is None, collections are listed. \n\n EXAMPLE QUERIES:\n\n [empty] list all collections in registry\n vsoch do a general search for the expression \"vsoch\"\n vsoch/ list all containers in collection vsoch\n /dinosaur list containers across collections called \"dinosaur\"\n vsoch/dinosaur list details of container vsoch/dinosaur\n tag \"latest\" is used by default, and then the most recent\n vsoch/dinosaur:tag list details for specific container\n \n '''\n\n if query is not None:\n\n # List all containers in collection query/\n if query.endswith('/'): # collection search\n return self._collection_search(query)\n\n # List containers across collections called /query\n elif query.startswith('/'): \n return self._container_search(query, across_collections=True)\n\n # List details of a specific collection container\n elif \"/\" in query or \":\" in query:\n return self._container_search(query)\n\n # Search collections across all fields\n return self._collection_search(query=query)\n\n\n # Search collections across all fields\n return self._search_all()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsearching for a specific container", "response": "def container_search(self, query, across_collections=False):\n '''search for a specific container. 
If across collections is False,\n the query is parsed as a full container name and a specific container\n is returned. If across_collections is True, the container is searched\n for across collections. If across collections is True, details are\n not shown'''\n\n query = query.lower().strip('/')\n\n q = parse_image_name(remove_uri(query), defaults=False)\n\n if q['tag'] is not None:\n if across_collections is True:\n url = '%s/container/search/name/%s/tag/%s' % (self.base, q['image'], q['tag'])\n else:\n url = '%s/container/search/collection/%s/name/%s/tag/%s' % (self.base, q['collection'], q['image'], q['tag'])\n\n elif q['tag'] is None: \n if across_collections is True:\n url = '%s/container/search/name/%s' % (self.base, q['image'])\n else:\n url = '%s/container/search/collection/%s/name/%s' % (self.base, q['collection'], q['image'])\n\n result = self._get(url)\n if \"containers\" in result:\n result = result['containers']\n\n if len(result) == 0:\n bot.info(\"No containers found.\")\n sys.exit(1)\n\n bot.info(\"Containers %s\" %query)\n\n rows = []\n for c in result: \n\n rows.append([ '%s/%s' %(c['collection'], c['name']),\n c['tag'] ])\n\n bot.table(rows)\n return rows"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nquerying a GitLab artifacts folder for a list of images.", "response": "def search(self, query=None, args=None):\n '''query a GitLab artifacts folder for a list of images. \n If query is None, collections are listed. \n '''\n if query is None:\n bot.exit('You must include a collection query, /')\n\n # or default to listing (searching) all things.\n return self._search_all(query)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nensuring that the client name is included in a list of tags. This is important for matching builders to the correct client. We exit on fail.", "response": "def _client_tagged(self, tags):\n '''ensure that the client name is included in a list of tags. 
This is\n important for matching builders to the correct client. We exit\n on fail.\n \n Parameters\n ==========\n tags: a list of tags to look for client name in\n\n '''\n\n # We must match the client to a tag\n name = self.client_name.lower()\n tags = [t.lower() for t in tags]\n\n if name not in tags:\n bot.error('%s not found in %s, must match!' %(name, tags))\n sys.exit(1)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef speak(self):\n '''\n a function for the client to announce him or herself, depending\n on the level specified. If you want your client to have additional\n announced things here, then implement the class `_speak` for your\n client.\n\n '''\n if self.quiet is False:\n bot.info('[client|%s] [database|%s]' %(self.client_name,\n self.database))\n\n self._speak()", "response": "speak the user s information about the resource in the database"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef announce(self, command=None):\n '''the client will announce itself given that a command is not in a\n particular predefined list.\n '''\n if command is not None:\n if command not in ['get'] and self.quiet is False:\n self.speak()", "response": "the client will announce itself given that a command is not in a specific predefined list."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _update_secrets(self):\n '''The user is required to have an application secrets file in his\n or her environment. 
The client exists with error \n if the variable isn't found.\n '''\n env = 'SREGISTRY_GOOGLE_DRIVE_CREDENTIALS'\n self._secrets = self._get_and_update_setting(env)\n self._base = self._get_and_update_setting('SREGISTRY_GOOGLE_DRIVE_ROOT')\n\n if self._base is None:\n self._base = 'sregistry'\n\n if self._secrets is None:\n bot.error('You must export %s to use Google Drive client' %env)\n bot.info(\"https://singularityhub.github.io/sregistry-cli/client-google-drive\")\n sys.exit(1)", "response": "Update the secrets file with the values from the environment variable."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting service client for the google drive API", "response": "def _get_service(self, version='v3'):\n '''get service client for the google drive API\n :param version: version to use (default is v3)\n '''\n invalid = True\n\n # The user hasn't disabled cache of credentials\n if self._credential_cache is not None:\n storage = Storage(self._credential_cache)\n\n # The store has never been used before\n if os.path.exists(self._credential_cache):\n credentials = storage.get()\n if not credentials.invalid:\n invalid = False\n\n # If credentials are allowed but invalid, refresh\n if invalid is True:\n \n class flags:\n auth_host_name='localhost'\n auth_host_port=[8080]\n noauth_local_webserver=False\n logging_level='INFO'\n flow = oclient.flow_from_clientsecrets(self._secrets, self._scope)\n credentials = tools.run_flow(flow, storage, flags)\n\n # If the user is ok to cache them\n if self._credential_cache is not None:\n storage.put(credentials)\n\n # Either way, authenticate the user with credentials\n http = credentials.authorize(httplib2.Http())\n return build('drive', version, http=http)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nqueries a Singularity registry for a list of images.", "response": "def search(self, query=None, **kwargs):\n '''query a Singularity registry for a list 
of images. \n If query is None, collections are listed. \n\n EXAMPLE QUERIES:\n\n [empty] list all collections in singularity hub\n vsoch do a general search for collection \"vsoch\"\n vsoch/dinosaur list details of container vsoch/dinosaur\n tag \"latest\" is used by default, and then the most recent\n vsoch/dinosaur:tag list details for specific container\n \n '''\n\n if query is not None:\n return self._search_collection(query)\n\n # Search collections across all fields\n return self.list()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef list_all(self, **kwargs):\n '''a \"show all\" search that doesn't require a query'''\n\n quiet=False\n if \"quiet\" in kwargs:\n quiet = kwargs['quiet']\n\n bot.spinner.start()\n url = '%s/collections/' %self.base\n results = self._paginate_get(url)\n bot.spinner.stop() \n\n if len(results) == 0:\n bot.info(\"No container collections found.\")\n sys.exit(1)\n\n rows = []\n for result in results:\n if \"containers\" in result:\n if result['id'] not in [37,38,39]:\n for c in result['containers']:\n rows.append([c['detail'],\"%s:%s\" %(c['name'],c['tag'])])\n\n if quiet is False:\n bot.info(\"Collections\")\n bot.table(rows)\n\n return rows", "response": "a show all search that doesn t require a query"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npulling an image from gitlab", "response": "def pull(self, images, file_name=None, save=True, **kwargs):\n '''pull an image from gitlab. The image is found based on the \n uri that should correspond to a gitlab repository, and then\n the branch, job name, artifact folder, and tag of the container.\n The minimum that we need are the job id, collection, and job name. Eg:\n\n job_id|collection|job_name (or)\n job_id|collection\n\n Parameters\n ==========\n images: refers to the uri given by the user to pull in the format\n specified above\n file_name: the user's requested name for the file. 
It can \n optionally be None if the user wants a default.\n save: if True, you should save the container to the database\n using self.add()\n \n Returns\n =======\n finished: a single container path, or list of paths\n\n '''\n force = False\n if \"force\" in kwargs:\n force = kwargs['force']\n\n if not isinstance(images, list):\n images = [images]\n\n bot.debug('Execution of PULL for %s images' %len(images))\n\n # If used internally we want to return a list to the user.\n finished = []\n for image in images:\n\n # Format job_id|collection|job_name\n # 122056733,singularityhub/gitlab-ci'\n # 122056733,singularityhub/gitlab-ci,build\n\n job_id, collection, job_name = self._parse_image_name(image)\n names = parse_image_name(remove_uri(collection))\n\n # If the user didn't provide a file, make one based on the names\n if file_name is None:\n file_name = self._get_storage_name(names)\n\n # If the file already exists and force is False\n if os.path.exists(file_name) and force is False:\n bot.error('Image exists! 
Remove first, or use --force to overwrite')\n sys.exit(1) \n\n # Put together the GitLab URI\n image_name = \"Singularity.%s.simg\" %(names['tag'])\n if names['tag'] == 'latest':\n image_name = \"Singularity.simg\"\n\n # Assemble artifact path\n artifact_path = \"%s/%s\" %(self.artifacts, image_name)\n bot.info('Looking for artifact %s for job name %s, %s' %(artifact_path,\n job_name,\n job_id))\n\n project = quote_plus(collection.strip('/'))\n \n # This is supposed to work, but it doesn't\n # url = \"%s/projects/%s/jobs/%s/artifacts/file/%s\" %(self.api_base, \n # project, job_id,\n # artifact_path)\n\n # This does work :)\n url = \"%s/%s/-/jobs/%s/artifacts/raw/%s/?inline=false\" % (self.base, \n collection, \n job_id, \n artifact_path) \n\n bot.info(url)\n\n # stream the url content to the file name\n image_file = self.download(url=url,\n file_name=file_name,\n show_progress=True)\n\n metadata = self._get_metadata()\n metadata['collection'] = collection\n metadata['job_id'] = job_id\n metadata['job_name'] = job_name\n metadata['artifact_path'] = artifact_path\n metadata['sregistry_pull'] = image\n\n # If we save to storage, the uri is the dropbox_path\n if save is True:\n container = self.add(image_path = image_file,\n image_uri = image,\n metadata = metadata,\n url = url)\n\n # When the container is created, this is the path to the image\n image_file = container.image\n\n if os.path.exists(image_file):\n bot.debug('Retrieved image file %s' %image_file)\n bot.custom(prefix=\"Success!\", message=image_file)\n finished.append(image_file)\n\n\n if len(finished) == 1:\n finished = finished[0]\n return finished"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrun will send a list of tasks a tuple with arguments and then through a function.", "response": "def run(self, func, tasks, func2=None):\n '''run will send a list of tasks,\n a tuple with arguments, through a function.\n the arguments should be ordered correctly.\n :param func: 
the function to run with multiprocessing.pool\n :param tasks: a list of tasks, each a tuple\n of arguments to process\n :param func2: filter function to run result\n from func through (optional)\n '''\n\n # Keep track of some progress for the user\n progress = 1\n total = len(tasks)\n\n # if we don't have tasks, don't run\n if len(tasks) == 0:\n return\n\n # If two functions are run per task, double total jobs\n if func2 is not None:\n total = total * 2\n\n finished = []\n level1 = []\n results = []\n\n try:\n prefix = \"[%s/%s]\" % (progress, total)\n bot.show_progress(0, total, length=35, prefix=prefix)\n pool = multiprocessing.Pool(self.workers, init_worker)\n\n self.start()\n for task in tasks:\n result = pool.apply_async(multi_wrapper,\n multi_package(func, [task]))\n results.append(result)\n level1.append(result._job)\n\n while len(results) > 0:\n result = results.pop()\n result.wait()\n bot.show_progress(progress, total, length=35, prefix=prefix)\n progress += 1\n prefix = \"[%s/%s]\" % (progress, total)\n\n # Pass the result through a second function?\n if func2 is not None and result._job in level1:\n result = pool.apply_async(multi_wrapper,\n multi_package(func2,\n [(result.get(),)]))\n results.append(result)\n else:\n finished.append(result.get())\n\n self.end()\n pool.close()\n pool.join()\n\n except (KeyboardInterrupt, SystemExit):\n bot.error(\"Keyboard interrupt detected, terminating workers!\")\n pool.terminate()\n sys.exit(1)\n\n except Exception as e:\n bot.error(e)\n\n return finished"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef push(self, path, name, tag=None):\n '''push an image to Google Cloud Storage, meaning uploading it\n \n path: should correspond to an absolte image path (or derive it)\n name: should be the complete uri that the user has requested to push.\n tag: should correspond with an image tag. 
This is provided to mirror Docker\n '''\n path = os.path.abspath(path)\n bot.debug(\"PUSH %s\" % path)\n\n if not os.path.exists(path):\n bot.error('%s does not exist.' %path)\n sys.exit(1)\n\n\n # This returns a data structure with collection, container, based on uri\n names = parse_image_name(remove_uri(name),tag=tag)\n\n if names['version'] is None:\n version = get_image_hash(path)\n names = parse_image_name(remove_uri(name), tag=tag, version=version) \n\n # Update metadata with names\n metadata = self.get_metadata(path, names=names)\n if \"data\" in metadata:\n metadata = metadata['data']\n metadata.update(names)\n\n manifest = self._upload(source=path, \n destination=names['storage'],\n metadata=metadata)\n\n print(manifest['mediaLink'])", "response": "push an image to Google Cloud Storage"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef upload(self, source, \n destination, \n bucket,\n chunk_size = 2 * 1024 * 1024, \n metadata=None,\n keep_private=True):\n\n '''upload a file from a source to a destination. 
The client is expected\n to have a bucket (self._bucket) that is created when instantiated.\n \n This would be the method to do the same using the storage client,\n but not easily done for resumable\n\n blob = self._bucket.blob(destination)\n blob.upload_from_filename(filename=source, \n content_type=\"application/zip\",\n client=self._service)\n\n url = blob.public_url\n if isinstance(url, six.binary_type):\n url = url.decode('utf-8')\n\n return url\n '''\n env = 'SREGISTRY_GOOGLE_STORAGE_PRIVATE'\n keep_private = self._get_and_update_setting(env) or keep_private\n\n media = MediaFileUpload(source, chunksize=chunk_size, resumable=True)\n request = self._storage_service.objects().insert(bucket=bucket.name, \n name=destination,\n media_body=media)\n\n response = None\n total = request.resumable._size / (1024*1024.0)\n\n bar = ProgressBar(expected_size=total, filled_char='=', hide=self.quiet)\n\n while response is None:\n error = None\n try:\n progress, response = request.next_chunk()\n if progress:\n bar.show(progress.resumable_progress / (1024*1024.0))\n except:\n raise\n\n # When we finish upload, get as blob\n blob = bucket.blob(destination)\n if blob.exists():\n\n if not keep_private:\n blob.make_public()\n \n # If the user has a dictionary of metadata to update\n if metadata is not None:\n body = prepare_metadata(metadata)\n blob.metadata = metadata \n blob._properties['metadata'] = metadata\n blob.patch()\n\n return response", "response": "Uploads a file from a source to a destination."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate headers with a token & other fields", "response": "def update_headers(self,fields=None):\n '''update headers with a token & other fields\n '''\n do_reset = True\n if hasattr(self, 'headers'):\n if self.headers is not None:\n do_reset = False\n\n if do_reset is True:\n self._reset_headers()\n\n if fields is not None:\n for key,value in fields.items():\n self.headers[key] = value\n\n 
header_names = \",\".join(list(self.headers.keys()))\n bot.debug(\"Headers found: %s\" %header_names)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef download_task(url, headers, destination, download_type='layer'):\n '''download an image layer (.tar.gz) to a specified download folder.\n This task is done by using local versions of the same download functions\n that are used for the client.\n core stream/download functions of the parent client.\n\n Parameters\n ==========\n image_id: the shasum id of the layer, already determined to not exist\n repo_name: the image name (library/ubuntu) to retrieve\n download_folder: download to this folder. If not set, uses temp.\n \n\n '''\n # Update the user what we are doing\n bot.verbose(\"Downloading %s from %s\" % (download_type, url))\n\n # Step 1: Download the layer atomically\n file_name = \"%s.%s\" % (destination,\n next(tempfile._get_candidate_names()))\n\n tar_download = download(url, file_name, headers=headers)\n\n try:\n shutil.move(tar_download, destination)\n except Exception:\n msg = \"Cannot untar layer %s,\" % tar_download\n msg += \" was there a problem with download?\"\n bot.error(msg)\n sys.exit(1)\n\n return destination", "response": "download an image layer to a specified download folder"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef post(url,data=None,return_json=True):\n '''post will use requests to get a particular url\n '''\n bot.debug(\"POST %s\" %url)\n return call(url,\n headers=headers,\n func=requests.post,\n data=data,\n return_json=return_json)", "response": "post will use requests to get a particular url\n "} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets will use requests to get a particular url", "response": "def get(url,headers=None,token=None,data=None,return_json=True):\n '''get will use requests to get a particular url\n '''\n 
bot.debug(\"GET %s\" %url)\n return call(url,\n headers=headers,\n func=requests.get,\n data=data,\n return_json=return_json)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrequire secrets ensures that the client has the secrets file and that the client has all required parameters defined.", "response": "def require_secrets(self, params=None):\n '''require secrets ensures that the client has the secrets file, and\n specifically has one or more parameters defined. If params is None,\n only a check is done for the file.\n\n Parameters\n ==========\n params: a list of keys to lookup in the client secrets, eg:\n \n secrets[client_name][params1] should not be in [None,''] or not set\n\n '''\n name = self.client_name \n\n # Check 1: the client must have secrets, period\n has_secrets = True\n\n # Secrets file not asked for (incorrectly) but still wanted\n # The client shouldn't be calling this function if didn't init secrets\n if not hasattr(self,'secrets'):\n has_secrets = False\n\n # Secret file was not found, period\n elif hasattr(self,'secrets'):\n if self.secrets is None: \n has_secrets = False\n\n # The client isn't defined in the secrets file\n elif self.client_name not in self.secrets: \n has_secrets = False\n\n # Missing file or client secrets, fail\n if has_secrets is False:\n message = '%s requires client secrets.' %name\n bot.error(message)\n sys.exit(1)\n\n # Check 2: we have secrets and lookup, do we have all needed params?\n if params is not None:\n\n # Assume list so we can always parse through\n if not isinstance(params,list):\n params = [params]\n\n for param in params:\n\n # The parameter is not a key for the client\n if param not in self.secrets[name]: \n has_secrets = False\n\n # The parameter is a key, but empty or undefined\n elif self.secrets[name][param] in [None,'']: \n has_secrets=False\n\n # Missing parameter, exit on fail\n if has_secrets is False:\n message = 'Missing %s in client secrets.' 
%param\n bot.error(message)\n sys.exit(1)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef auth_flow(self, url):\n '''auth flow is a function to present the user with a url to retrieve\n some token/code, and then copy paste it back in the terminal.\n\n Parameters\n ==========\n url should be a url that is generated for the user to go to and accept\n getting a credential in the browser.\n \n '''\n print('Please go to this URL and login: {0}'.format(url))\n get_input = getattr(__builtins__, 'raw_input', input)\n message = 'Please enter the code you get after login here: '\n code = get_input(message).strip()\n return code", "response": "auth flow is a function to get some token from the user and copy paste it back in the terminal."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndownload the content of a URL to a file", "response": "def download(url, file_name, headers=None, show_progress=True):\n '''stream to a temporary file, rename on successful completion\n\n Parameters\n ==========\n file_name: the file name to stream to\n url: the url to stream from\n headers: additional headers to add\n '''\n\n fd, tmp_file = tempfile.mkstemp(prefix=(\"%s.tmp.\" % file_name)) \n os.close(fd)\n\n if DISABLE_SSL_CHECK is True:\n bot.warning('Verify of certificates disabled! ::TESTING USE ONLY::')\n\n verify = not DISABLE_SSL_CHECK\n response = stream(url, headers=headers, stream_to=tmp_file)\n shutil.move(tmp_file, file_name)\n return file_name"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstreaming is a get that will stream to file_name. It will retry if the request fails.", "response": "def stream(url, headers, stream_to=None, retry=True):\n '''stream is a get that will stream to file_name. 
Since this is a worker\n task, it differs from the client provided version in that it requires\n headers.\n '''\n bot.debug(\"GET %s\" % url)\n\n if DISABLE_SSL_CHECK is True:\n bot.warning('Verify of certificates disabled! ::TESTING USE ONLY::')\n\n # Ensure headers are present, update if not\n response = requests.get(url, \n headers=headers,\n verify=not DISABLE_SSL_CHECK,\n stream=True)\n\n # If we get permissions error, one more try with updated token\n if response.status_code in [401, 403]:\n headers = update_token(headers)\n return stream(url, headers, stream_to, retry=False)\n\n # Successful Response\n elif response.status_code == 200:\n\n # Keep user updated with Progress Bar\n content_size = None\n if 'Content-Length' in response.headers:\n progress = 0\n content_size = int(response.headers['Content-Length'])\n bot.show_progress(progress,content_size,length=35)\n\n chunk_size = 1 << 20\n with open(stream_to,'wb') as filey:\n for chunk in response.iter_content(chunk_size=chunk_size):\n filey.write(chunk)\n if content_size is not None:\n progress+=chunk_size\n bot.show_progress(iteration=progress,\n total=content_size,\n length=35,\n carriage_return=False)\n\n # Newline to finish download\n sys.stdout.write('\\n')\n\n return stream_to \n\n bot.error(\"Problem with stream, response %s\" %(response.status_code))\n sys.exit(1)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a folder at the drive root.", "response": "def get_or_create_folder(self, folder):\n '''create a folder at the drive root. 
If the folder already exists,\n it is simply returned.\n\n folder = self._get_or_create_folder(self._base)\n $ folder\n {'id': '1pXR5S8wufELh9Q-jDkhCoYu-BL1NqN9y'}\n\n '''\n q = \"mimeType='application/vnd.google-apps.folder' and name='%s'\" %folder\n response = self._service.files().list(q=q,\n spaces='drive').execute().get('files',[])\n\n # If no folder is found, create it!\n if len(response) == 0: \n folder = self._create_folder(folder)\n else:\n folder = response[0]\n return folder"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_folder(self, folder):\n '''create a folder with a particular name. Be careful, if the folder \n already exists you can still create one (a different one) with\n the equivalent name!\n '''\n folder_metadata = {\n 'name': os.path.basename(folder),\n 'mimeType': 'application/vnd.google-apps.folder'\n }\n created = self._service.files().create(body=folder_metadata,\n fields='id').execute()\n return created", "response": "create a folder with a particular name."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _read_response(self,response, field=\"detail\"):\n '''attempt to read the detail provided by the response. If none, \n default to using the reason'''\n\n try:\n message = json.loads(response._content.decode('utf-8'))[field]\n except:\n message = response.reason\n return message", "response": "attempt to read the detail provided by the response. If none found use the reason"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_bucket_name(self):\n '''get or return the s3 bucket name. 
If not yet defined via an environment\n variable or setting, we create a name with the pattern.\n sregistry--<1234>\n\n You can use the following environment variables to determine\n interaction with the bucket:\n \n SREGISTRY_S3_BUCKET: the bucket name (all lowercase, no underscore)\n \n '''\n # Get bucket name\n bucket_name = 'sregistry-%s' % RobotNamer().generate()\n self.bucket_name = self._get_and_update_setting('SREGISTRY_S3_BUCKET', \n bucket_name)", "response": "get or return the s3 bucket name"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_bucket(self):\n '''given a bucket name and a client that is initialized, get or\n create the bucket.\n '''\n for attr in ['bucket_name', 's3']:\n if not hasattr(self, attr):\n bot.exit('client is missing attribute %s' %(attr))\n\n # See if the bucket is already existing\n self.bucket = None\n for bucket in self.s3.buckets.all():\n if bucket.name == self.bucket_name:\n self.bucket = bucket\n \n # If the bucket doesn't exist, create it\n if self.bucket is None:\n self.bucket = self.s3.create_bucket(Bucket=self.bucket_name)\n bot.info('Created bucket %s' % self.bucket.name )\n\n return self.bucket", "response": "get or create the bucket."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nusing the user provided endpoint and keys to connect to the resource", "response": "def get_resource(self):\n '''use the user provided endpoint and keys (from environment) to\n connect to the resource. 
We can share the aws environment\n variables:\n\n AWS_ACCESS_KEY_ID\n AWS_SECRET_ACCESS_KEY\n\n https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html\n '''\n\n # If base is not defined, assume using aws client\n if self.base != None:\n\n # s3.ServiceResource()\n self.s3 = boto3.resource('s3',\n endpoint_url=self.base,\n aws_access_key_id=self._id,\n aws_secret_access_key=self._key,\n config=boto3.session.Config(signature_version=self._signature))\n else:\n # We will need to test options for reading credentials here\n self.s3 = boto3.client('s3')"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _update_secrets(self, base=None):\n '''update secrets will update/get the base for the server, along\n with the bucket name, defaulting to sregistry.\n '''\n # We are required to have a base, either from environment or terminal\n self.base = self._get_and_update_setting('SREGISTRY_S3_BASE', self.base)\n self._id = self._required_get_and_update('AWS_ACCESS_KEY_ID')\n self._key = self._required_get_and_update('AWS_SECRET_ACCESS_KEY')\n\n # Get the desired S3 signature. 
Default is the current \"s3v4\" signature.\n # If specified, user can request \"s3\" (v2 old) signature\n self._signature = self._get_and_update_setting('SREGISTRY_S3_SIGNATURE')\n\n if self._signature == 's3':\n # Requested signature is S3 V2\n self._signature = 's3'\n else:\n # self._signature is not set or not set to s3 (v2), default to s3v4\n self._signature = 's3v4'\n\n # Define self.bucket_name, self.s3, then self.bucket\n self.get_bucket_name()\n self.get_resource()\n self.get_bucket()", "response": "update secrets will update the base for the server along\n with the bucket name defaulting to sregistry."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _add_https(self, q):\n '''for push, pull, and other api interactions, the user can optionally\n define a custom registry. If the registry name doesn't include http\n or https, add it.\n \n Parameters\n ==========\n q: the parsed image query (names), including the original\n '''\n\n # If image uses http or https, add back\n if not q['registry'].startswith('http'):\n\n if q['original'].startswith('http:'):\n q['registry'] = 'http://%s' % q['registry']\n\n elif q['original'].startswith('https:'):\n q['registry'] = 'https://%s' % q['registry']\n\n # Otherwise, guess from the user's environment\n else:\n\n prefix = 'https://'\n\n # The user can set an environment variable to specify nohttps\n nohttps = os.environ.get('SREGISTRY_REGISTRY_NOHTTPS')\n if nohttps != None:\n prefix = 'http://'\n q['registry'] = '%s%s' %(prefix, q['registry'])\n\n return q", "response": "for push pull and other api interactions add https to the registry"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _update_secrets(self):\n '''update secrets will take a secrets credential file\n either located at .sregistry or the environment variable\n SREGISTRY_CLIENT_SECRETS and update the current client \n secrets 
as well as the associated API base.\n '''\n self.secrets = read_client_secrets()\n if self.secrets is not None:\n if \"registry\" in self.secrets:\n if \"base\" in self.secrets['registry']:\n self.base = self.secrets['registry']['base']\n self._update_base()", "response": "update secrets will take a secrets credential file located at .sregistry or the environment variable SREGISTRY_CLIENT_SECRETS and update the current client secrets as well as the API base."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _load_secrets(self):\n '''load the secrets credentials file with the Globus OAuthTokenResponse\n '''\n\n # Second priority: load from cache\n \n self.auth = self._get_and_update_setting('GLOBUS_AUTH_RESPONSE')\n self.transfer = self._get_and_update_setting('GLOBUS_TRANSFER_RESPONSE')", "response": "load the secrets credentials file with the Globus OAuthTokenResponse\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _tokens_need_update(self):\n '''return boolean True or False if the client tokens (self._auth and\n self._transfer) need updating.\n '''\n\n # Assumes that auth and transfer have same refresh time\n\n needs_update = True\n if self.auth is not None:\n if self.auth['expires_at_seconds'] > time.time():\n needs_update = False\n return needs_update", "response": "return boolean True or False if the client tokens need updating."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npresenting the client with authentication flow to get tokens from code.", "response": "def _update_tokens(self):\n '''Present the client with authentication flow to get tokens from code.\n This simply updates the client _response to be used to get tokens\n for auth and transfer (both use access_token as index). 
We call\n this not on client initialization, but when the client is actually\n needed.\n '''\n\n self._client.oauth2_start_flow(refresh_tokens=True)\n authorize_url = self._client.oauth2_get_authorize_url()\n print('Please go to this URL and login: {0}'.format(authorize_url))\n\n auth_code = raw_input(\n 'Please enter the code you get after login here: ').strip()\n\n # Save to client\n\n self._response = self._client.oauth2_exchange_code_for_tokens(auth_code)\n self.auth = self._response.by_resource_server['auth.globus.org']\n self.transfer = self._response.by_resource_server['transfer.api.globus.org']\n self._update_setting('GLOBUS_TRANSFER_RESPONSE', self.transfer)\n self._update_setting('GLOBUS_AUTH_RESPONSE', self.auth)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn logs for a particular container.", "response": "def logs(self, name=None):\n '''return logs for a particular container. The logs file is equivalent to\n the name, but with extension .log. If there is no name, the most recent\n log is returned.\n\n Parameters\n ==========\n name: the container name to print logs for.\n\n '''\n content = None\n results = self._list_logs()\n print(results)\n\n # If we are searching for a name\n if name is not None:\n for result in results:\n\n matches = False\n\n # Case 1: the name is in the storage path\n if name in result.name:\n matches=True\n\n # Case 2: match in metadata\n for key,val in result.metadata.items():\n if name in val:\n matches = True\n\n if matches is True:\n content = self._print_log(result.name) \n\n # Otherwise return the last\n else:\n\n if len(results) > 0:\n latest = results[0]\n \n # Get the most recent\n for result in results:\n if result.time_created >= latest.time_created:\n latest = result \n content = self._print_log(result.name) \n\n return content"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef list_logs(self):\n '''return a list of logs. 
We return any file that ends in .log\n '''\n results = []\n for image in self._bucket.list_blobs():\n if image.name.endswith('log'):\n results.append(image)\n\n if len(results) == 0:\n bot.info(\"No containers found, based on extension .log\")\n\n return results", "response": "return a list of logs. We return any file that ends in. log"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsplits an endpoint name by colon and return the endpoint and path", "response": "def parse_endpoint_name(self, endpoint):\n '''split an endpoint name by colon, as the user can provide an\n endpoint name separated from a path:\n\n Parameters\n ==========\n endpoint 12345:/path/on/remote\n \n '''\n parts = [x for x in endpoint.split(':') if x]\n endpoint = parts[0]\n if len(parts) == 1:\n path = ''\n else:\n path = '/'.join(parts[1:])\n\n return endpoint, path"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_endpoint_folder(self, endpoint_id, folder):\n '''create an endpoint folder, catching the error if it exists.\n\n Parameters\n ==========\n endpoint_id: the endpoint id parameters\n folder: the relative path of the folder to create\n\n '''\n try:\n res = self.transfer_client.operation_mkdir(endpoint_id, folder)\n bot.info(\"%s --> %s\" %(res['message'], folder))\n except TransferAPIError:\n bot.info('%s already exists at endpoint' %folder)", "response": "create an endpoint folder"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the first fullpath to a folder in the endpoint based on the globus config file expanding the user s home from the globus config file.", "response": "def get_endpoint_path(self, endpoint_id):\n '''return the first fullpath to a folder in the endpoint based on\n expanding the user's home from the globus config file. 
This\n function is fragile but I don't see any other way to do it.\n \n Parameters\n ==========\n endpoint_id: the endpoint id to look up the path for\n\n '''\n config = os.path.expanduser(\"~/.globusonline/lta/config-paths\")\n if not os.path.exists(config):\n bot.error('%s not found for a local Globus endpoint.')\n sys.exit(1)\n\n path = None\n\n # Read in the config and get the root path\n\n config = [x.split(',')[0] for x in read_file(config)]\n for path in config:\n if os.path.exists(path):\n break\n\n # If we don't have an existing path, exit\n\n if path is None:\n bot.error('No path was found for a local Globus endpoint.')\n sys.exit(1)\n\n return path"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a transfer client for the user", "response": "def init_transfer_client(self):\n '''return a transfer client for the user''' \n\n if self._tokens_need_update():\n self._update_tokens()\n\n access_token = self.transfer['access_token']\n\n # Createe Refresh Token Authorizer\n\n authorizer = globus_sdk.RefreshTokenAuthorizer(\n self.transfer['refresh_token'],\n self._client, \n access_token=self.transfer['access_token'], \n expires_at=self.transfer['expires_at_seconds'])\n\n self.transfer_client = globus_sdk.TransferClient(authorizer=authorizer)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_endpoint(self, endpoint_id):\n '''use a transfer client to get a specific endpoint based on an endpoint id.\n \n Parameters\n ==========\n endpoint_id: the endpoint_id to retrieve\n\n ''' \n endpoint = None\n \n if not hasattr(self, 'transfer_client'):\n self._init_transfer_client()\n\n try:\n endpoint = self.transfer_client.get_endpoint(endpoint_id).data\n except TransferAPIError:\n bot.info('%s does not exist.' 
%endpoint_id)\n \n return endpoint", "response": "use a transfer client to get a specific endpoint based on an endpoint id."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_endpoints(self, query=None):\n '''use a transfer client to get endpoints. If a search term is included,\n we use it to search a scope of \"all\" in addition to personal and shared\n endpoints. Endpoints are organized\n by type (my-endpoints, shared-with-me, optionally all) and then id.\n\n Parameters\n ==========\n query: an endpoint search term to add to a scope \"all\" search. If not \n defined, no searches are done with \"all\"\n\n ''' \n self.endpoints = {}\n \n if not hasattr(self, 'transfer_client'):\n self._init_transfer_client()\n\n # We assume the user wants to always see owned and shared\n\n scopes = {'my-endpoints':None,\n 'shared-with-me': None}\n\n # If the user provides query, add to search\n\n if query is not None:\n scopes.update({'all': query})\n\n for scope, q in scopes.items(): \n self.endpoints[scope] = {} \n for ep in self.transfer_client.endpoint_search(q, filter_scope=scope):\n ep = ep.__dict__['_data']\n self.endpoints[scope][ep['id']] = ep\n\n # Alert the user not possible without personal lookup\n\n if len(self.endpoints['my-endpoints']) == 0:\n bot.warning('No personal endpoint found for local transfer.')\n bot.warning('https://www.globus.org/globus-connect-personal')\n\n return self.endpoints", "response": "use a transfer client to get all endpoints for a given user"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a list of containers.", "response": "def list_containers(self):\n '''return a list of containers. 
Since Google Drive definitely has other\n kinds of files, we look for containers in a special sregistry folder,\n (meaning the parent folder is sregistry) and with properties of type\n as container.\n '''\n # Get or create the base\n folder = self._get_or_create_folder(self._base)\n\n next_page = None\n containers = []\n\n # Parse the base for all containers, possibly over multiple pages\n\n while True:\n query = \"mimeType='application/octet-stream'\" # ensures container\n query += \" and properties has { key='type' and value='container' }\"\n query += \" and '%s' in parents\" %folder['id'] # ensures in parent folder\n response = self._service.files().list(q=query,\n spaces='drive',\n fields='nextPageToken, files(id, name, properties)',\n pageToken=next_page).execute()\n containers += response.get('files', [])\n \n # If there is a next page, keep going!\n next_page = response.get('nextPageToken')\n if not next_page:\n break\n\n if len(containers) == 0:\n bot.info(\"No containers found, based on properties type:container\")\n sys.exit(1)\n\n return containers"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef search_all(self):\n '''a \"list all\" search that doesn't require a query. Here we return to\n the user all objects that have custom properties value type set to\n container, which is set when the image is pushed. \n\n IMPORTANT: the upload function adds this metadata. For a container to\n be found by the client, it must have the properties value with type\n as container. 
It also should have a \"uri\" in properties to show the \n user, otherwise the user will have to query / download based on the id\n '''\n \n results = self._list_containers()\n matches = []\n\n bot.info(\"[drive://%s] Containers\" %self._base)\n\n rows = []\n for i in results:\n\n # Fallback to the image name without the extension\n uri = i['name'].replace('.simg','')\n\n # However the properties should include the uri\n if 'properties' in i:\n if 'uri' in i['properties']:\n uri = i['properties']['uri']\n rows.append([i['id'],uri])\n\n # Give the user back a uri\n i['uri'] = uri\n matches.append(i)\n\n bot.custom(prefix=\" [drive://%s]\" %self._base, \n message=\"\\t\\t[id]\\t[uri]\", \n color=\"PURPLE\")\n\n bot.table(rows)\n return matches", "response": "a list all search that doesn t require a query"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsearching for a specific container.", "response": "def container_query(self, query, quiet=False):\n '''search for a specific container.\n This function is the same as the search all, but instead of showing all\n results, filters them down based on user criteria (the query)\n '''\n\n results = self._list_containers()\n\n matches = []\n for result in results:\n \n is_match = False\n if query in result['id']:\n is_match = True\n\n elif query in result['name']:\n is_match = True\n\n else:\n for key,val in result['properties'].items():\n if query in val and is_match is False:\n is_match = True\n break\n\n if is_match is True:\n matches.append(result)\n\n\n if not quiet:\n bot.info(\"[drive://%s] Found %s containers\" %(self._base,len(matches)))\n for image in matches:\n\n # If the image has properties, show to the user\n if 'properties' in image:\n image.update(image['properties'])\n\n bot.info(image['uri'])\n\n for key in sorted(image, key=len):\n val = image[key]\n if isinstance(val,str):\n bot.custom(prefix=key.ljust(10), message=val, color=\"CYAN\")\n bot.newline()\n return matches"} 
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef status(backend):\n '''print the status for all or one of the backends.\n '''\n print('[backend status]')\n settings = read_client_secrets()\n print('There are %s clients found in secrets.' %len(settings))\n if 'SREGISTRY_CLIENT' in settings:\n print('active: %s' %settings['SREGISTRY_CLIENT'])\n update_secrets(settings)\n else:\n print('There is no active client.')", "response": "print the status for all or one backend."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd the variable to the config", "response": "def add(backend, variable, value, force=False):\n '''add the variable to the config\n '''\n print('[add]')\n settings = read_client_secrets()\n\n # If the variable begins with the SREGISTRY_ don't add it\n prefix = 'SREGISTRY_%s_' %backend.upper()\n if not variable.startswith(prefix):\n variable = '%s%s' %(prefix, variable)\n\n # All must be uppercase\n variable = variable.upper()\n bot.info(\"%s %s\" %(variable, value))\n \n # Does the setting already exist?\n\n if backend in settings:\n if variable in settings[backend] and force is False:\n previous = settings[backend][variable]\n bot.error('%s is already set as %s. Use --force to override.' 
%(variable, previous))\n sys.exit(1)\n\n if backend not in settings:\n settings[backend] = {}\n\n settings[backend][variable] = value\n update_secrets(settings)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves a variable from the config if found.", "response": "def remove(backend, variable):\n '''remove a variable from the config, if found.\n '''\n print('[remove]')\n settings = read_client_secrets()\n\n # If the variable begins with the SREGISTRY_ don't add it\n prefixed = variable\n prefix = 'SREGISTRY_%s_' %backend.upper()\n if not variable.startswith(prefix):\n prefixed = '%s%s' %(prefix, variable)\n\n # All must be uppercase\n variable = variable.upper()\n bot.info(variable)\n \n # Does the setting already exist?\n if backend in settings:\n if variable in settings[backend]:\n del settings[backend][variable] \n if prefixed in settings[backend]:\n del settings[backend][prefixed] \n update_secrets(settings)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nactivates a backend by adding it to the. sregistry configuration file.", "response": "def activate(backend):\n '''activate a backend by adding it to the .sregistry configuration file.\n '''\n settings = read_client_secrets()\n if backend is not None:\n settings['SREGISTRY_CLIENT'] = backend\n update_secrets(settings)\n print('[activate] %s' %backend)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete_backend(backend):\n '''delete a backend, and update the secrets file\n '''\n settings = read_client_secrets()\n if backend in settings:\n del settings[backend]\n\n # If the backend was the active client, remove too\n if 'SREGISTRY_CLIENT' in settings:\n if settings['SREGISTRY_CLIENT'] == backend:\n del settings['SREGISTRY_CLIENT']\n\n update_secrets(settings)\n print('[delete] %s' %backend)\n else:\n if backend is not None:\n print('%s is not a known client.' 
%backend)\n else:\n print('Please specify a backend to delete.')", "response": "delete a backend and update the secrets file\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a list of backends installed for the user", "response": "def list_backends(backend=None):\n '''return a list of backends installed for the user, which is based on\n the config file keys found present\n \n Parameters\n ==========\n backend: a specific backend to list. If defined, just list parameters.\n\n '''\n settings = read_client_secrets()\n\n # Backend names are the keys\n backends = list(settings.keys()) \n backends = [b for b in backends if b!='SREGISTRY_CLIENT']\n\n if backend in backends:\n bot.info(backend)\n print(json.dumps(settings[backend], indent=4, sort_keys=True))\n else:\n if backend is not None:\n print('%s is not a known client.' %backend)\n bot.info(\"Backends Installed\")\n print('\\n'.join(backends))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef pull(self, images, file_name=None, save=True, **kwargs):\n '''pull an image from a dropbox. The image is found based on the storage \n uri\n \n Parameters\n ==========\n images: refers to the uri given by the user to pull in the format\n /. You should have an API that is able to \n retrieve a container based on parsing this uri.\n file_name: the user's requested name for the file. 
It can \n optionally be None if the user wants a default.\n save: if True, you should save the container to the database\n using self.add()\n \n Returns\n =======\n finished: a single container path, or list of paths\n\n '''\n force = False\n if \"force\" in kwargs:\n force = kwargs['force']\n\n if not isinstance(images, list):\n images = [images]\n\n bot.debug('Execution of PULL for %s images' %len(images))\n\n # If used internally we want to return a list to the user.\n finished = []\n for image in images:\n\n names = parse_image_name(remove_uri(image))\n\n # Dropbox path is the path in storage with a slash\n dropbox_path = '/%s' % names['storage']\n \n # If the user didn't provide a file, make one based on the names\n if file_name is None:\n file_name = self._get_storage_name(names)\n\n # If the file already exists and force is False\n if os.path.exists(file_name) and force is False:\n bot.error('Image exists! Remove first, or use --force to overwrite')\n sys.exit(1) \n\n # First ensure that exists\n if self.exists(dropbox_path) is True:\n\n # _stream is a function to stream using the response to start\n metadata, response = self.dbx.files_download(dropbox_path)\n image_file = self._stream(response, stream_to=file_name)\n\n # parse the metadata (and add inspected image)\n metadata = self._get_metadata(image_file, metadata)\n\n # If we save to storage, the uri is the dropbox_path\n if save is True:\n container = self.add(image_path = image_file,\n image_uri = dropbox_path.strip('/'),\n metadata = metadata,\n url = response.url)\n\n # When the container is created, this is the path to the image\n image_file = container.image\n\n if os.path.exists(image_file):\n bot.debug('Retrieved image file %s' %image_file)\n bot.custom(prefix=\"Success!\", message=image_file)\n finished.append(image_file)\n\n else:\n bot.error('%s does not exist. Try sregistry search to see images.' 
%dropbox_path)\n\n if len(finished) == 1:\n finished = finished[0]\n return finished", "response": "pull an image from a dropbox"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef push(self, path, name, tag=None):\n '''push an image to your Dropbox\n \n Parameters\n ==========\n path: should correspond to an absolute image path (or derive it)\n name: should be the complete uri that the user has requested to push.\n tag: should correspond with an image tag. This is provided to mirror Docker\n\n if the image is less than 150MB, the standard file_upload is used. If\n larger, the image is uploaded in chunks with a progress bar.\n\n '''\n path = os.path.abspath(path)\n bot.debug(\"PUSH %s\" % path)\n\n if not os.path.exists(path):\n bot.error('%s does not exist.' %path)\n sys.exit(1)\n\n # here is an example of getting metadata for a container\n names = parse_image_name(remove_uri(name), tag=tag)\n metadata = self.get_metadata(path, names=names)\n\n # Get the size of the file\n file_size = os.path.getsize(path)\n chunk_size = 4 * 1024 * 1024\n storage_path = \"/%s\" %names['storage']\n\n # This is MB\n # image_size = os.path.getsize(path) >> 20\n\n # prepare the progress bar\n progress = 0\n bot.show_progress(progress, file_size, length=35)\n\n # If image is smaller than 150MB, use standard upload\n with open(path, 'rb') as F:\n if file_size <= chunk_size:\n self.dbx.files_upload(F.read(), storage_path)\n\n # otherwise upload in chunks\n else:\n\n start = self.dbx.files_upload_session_start(F.read(chunk_size))\n cursor = dropbox.files.UploadSessionCursor(session_id=start.session_id,\n offset=F.tell())\n commit = dropbox.files.CommitInfo(path=storage_path)\n\n while F.tell() < file_size:\n\n progress+=chunk_size\n\n # Finishing up the file, less than chunk_size to go\n if ((file_size - F.tell()) <= chunk_size):\n self.dbx.files_upload_session_finish(F.read(chunk_size),\n cursor,\n commit)\n\n # Finishing up 
the file, less than chunk_size to go\n else:\n self.dbx.files_upload_session_append(F.read(chunk_size),\n cursor.session_id,\n cursor.offset)\n cursor.offset = F.tell()\n\n # Update the progress bar\n bot.show_progress(iteration=progress,\n total=file_size,\n length=35,\n carriage_return=False)\n\n\n # Finish up\n bot.show_progress(iteration=file_size,\n total=file_size,\n length=35,\n carriage_return=True)\n\n # Newline to finish download\n sys.stdout.write('\\n')", "response": "push an image to your Dropbox user"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nupdates a base based on an image name", "response": "def _update_base(self, image):\n ''' update a base based on an image name, meaning detecting a particular\n registry and if necessary, updating the self.base. When the image\n name is parsed, the base will be given to remove the registry.\n '''\n base = None\n\n # Google Container Cloud\n if \"gcr.io\" in image:\n base = 'gcr.io'\n self._set_base(default_base=base)\n self._update_secrets()\n\n return base"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nset the base or default to use Docker Hub", "response": "def _set_base(self, default_base=None):\n '''set the API base or default to use Docker Hub. 
The user is able\n to set the base, api version, and protocol via a settings file\n or environment variables:\n \n SREGISTRY_DOCKERHUB_BASE: defaults to index.docker.io\n SREGISTRY_DOCKERHUB_VERSION: defaults to v2\n SREGISTRY_DOCKERHUB_NOHTTPS: defaults to not set (so https)\n\n '''\n\n base = self._get_setting('SREGISTRY_DOCKERHUB_BASE')\n version = self._get_setting('SREGISTRY_DOCKERHUB_VERSION')\n\n # If we re-set the base after reading the image\n if base is None:\n if default_base is None:\n base = \"index.docker.io\"\n else:\n base = default_base\n\n if version is None:\n version = \"v2\"\n\n nohttps = self._get_setting('SREGISTRY_DOCKERHUB_NOHTTPS')\n if nohttps is None:\n nohttps = \"https://\"\n else:\n nohttps = \"http://\"\n\n # :///\n\n self._base = \"%s%s\" %(nohttps, base)\n self._version = version\n self.base = \"%s%s/%s\" %(nohttps, base.strip('/'), version)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _update_secrets(self):\n '''update secrets will take a secrets credential file\n either located at .sregistry or the environment variable\n SREGISTRY_CLIENT_SECRETS and update the current client \n secrets as well as the associated API base. 
For the case of\n using Docker Hub, if we find a .docker secrets file, we update\n from there.\n '''\n\n # If the user has defined secrets, use them\n credentials = self._get_setting('SREGISTRY_DOCKERHUB_SECRETS')\n\n # First try for SINGULARITY exported, then try sregistry\n username = self._get_setting('SINGULARITY_DOCKER_USERNAME')\n password = self._get_setting('SINGULARITY_DOCKER_PASSWORD')\n username = self._get_setting('SREGISTRY_DOCKERHUB_USERNAME', username)\n password = self._get_setting('SREGISTRY_DOCKERHUB_PASSWORD', password)\n\n # Option 1: the user exports username and password\n auth = None\n if username is not None and password is not None:\n auth = basic_auth_header(username, password)\n self.headers.update(auth)\n \n # Option 2: look in .docker config file\n if credentials is not None and auth is None:\n if os.path.exists(credentials):\n credentials = read_json(credentials)\n\n # Find a matching auth in .docker config\n if \"auths\" in credentials:\n for auths, params in credentials['auths'].items():\n if self._base in auths:\n if 'auth' in params:\n auth = \"Basic %s\" % params['auth']\n self.headers['Authorization'] = auth\n\n\n # Also update headers\n if 'HttpHeaders' in credentials:\n for key, value in credentials['HttpHeaders'].items():\n self.headers[key] = value\n\n else:\n bot.warning('Credentials file set to %s, but does not exist.')", "response": "update secrets will take a secrets credential file located at. 
sregistry or the environment variable SREGISTRY_CLIENT_SECRETS and update the current client \n secrets as well as the associated API base."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nshare will use the client to get an image based on a query and then the link with an email or endpoint of choice.", "response": "def share(self, query, share_to):\n '''share will use the client to get an image based on a query, and then\n the link with an email or endpoint (share_to) of choice.\n '''\n\n images = self._container_query(query, quiet=True)\n if len(images) == 0:\n bot.error('Cannot find a remote image matching %s' %query)\n sys.exit(0)\n\n image = images[0]\n\n def callback(request_id, response, exception):\n if exception:\n # Handle error\n print(exception)\n else:\n share_id = response.get('id')\n bot.info('Share to %s complete: %s!' %(share_to, share_id))\n\n batch = self._service.new_batch_http_request(callback=callback)\n user_permission = {\n 'type': 'user',\n 'role': 'reader',\n 'emailAddress': share_to\n }\n\n batch.add(self._service.permissions().create(\n fileId=image['id'],\n body=user_permission,\n fields='id',\n ))\n\n batch.execute()\n return image"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef pull(self, images, file_name=None, save=True, **kwargs):\n '''pull an image from google drive, based on a query (uri or id)\n \n Parameters\n ==========\n images: refers to the uri given by the user to pull in the format\n /. You should have an API that is able to \n retrieve a container based on parsing this uri.\n file_name: the user's requested name for the file. 
It can \n optionally be None if the user wants a default.\n save: if True, you should save the container to the database\n using self.add()\n \n Returns\n =======\n finished: a single container path, or list of paths\n '''\n\n if not isinstance(images,list):\n images = [images]\n\n bot.debug('Execution of PULL for %s images' %len(images))\n\n # If used internally we want to return a list to the user.\n finished = []\n for image in images:\n\n q = parse_image_name( remove_uri(image) )\n\n # Use container search to find the container based on uri\n bot.info('Searching for %s in drive://%s' %(q['uri'],self._base))\n matches = self._container_query(q['uri'], quiet=True)\n\n if len(matches) == 0:\n bot.info('No matching containers found.')\n sys.exit(0)\n \n # If the user didn't provide a file, make one based on the names\n if file_name is None:\n file_name = q['storage'].replace('/','-')\n\n # We give the first match, the uri should be unique and known\n image = matches[0]\n\n request = self._service.files().get_media(fileId=image['id'])\n\n with open(file_name, 'wb') as fh:\n downloader = MediaIoBaseDownload(fh, request)\n done = False\n bar = None\n\n # Download and update the user with progress bar\n while done is False:\n status, done = downloader.next_chunk()\n response = None\n\n # Create bar on first call\n if bar is None:\n total = status.total_size / (1024*1024.0)\n bar = ProgressBar(expected_size=total,\n filled_char='=',\n hide=self.quiet)\n\n bar.show(status.resumable_progress / (1024*1024.0))\n \n\n # If the user is saving to local storage, you need to assumble the uri\n # here in the expected format /:@\n if save is True:\n image_uri = q['uri']\n if \"uri\" in image:\n image_uri = image['uri']\n\n # Update metadata with selfLink\n image['selfLink'] = downloader._uri\n\n container = self.add(image_path = image_file,\n image_uri = image_uri,\n metadata = image,\n url=downloader._uri)\n\n # When the container is created, this is the path to the image\n 
image_file = container.image\n\n if os.path.exists(image_file):\n bot.debug('Retrieved image file %s' %image_file)\n bot.custom(prefix=\"Success!\", message=image_file)\n finished.append(image_file)\n\n if len(finished) == 1:\n finished = finished[0]\n return finished", "response": "pull an image from google drive"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ngenerate a base64 encoded header to ask for a token. This means base64 encoding a username and password and adding to the Authorization header to identify the client.", "response": "def basic_auth_header(username, password):\n '''generate a base64 encoded header to ask for a token. This means\n base64 encoding a username and password and adding to the\n Authorization header to identify the client.\n\n Parameters\n ==========\n username: the username\n password: the password\n \n '''\n s = \"%s:%s\" % (username, password)\n if sys.version_info[0] >= 3:\n s = bytes(s, 'utf-8')\n credentials = base64.b64encode(s).decode('utf-8')\n else:\n credentials = base64.b64encode(s)\n auth = {\"Authorization\": \"Basic %s\" % credentials}\n return auth"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef generate_signature(payload, secret):\n '''use an endpoint specific payload and client secret to generate\n a signature for the request'''\n payload = _encode(payload)\n secret = _encode(secret)\n return hmac.new(secret, digestmod=hashlib.sha256,\n msg=payload).hexdigest()", "response": "use an endpoint specific payload and client secret to generate\n a signature for the request"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef generate_credential(s):\n '''basic_auth_header will return a base64 encoded header object to\n :param username: the username\n '''\n if sys.version_info[0] >= 3:\n s = bytes(s, 'utf-8')\n credentials = base64.b64encode(s).decode('utf-8')\n else:\n credentials = 
base64.b64encode(s)\n return credentials", "response": "This function generates a base64 encoded header object to\n "} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef generate_header_signature(secret, payload, request_type):\n '''Authorize a client based on encrypting the payload with the client\n secret, timestamp, and other metadata\n '''\n\n # Use the payload to generate a digest push|collection|name|tag|user\n timestamp = generate_timestamp()\n credential = \"%s/%s\" %(request_type,timestamp)\n\n signature = generate_signature(payload,secret)\n return \"SREGISTRY-HMAC-SHA256 Credential=%s,Signature=%s\" %(credential,signature)", "response": "Generate a header signature for a client based on encrypting the payload with the client\n secret timestamp and other metadata\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef delete(self, url,\n headers=None,\n return_json=True,\n default_headers=True):\n\n '''delete request, use with caution\n '''\n bot.debug('DELETE %s' %url)\n return self._call(url,\n headers=headers,\n func=requests.delete,\n return_json=return_json,\n default_headers=default_headers)", "response": "delete request, use with caution"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef head(self, url):\n '''head request, typically used for status code retrieval, etc.\n '''\n bot.debug('HEAD %s' %url)\n return self._call(url, func=requests.head)", "response": "head request for url"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndetermining if a resource is healthy based on an accepted response 200 or redirect 301", "response": "def healthy(self, url):\n '''determine if a resource is healthy based on an accepted response (200)\n or redirect (301)\n\n Parameters\n ==========\n url: the URL to check status for, based on the 
status_code of a GET request\n\n '''\n response = requests.get(url)\n status_code = response.status_code\n if status_code != 200:\n bot.error('%s, response status code %s.' %(url, status_code)) \n return False\n return True"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nposting will use requests to get a particular url", "response": "def post(self,url,\n headers=None,\n data=None,\n return_json=True,\n default_headers=True):\n\n '''post will use requests to get a particular url\n '''\n\n bot.debug(\"POST %s\" %url)\n return self._call(url,\n headers=headers,\n func=requests.post,\n data=data,\n return_json=return_json,\n default_headers=default_headers)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get(self,url,\n headers=None,\n token=None,\n data=None,\n return_json=True,\n default_headers=True,\n quiet=False):\n\n '''get will use requests to get a particular url\n '''\n bot.debug(\"GET %s\" %url)\n return self._call(url,\n headers=headers,\n func=requests.get,\n data=data,\n return_json=return_json,\n default_headers=default_headers,\n quiet=quiet)", "response": "get will use requests. 
get to get a particular url"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef paginate_get(self, url, \n headers=None, \n return_json=True,\n start_page=None):\n '''paginate_call is a wrapper for get to paginate results\n '''\n\n geturl = '%s&page=1' %(url)\n if start_page is not None:\n geturl = '%s&page=%s' %(url,start_page)\n\n results = []\n while geturl is not None:\n result = self._get(geturl, headers=headers, return_json=return_json)\n # If we have pagination:\n if isinstance(result, dict):\n if 'results' in result:\n results = results + result['results']\n geturl = result['next']\n # No pagination is a list\n else:\n return result\n return results", "response": "Paginate a get request to get a list of items from a url."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef verify(self):\n '''\n verify will return a True or False to determine to verify the\n requests call or not. If False, we show the user a warning message,\n as this should not be done in production!\n\n '''\n from sregistry.defaults import DISABLE_SSL_CHECK\n\n if DISABLE_SSL_CHECK is True:\n bot.warning('Verify of certificates disabled! 
::TESTING USE ONLY::')\n\n return not DISABLE_SSL_CHECK", "response": "This function is used to verify the certificate of the current object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef call(self, url, func, data=None,\n headers=None, \n return_json=True,\n stream=False, \n retry=True,\n default_headers=True,\n quiet=False):\n\n '''call will issue the call, and issue a refresh token\n given a 401 response, and if the client has a _update_token function\n\n Parameters\n ==========\n func: the function (eg, post, get) to call\n url: the url to send file to\n headers: if not None, update the client self.headers with dictionary\n data: additional data to add to the request\n return_json: return json if successful\n default_headers: use the client's self.headers (default True)\n\n '''\n \n if data is not None:\n if not isinstance(data, dict):\n data = json.dumps(data)\n\n heads = dict()\n if default_headers is True:\n heads = self.headers.copy()\n \n if headers is not None:\n if isinstance(headers, dict):\n heads.update(headers)\n\n response = func(url=url,\n headers=heads,\n data=data,\n verify=self._verify(),\n stream=stream)\n\n # Errored response, try again with refresh\n if response.status_code in [500, 502]:\n bot.error(\"Beep boop! %s: %s\" %(response.reason,\n response.status_code))\n sys.exit(1)\n\n # Errored response, try again with refresh\n if response.status_code == 404:\n\n # Not found, we might want to continue on\n if quiet is False:\n bot.error(\"Beep boop! 
%s: %s\" %(response.reason,\n response.status_code))\n sys.exit(1)\n\n\n # Errored response, try again with refresh\n if response.status_code == 401:\n\n # If client has method to update token, try it once\n if retry is True and hasattr(self,'_update_token'):\n\n # A result of None indicates no update to the call\n self._update_token(response)\n return self._call(url, func, data=data,\n headers=headers,\n return_json=return_json,\n stream=stream, retry=False)\n\n bot.error(\"Your credentials are expired! %s: %s\" %(response.reason,\n response.status_code))\n sys.exit(1)\n\n elif response.status_code == 200:\n\n if return_json:\n\n try:\n response = response.json()\n except ValueError:\n bot.error(\"The server returned a malformed response.\")\n sys.exit(1)\n\n return response", "response": "Issue a call to the specified url and return the response."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef pull(self, images, file_name=None, save=True, **kwargs):\n '''pull an image from a singularity registry\n \n Parameters\n ==========\n images: refers to the uri given by the user to pull in the format\n /. You should have an API that is able to \n retrieve a container based on parsing this uri.\n file_name: the user's requested name for the file. 
It can \n optionally be None if the user wants a default.\n save: if True, you should save the container to the database\n using self.add()\n \n Returns\n =======\n finished: a single container path, or list of paths\n '''\n\n if not isinstance(images,list):\n images = [images]\n\n # Interaction with a registry requires secrets\n self.require_secrets()\n\n bot.debug('Execution of PULL for %s images' %len(images))\n\n finished = []\n for image in images:\n\n q = parse_image_name(remove_uri(image))\n\n # If a custom registry is not set, use default base\n if q['registry'] == None:\n q['registry'] = self.base\n\n # Ensure https is added back to the registry uri\n q = self._add_https(q)\n\n # All custom registries need api appended\n if not q['registry'].endswith('api'):\n q['registry'] = '%s/api' % q['registry']\n\n # Verify image existence, and obtain id\n url = \"%s/container/%s/%s:%s\" %(q['registry'], \n q['collection'], \n q['image'], \n q['tag'])\n\n bot.debug('Retrieving manifest at %s' % url)\n\n try:\n manifest = self._get(url)\n except SSLError:\n bot.exit('Issue with %s, try exporting SREGISTRY_REGISTRY_NOHTTPS.' % url)\n\n # Private container collection\n if isinstance(manifest, Response):\n\n # Requires token\n if manifest.status_code == 403:\n\n SREGISTRY_EVENT = self.authorize(request_type=\"pull\",\n names=q)\n headers = {'Authorization': SREGISTRY_EVENT }\n self._update_headers(headers)\n manifest = self._get(url)\n\n # Still denied\n if isinstance(manifest, Response):\n if manifest.status_code == 403:\n manifest = 403\n\n if isinstance(manifest, int):\n if manifest == 400:\n bot.error('Bad request (400). 
Is this a private container?')\n elif manifest == 404:\n bot.error('Container not found (404)')\n elif manifest == 403:\n bot.error('Unauthorized (403)')\n sys.exit(1)\n\n\n # Successful pull\n if \"image\" in manifest:\n\n # Add self link to manifest\n manifest['selfLink'] = url\n\n if file_name is None:\n file_name = q['storage'].replace('/','-')\n \n # Show progress if not quiet\n image_file = self.download(url=manifest['image'],\n file_name=file_name,\n show_progress=not self.quiet)\n\n # If the user is saving to local storage\n if save is True:\n image_uri = \"%s/%s:%s\" %(manifest['collection'], \n manifest['name'],\n manifest['tag'])\n container = self.add(image_path = image_file,\n image_uri = image_uri,\n metadata = manifest,\n url = manifest['image'])\n image_file = container.image\n\n if os.path.exists(image_file):\n bot.debug('Retrieved image file %s' %image_file)\n bot.custom(prefix=\"Success!\", message=image_file)\n finished.append(image_file)\n\n if len(finished) == 1:\n finished = finished[0]\n return finished", "response": "pull an image from a singularity registry"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef pull(self, images, file_name=None, save=True, **kwargs):\n '''pull an image from a s3 storage\n \n Parameters\n ==========\n \n images: refers to the uri given by the user to pull in the format\n /. You should have an API that is able to \n retrieve a container based on parsing this uri.\n file_name: the user's requested name for the file. 
It can \n optionally be None if the user wants a default.\n save: if True, you should save the container to the database\n using self.add()\n \n Returns\n =======\n finished: a single container path, or list of paths\n '''\n\n if not isinstance(images,list):\n images = [images]\n\n bot.debug('Execution of PULL for %s images' %len(images))\n\n finished = []\n for image in images:\n\n image = remove_uri(image)\n names = parse_image_name(image)\n\n if file_name is None:\n file_name = names['storage'].replace('/','-')\n\n # Assume the user provided the correct uri to start\n uri = names['storage_uri']\n\n # First try to get the storage uri directly.\n try:\n self.bucket.download_file(uri, file_name)\n\n # If we can't find the file, help the user\n except botocore.exceptions.ClientError as e:\n if e.response['Error']['Code'] == \"404\":\n\n # Case 1, image not found, but not error with API\n bot.error('Cannot find %s!' % name) \n\n # Try to help the user with suggestions\n results = self._search_all(query=image)\n if len(results) > 0:\n bot.info('Did you mean:\\n' % '\\n'.join(results)) \n sys.exit(1)\n\n else:\n # Case 2: error with request, exit.\n bot.exit('Error downloading image %s' % image)\n \n # if we get down here, we have a uri\n found = None\n for obj in self.bucket.objects.filter(Prefix=image):\n if image in obj.key:\n found = obj\n\n # If we find the object, get metadata\n metadata = {}\n if found != None:\n metadata = found.get()['Metadata']\n\n # Metadata bug will capitalize all fields, workaround is to lowercase\n # https://github.com/boto/boto3/issues/1709\n metadata = dict((k.lower(), v) for k, v in metadata.items())\n \n metadata.update(names)\n\n # If the user is saving to local storage\n if save is True and os.path.exists(file_name):\n container = self.add(image_path = file_name,\n image_uri = names['tag_uri'],\n metadata = metadata)\n file_name = container.image\n\n # If the image was pulled to either\n if os.path.exists(file_name):\n 
bot.custom(prefix=\"Success!\", message = file_name)\n finished.append(file_name)\n\n if len(finished) == 1:\n finished = finished[0]\n return finished", "response": "pull an image from a s3 storage object"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndeletes an image to Singularity Registry", "response": "def remove(self, image, force=False):\n '''delete an image to Singularity Registry'''\n\n q = parse_image_name(remove_uri(image))\n\n # If the registry is provided in the uri, use it\n if q['registry'] == None:\n q['registry'] = self.base\n\n # If the base doesn't start with http or https, add it\n q = self._add_https(q)\n\n url = '%s/container/%s/%s:%s' % (q['registry'], \n q[\"collection\"],\n q[\"image\"], \n q[\"tag\"])\n\n SREGISTRY_EVENT = self.authorize(request_type=\"delete\", names=q)\n headers = {'Authorization': SREGISTRY_EVENT }\n self._update_headers(fields=headers)\n\n continue_delete = True\n if force is False:\n response = input(\"Are you sure you want to delete %s?\" % q['uri'])\n while len(response) < 1 or response[0].lower().strip() not in \"ynyesno\":\n response = input(\"Please answer yes or no: \")\n if response[0].lower().strip() in \"no\":\n continue_delete = False\n\n if continue_delete is True:\n response = self._delete(url)\n message = self._read_response(response)\n bot.info(\"Response %s, %s\" %(response.status_code, message))\n\n else:\n bot.info(\"Delete cancelled.\")"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_lookup():\n '''get version by way of sregistry.version, returns a \n lookup dictionary with several global variables without\n needing to import singularity\n '''\n lookup = dict()\n version_file = os.path.join('sregistry', 'version.py')\n with open(version_file) as filey:\n exec(filey.read(), lookup)\n return lookup", "response": "get version by way of sregistry. 
version returns a \n lookup dictionary with several global variables without having to import singularity\n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets requirements, meaning reading in requirements and versions from the lookup obtained with get_lookup", "response": "def get_reqs(lookup=None, key='INSTALL_REQUIRES'):\n '''get requirements, meaning reading in requirements and versions from\n the lookup obtained with get_lookup'''\n\n if lookup is None:\n lookup = get_lookup()\n\n install_requires = []\n for module in lookup[key]:\n module_name = module[0]\n module_meta = module[1]\n if \"exact_version\" in module_meta:\n dependency = \"%s==%s\" %(module_name,module_meta['exact_version'])\n elif \"min_version\" in module_meta:\n if module_meta['min_version'] is None:\n dependency = module_name\n else:\n dependency = \"%s>=%s\" %(module_name,module_meta['min_version'])\n install_requires.append(dependency)\n return install_requires"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn the installation directory of the application", "response": "def get_installdir():\n '''get_installdir returns the installation directory of the application\n '''\n return os.path.abspath(os.path.dirname(os.path.dirname(__file__)))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the robot. 
png thumbnail from the database folder.", "response": "def get_thumbnail():\n '''return the robot.png thumbnail from the database folder.\n if the user has exported a different image, use that instead.\n '''\n from sregistry.defaults import SREGISTRY_THUMBNAIL\n if SREGISTRY_THUMBNAIL is not None:\n if os.path.exists(SREGISTRY_THUMBNAIL):\n return SREGISTRY_THUMBNAIL\n return \"%s/database/robot.png\" % get_installdir()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run_command(cmd, sudo=False):\n '''run_command uses subprocess to send a command to the terminal.\n\n Parameters\n ==========\n cmd: the command to send, should be a list for subprocess\n sudo: if True, prefix the command with sudo\n\n '''\n if sudo is True:\n cmd = ['sudo'] + cmd\n\n try:\n output = Popen(cmd, stderr=STDOUT, stdout=PIPE)\n\n except FileNotFoundError:\n cmd.pop(0)\n output = Popen(cmd, stderr=STDOUT, stdout=PIPE)\n\n t = output.communicate()[0], output.returncode\n output = {'message': t[0],\n 'return_code': t[1]}\n\n if isinstance(output['message'], bytes):\n output['message'] = output['message'].decode('utf-8')\n\n return output", "response": "run_command uses subprocess to send a command to the terminal."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _speak(self):\n '''if you want to add an extra print (of a parameter, for example)\n for the user when the client initializes, write it here, eg:\n bot.info('[setting] value')\n '''\n if hasattr(self, 'account'):\n bot.info('connected to %s' % self.account.name.display_name)", "response": "tell the user who is connected"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_metadata(self, image_file=None, dbx_metadata=None):\n '''this is a wrapper around the main client.get_metadata to first 
parse\n a Dropbox FileMetadata into a dictionary, then pass it on to the \n primary get_metadata function.\n\n Parameters\n ==========\n image_file: the full path to the image file that had metadata\n extracted\n dbx_metadata: the Dropbox FileMetadata to parse.\n\n '''\n metadata = dict()\n\n if dbx_metadata is not None:\n for key in dbx_metadata.__dir__():\n value = getattr(dbx_metadata, key)\n if type(value) in [str, datetime.datetime, bool, int, float]:\n metadata[key.strip('_')] = value\n \n return self.get_metadata(image_file, names=metadata)", "response": "This is a wrapper around the main client.get_metadata method that parses Dropbox FileMetadata into a dictionary and then passes it on to the \n primary get_metadata function."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates secrets will look for a dropbox token in the environment at SREGISTRY_DROPBOX_TOKEN and create a client.", "response": "def _update_secrets(self):\n '''update secrets will look for a dropbox token in the environment at\n SREGISTRY_DROPBOX_TOKEN and if found, create a client. If not,\n an error message is returned and the client exits.\n '''\n\n # Retrieve the user token. Exit if not found \n token = self._required_get_and_update('SREGISTRY_DROPBOX_TOKEN')\n\n # Create the dropbox client\n self.dbx = Dropbox(token)\n\n # Verify that the account is valid\n try:\n self.account = self.dbx.users_get_current_account()\n except AuthError:\n bot.error('Account invalid. Exiting.')\n sys.exit(1)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nprints the builder response output to the console and optionally to a file.", "response": "def print_output(response, output_file=None):\n '''print the output to the console for the user. 
If the user wants the content\n also printed to an output file, do that.\n\n Parameters\n ==========\n response: the response from the builder, with metadata added\n output_file: if defined, write output also to file\n\n '''\n # If successfully built, show container uri\n if response['status'] == 'SUCCESS':\n bucket = response['artifacts']['objects']['location']\n obj = response['artifacts']['objects']['paths'][0]\n bot.custom(\"MD5HASH\", response['file_hash'], 'CYAN')\n bot.custom(\"SIZE\", response['size'], 'CYAN')\n bot.custom(response['status'], bucket + obj, 'CYAN')\n else:\n bot.custom(response['status'], 'see logs for details', 'CYAN')\n\n # Show the logs no matter what\n bot.custom(\"LOGS\", response['logUrl'], 'CYAN')\n\n # Did the user make the container public?\n if \"public_url\" in response:\n bot.custom('URL', response['public_url'], 'CYAN')\n\n # Does the user also want the output written to a file?\n if output_file is not None: \n with open(output_file, 'w') as filey:\n if response['status'] == 'SUCCESS':\n filey.writelines('MD5HASH %s\\n' % response['file_hash']) \n filey.writelines('SIZE %s\\n' % response['size']) \n filey.writelines('%s %s%s\\n' % (response['status'], bucket, obj))\n filey.writelines('LOGS %s\\n' % response['logUrl'])\n if \"public_url\" in response:\n filey.writelines('URL %s\\n' % response['public_url'])"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nkill is a helper function to call the destroy function of the client and then exit.", "response": "def kill(args):\n '''kill is a helper function to call the \"kill\" function of the client,\n meaning we bring down an instance.\n '''\n from sregistry.main import Client as cli\n if len(args.commands) > 0:\n for name in args.commands:\n cli.destroy(name)\n sys.exit(0)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef templates(args, template_name=None):\n '''list a specific template (if a 
name is provided) or all templates\n available.\n\n Parameters\n ==========\n args: the argparse object to look for a template name\n template_name: if not set, show all\n\n '''\n from sregistry.main import get_client\n\n # We don't need storage/compute connections\n cli = get_client(init=False)\n\n if len(args.commands) > 0:\n template_name = args.commands.pop(0)\n cli.list_templates(template_name)\n sys.exit(0)", "response": "list a specific template"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_logs(args, container_name=None):\n '''list a specific log for a builder, or the latest log if none provided\n\n Parameters\n ==========\n args: the argparse object to look for a container name\n container_name: a default container name set to be None (show latest log)\n\n '''\n from sregistry.main import Client as cli \n if len(args.commands) > 0:\n container_name = args.commands.pop(0)\n cli.logs(container_name)\n sys.exit(0)", "response": "list a specific log for a builder or the latest log if none provided"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_collections(self):\n '''get a listing of collections that the user has access to.\n '''\n collections = []\n for container in self.conn.get_account()[1]:\n collections.append(container['name'])\n return collections", "response": "get a listing of collections that the user has access to."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting or create a collection", "response": "def _get_or_create_collection(self, name):\n '''get or create a collection, meaning that if the get returns\n None, create and return the response to the user.\n \n Parameters\n ==========\n name: the name of the collection to get (and create)\n '''\n try: \n collection = self._get_collection(name)\n except:\n bot.info('Creating collection %s...' 
% name)\n collection = self.conn.put_container(name)\n return collection"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _update_secrets(self):\n '''update secrets will look for a user and token in the environment\n If we find the values, cache and continue. Otherwise, exit with error\n '''\n\n # Get the swift authentication type first. That will determine what we\n # will need to collect for proper authentication\n self.config['SREGISTRY_SWIFT_AUTHTYPE'] = self._required_get_and_update(\n 'SREGISTRY_SWIFT_AUTHTYPE')\n\n # Check what auth version is requested and setup the connection\n if self.config['SREGISTRY_SWIFT_AUTHTYPE'] == 'preauth':\n\n # Pre-Authenticated Token/URL - Use OS_AUTH_TOKEN/OS_STORAGE_URL\n # Retrieve the user token, user, and base. Exit if not found \n for envar in ['SREGISTRY_SWIFT_OS_AUTH_TOKEN',\n 'SREGISTRY_SWIFT_OS_STORAGE_URL' ]:\n self.config[envar] = self._required_get_and_update(envar)\n\n self.conn = swiftclient.Connection(\n preauthurl=self.config['SREGISTRY_SWIFT_OS_STORAGE_URL'],\n preauthtoken=self.config['SREGISTRY_SWIFT_OS_AUTH_TOKEN']\n )\n elif self.config['SREGISTRY_SWIFT_AUTHTYPE'] == 'keystonev3':\n\n # Keystone v3 Authentication\n # Retrieve the user token, user, and base. Exit if not found \n for envar in ['SREGISTRY_SWIFT_USER',\n 'SREGISTRY_SWIFT_TOKEN',\n 'SREGISTRY_SWIFT_URL']:\n self.config[envar] = self._required_get_and_update(envar)\n\n auth_url = '%s/v3' % self.config['SREGISTRY_SWIFT_URL']\n # Setting to default as a safety. No v3 environment to test\n # May require ENV vars for real use. - M. 
Moore\n _os_options = {\n 'user_domain_name': 'Default',\n 'project_domain_name': 'Default',\n 'project_name': 'Default'\n }\n\n # Save the connection to use for some command\n self.conn = swiftclient.Connection(\n user=self.config['SREGISTRY_SWIFT_USER'],\n key=self.config['SREGISTRY_SWIFT_TOKEN'],\n os_options=_os_options,\n authurl=auth_url,\n auth_version='3'\n )\n\n elif self.config['SREGISTRY_SWIFT_AUTHTYPE'] == 'keystonev2':\n\n # Keystone v2 Authentication\n # Retrieve the user token, user, and base. Exit if not found \n for envar in ['SREGISTRY_SWIFT_USER',\n 'SREGISTRY_SWIFT_TOKEN',\n 'SREGISTRY_SWIFT_TENANT',\n 'SREGISTRY_SWIFT_REGION',\n 'SREGISTRY_SWIFT_URL']:\n self.config[envar] = self._required_get_and_update(envar)\n\n # More human friendly to interact with\n auth_url = '%s/v2.0/' % self.config['SREGISTRY_SWIFT_URL']\n # Set required OpenStack options for tenant/region\n _os_options = {\n 'tenant_name': self.config['SREGISTRY_SWIFT_TENANT'],\n 'region_name': self.config['SREGISTRY_SWIFT_REGION']\n }\n\n # Save the connection to use for some command\n self.conn = swiftclient.Connection(\n user=self.config['SREGISTRY_SWIFT_USER'],\n key=self.config['SREGISTRY_SWIFT_TOKEN'],\n os_options=_os_options,\n authurl=auth_url,\n auth_version='2'\n )\n else:\n\n # Legacy Authentication\n # Retrieve the user token, user, and base. Exit if not found \n for envar in ['SREGISTRY_SWIFT_USER',\n 'SREGISTRY_SWIFT_TOKEN',\n 'SREGISTRY_SWIFT_URL']:\n self.config[envar] = self._required_get_and_update(envar)\n\n # More human friendly to interact with\n auth_url = '%s/auth/' % self.config['SREGISTRY_SWIFT_URL']\n\n # Save the connection to use for some command\n self.conn = swiftclient.Connection(\n user=self.config['SREGISTRY_SWIFT_USER'],\n key=self.config['SREGISTRY_SWIFT_TOKEN'],\n authurl=auth_url,\n )", "response": "update secrets will look for a user and token in the environment and continue. 
If the values are not found, the client exits with an error.\n "} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates the secrets file with the values from the environment variable.", "response": "def _update_secrets(self):\n '''The user is required to have an application secrets file in his\n or her environment. The information isn't saved to the secrets\n file, but the client exits with an error if the variable isn't found.\n '''\n env = 'GOOGLE_APPLICATION_CREDENTIALS'\n self._secrets = self._get_and_update_setting(env)\n if self._secrets is None:\n bot.error('You must export %s to use Google Storage client' % env)\n sys.exit(1)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a bucket based on a bucket name. create it if it doesn't exist", "response": "def _get_bucket(self, bucket_name):\n '''get a bucket based on a bucket name. If it doesn't exist, create it.\n\n Parameters\n ==========\n bucket_name: the name of the bucket to get (or create). It should\n not contain google, and should be all lowercase with -\n or underscores.\n '''\n\n # Case 1: The bucket already exists\n try:\n bucket = self._bucket_service.get_bucket(bucket_name)\n\n # Case 2: The bucket needs to be created\n except google.cloud.exceptions.NotFound:\n bucket = self._bucket_service.create_bucket(bucket_name)\n\n # Case 3: The bucket name is already taken\n except Exception:\n bot.error('Cannot get or create %s' % bucket_name)\n sys.exit(1)\n\n return bucket"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the correct client based on the driver of interest", "response": "def get_client(image=None, quiet=False, **kwargs):\n '''\n get the correct client depending on the driver of interest. 
The\n selected client can be chosen based on the environment variable\n SREGISTRY_CLIENT, and later changed based on the image uri parsed\n If there is no preference, the default is to load the singularity \n hub client.\n\n Parameters\n ==========\n image: if provided, we derive the correct client based on the uri\n of an image. If not provided, we default to environment, then hub.\n quiet: if True, suppress most output about the client (e.g. speak)\n\n '''\n from sregistry.defaults import SREGISTRY_CLIENT\n\n # Give the user a warning:\n if not check_install():\n bot.warning('Singularity is not installed, function might be limited.')\n\n # If an image is provided, use to determine client\n client_name = get_uri(image)\n if client_name is not None:\n SREGISTRY_CLIENT = client_name\n\n # If no obvious credential provided, we can use SREGISTRY_CLIENT\n if SREGISTRY_CLIENT == 'aws': from .aws import Client\n elif SREGISTRY_CLIENT == 'docker': from .docker import Client\n elif SREGISTRY_CLIENT == 'dropbox': from .dropbox import Client\n elif SREGISTRY_CLIENT == 'gitlab': from .gitlab import Client\n elif SREGISTRY_CLIENT == 'globus': from .globus import Client\n elif SREGISTRY_CLIENT == 'nvidia': from .nvidia import Client\n elif SREGISTRY_CLIENT == 'hub': from .hub import Client\n elif SREGISTRY_CLIENT == 'google-drive': from .google_drive import Client\n elif SREGISTRY_CLIENT == 'google-compute': from .google_storage import Client\n elif SREGISTRY_CLIENT == 'google-storage': from .google_storage import Client\n elif SREGISTRY_CLIENT == 'google-build': from .google_build import Client\n elif SREGISTRY_CLIENT == 'registry': from .registry import Client\n elif SREGISTRY_CLIENT == 's3': from .s3 import Client\n elif SREGISTRY_CLIENT == 'swift': from .swift import Client\n else: from .hub import Client\n\n Client.client_name = SREGISTRY_CLIENT\n Client.quiet = quiet\n\n # Create credentials cache, if it doesn't exist\n Client._credential_cache = 
get_credential_cache()\n\n # Add the database, if wanted\n if SREGISTRY_DATABASE is not None:\n\n # These are global functions used across modules\n from sregistry.database import (\n init_db, add, cp, get, mv, rm, rmi, \n images, inspect,\n rename,\n get_container,\n get_collection, \n get_or_create_collection \n )\n\n # Actions\n Client._init_db = init_db\n Client.add = add\n Client.cp = cp\n Client.get = get\n Client.inspect = inspect\n Client.mv = mv\n Client.rename = rename\n Client.rm = rm\n Client.rmi = rmi\n Client.images = images\n\n # Collections\n Client.get_or_create_collection = get_or_create_collection\n Client.get_container = get_container\n Client.get_collection = get_collection\n\n # If no database, import dummy functions that return the equivalent\n else:\n from sregistry.database import ( add, init_db )\n Client.add = add\n Client._init_db = init_db \n\n # Initialize the database\n cli = Client()\n\n if hasattr(Client, '_init_db'):\n cli._init_db(SREGISTRY_DATABASE)\n return cli"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef pull(self, images, file_name=None, save=True, force=False, **kwargs):\n ''' pull an image from an endpoint\n \n Parameters\n ==========\n images: refers to the uri given by the user to pull in the format\n /. You should have an API that is able to \n retrieve a container based on parsing this uri.\n file_name: the user's requested name for the file. 
It can \n optionally be None if the user wants a default.\n save: if True, you should save the container to the database\n using self.add()\n \n Returns\n =======\n finished: a single container path, or list of paths\n '''\n\n if not isinstance(images,list):\n images = [images]\n\n bot.debug('Execution of PULL for %s images' % len(images))\n\n finished = []\n for image in images:\n\n q = parse_image_name(remove_uri(image), lowercase=False)\n\n # Verify image existence, and obtain id\n url = \"%s/container/%s/%s:%s\" %(self.base, q['collection'], \n q['image'],\n q['tag'])\n\n bot.debug('Retrieving manifest at %s' % url)\n\n manifest = self._get(url)\n manifest['selfLink'] = url \n\n # If the manifest reveals a version, update names \n if \"version\" in manifest:\n q = parse_image_name('%s@%s' %(q['uri'], manifest['version']))\n\n if file_name is None:\n file_name = self._get_storage_name(q)\n file_name = os.path.abspath(file_name)\n\n # Determine if the user already has the image\n if os.path.exists(file_name) and force is False:\n bot.error('Image exists! 
Remove first, or use --force to overwrite')\n sys.exit(1)\n\n show_bar = not bool(self.quiet)\n image_file = self.download(url=manifest['image'],\n file_name=os.path.basename(file_name),\n show_progress=show_bar)\n\n # If the user is saving to local storage\n if save is True:\n image_uri = \"%s:%s@%s\" %(manifest['name'], manifest['tag'], manifest['version'])\n container = self.add(image_path=image_file,\n image_uri=image_uri,\n image_name=file_name,\n metadata=manifest,\n url=manifest['image'])\n image_file = container.image\n\n\n if os.path.exists(image_file):\n bot.debug('Retrieved image file %s' %image_file)\n bot.custom(prefix=\"Success!\", message=image_file)\n finished.append(image_file)\n\n # Reset file name back to None in case of multiple downloads\n file_name = None\n\n # If the user is only asking for one image\n if len(finished) == 1:\n finished = finished[0]\n return finished", "response": "pull an image from an endpoint"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef ipython(args):\n '''give the user an ipython shell, optionally with an endpoint of choice.\n '''\n\n # The client will announce itself (backend/database) unless it's get\n from sregistry.main import get_client\n client = get_client(args.endpoint)\n client.announce(args.command)\n from IPython import embed\n embed()", "response": "give the user an ipython shell optionally with an endpoint of choice.\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_manifest_selfLink(self, repo_name, digest=None):\n ''' get a selfLink for the manifest, for use by the client get_manifest\n function, along with the parents pull\n \n Parameters\n ==========\n repo_name: reference to the /: to obtain\n digest: a tag or shasum version\n\n '''\n url = \"%s/%s/manifests\" % (self.base, repo_name)\n\n # Add a digest - a tag or hash (version)\n if digest is None:\n digest = 
'latest'\n return \"%s/%s\" % (url, digest)", "response": "get a selfLink for the manifest"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndownloads layers into the cache and returns a list of tar.gz files.", "response": "def download_layers(self, repo_name, digest=None, destination=None):\n ''' download layers is a wrapper to do the following for a client loaded\n with a manifest for an image:\n \n 1. use the manifests to retrieve list of digests (get_digests)\n 2. atomically download the list to destination (get_layers)\n\n This function uses the MultiProcess client to download layers\n at the same time.\n '''\n from sregistry.main.workers import ( Workers, download_task )\n\n # 1. Get manifests if not retrieved\n if not hasattr(self, 'manifests'):\n self._get_manifests(repo_name, digest)\n\n # Obtain list of digests, and destination for download\n digests = self._get_digests()\n destination = self._get_download_cache(destination)\n\n # Create multiprocess download client\n workers = Workers()\n\n # Download each layer atomically\n tasks = []\n layers = []\n for digest in digests:\n\n targz = \"%s/%s.tar.gz\" % (destination, digest)\n\n # Only download if not in cache already\n if not os.path.exists(targz):\n url = \"%s/%s/blobs/%s\" % (self.base, repo_name, digest)\n tasks.append((url, self.headers, targz))\n layers.append(targz)\n\n # Download layers with multiprocess workers\n if len(tasks) > 0:\n download_layers = workers.run(func=download_task,\n tasks=tasks)\n # Create the metadata tar\n metadata = self._create_metadata_tar(destination)\n if metadata is not None:\n layers.append(metadata)\n\n\n return layers"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetermining the user preference for atomic download of layers. If the user has set a singularity cache directory, honor it; if caching is disabled, use tmp. 
Otherwise, use the Singularity default.", "response": "def get_download_cache(self, destination, subfolder='docker'):\n '''determine the user preference for atomic download of layers. If\n the user has set a singularity cache directory, honor it. Otherwise,\n use the Singularity default.\n '''\n # First priority after user specification is Singularity Cache\n if destination is None:\n destination = self._get_setting('SINGULARITY_CACHEDIR', \n SINGULARITY_CACHE)\n \n # If not set, the user has disabled caching (use tmp)\n destination = get_tmpdir(destination)\n\n if not destination.endswith(subfolder):\n destination = \"%s/%s\" %(destination, subfolder)\n\n # Create subfolders, if they don't exist\n mkdir_p(destination)\n return destination"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns a list of layers from a manifest.", "response": "def get_digests(self):\n '''return a list of layers from a manifest.\n The function is intended to work with both version\n 1 and 2 of the schema. All layers (including redundant)\n are returned. 
By default, we try version 2 first,\n then fall back to version 1.\n\n For version 1 manifests: extraction is reversed\n\n Parameters\n ==========\n manifest: the manifest to read_layers from\n\n '''\n if not hasattr(self, 'manifests'):\n bot.error('Please retrieve manifests for an image first.')\n sys.exit(1)\n\n digests = []\n\n reverseLayers = False\n schemaVersions = list(self.manifests.keys())\n schemaVersions.reverse()\n\n # Select the manifest to use\n for schemaVersion in schemaVersions:\n\n manifest = self.manifests[schemaVersion]\n\n if manifest['schemaVersion'] == 1:\n reverseLayers = True\n\n # version 2 indices used by default\n layer_key = 'layers'\n digest_key = 'digest'\n\n # Docker manifest-v2-2.md#image-manifest\n if 'layers' in manifest:\n bot.debug('Image manifest version 2.2 found.')\n break\n\n # Docker manifest-v2-1.md#example-manifest # noqa\n elif 'fsLayers' in manifest:\n layer_key = 'fsLayers'\n digest_key = 'blobSum'\n bot.debug('Image manifest version 2.1 found.')\n break\n\n else:\n msg = \"Improperly formed manifest, \"\n msg += \"layers, manifests, or fsLayers must be present\"\n bot.error(msg)\n sys.exit(1)\n\n for layer in manifest[layer_key]:\n if digest_key in layer:\n bot.debug(\"Adding digest %s\" % layer[digest_key])\n digests.append(layer[digest_key])\n\n # Reverse layer order for manifest version 1.0\n if reverseLayers is True:\n message = 'v%s manifest, reversing layers' % schemaVersion\n bot.debug(message)\n digests.reverse()\n\n return digests"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndownload an image layer to a specified download folder", "response": "def get_layer(self, image_id, repo_name, download_folder=None):\n '''download an image layer (.tar.gz) to a specified download folder.\n\n Parameters\n ==========\n download_folder: download to this folder. 
If not set, uses temp.\n repo_name: the image name (library/ubuntu) to retrieve\n\n '''\n url = self._get_layerLink(repo_name, image_id)\n\n bot.verbose(\"Downloading layers from %s\" % url)\n\n download_folder = get_tmpdir(download_folder)\n download_folder = \"%s/%s.tar.gz\" % (download_folder, image_id)\n\n # Update user what we are doing\n bot.debug(\"Downloading layer %s\" % image_id)\n\n # Step 1: Download the layer atomically\n file_name = \"%s.%s\" % (download_folder,\n next(tempfile._get_candidate_names()))\n\n tar_download = self.download(url, file_name)\n\n try:\n shutil.move(tar_download, download_folder)\n except Exception:\n msg = \"Cannot untar layer %s,\" % tar_download\n msg += \" was there a problem with download?\"\n bot.error(msg)\n sys.exit(1)\n return download_folder"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_size(self, add_padding=True, round_up=True, return_mb=True):\n '''get_size will return the image size (must use v.2.0 manifest)\n \n Parameters\n ==========\n add_padding: if true, return reported size * 5\n round_up: if true, round up to nearest integer\n return_mb: if true, defaults bytes are converted to MB\n \n '''\n if not hasattr(self,'manifests'):\n bot.error('Please retrieve manifests for an image first.')\n sys.exit(1)\n\n size = 768 # default size\n for schemaVersion, manifest in self.manifests.items():\n if \"layers\" in manifest:\n size = 0\n for layer in manifest[\"layers\"]:\n if \"size\" in layer:\n size += layer['size']\n\n if add_padding is True:\n size = size * 5\n\n if return_mb is True:\n size = size / (1024 * 1024) # 1MB = 1024*1024 bytes\n\n if round_up is True:\n size = math.ceil(size)\n size = int(size)\n\n return size", "response": "get_size will return the image size"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_config(self, key=\"Entrypoint\", delim=None):\n 
'''get_config returns a particular key (default is Entrypoint)\n from a VERSION 1 manifest obtained with get_manifest.\n\n Parameters\n ==========\n key: the key to return from the manifest config\n delim: Given a list, the delim to use to join the entries.\n Default is newline\n\n '''\n if not hasattr(self,'manifests'):\n bot.error('Please retrieve manifests for an image first.')\n sys.exit(1)\n\n cmd = None\n\n # If we didn't find the config value in version 2\n for version in ['config', 'v1']:\n if cmd is None and 'config' in self.manifests:\n \n # First try, version 2.0 manifest config has upper level config\n manifest = self.manifests['config']\n if \"config\" in manifest:\n if key in manifest['config']:\n cmd = manifest['config'][key]\n\n # Second try, config manifest (not from verison 2.0 schema blob)\n\n if cmd is None and \"history\" in manifest:\n for entry in manifest['history']:\n if 'v1Compatibility' in entry:\n entry = json.loads(entry['v1Compatibility'])\n if \"config\" in entry:\n if key in entry[\"config\"]:\n cmd = entry[\"config\"][key]\n\n # Standard is to include commands like ['/bin/sh']\n if isinstance(cmd, list):\n if delim is not None:\n cmd = delim.join(cmd)\n bot.verbose(\"Found Docker config (%s) %s\" % (key, cmd))\n return cmd", "response": "get_config returns a particular key from a VERSION 1 manifest obtained with get_manifest."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the environment. 
tar generated with the Singularity software", "response": "def get_environment_tar(self):\n '''return the environment.tar generated with the Singularity software.\n We first try the Linux Filesystem expected location in /usr/libexec.\n If not found, we detect the system architecture\n\n dirname $(singularity selftest 2>&1 | grep 'lib' | awk '{print $4}' | sed -e 's@\\(.*/singularity\\).*@\\1@')\n '''\n from sregistry.utils import ( which, run_command )\n\n # First attempt - look at File System Hierarchy Standard (FHS)\n res = which('singularity')['message']\n libexec = res.replace('/bin/singularity','')\n envtar = '%s/libexec/singularity/bootstrap-scripts/environment.tar' % libexec\n\n if os.path.exists(envtar):\n return envtar\n\n # Second attempt, debian distribution will identify folder\n try:\n res = which('dpkg-architecture')['message']\n if res is not None:\n cmd = ['dpkg-architecture', '-qDEB_HOST_MULTIARCH']\n triplet = run_command(cmd)['message'].strip('\\n')\n envtar = '/usr/lib/%s/singularity/bootstrap-scripts/environment.tar' % triplet\n if os.path.exists(envtar):\n return envtar\n except Exception:\n pass\n\n # Final, return environment.tar provided in package\n return \"%s/environment.tar\" % os.path.abspath(os.path.dirname(__file__))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a tar file to add to the singularity image", "response": "def create_metadata_tar(self, destination=None, metadata_folder=\".singularity.d\"):\n '''create a metadata tar (runscript and environment) to add to the\n downloaded image. 
This function uses all functions in this section\n to obtain key--> values from the manifest config, and write\n to a .tar.gz\n\n Parameters\n ==========\n metadata_folder: the metadata folder in the singularity image.\n default is .singularity.d\n ''' \n tar_file = None\n \n # We will add these files to it\n files = []\n\n # Extract and add environment\n environ = self._extract_env()\n if environ not in [None, \"\"]:\n bot.verbose3('Adding Docker environment to metadata tar')\n template = get_template('tarinfo')\n template['name'] = './%s/env/10-docker.sh' % (metadata_folder)\n template['content'] = environ\n files.append(template)\n\n # Extract and add labels\n labels = self._extract_labels()\n if labels is not None:\n labels = print_json(labels)\n bot.verbose3('Adding Docker labels to metadata tar')\n template = get_template('tarinfo')\n template['name'] = \"./%s/labels.json\" % metadata_folder\n template['content'] = labels\n files.append(template)\n\n # Runscript\n runscript = self._extract_runscript()\n if runscript is not None:\n bot.verbose3('Adding Docker runscript to metadata tar')\n template = get_template('tarinfo')\n template['name'] = \"./%s/runscript\" % metadata_folder\n template['content'] = runscript\n files.append(template)\n\n if len(files) > 0:\n dest = self._get_download_cache(destination, subfolder='metadata')\n tar_file = create_tar(files, dest)\n else:\n bot.warning(\"No metadata will be included.\")\n return tar_file"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nextracts the environment from the manifest or return None.", "response": "def extract_env(self):\n '''extract the environment from the manifest, or return None.\n Used by functions env_extract_image, and env_extract_tar\n '''\n environ = self._get_config('Env')\n if environ is not None:\n if not isinstance(environ, list):\n environ = [environ]\n\n lines = []\n for line in environ:\n line = re.findall(\"(?P.+?)=(?P.+)\", line)\n line = 
['export %s=\"%s\"' % (x[0], x[1]) for x in line]\n lines = lines + line\n\n environ = \"\\n\".join(lines)\n bot.verbose3(\"Found Docker container environment!\")\n return environ"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nextract the runscript from the Entrypoint or Cmd", "response": "def extract_runscript(self):\n '''extract the runscript (EntryPoint) as first priority, unless the\n user has specified to use the CMD. If Entrypoint is not defined,\n we default to None:\n \n 1. If SREGISTRY_DOCKER_CMD is set, use Cmd\n 2. If not set, or Cmd is None/blank, try Entrypoint\n 3. If neither is defined, return None (no runscript)\n\n '''\n use_cmd = self._get_setting('SREGISTRY_DOCKER_CMD')\n\n # Does the user want to use the CMD instead of ENTRYPOINT?\n commands = [\"Entrypoint\", \"Cmd\"]\n if use_cmd is not None:\n commands.reverse()\n\n # Parse through commands until we hit one\n for command in commands:\n cmd = self._get_config(command)\n if cmd is not None:\n break\n\n # Only continue if command still isn't None\n if cmd is not None:\n bot.verbose3(\"Adding Docker %s as Singularity runscript...\"\n % command.upper())\n\n # If the command is a list, join. 
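The KEY=VALUE conversion performed by extract_env above can be sketched as a standalone function. This is a minimal sketch, not the library's code; the function name `env_to_exports` is hypothetical, and the non-greedy named-group regex matches the one reconstructed in the snippet:

```python
import re

def env_to_exports(environ):
    """Convert Docker-style KEY=VALUE entries into shell export lines."""
    # Accept a single string or a list of entries, as extract_env does
    if not isinstance(environ, list):
        environ = [environ]

    lines = []
    for entry in environ:
        # Non-greedy key match stops at the first '=', so values may contain '='
        for key, value in re.findall(r"(?P<key>.+?)=(?P<value>.+)", entry):
            lines.append('export %s="%s"' % (key, value))
    return "\n".join(lines)
```

Because `re.findall` with two groups returns `(key, value)` tuples, a value such as `A=b=c` keeps everything after the first `=`.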
(eg ['/usr/bin/python','hello.py'])\n bot.debug(cmd)\n if not isinstance(cmd, list):\n cmd = [cmd]\n\n cmd = \" \".join(['\"%s\"' % x for x in cmd])\n\n cmd = 'exec %s \"$@\"' % cmd\n cmd = \"#!/bin/sh\\n\\n%s\\n\" % cmd\n return cmd\n\n bot.debug(\"CMD and ENTRYPOINT not found, skipping runscript.\")\n return cmd"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _update_base(self):\n '''update the base, including the URL for GitLab and the API endpoint.\n '''\n self.base = self._get_and_update_setting('SREGISTRY_GITLAB_BASE',\n \"https://gitlab.com/\")\n self.api_base = \"%s/api/v4\" % self.base.strip('/')\n self.artifacts = self._get_and_update_setting('SREGISTRY_GITLAB_FOLDER',\n 'build')\n\n self.job = self._get_and_update_setting('SREGISTRY_GITLAB_JOB', 'build')\n\n bot.debug(' Api: %s' % self.api_base)\n bot.debug('Artifacts: %s' % self.artifacts)\n bot.debug(' Job: %s' % self.job)", "response": "update the base including the URL for GitLab and the API endpoint."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _update_secrets(self):\n '''update secrets will update metadata needed for pull and search\n '''\n self.token = self._required_get_and_update('SREGISTRY_GITLAB_TOKEN')\n self.headers[\"Private-Token\"] = self.token", "response": "update secrets will update metadata needed for pull and search\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_metadata(self):\n '''since the user needs a job id and other parameters, save this\n for them.\n '''\n metadata = {'SREGISTRY_GITLAB_FOLDER': self.artifacts,\n 'api_base': self.api_base,\n 'SREGISTRY_GITLAB_BASE': self.base,\n 'SREGISTRY_GITLAB_JOB': self.job }\n return metadata", "response": "return the metadata for the current job"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following 
Python 3 function doing\ndef _parse_image_name(self, image, retry=True):\n '''starting with an image string in either of the following formats:\n job_id,collection\n job_id,collection,job_name\n \n Parse the job_name, job_id, and collection uri from it. If the user\n provides the first option, we use the job_name set by the client\n (default is build).\n \n Parameters\n ==========\n image: the string to parse, with values separated by commas\n retry: the client can call itself recursively once, providing the\n default job_name if the user doesn't.\n '''\n try:\n job_id, collection, job_name = image.split(',')\n except ValueError:\n # Retry and add job_name\n if retry:\n return self._parse_image_name(\"%s,%s\" %(image, self.job),\n retry=False)\n\n # Or fail\n bot.exit('''Malformed image string! Please provide:\n job_id,collection (or)\n job_id,collection,job_name''')\n\n return job_id, collection, job_name", "response": "parse the image name and return the job_id collection uri"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_settings(self, client_name=None):\n '''get all settings, either for a particular client if a name is provided,\n or across clients.\n\n Parameters\n ==========\n client_name: the client name to return settings for (optional)\n\n '''\n settings = read_client_secrets()\n if client_name is not None and client_name in settings:\n return settings[client_name] \n return settings", "response": "get all settings for a particular client"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a setting from the environment first, then the secrets file", "response": "def get_setting(self, name, default=None):\n '''return a setting from the environment (first priority) and then\n secrets (second priority) if one can be found. 
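The recursive retry in _parse_image_name above (append the default job name, try exactly once more, then fail) can be sketched outside the class. This is a standalone sketch under assumptions: the function name `parse_image_string` and the `default_job` parameter are hypothetical stand-ins for `self.job`:

```python
def parse_image_string(image, default_job="build", retry=True):
    """Split 'job_id,collection[,job_name]' into its three parts."""
    try:
        job_id, collection, job_name = image.split(",")
    except ValueError:
        if retry:
            # Append the default job name and try exactly once more
            return parse_image_string("%s,%s" % (image, default_job),
                                      retry=False)
        # Second failure: the string is genuinely malformed
        raise ValueError("Malformed image string: %s" % image)
    return job_id, collection, job_name
```

The `retry=False` on the recursive call guarantees the function recurses at most once, so a string with too many commas still fails cleanly.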
If not, return None.\n\n Parameters\n ==========\n name: the key (index) of the setting to look up\n default: (optional) if not found, return default instead.\n ''' \n\n # First priority is the environment\n setting = os.environ.get(name)\n\n # Second priority is the secrets file\n if setting is None:\n secrets = read_client_secrets()\n if self.client_name in secrets:\n secrets = secrets[self.client_name]\n if name in secrets:\n setting = secrets[name]\n\n if setting is None and default is not None:\n setting = default\n return setting"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nlook for a setting in the environment and then the settings file, updating the client secrets with the value found.", "response": "def get_and_update_setting(self, name, default=None):\n '''Look for a setting in the environment (first priority) and then\n the settings file (second). If something is found, the settings\n file is updated. The order of operations works as follows:\n\n 1. The .sregistry settings file is used as a cache for the variable\n 2. the environment variable always takes priority to cache, and if\n found, will update the cache.\n 3. If the variable is not found and the cache is set, we are good\n 4. 
If the variable is not found and the cache isn't set, return\n default (default is None)\n\n So the user of the function can assume a return of None equates to\n not set anywhere, and take the appropriate action.\n ''' \n\n setting = self._get_setting(name)\n\n if setting is None and default is not None:\n setting = default\n\n # If the setting is found, update the client secrets\n if setting is not None:\n updates = {name : setting}\n update_client_secrets(backend=self.client_name, \n updates=updates)\n\n return setting"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nupdates a specific setting.", "response": "def update_setting(self, name, value):\n '''Just update a setting, doesn't need to be returned.\n ''' \n\n if value is not None:\n updates = {name : value}\n update_client_secrets(backend=self.client_name, \n updates=updates)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_storage_name(self, names, remove_dir=False):\n '''use a parsed names dictionary from get_image_name (above) to return\n the path in storage based on the user's preferences\n\n Parameters\n ==========\n names: the output from parse_image_name\n '''\n storage_folder = os.path.dirname(names['storage'])\n\n # If the client doesn't have a database, default to PWD\n if not hasattr(self, 'storage'):\n return os.path.basename(names['storage'])\n \n storage_folder = \"%s/%s\" %(self.storage, storage_folder)\n mkdir_p(storage_folder)\n file_name = names['storage'].replace('/','-')\n storage_path = \"%s/%s\" %(self.storage, file_name)\n if remove_dir is True:\n return file_name\n return storage_path", "response": "use a parsed names dictionary from get_image_name to return\n the path in storage based on the user s preferences"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nauthorize a client based on encrypting the payload with the client token which should be 
matched on the receiving server", "response": "def authorize(self, names, payload=None, request_type=\"push\"):\n '''Authorize a client based on encrypting the payload with the client\n token, which should be matched on the receiving server'''\n\n if self.secrets is not None:\n\n if \"registry\" in self.secrets:\n\n # Use the payload to generate a digest push|collection|name|tag|user\n timestamp = generate_timestamp()\n credential = generate_credential(self.secrets['registry']['username'])\n credential = \"%s/%s/%s\" %(request_type,credential,timestamp)\n\n if payload is None:\n payload = \"%s|%s|%s|%s|%s|\" %(request_type,\n names['collection'],\n timestamp,\n names['image'],\n names['tag'])\n\n signature = generate_signature(payload,self.secrets['registry']['token'])\n return \"SREGISTRY-HMAC-SHA256 Credential=%s,Signature=%s\" %(credential,signature)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef push(self, path, name, tag=None):\n '''push an image to your Storage. If the collection doesn't exist,\n it is created.\n \n Parameters\n ==========\n path: should correspond to an absolute image path (or derive it)\n name: should be the complete uri that the user has requested to push.\n tag: should correspond with an image tag. This is provided to mirror Docker\n\n '''\n path = os.path.abspath(path)\n bot.debug(\"PUSH %s\" % path)\n\n if not os.path.exists(path):\n bot.error('%s does not exist.' 
%path)\n sys.exit(1)\n\n # Parse image names\n names = parse_image_name(remove_uri(name), tag=tag)\n\n # Get the size of the file\n file_size = os.path.getsize(path)\n chunk_size = 4 * 1024 * 1024\n storage_path = \"/%s\" %names['storage']\n\n # Create / get the collection\n collection = self._get_or_create_collection(names['collection'])\n\n # The image name is the name followed by tag\n image_name = os.path.basename(names['storage'])\n \n # prepare the progress bar\n progress = 0\n bot.show_progress(progress, file_size, length=35)\n\n # Put the (actual) container into the collection\n with open(path, 'rb') as F:\n self.conn.put_object(names['collection'], image_name,\n contents= F.read(),\n content_type='application/octet-stream')\n\n # Finish up\n bot.show_progress(iteration=file_size,\n total=file_size,\n length=35,\n carriage_return=True)\n\n # Newline to finish download\n sys.stdout.write('\\n')", "response": "push an image to your Storage"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntrigger a build on Google Cloud Storage", "response": "def build(self, repo, \n config=None,\n name=None, \n commit=None,\n tag=\"latest\",\n recipe=\"Singularity\",\n preview=False):\n\n '''trigger a build on Google Cloud (storage then compute) given a name\n recipe, and Github URI where the recipe can be found.\n \n Parameters\n ==========\n name: should be the complete uri that the user has requested to push.\n commit: a commit to use, not required, and can be parsed from URI\n repo: should correspond to a Github URL or (if undefined) used local repo.\n tag: a user specified tag, to take preference over tag in name\n config: The local config file to use. 
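The push function above relies on parse_image_name to split the requested uri into collection, image, and tag. A much-simplified sketch of that naming convention follows; this is not the library's implementation, the function name `split_name` is hypothetical, and the `"library"` default collection is an assumption for the sketch:

```python
def split_name(name, tag=None):
    """Split 'collection/image:tag' into its parts (simplified sketch)."""
    # Peel off an explicit ':tag' suffix; a tag passed as an argument wins
    if ":" in name:
        name, parsed_tag = name.rsplit(":", 1)
        tag = tag or parsed_tag

    # Default collection name ('library') is an assumption of this sketch
    if "/" in name:
        collection, image = name.split("/", 1)
    else:
        collection, image = "library", name

    return {"collection": collection, "image": image, "tag": tag or "latest"}
```

Using `rsplit(":", 1)` before splitting on `/` keeps a tag like `:v1` from being confused with the collection separator.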
If the file doesn't exist, then\n we attempt looking up the config based on the name.\n recipe: If defined, limit builder to build a single recipe\n\n '''\n\n bot.debug(\"BUILD %s\" % repo)\n\n # Ensure that repo exists (200 response)\n if not self._healthy(repo):\n sys.exit(1)\n\n config = self._load_build_config(config)\n\n # If name not provided, parse name based on repository\n if name is None:\n name = '/'.join(repo.split('/')[-2:])\n\n # This returns a data structure with collection, container, based on uri\n names = parse_image_name(remove_uri(name))\n\n # First priority - user has provided a tag\n names['tag'] = tag or names['tag']\n\n # If we still don't have custom tag, check the recipe\n if names['tag'] == \"latest\" and recipe != \"Singularity\":\n tag = get_recipe_tag(recipe)\n names = parse_image_name(remove_uri(name), tag=tag)\n\n # The commit is the version (after the @)\n commit = commit or names['version']\n\n # Setup the build\n config = self._setup_build(name=names['url'], recipe=recipe,\n repo=repo, config=config,\n tag=tag, commit=commit)\n\n # The user only wants to preview the configuration\n if preview is True:\n return config\n\n # Otherwise, run the build!\n return self._run_build(config)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef list_builders(self, project=None, zone='us-west1-a'):\n '''list builders, or instances for the project. 
They should start with\n sregistry-builder\n\n Parameters\n ==========\n project: specify a project, will default to environment first\n zone: the zone to use, defaults to us-west1-a if environment not set\n\n '''\n builders = []\n instances = self._get_instances(project, zone)\n\n for instance in instances['items']:\n builders.append([instance['name'], instance['status']])\n\n bot.info(\"[google-compute] Found %s instances\" %(len(builders)))\n bot.table(builders)\n bot.newline()", "response": "list builders or instances for the project"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef list_templates(self, name=None):\n '''list templates in the builder bundle library. If a name is provided,\n look it up\n\n Parameters\n ==========\n name: the name of a template to look up\n '''\n configs = self._get_templates()\n rows = []\n\n # DETAIL: The user wants to retrieve a particular configuration\n if name:\n matches = self._load_templates(name)\n bot.debug('Found %s matches for %s' %(len(matches), name))\n for match in matches:\n print(json.dumps(match, indent=4, sort_keys=True))\n\n # LISTING: If we don't have a specific name, just show all\n else:\n for config in configs['data']:\n rows.append([config['name']])\n bot.table(rows)", "response": "list templates in the builder bundle library. If a name is provided, look it up. Otherwise list all templates."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nlist templates in the builder bundle library.", "response": "def get_templates(self):\n '''list templates in the builder bundle library. 
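The partial-name matching that load_templates performs over `configs['data']` can be exercised against a toy structure. This is a sketch; the helper name `match_templates` is hypothetical, and the sample template names below are illustrative only (one mirrors the default config uri seen elsewhere in this section):

```python
def match_templates(configs, name):
    """Return template records whose 'name' contains the query substring."""
    # Substring match: the query can be any part of the full template name
    return [entry for entry in configs["data"] if name in entry["name"]]

# Toy data mirroring the configs['data'] shape assumed above
configs = {"data": [
    {"name": "google/compute/ubuntu/securebuild-2.4.3"},
    {"name": "google/compute/centos/securebuild-2.4.3"},
]}
```

A query like `"ubuntu"` matches one record, while `"securebuild"` matches both, which is why the real function can report multiple matches for a short query.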
If a name is provided,\n look it up\n\n '''\n\n base = 'https://singularityhub.github.io/builders'\n base = self._get_and_update_setting('SREGISTRY_BUILDER_REPO', base)\n base = \"%s/configs.json\" %base\n return self._get(base)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads a particular template based on a name", "response": "def load_templates(self, name):\n '''load a particular template based on a name. We look for a name IN data,\n so the query name can be a partial string of the full name.\n\n Parameters\n ==========\n name: the name of a template to look up\n '''\n configs = self._get_templates()\n templates = []\n\n # The user wants to retrieve a particular configuration\n matches = [x for x in configs['data'] if name in x['name']]\n if len(matches) > 0:\n for match in matches:\n response = self._get(match['id'])\n templates.append(response)\n return templates\n\n bot.info('No matches found for %s' %name)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_instances(self, project=None, zone='us-west1-a'):\n '''get instances will return the (unparsed) list of instances, for\n functions for the user. This is primarily used by get_builders\n to print a list of builder instances.\n\n Parameters\n ==========\n project: specify a project, will default to environment first\n zone: the zone to use, defaults to us-west1-a if environment not set\n\n '''\n project = self._get_project(project)\n zone = self._get_zone(zone)\n\n return self._compute_service.instances().list(project=project, \n zone=zone).execute()", "response": "get instances will return the unparsed list of instances for the user"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_ipaddress(self, name, retries=3, delay=3):\n '''get the ip_address of an inserted instance. 
Will try three times with\n delay to give the instance time to start up.\n\n Parameters\n ==========\n name: the name of the instance to get the ip address for.\n retries: the number of retries before giving up\n delay: the delay between retry\n\n Note from @vsoch: this function is pretty nasty.\n\n '''\n for rr in range(retries):\n\n # Retrieve list of instances\n instances = self._get_instances()\n\n for instance in instances['items']:\n if instance['name'] == name:\n\n # Iterate through network interfaces\n for network in instance['networkInterfaces']:\n if network['name'] == 'nic0':\n\n # Access configurations\n for subnet in network['accessConfigs']:\n if subnet['name'] == 'External NAT':\n if 'natIP' in subnet:\n return subnet['natIP']\n\n sleep(delay) \n bot.warning('Did not find IP address, check Cloud Console!')", "response": "get the ip_address of an inserted instance"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef load_build_config(self, config=None):\n '''load a google compute config, meaning that we have the following cases:\n\n 1. the user has not provided a config file directly, we look in env.\n 2. the environment is not set, so we use a reasonable default\n 3. if the final string is not found as a file, we look for it in library\n 4. 
we load the library name, or the user file, else error\n\n Parameters\n ==========\n config: the config file the user has provided, or the library URI\n\n '''\n # If the config is already a dictionary, it's loaded\n if isinstance(config, dict):\n bot.debug('Config is already loaded.')\n return config\n\n # if the config is not defined, look in environment, then choose a default\n if config is None:\n config = self._get_and_update_setting('SREGISTRY_COMPUTE_CONFIG',\n 'google/compute/ubuntu/securebuild-2.4.3')\n\n # If the config is a file, we read it\n elif os.path.exists(config):\n return read_json(config)\n\n # otherwise, try to look it up in library\n configs = self._load_templates(config)\n if configs is None:\n bot.error('%s is not a valid config.' % config)\n sys.exit(1)\n\n bot.info('Found config %s in library!' %config)\n config = configs[0]\n\n return config", "response": "load a google compute config"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef setconfig(lookup, key, value=None):\n '''setconfig will update a lookup to give priority based on the following:\n \n 1. If both values are None, we set the value to None\n 2. If the config.json value is set but not the runtime value, use config\n 3. If the runtime is set but not config.json, we use runtime\n 4. If both are set, we use runtime\n\n '''\n lookup[key] = value or lookup.get(key)\n return lookup", "response": "setconfig will update a lookup to give priority based on the following rules"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef run_build(self, config):\n '''run a build, meaning inserting an instance. 
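The four priority rules in setconfig above can be exercised directly. The one-line implementation is repeated here only so the sketch is self-contained:

```python
def setconfig(lookup, key, value=None):
    # Runtime value wins; otherwise keep whatever the lookup already has
    lookup[key] = value or lookup.get(key)
    return lookup

# 1. Neither set        -> None
# 2. Only config set    -> config value kept
# 3. Only runtime set   -> runtime value used
# 4. Both set           -> runtime value wins
```

One quirk worth noting: because the implementation uses `or`, a falsy runtime value such as an empty string also falls back to the cached config value, which matches treating blank as "not set."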
Retry if there is failure\n\n Parameters\n ==========\n config: the configuration dictionary generated by setup_build\n\n '''\n project = self._get_project()\n zone = self._get_zone()\n\n bot.custom(prefix='INSTANCE', message=config['name'], color=\"CYAN\")\n bot.info(config['description'])\n\n response = self._compute_service.instances().insert(project=project,\n zone=zone,\n body=config).execute()\n\n # Direct the user to the web portal with log\n ipaddress = self._get_ipaddress(config['name'])\n bot.info('Robot Logger: http://%s' %ipaddress)\n bot.info('Allow a few minutes for web server install, beepboop!')\n return response", "response": "run a build, meaning inserting an instance. Retry if there is failure\n\n Parameters\n ==========\n config: the configuration dictionary generated by setup_build"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef search_all(self):\n '''a \"show all\" search that doesn't require a query'''\n\n results = set()\n\n # Here we get names of collections, and then look up containers\n for container in self.conn.get_account()[1]:\n\n # The result here is just the name\n for result in self.conn.get_container(container['name'])[1]:\n results.add('%s/%s' %(container['name'], result['name']))\n\n if len(results) == 0:\n bot.info(\"No container collections found.\")\n sys.exit(1)\n\n bot.info(\"Collections\")\n bot.table([[x] for x in list(results)])\n return list(results)", "response": "a show all search that doesn t require a query"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsearches for a specific container.", "response": "def container_query(self, query):\n '''search for a specific container.\n This function would likely be similar to the above, but have different\n filter criteria from the user (based on the query)\n '''\n\n results = set()\n\n query = remove_uri(query)\n\n # Here we get names of collections, and then look 
up containers\n for container in self.conn.get_account()[1]:\n\n # The result here is just the name\n for result in self.conn.get_container(container['name'])[1]:\n if query in '%s/%s' %(container['name'], result['name']):\n results.add('%s/%s' %(container['name'], result['name']))\n \n if len(results) == 0:\n bot.info(\"No container collections found.\")\n sys.exit(1)\n\n bot.info(\"Collections\")\n bot.table([[x] for x in list(results)])\n return list(results)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef search_all(self):\n '''a \"list all\" search that doesn't require a query. Here we return to\n the user all objects that have custom metadata value of \"container\"\n\n IMPORTANT: the upload function adds this metadata. For a container to\n be found by the client, it must have the type as container in metadata.\n '''\n \n results = self._list_containers()\n\n bot.info(\"[gs://%s] Containers\" %self._bucket_name)\n\n rows = []\n for i in results:\n size = round(i.size / (1024*1024.0))\n size = (\"%s MB\" %size).rjust(10)\n rows.append([size, i.metadata['name']])\n\n bot.table(rows)\n return rows", "response": "a list all search that doesn't require a query. 
Here we return to\nWorkItem the user all objects that have custom metadata value of container to\n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsearches for a specific container.", "response": "def container_query(self, query, quiet=False):\n '''search for a specific container.\n This function would likely be similar to the above, but have different\n filter criteria from the user (based on the query)\n '''\n results = self._list_containers()\n\n matches = []\n for result in results:\n for key,val in result.metadata.items():\n if query in val and result not in matches:\n matches.append(result)\n\n if not quiet:\n bot.info(\"[gs://%s] Found %s containers\" %(self._bucket_name,len(matches)))\n for image in matches:\n size = round(image.size / (1024*1024.0))\n bot.custom(prefix=image.name, color=\"CYAN\")\n bot.custom(prefix='id: ', message=image.id)\n bot.custom(prefix='uri: ', message=image.metadata['name'])\n bot.custom(prefix='updated:', message=image.updated)\n bot.custom(prefix='size: ', message=' %s MB' %(size))\n bot.custom(prefix='md5: ', message=image.md5_hash)\n if \"public_url\" in image.metadata:\n public_url = image.metadata['public_url']\n bot.custom(prefix='url: ', message=public_url)\n bot.newline()\n return matches"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef search_all(self):\n '''a \"show all\" search that doesn't require a query'''\n\n results = []\n\n # Parse through folders (collections):\n for entry in self.dbx.files_list_folder('').entries:\n\n # Parse through containers\n for item in self.dbx.files_list_folder(entry.path_lower).entries:\n name = item.name.replace('.simg','')\n results.append([ \"%s/%s\" % (entry.name, name) ])\n \n\n if len(results) == 0:\n bot.info(\"No container collections found.\")\n sys.exit(1)\n\n bot.info(\"Collections\")\n bot.table(results)\n return results", "response": "a show all search that doesn t require 
a query"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef main(args,parser,subparser):\n '''the list command corresponds with listing images for an external\n resource. This is different from listing images that are local to the\n database, which should be done with \"images\"\n '''\n from sregistry.main import get_client\n cli = get_client(quiet=args.quiet)\n \n for query in args.query:\n if query in ['','*']:\n query = None\n\n cli.ls(query=query)", "response": "the list command corresponds with listing images for an external\n resource."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef pull(self, images, file_name=None, save=True, **kwargs):\n '''pull an image from google storage, based on the identifier\n \n Parameters\n ==========\n images: refers to the uri given by the user to pull in the format\n <collection>/<image>. You should have an API that is able to \n retrieve a container based on parsing this uri.\n file_name: the user's requested name for the file. 
It can \n optionally be None if the user wants a default.\n save: if True, you should save the container to the database\n using self.add()\n \n Returns\n =======\n finished: a single container path, or list of paths\n '''\n\n if not isinstance(images,list):\n images = [images]\n\n bot.debug('Execution of PULL for %s images' %len(images))\n\n # If used internally we want to return a list to the user.\n finished = []\n for image in images:\n\n q = parse_image_name(remove_uri(image))\n\n # Use container search to find the container based on uri\n bot.info('Searching for %s in gs://%s' %(q['tag_uri'],self._bucket_name))\n matches = self._container_query(q['tag_uri'], quiet=True)\n\n if len(matches) == 0:\n bot.info('No matching containers found.')\n sys.exit(0)\n \n # If the user didn't provide a file, make one based on the names\n if file_name is None:\n file_name = q['storage_version'].replace('/','-')\n\n # We give the first match, the uri should be unique and known\n image = matches[0]\n image_file = self.download(url=image.media_link,\n file_name=file_name,\n show_progress=True)\n\n # If the user is saving to local storage, you need to assumble the uri\n # here in the expected format /:@\n if save is True:\n image_uri = q['tag_uri']\n\n # Update metadata with selfLink\n metadata = image.metadata\n\n # Rename public URL to URL so it's found by add client\n if \"public_url\" in metadata:\n metadata['url'] = metadata['public_url']\n\n metadata['selfLink'] = image.self_link\n\n container = self.add(image_path=image_file,\n image_uri=image_uri,\n metadata=metadata,\n url=image.media_link)\n\n # When the container is created, this is the path to the image\n image_file = container.image\n\n if os.path.exists(image_file):\n bot.debug('Retrieved image file %s' % image_file)\n bot.custom(prefix=\"Success!\", message=image_file)\n finished.append(image_file)\n\n if len(finished) == 1:\n finished = finished[0]\n return finished", "response": "pull an image from google 
storage based on the identifier"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef main(args, parser, subparser):\n '''sharing an image means sending a remote share from an image you\n control to a contact, usually an email.\n '''\n from sregistry.main import get_client\n images = args.image\n\n if not isinstance(images,list):\n images = [images]\n\n for image in images:\n print(image)\n \n # Detect any uri, and refresh client if necessary\n cli = get_client(image, quiet=args.quiet)\n cli.announce(args.command)\n cli.share(image, share_to=args.share_to)", "response": "sharing an image means sending a remote share from an image you\n control to a contact, usually an email."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ninitializing the database, with the default database path or custom of the format sqlite:////scif/data/expfactory.db The custom path can be set with the environment variable SREGISTRY_DATABASE when a user creates the client, we must initialize this db the database should use the .singularity cache folder to cache layers and images, and .singularity/sregistry.db as a database", "response": "def init_db(self, db_path):\n '''initialize the database, with the default database path or custom of\n\n the format sqlite:////scif/data/expfactory.db\n\n The custom path can be set with the environment variable SREGISTRY_DATABASE\n when a user creates the client, we must initialize this db\n the database should use the .singularity cache folder to cache\n layers and images, and .singularity/sregistry.db as a database\n '''\n\n # Database Setup, use default if uri not provided\n self.database = 'sqlite:///%s' % db_path\n self.storage = SREGISTRY_STORAGE\n\n bot.debug(\"Database located at %s\" % self.database)\n self.engine = create_engine(self.database, convert_unicode=True)\n self.session = scoped_session(sessionmaker(autocommit=False,\n autoflush=False,\n bind=self.engine))\n 
\n Base.query = self.session.query_property()\n\n # import all modules here that might define models so that\n # they will be registered properly on the metadata. Otherwise\n # you will have to import them first before calling init_db()\n Base.metadata.create_all(bind=self.engine)\n self.Base = Base"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_uri(self):\n '''generate a uri on the fly from database parameters if one is not\n saved with the initial model (it should be, but might not be possible)\n '''\n uri = \"%s/%s:%s\" %(self.collection.name, self.name, self.tag)\n if self.version not in [None,'']:\n uri = \"%s@%s\" %(uri, self.version)\n return uri", "response": "generate a uri on the fly from database parameters if one is not there"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets default build template.", "response": "def get_build_template():\n '''get default build template.\n '''\n base = get_installdir()\n name = \"%s/main/templates/build/singularity-cloudbuild.json\" % base\n\n if os.path.exists(name):\n bot.debug(\"Found template %s\" %name)\n return read_json(name)\n\n bot.warning(\"Template %s not found.\" % name)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsearches for a given container", "response": "def search(self, query=None, args=None):\n '''query will show images determined by the extension of img\n or simg.\n\n Parameters\n ==========\n query: the container name (path) or uri to search for\n args.endpoint: can be an endpoint id and optional path, e.g.:\n\n --endpoint 6881ae2e-db26-11e5-9772-22000b9da45e:.singularity\n --endpoint 6881ae2e-db26-11e5-9772-22000b9da45e\n\n if not defined, we show the user endpoints to choose from\n\n Usage\n =====\n If endpoint is defined with a query, then we search the given endpoint\n for a container of interest (designated by ending in .img or .simg)\n\n If no 
endpoint is provided but instead just a query, we use the query\n to search endpoints.\n \n '''\n\n # No query is defined\n if query is None:\n\n # Option 1: No query or endpoints lists all shared and personal\n if args.endpoint is None:\n bot.info('Listing shared endpoints. Add query to expand search.')\n return self._list_endpoints()\n\n # Option 2: An endpoint without query will just list containers there\n else:\n return self._list_endpoint(args.endpoint)\n\n # Option 3: A query without an endpoint will search endpoints for it\n if args.endpoint is None:\n bot.info('You must specify an endpoint id to query!')\n return self._list_endpoints(query)\n\n # Option 4: A query with an endpoint will search the endpoint for pattern\n return self._list_endpoint(endpoint=args.endpoint, \n query=query)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef list_endpoints(self, query=None):\n '''list all endpoints, providing a list of endpoints to the user to\n better filter the search. This function takes no arguments,\n as the user has not provided an endpoint id or query.\n '''\n bot.info('Please select an endpoint id to query from')\n \n endpoints = self._get_endpoints(query)\n \n # Iterate through endpoints to provide user a list\n\n bot.custom(prefix=\"Globus\", message=\"Endpoints\", color=\"CYAN\")\n rows = []\n for kind,eps in endpoints.items():\n for epid,epmeta in eps.items():\n rows.append([epid, '[%s]' %kind, epmeta['name']])\n\n bot.table(rows)\n return rows", "response": "list all endpoints, providing a list of endpoints to the user to\n better filter the search. 
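The four query/endpoint branches in the Globus search function above reduce to a small dispatch. This sketch only models the option logic; the function name `dispatch_search` and the returned labels are hypothetical:

```python
def dispatch_search(query=None, endpoint=None):
    """Return which of the four search branches would run."""
    if query is None and endpoint is None:
        return "list all shared and personal endpoints"     # Option 1
    if query is None:
        return "list containers on the endpoint"            # Option 2
    if endpoint is None:
        return "search endpoints for the query"             # Option 3
    return "search the endpoint for the query pattern"      # Option 4
```

Writing the dispatch this way makes the precedence explicit: an endpoint id alone lists that endpoint, a query alone falls back to searching across endpoints, and only the combination narrows to a pattern search on one endpoint.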
This function takes no arguments,\n as the user has not provided an endpoint id or query."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsharing will use the client to get a shareable link for an image of choice.", "response": "def share(self, query, share_to=None):\n '''share will use the client to get a shareable link for an image of choice.\n the functions returns a url of choice to send to a recipient.\n '''\n\n names = parse_image_name(remove_uri(query))\n\n # Dropbox path is the path in storage with a slash\n dropbox_path = '/%s' % names['storage'] \n\n # First ensure that exists\n if self.exists(dropbox_path) is True:\n\n # Create new shared link\n try:\n share = self.dbx.sharing_create_shared_link_with_settings(dropbox_path)\n\n # Already exists!\n except ApiError as err:\n share = self.dbx.sharing_create_shared_link(dropbox_path)\n\n bot.info(share.url)\n return share.url"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _set_base(self):\n '''set the API base or default to use Docker Hub. 
The user is able\n to set the base, api version, and protocol via a settings file\n of environment variables:\n \n SREGISTRY_NVIDIA_BASE: defaults to nvcr.io\n SREGISTRY_NVIDIA_TOKEN: defaults to $oauthtoken\n SREGISTRY_NVIDIA_VERSION: defaults to v2\n SREGISTRY_NVIDIA_NO_HTTPS: defaults to not set (so https)\n\n '''\n base = self._get_setting('SREGISTRY_NVIDIA_BASE')\n version = self._get_setting('SREGISTRY_NVIDIA_VERSION')\n\n if base is None:\n base = \"nvcr.io\"\n\n if version is None:\n version = \"v2\"\n\n nohttps = self._get_setting('SREGISTRY_NVIDIA_NOHTTPS')\n if nohttps is None:\n nohttps = \"https://\"\n else:\n nohttps = \"http://\"\n\n # :///\n self.base = \"%s%s/%s\" %(nohttps, base.strip('/'), version)", "response": "set the base or default to use Docker Hub."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate secrets will take a secrets credential file located at. sregistry or the environment variable SREGISTRY_CLIENT_SECRETS and update the current client secrets as well as the API base base.", "response": "def _update_secrets(self):\n '''update secrets will take a secrets credential file\n either located at .sregistry or the environment variable\n SREGISTRY_CLIENT_SECRETS and update the current client \n secrets as well as the associated API base. 
For the case of\n using Docker Hub, if we find a .docker secrets file, we update\n from there.\n '''\n # If the user has defined secrets, use them\n token = self._required_get_and_update('SREGISTRY_NVIDIA_TOKEN')\n username = self._get_and_update_setting('SREGISTRY_NVIDIA_USERNAME')\n\n if username is None:\n username = \"$oauthtoken\"\n\n # Option 1: the user exports username and password\n if token is not None:\n auth = basic_auth_header(username, token)\n self.headers.update(auth)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npull an image from a Globus endpoint", "response": "def pull(self, images, file_name=None, save=True, **kwargs):\n '''pull an image from a Globus endpoint. The user must have the default\n local endpoint set up. For example:\n\n 6881ae2e-db26-11e5-9772-22000b9da45e:.singularity/shub/sherlock_vep.simg\n\n Parameters\n ==========\n images: refers to the globus endpoint id and image path.\n file_name: the user's requested name for the file. It can \n optionally be None if the user wants a default.\n save: if True, you should save the container to the database\n using self.add(). 
For globus this is the only option, and\n we don't have control over when this happens.\n \n Returns\n =======\n finished: a single container path, or list of paths\n '''\n\n # Ensure we have a transfer client\n if not hasattr(self, 'transfer_client'):\n self._init_transfer_client()\n\n if not isinstance(images,list):\n images = [images]\n\n bot.debug('Execution of PULL for %s images' %len(images))\n\n finished = []\n for image in images:\n\n # Split the name into endpoint and rest\n\n endpoint, remote = self._parse_endpoint_name(image)\n source = self.transfer_client.get_endpoint(endpoint)\n\n name = os.path.basename(remote)\n q = parse_image_name(name, default_collection=source['name'])\n\n # The user must have a personal endpoint\n\n endpoints = self._get_endpoints()\n\n if len(endpoints['my-endpoints']) == 0:\n bot.error('You must have a personal endpoint to transfer the container')\n sys.exit(1) \n\n # Take the first endpoint that is active\n\n dest = None\n for eid,contender in endpoints['my-endpoints'].items():\n if contender['gcp_connected'] is True:\n dest = contender\n break\n\n # Exit if none are active, required!\n\n if dest is None:\n bot.error('No activated local endpoints online! 
Start to transfer')\n sys.exit(1)\n\n # We need to know the full path of the endpoint\n\n base = self._get_endpoint_path(dest['id'])\n storage_folder = '%s/%s' %(base, q['collection'])\n self._create_endpoint_folder(dest['id'], storage_folder)\n\n label = \"Singularity Registry Pull\"\n tdata = globus_sdk.TransferData(self.transfer_client, \n source['id'],\n dest['id'],\n label=label,\n sync_level=\"checksum\")\n\n image = os.path.join(base, q['storage'])\n tdata.add_item(remote, image)\n bot.info('Requesting transfer to %s' %q['storage'])\n transfer_result = self.transfer_client.submit_transfer(tdata)\n bot.info(transfer_result['message'])\n finished.append(transfer_result)\n\n if len(finished) == 1:\n finished = finished[0]\n return finished"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the path to the credential cache for the given client level.", "response": "def get_credential_cache():\n '''if the user has specified settings to provide a cache for credentials\n files, initialize it. The root for the folder is created if it doesn't\n exist. 
The path for the specific client is returned, and it's\n not assumed to be either a folder or a file (this is up to the\n developer of the client).\n '''\n from sregistry.defaults import ( CREDENTIAL_CACHE, SREGISTRY_CLIENT )\n\n client_credential_cache = None\n\n # Check 1: user can disable a credential cache on the client level\n if CREDENTIAL_CACHE is not None: \n env = 'SREGISTRY_DISABLE_CREDENTIAL_%s' %SREGISTRY_CLIENT.upper()\n if os.environ.get(env) is not None:\n bot.debug('[%s] cache disabled' %SREGISTRY_CLIENT)\n CREDENTIAL_CACHE = None\n\n # Check 2: user can disable a credential cache on the client level\n if CREDENTIAL_CACHE is not None:\n if not os.path.exists(CREDENTIAL_CACHE):\n mkdir_p(CREDENTIAL_CACHE)\n client_credential_cache = '%s/%s' %(CREDENTIAL_CACHE, SREGISTRY_CLIENT)\n if client_credential_cache is not None:\n bot.debug('credentials cache')\n return client_credential_cache"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update_client_secrets(backend, updates, secrets=None, save=True):\n '''update client secrets will update the data structure for a particular\n authentication. This should only be used for a (quasi permanent) token\n or similar. The secrets file, if found, is updated and saved by default.\n '''\n if secrets is None:\n secrets = read_client_secrets()\n if backend not in secrets:\n secrets[backend] = {}\n secrets[backend].update(updates)\n\n # The update typically includes a save\n if save is True:\n secrets_file = get_secrets_file()\n if secrets_file is not None:\n write_json(secrets,secrets_file)\n\n return secrets", "response": "update client secrets for a particular backend"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef read_client_secrets():\n '''for private or protected registries, a client secrets file is required\n to be located at .sregistry. 
If no secrets are found, we use default\n of Singularity Hub, and return a dummy secrets.\n '''\n client_secrets = _default_client_secrets()\n\n # If token file not provided, check environment\n secrets = get_secrets_file()\n\n # If exists, load\n if secrets is not None:\n client_secrets = read_json(secrets)\n\n # Otherwise, initialize\n else:\n from sregistry.defaults import SREGISTRY_CLIENT_SECRETS\n write_json(client_secrets, SREGISTRY_CLIENT_SECRETS)\n\n return client_secrets", "response": "Read client secrets from the secrets file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _init_client(self):\n '''init client will check if the user has defined a bucket that\n differs from the default, use the application credentials to \n get the bucket, and then instantiate the client.\n '''\n\n # Get storage and compute services\n self._get_services()\n\n env = 'SREGISTRY_GOOGLE_STORAGE_BUCKET'\n self._bucket_name = self._get_and_update_setting(env)\n\n # If the user didn't set in environment, use default\n if self._bucket_name is None:\n self._bucket_name = 'sregistry-%s' %os.environ['USER']\n\n self._get_bucket()", "response": "init client will check if the user has defined a bucket that has changed from the default"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_services(self, version='v1'):\n '''get version 1 of the google compute and storage service\n\n Parameters\n ==========\n version: version to use (default is v1)\n '''\n self._bucket_service = storage.Client()\n creds = GoogleCredentials.get_application_default()\n self._storage_service = discovery_build('storage', version, credentials=creds)\n self._compute_service = discovery_build('compute', version, credentials=creds)", "response": "get version 1 of the google compute and storage service"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a bucket 
based on a bucket name. create it if it doesn t exist.", "response": "def _get_bucket(self):\n '''get a bucket based on a bucket name. If it doesn't exist, create it.\n '''\n\n # Case 1: The bucket already exists\n try:\n self._bucket = self._bucket_service.get_bucket(self._bucket_name)\n\n # Case 2: The bucket needs to be created\n except google.cloud.exceptions.NotFound:\n self._bucket = self._bucket_service.create_bucket(self._bucket_name)\n\n # Case 3: The bucket name is already taken\n except:\n bot.error('Cannot get or create %s' %self._bucket_name)\n sys.exit(1)\n\n return self._bucket"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete_object(service, bucket_name, object_name):\n '''delete object will delete a file from a bucket\n\n Parameters\n ==========\n storage_service: the service obtained with get_storage_service\n bucket_name: the name of the bucket\n object_name: the \"name\" parameter of the object.\n\n '''\n try:\n operation = service.objects().delete(bucket=bucket_name,\n object=object_name).execute()\n except HttpError as e:\n pass\n operation = e\n return operation", "response": "delete object from a bucket"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndelete an image from Google Storage.", "response": "def delete(self, name):\n '''delete an image from Google Storage.\n\n Parameters\n ==========\n name: the name of the file (or image) to delete\n\n '''\n\n bot.debug(\"DELETE %s\" % name)\n\n for file_object in files:\n if isinstance(file_object, dict):\n if \"kind\" in file_object:\n if file_object['kind'] == \"storage#object\":\n object_name = \"/\".join(file_object['id'].split('/')[:-1])\n object_name = re.sub('%s/' %self._bucket['name'],'', object_name,1)\n\n delete_object(service=self._bucket_service,\n bucket_name=bucket['name'],\n object_name=object_name)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for 
the following Python 3 function\ndef destroy(self, name):\n '''destroy an instance, meaning take down the instance and stop the build.\n\n Parameters\n ==========\n name: the name of the instance to stop building.\n '''\n\n instances = self._get_instances()\n project = self._get_project()\n zone = self._get_zone()\n found = False\n\n if 'items' in instances:\n for instance in instances['items']:\n if instance['name'] == name:\n found = True\n break\n\n if found: \n bot.info('Killing instance %s' %name)\n return self._compute_service.instances().delete(project=project, \n zone=zone, \n instance=name).execute()", "response": "destroy an instance, meaning take down the instance and stop the build.\n\n Parameters\n ==========\n name: the name of the instance to stop building."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_subparsers(parser):\n '''get_subparser will get a dictionary of subparsers, to help with printing help\n '''\n\n actions = [action for action in parser._actions \n if isinstance(action, argparse._SubParsersAction)]\n\n subparsers = dict()\n for action in actions:\n # get all subparsers and print help\n for choice, subparser in action.choices.items():\n subparsers[choice] = subparser\n\n return subparsers", "response": "get_subparser will get a dictionary of subparsers to help with printing help\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef generate(self, delim='-', length=4, chars='0123456789'):\n '''\n Generate a robot name. Inspiration from Haikunator, but much more\n poorly implemented ;)\n\n Parameters\n ==========\n delim: Delimiter\n length: TokenLength\n chars: TokenChars\n '''\n\n descriptor = self._select(self._descriptors)\n noun = self._select(self._nouns)\n numbers = ''.join((self._select(chars) for _ in range(length)))\n return delim.join([descriptor, noun, numbers])", "response": "Generate a robot name. 
Inspiration from Haikunator but much more\n poorly implemented ; )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef mkdir_p(path):\n '''mkdir_p attempts to get the same functionality as mkdir -p\n :param path: the path to create.\n '''\n try:\n os.makedirs(path)\n except OSError as e:\n if e.errno == errno.EEXIST and os.path.isdir(path):\n pass\n else:\n bot.error(\"Error creating path %s, exiting.\" % path)\n sys.exit(1)", "response": "mkdir - p attempts to get the same functionality as mkdir - p\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_tmpfile(requested_tmpdir=None, prefix=\"\"):\n '''get a temporary file with an optional prefix. By default will be\n created in /tmp unless SREGISTRY_TMPDIR is set. By default, the file\n is closed (and just a name returned).\n\n Parameters\n ==========\n requested_tmpdir: an optional requested temporary directory, first\n priority as is coming from calling function.\n prefix: Given a need for a sandbox (or similar), prefix the file\n with this string.\n '''\n\n # First priority for the base goes to the user requested.\n tmpdir = get_tmpdir(requested_tmpdir)\n\n # If tmpdir is set, add to prefix\n if tmpdir is not None:\n prefix = os.path.join(tmpdir, os.path.basename(prefix))\n\n fd, tmp_file = tempfile.mkstemp(prefix=prefix) \n os.close(fd)\n\n return tmp_file", "response": "get a temporary file with an optional prefix. By default the file is created in the default directory. By default the file is returned. By default the file is returned. By default the file is returned. By default the file is returned. By default the file is returned. 
By default the file is returned."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a temporary directory for an operation.", "response": "def get_tmpdir(requested_tmpdir=None, prefix=\"\", create=True):\n '''get a temporary directory for an operation. If SREGISTRY_TMPDIR\n is set, return that. Otherwise, return the output of tempfile.mkdtemp\n\n Parameters\n ==========\n requested_tmpdir: an optional requested temporary directory, first\n priority as is coming from calling function.\n prefix: Given a need for a sandbox (or similar), we will need to \n create a subfolder *within* the SREGISTRY_TMPDIR.\n create: boolean to determine if we should create folder (True)\n '''\n from sregistry.defaults import SREGISTRY_TMPDIR\n\n # First priority for the base goes to the user requested.\n tmpdir = requested_tmpdir or SREGISTRY_TMPDIR\n\n prefix = prefix or \"sregistry-tmp\"\n prefix = \"%s.%s\" %(prefix, next(tempfile._get_candidate_names()))\n tmpdir = os.path.join(tmpdir, prefix)\n\n if not os.path.exists(tmpdir) and create is True:\n os.mkdir(tmpdir)\n\n return tmpdir"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nextract a tar archive to a specified output folder", "response": "def extract_tar(archive, output_folder, handle_whiteout=False):\n '''extract a tar archive to a specified output folder\n\n Parameters\n ==========\n archive: the archive file to extract\n output_folder: the output folder to extract to\n handle_whiteout: use docker2oci variation to handle whiteout files\n\n '''\n from .terminal import run_command\n\n # Do we want to remove whiteout files?\n if handle_whiteout is True:\n return _extract_tar(archive, output_folder)\n\n # If extension is .tar.gz, use -xzf\n args = '-xf'\n if archive.endswith(\".tar.gz\"):\n args = '-xzf'\n\n # Just use command line, more succinct.\n command = [\"tar\", args, archive, \"-C\", output_folder, \"--exclude=dev/*\"]\n if not bot.is_quiet():\n 
print(\"Extracting %s\" % archive)\n\n return run_command(command)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _extract_tar(archive, output_folder):\n '''use blob2oci to handle whiteout files for extraction. Credit for this\n script goes to docker2oci by Olivier Freyermouth, and see script\n folder for license.\n\n Parameters\n ==========\n archive: the archive to extract\n output_folder the output folder (sandbox) to extract to\n\n '''\n from .terminal import ( run_command, which )\n\n result = which('blob2oci')\n if result['return_code'] != 0:\n bot.error('Cannot find blob2oci script on path, exiting.')\n sys.exit(1)\n \n script = result['message'] \n command = ['exec' ,script, '--layer', archive, '--extract', output_folder]\n\n if not bot.is_quiet():\n print(\"Extracting %s\" % archive)\n\n return run_command(command)", "response": "extract a tar archive"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_file_hash(filename):\n '''find the SHA256 hash string of a file\n '''\n hasher = hashlib.sha256()\n with open(filename, \"rb\") as f:\n for chunk in iter(lambda: f.read(4096), b\"\"):\n hasher.update(chunk)\n return hasher.hexdigest()", "response": "find the SHA256 hash string of a file\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef write_json(json_obj, filename, mode=\"w\", print_pretty=True):\n '''write_json will (optionally,pretty print) a json object to file\n\n Parameters\n ==========\n json_obj: the dict to print to json\n filename: the output file to write to\n pretty_print: if True, will use nicer formatting\n '''\n with open(filename, mode) as filey:\n if print_pretty:\n filey.writelines(print_json(json_obj))\n else:\n filey.writelines(json.dumps(json_obj))\n return filename", "response": "write_json will print a json object to file"} {"SOURCE": "codesearchnet", 
"instruction": "How would you code a function in Python 3 to\nread in a json file and returns the data structure as dict.", "response": "def read_json(filename, mode='r'):\n '''read_json reads in a json file and returns\n the data structure as dict.\n '''\n with open(filename, mode) as filey:\n data = json.load(filey)\n return data"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nclean up will delete a list of files only if they exist", "response": "def clean_up(files):\n '''clean up will delete a list of files, only if they exist\n '''\n if not isinstance(files, list):\n files = [files]\n\n for f in files:\n if os.path.exists(f):\n bot.verbose3(\"Cleaning up %s\" % f)\n os.remove(f)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\npulls an image from a docker hub", "response": "def pull(self, images, file_name=None, save=True, force=False, **kwargs):\n '''pull an image from a docker hub. This is a (less than ideal) workaround\n that actually does the following:\n\n - creates a sandbox folder\n - adds docker layers, metadata folder, and custom metadata to it\n - converts to a squashfs image with build\n\n the docker manifests are stored with registry metadata.\n \n Parameters\n ==========\n images: refers to the uri given by the user to pull in the format\n /. You should have an API that is able to \n retrieve a container based on parsing this uri.\n file_name: the user's requested name for the file. 
It can \n optionally be None if the user wants a default.\n save: if True, you should save the container to the database\n using self.add()\n \n Returns\n =======\n finished: a single container path, or list of paths\n '''\n\n if not isinstance(images,list):\n images = [images]\n\n bot.debug('Execution of PULL for %s images' %len(images))\n\n # If used internally we want to return a list to the user.\n\n finished = []\n for image in images:\n\n q = parse_image_name( remove_uri(image), \n default_collection='aws' )\n\n image_file = self._pull(file_name=file_name, \n save=save, \n force=force, \n names=q,\n kwargs=kwargs)\n\n finished.append(image_file)\n\n if len(finished) == 1:\n finished = finished[0]\n return finished"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\npulls an image from a singularity registry", "response": "def pull(self, images, file_name=None, save=True, **kwargs):\n '''pull an image from a singularity registry\n \n Parameters\n ==========\n images: refers to the uri given by the user to pull in the format\n /. You should have an API that is able to \n retrieve a container based on parsing this uri.\n file_name: the user's requested name for the file. 
It can \n optionally be None if the user wants a default.\n save: if True, you should save the container to the database\n using self.add()\n \n Returns\n =======\n finished: a single container path, or list of paths\n '''\n\n # Here we take an entire list or a single image by ensuring we have a list\n # This makes the client flexible to command line or internal function use,\n # for one or more images.\n if not isinstance(images,list):\n images = [images]\n\n bot.debug('Execution of PULL for %s images' %len(images))\n\n # If used internally we want to return a list to the user.\n finished = []\n for image in images:\n\n q = parse_image_name(remove_uri(image))\n\n # Verify image existence, and obtain id\n url = \"...\" # write your custom endpoint URL here \n bot.debug('Retrieving manifest at %s' %url)\n\n # You can use the client get function to retrieve a url manifest\n manifest = self._get(url)\n\n # it's good practice to add the url as a `selfLink`\n manifest['selfLink'] = url\n\n # Make sure to parse the response (manifest) in case it's not what\n # you expect!\n\n # If the user didn't provide a file, make one based on the names\n if file_name is None:\n file_name = q['storage'].replace('/','-')\n\n # You can then use the client download function to get the url\n # for some image in your manifest. In this example, it's in the `image`\n # field and we want to show the progress bar. 
\n image_file = self.download(url=manifest['image'],\n file_name=file_name,\n show_progress=True)\n\n # If the user is saving to local storage, you need to assumble the uri\n # here in the expected format /:@\n if save is True:\n image_uri = \"%s/%s:%s\" %(manifest['collection'], \n manifest['name'],\n manifest['tag'])\n\n # Importantly, the client add function will take the image file, the\n # uri, the download link, and any relevant metadata (dictionary)\n # for the database\n container = self.add(image_path = image_file, # the file path\n image_uri = image_uri, # the full uri\n image_name = file_name, # a custom name?\n metadata = manifest,\n url = manifest['image'])\n\n # When the container is created, this is the path to the image\n image_file = container.image\n\n if os.path.exists(image_file):\n bot.debug('Retrieved image file %s' %image_file)\n bot.custom(prefix=\"Success!\", message=image_file)\n finished.append(image_file)\n\n if len(finished) == 1:\n finished = finished[0]\n return finished"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\npushing an image to an S3 endpoint", "response": "def push(self, path, name, tag=None):\n '''push an image to an S3 endpoint'''\n\n path = os.path.abspath(path)\n image = os.path.basename(path)\n bot.debug(\"PUSH %s\" % path)\n\n if not os.path.exists(path):\n bot.error('%s does not exist.' 
%path)\n sys.exit(1)\n\n # Extract the metadata\n names = parse_image_name(remove_uri(name), tag=tag)\n image_size = os.path.getsize(path) >> 20\n\n # Create extra metadata, this is how we identify the image later\n # *important* bug in boto3 will return these capitalized\n # see https://github.com/boto/boto3/issues/1709\n metadata = {'sizemb': \"%s\" % image_size,\n 'client': 'sregistry' }\n\n self.bucket.upload_file(path, names['storage_uri'], {\"Metadata\": metadata })"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a collection if it exists create it first.", "response": "def get_or_create_collection(self, name):\n '''get a collection if it exists. If it doesn't exist, create it first.\n\n Parameters\n ==========\n name: the collection name, usually parsed from get_image_names()['name']\n\n '''\n from sregistry.database.models import Collection\n collection = self.get_collection(name)\n\n # If it doesn't exist, create it\n if collection is None:\n collection = Collection(name=name)\n self.session.add(collection)\n self.session.commit()\n\n return collection"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting a collection if it exists otherwise return None", "response": "def get_collection(self, name):\n '''get a collection, if it exists, otherwise return None.\n '''\n from sregistry.database.models import Collection\n return Collection.query.filter(Collection.name == name).first()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_container(self, name, collection_id, tag=\"latest\", version=None):\n '''get a container, otherwise return None.\n '''\n from sregistry.database.models import Container\n if version is None:\n container = Container.query.filter_by(collection_id = collection_id,\n name = name,\n tag = tag).first()\n else:\n container = Container.query.filter_by(collection_id = collection_id,\n name = 
name,\n tag = tag,\n version = version).first()\n return container", "response": "get a container if it exists otherwise return None."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get(self, name, quiet=False):\n '''Do a get for a container, and then a collection, and then return None\n if no result is found.\n Parameters\n ==========\n name: should coincide with either the collection name, or the container\n name with the collection. A query is done first for the collection,\n and then the container, and the path to the image file returned.\n '''\n from sregistry.database.models import Collection, Container\n names = parse_image_name( remove_uri (name) )\n \n # First look for a collection (required)\n collection = self.get_collection(name=names['collection'])\n container = None\n\n if collection is not None:\n container = self.get_container(collection_id=collection.id,\n name=names['image'], \n tag=names['tag'],\n version=names['version'])\n\n if container is not None and quiet is False:\n\n # The container image file exists [local]\n if container.image is not None:\n print(container.image)\n\n # The container has a url (but not local file)\n elif container.url is not None:\n print(container.url)\n else:\n bot.info('No storage file found for %s' %name)\n\n return container", "response": "Do a get for a container and then a collection and then a file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef images(self, query=None):\n '''List local images in the database, optionally with a query.\n\n Paramters\n =========\n query: a string to search for in the container or collection name|tag|uri\n\n '''\n from sregistry.database.models import Collection, Container\n\n rows = []\n if query is not None: \n like = \"%\" + query + \"%\"\n containers = Container.query.filter(or_(Container.name == query,\n Container.tag.like(like),\n 
Container.uri.like(like),\n Container.name.like(like))).all() \n else:\n containers = Container.query.all()\n\n if len(containers) > 0:\n message = \" [date] [client]\\t[uri]\"\n bot.custom(prefix='Containers:', message=message, color=\"RED\")\n for c in containers:\n uri = c.get_uri()\n created_at = c.created_at.strftime('%B %d, %Y')\n rows.append([created_at, \" [%s]\" %c.client, uri])\n bot.table(rows) \n return containers", "response": "List local images in the database optionally with a query."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ninspects a local image in the database which typically includes the basic fields in the model.", "response": "def inspect(self, name):\n '''Inspect a local image in the database, which typically includes the\n basic fields in the model.\n\n '''\n print(name)\n container = self.get(name)\n if container is not None:\n collection = container.collection.name\n fields = container.__dict__.copy()\n fields['collection'] = collection \n fields['metrics'] = json.loads(fields['metrics'])\n del fields['_sa_instance_state']\n fields['created_at'] = str(fields['created_at'])\n print(json.dumps(fields, indent=4, sort_keys=True))\n return fields"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nrenames performs a move and returns the container object", "response": "def rename(self, image_name, path):\n '''rename performs a move, but ensures the path is maintained in storage\n\n Parameters\n ==========\n image_name: the image name (uri) to rename to.\n path: the name to rename (basename is taken)\n\n '''\n container = self.get(image_name, quiet=True)\n\n if container is not None:\n if container.image is not None:\n\n # The original directory for the container stays the same\n dirname = os.path.dirname(container.image)\n\n # But we derive a new filename and uri\n\n names = parse_image_name( remove_uri (path) )\n storage = os.path.join( 
self.storage,\n os.path.dirname(names['storage']) )\n\n # This is the collection folder\n\n if not os.path.exists(storage):\n os.mkdir(storage)\n\n # Here we get the new full path, rename the container file\n\n fullpath = os.path.abspath(os.path.join(dirname, names['storage']))\n container = self.cp(move_to=fullpath,\n container=container,\n command=\"rename\")\n\n # On successful rename of file, update the uri\n\n if container is not None:\n container.uri = names['uri']\n self.session.commit()\n return container\n\n bot.warning('%s not found' %(image_name))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef mv(self, image_name, path):\n '''Move an image from it's current location to a new path.\n Removing the image from organized storage is not the recommended approach\n however is still a function wanted by some.\n\n Parameters\n ==========\n image_name: the parsed image name.\n path: the location to move the image to\n\n '''\n\n container = self.get(image_name, quiet=True)\n\n if container is not None:\n\n name = container.uri or container.get_uri()\n image = container.image or ''\n\n # Only continue if image file exists\n if os.path.exists(image):\n\n # Default assume directory, use image name and path fully\n filename = os.path.basename(image)\n filedir = os.path.abspath(path)\n\n # If it's a file, use filename provided\n if not os.path.isdir(path):\n filename = os.path.basename(path)\n filedir = os.path.dirname(path)\n \n # If directory is empty, assume $PWD\n if filedir == '':\n filedir = os.getcwd()\n \n # Copy to the fullpath from the storage\n fullpath = os.path.abspath(os.path.join(filedir,filename))\n return self.cp(move_to=fullpath, \n container=container,\n command=\"move\")\n \n bot.warning('%s not found' %(image_name))", "response": "Move an image from it s current location to a new location."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef 
rmi(self, image_name):\n '''Remove an image from the database and filesystem.\n '''\n container = self.rm(image_name, delete=True)\n if container is not None:\n bot.info(\"[rmi] %s\" % container)", "response": "Remove an image from the database and filesystem."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef rm(self, image_name, delete=False):\n '''Remove an image from the database, akin to untagging the image. This\n does not delete the file from the cache, unless delete is set to True\n (as called by rmi).\n '''\n container = self.get(image_name)\n if container is not None:\n name = container.uri or container.get_uri()\n image = container.image\n self.session.delete(container)\n self.session.commit()\n if image is not None:\n if os.path.exists(image) and delete is True:\n os.remove(container.image)\n return image\n bot.info(\"[rm] %s\" % name)", "response": "Remove an image from the database."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets or create a container including the collection to add it to the database", "response": "def add(self, image_path=None,\n image_uri=None,\n image_name=None,\n url=None,\n metadata=None,\n save=True, \n copy=False):\n\n '''get or create a container, including the collection to add it to.\n This function can be used from a file on the local system, or via a URL\n that has been downloaded. Either way, if one of url, version, or image_file\n is not provided, the model is created without it. 
If a version is not\n provided but a file path is, then the file hash is used.\n\n Parameters\n ==========\n image_path: full path to image file\n image_name: if defined, the user wants a custom name (and not based on uri)\n metadata: any extra metadata to keep for the image (dict)\n save: if True, move the image to the cache if it's not there\n copy: If True, copy the image instead of moving it.\n\n image_name: a uri that gets parsed into a names object that looks like:\n\n {'collection': 'vsoch',\n 'image': 'hello-world',\n 'storage': 'vsoch/hello-world-latest.img',\n 'tag': 'latest',\n 'version': '12345'\n 'uri': 'vsoch/hello-world:latest@12345'}\n\n After running add, the user will take some image in a working\n directory, add it to the database, and have it available for search\n and use under SREGISTRY_STORAGE//\n\n If the container was retrieved from a webby place, it should have version\n If no version is found, the file hash is used.\n '''\n\n from sregistry.database.models import (\n Container,\n Collection\n )\n\n # We can only save if the image is provided\n if image_path is not None:\n if not os.path.exists(image_path) and save is True:\n bot.error('Cannot find %s' %image_path)\n sys.exit(1)\n\n # An image uri is required for version, tag, etc.\n if image_uri is None:\n bot.error('You must provide an image uri /')\n sys.exit(1)\n\n names = parse_image_name( remove_uri(image_uri) )\n bot.debug('Adding %s to registry' % names['uri']) \n\n # If Singularity is installed, inspect image for metadata\n metadata = self.get_metadata(image_path, names=names)\n collection = self.get_or_create_collection(names['collection'])\n\n # Get a hash of the file for the version, or use provided\n version = names.get('version')\n if version == None:\n if image_path != None:\n version = get_image_hash(image_path)\n else:\n version = '' # we can't determine a version, not in API/no file\n names = parse_image_name( remove_uri(image_uri), version=version )\n\n # If save, move 
to registry storage first\n if save is True and image_path is not None:\n\n # If the user hasn't defined a custom name\n if image_name is None: \n image_name = self._get_storage_name(names)\n\n if copy is True:\n copyfile(image_path, image_name)\n else:\n shutil.move(image_path, image_name)\n \n image_path = image_name\n\n # Just in case the client didn't provide it, see if we have in metadata\n if url is None and \"url\" in metadata:\n url = metadata['url']\n\n # First check that we don't have one already!\n container = self.get_container(name=names['image'],\n collection_id=collection.id, \n tag=names['tag'],\n version=version)\n\n # The container did not exist, create it\n if container is None:\n action = \"new\"\n container = Container(metrics=json.dumps(metadata),\n name=names['image'],\n image=image_path,\n client=self.client_name,\n tag=names['tag'],\n version=version,\n url=url,\n uri=names['uri'],\n collection_id=collection.id)\n\n self.session.add(container)\n collection.containers.append(container)\n\n # The container existed, update it.\n else:\n action=\"update\"\n metrics=json.loads(container.metrics)\n metrics.update(metadata)\n container.url= url\n container.client=self.client_name\n if image_path is not None:\n container.image=image_path\n container.metrics=json.dumps(metrics)\n\n self.session.commit()\n bot.info(\"[container][%s] %s\" % (action,names['uri']))\n return container"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\npushes an image to a Singularity registry", "response": "def push(self, path, name, tag=None):\n '''push an image to Singularity Registry'''\n\n path = os.path.abspath(path)\n image = os.path.basename(path)\n bot.debug(\"PUSH %s\" % path)\n\n if not os.path.exists(path):\n bot.error('%s does not exist.' 
%path)\n sys.exit(1)\n\n # Interaction with a registry requires secrets\n self.require_secrets()\n\n # Extract the metadata\n names = parse_image_name(remove_uri(name), tag=tag)\n image_size = os.path.getsize(path) >> 20\n\n# COLLECTION ###################################################################\n\n # If the registry is provided in the uri, use it\n if names['registry'] == None:\n names['registry'] = self.base\n\n # If the base doesn't start with http or https, add it\n names = self._add_https(names)\n\n # Prepare push request, this will return a collection ID if permission\n url = '%s/push/' % names['registry']\n auth_url = '%s/upload/chunked_upload' % names['registry']\n SREGISTRY_EVENT = self.authorize(request_type=\"push\",\n names=names)\n\n # Data fields for collection\n fields = { 'collection': names['collection'],\n 'name':names['image'],\n 'tag': names['tag']}\n\n headers = { 'Authorization': SREGISTRY_EVENT }\n\n r = requests.post(auth_url, json=fields, headers=headers)\n\n # Always tell the user what's going on!\n message = self._read_response(r)\n print('\\n[1. 
Collection return status {0} {1}]'.format(r.status_code, message))\n\n # Get the collection id, if created, and continue with upload\n if r.status_code != 200:\n sys.exit(1)\n\n\n# UPLOAD #######################################################################\n\n url = '%s/upload' % names['registry'].replace('/api','')\n bot.debug('Seting upload URL to {0}'.format(url))\n\n cid = r.json()['cid']\n upload_to = os.path.basename(names['storage'])\n\n SREGISTRY_EVENT = self.authorize(request_type=\"upload\",\n names=names)\n\n encoder = MultipartEncoder(fields={'SREGISTRY_EVENT': SREGISTRY_EVENT,\n 'name': names['image'],\n 'collection': str(cid),\n 'tag': names['tag'],\n 'file1': (upload_to, open(path, 'rb'), 'text/plain')})\n\n progress_callback = create_callback(encoder, self.quiet)\n monitor = MultipartEncoderMonitor(encoder, progress_callback)\n headers = {'Content-Type': monitor.content_type,\n 'Authorization': SREGISTRY_EVENT }\n\n try:\n r = requests.post(url, data=monitor, headers=headers)\n r.raise_for_status()\n message = r.json()['message']\n print('\\n[Return status {0} {1}]'.format(r.status_code, message))\n except requests.HTTPError as e:\n print('\\nUpload failed: {0}.'.format(e))\n except KeyboardInterrupt:\n print('\\nUpload cancelled.')\n except Exception as e:\n print(e)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parse_header(recipe, header=\"from\", remove_header=True):\n '''take a recipe, and return the complete header, line. 
If\n remove_header is True, only return the value.\n\n Parameters\n ==========\n recipe: the recipe file\n headers: the header key to find and parse\n remove_header: if true, remove the key\n\n '''\n parsed_header = None\n fromline = [x for x in recipe.split('\\n') if \"%s:\" %header in x.lower()]\n\n # Case 1: We did not find the fromline\n if len(fromline) == 0:\n return \"\"\n\n # Case 2: We found it!\n if len(fromline) > 0:\n fromline = fromline[0]\n parsed_header = fromline.strip()\n\n # Does the user want to clean it up?\n if remove_header is True:\n parsed_header = fromline.split(':', 1)[-1].strip()\n return parsed_header", "response": "take a recipe and return the complete header line."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef find_recipes(folders, pattern=None, base=None):\n '''find recipes will use a list of base folders, files,\n or patterns over a subset of content to find recipe files\n (indicated by Starting with Singularity\n \n Parameters\n ==========\n base: if defined, consider folders recursively below this level.\n\n ''' \n # If the user doesn't provide a list of folders, use $PWD\n if folders is None:\n folders = os.getcwd()\n\n if not isinstance(folders,list):\n folders = [folders]\n\n manifest = dict()\n for base_folder in folders:\n\n # If we find a file, return the one file\n custom_pattern = None\n if os.path.isfile(base_folder): # updates manifest\n manifest = find_single_recipe(filename=base_folder,\n pattern=pattern,\n manifest=manifest)\n continue\n\n # The user likely provided a custom pattern\n elif not os.path.isdir(base_folder):\n custom_pattern = base_folder.split('/')[-1:][0]\n base_folder = \"/\".join(base_folder.split('/')[0:-1])\n \n # If we don't trigger loop, we have directory\n manifest = find_folder_recipes(base_folder=base_folder,\n pattern=custom_pattern or pattern,\n manifest=manifest,\n base=base)\n \n return manifest", "response": "find recipes 
will use a list of base folders files or patterns over a subset of content to find recipes"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_folder_recipes(base_folder,\n pattern=\"Singularity\",\n manifest=None,\n base=None):\n\n '''find folder recipes will find recipes based on a particular pattern.\n \n Parameters\n ==========\n base_folder: the base folder to recursively walk\n pattern: a default pattern to search for\n manifest: an already started manifest\n base: if defined, consider folders under this level recursively.\n \n '''\n\n # The user is not appending to an existing manifest\n if manifest is None:\n manifest = dict()\n\n for root, dirnames, filenames in os.walk(base_folder):\n\n for filename in fnmatch.filter(filenames, pattern):\n\n container_path = os.path.join(root, filename)\n if base is not None:\n container_base = container_path.replace(base,'').strip('/')\n collection = container_base.split('/')[0]\n recipe = os.path.basename(container_base)\n container_uri = \"%s/%s\" %(collection,recipe)\n else:\n container_uri = '/'.join(container_path.strip('/').split('/')[-2:])\n\n add_container = True\n\n # Add the most recently updated container\n if container_uri in manifest:\n if manifest[container_uri]['modified'] > os.path.getmtime(container_path):\n add_container = False\n\n if add_container:\n manifest[container_uri] = {'path': os.path.abspath(container_path),\n 'modified':os.path.getmtime(container_path)}\n\n return manifest", "response": "find folder recipes based on a particular pattern"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef build(self, name,\n recipe=\"Singularity\",\n context=None, \n preview=False):\n\n '''trigger a build on Google Cloud (builder then storage) given a name\n recipe, and Github URI where the recipe can be found.\n \n Parameters\n ==========\n recipe: the local recipe to build.\n 
name: should be the complete uri that the user has requested to push.\n context: the dependency files needed for the build. If not defined, only\n the recipe is uploaded.\n preview: if True, preview but don't run the build\n\n Environment\n ===========\n SREGISTRY_GOOGLE_BUILD_SINGULARITY_VERSION: the version of Singularity\n to use, defaults to 3.0.2-slim\n SREGISTRY_GOOGLE_BUILD_CLEANUP: after build, delete intermediate \n dependencies in cloudbuild bucket.\n\n '''\n bot.debug(\"BUILD %s\" % recipe)\n\n # This returns a data structure with collection, container, based on uri\n names = parse_image_name(remove_uri(name))\n\n # Load the build configuration\n config = self._load_build_config(name=names['uri'], recipe=recipe)\n\n build_package = [recipe]\n if context not in [None, '', []]:\n\n # If the user gives a ., include recursive $PWD\n if '.' in context:\n context = glob(os.getcwd() + '/**/*', recursive=True)\n build_package = build_package + context\n \n package = create_build_package(build_package)\n\n # Does the package already exist? If the user cached, it might\n destination='source/%s' % os.path.basename(package)\n blob = self._build_bucket.blob(destination)\n\n # if it doesn't exist, upload it\n if not blob.exists() and preview is False:\n bot.log('Uploading build package!')\n manifest = self._upload(source=package, \n bucket=self._build_bucket,\n destination=destination)\n else:\n bot.log('Build package found in %s.' 
% self._build_bucket.name)\n\n # The source should point to the bucket with the .tar.gz, latest generation\n config[\"source\"][\"storageSource\"]['bucket'] = self._build_bucket.name\n config[\"source\"][\"storageSource\"]['object'] = destination\n\n # If not a preview, run the build and return the response\n if preview is False:\n config = self._run_build(config, self._bucket, names)\n\n # If the user wants to cache cloudbuild files, this will be set\n if not self._get_and_update_setting('SREGISTRY_GOOGLE_BUILD_CACHE'):\n blob.delete()\n\n # Clean up either way, return config or response\n shutil.rmtree(os.path.dirname(package))\n return config", "response": "trigger a build on Google Cloud"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngives a list of files copy them to a temporary folder compress into a. tar. gz and rename based on the file hash.", "response": "def create_build_package(package_files):\n '''given a list of files, copy them to a temporary folder,\n compress into a .tar.gz, and rename based on the file hash.\n Return the full path to the .tar.gz in the temporary folder.\n\n Parameters\n ==========\n package_files: a list of files to include in the tar.gz\n\n '''\n # Ensure package files all exist\n for package_file in package_files:\n if not os.path.exists(package_file):\n bot.exit('Cannot find %s.' % package_file)\n\n bot.log('Generating build package for %s files...' 
% len(package_files))\n build_dir = get_tmpdir(prefix=\"sregistry-build\")\n build_tar = '%s/build.tar.gz' % build_dir\n tar = tarfile.open(build_tar, \"w:gz\")\n\n # Create the tar.gz\n for package_file in package_files:\n tar.add(package_file)\n tar.close()\n\n # Get hash (sha256), and rename file\n sha256 = get_file_hash(build_tar)\n hash_tar = \"%s/%s.tar.gz\" %(build_dir, sha256)\n shutil.move(build_tar, hash_tar)\n return hash_tar"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef load_build_config(self, name, recipe):\n '''load a google compute config, meaning that we start with a template,\n and mimic the following example cloudbuild.yaml:\n\n steps:\n - name: \"singularityware/singularity:${_SINGULARITY_VERSION}\"\n args: ['build', 'julia-centos-another.sif', 'julia.def']\n artifacts:\n objects:\n location: 'gs://sregistry-gcloud-build-vanessa'\n paths: ['julia-centos-another.sif']\n\n\n Parameters\n ==========\n recipe: the local recipe file for the builder.\n name: the name of the container, based on the uri\n\n '''\n version_envar = 'SREGISTRY_GOOGLE_BUILD_SINGULARITY_VERSION'\n version = self._get_and_update_setting(version_envar, '3.0.2-slim')\n config = get_build_template()\n\n # Name is in format 'dinosaur/container-latest'\n\n # The command to give the builder, with image name\n container_name = '%s.sif' % name.replace('/','-', 1)\n config['steps'][0]['name'] = 'singularityware/singularity:%s' % version\n config['steps'][0]['args'] = ['build', container_name, recipe]\n\n config[\"artifacts\"][\"objects\"][\"location\"] = \"gs://%s\" % self._bucket_name\n config[\"artifacts\"][\"objects\"][\"paths\"] = [container_name]\n\n return config", "response": "load a google compute config"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run_build(self, config, bucket, names):\n '''run a build, meaning creating a build. 
Retry if there is failure\n '''\n\n project = self._get_project()\n\n # prefix, message, color\n bot.custom('PROJECT', project, \"CYAN\")\n bot.custom('BUILD ', config['steps'][0]['name'], \"CYAN\")\n\n response = self._build_service.projects().builds().create(body=config, \n projectId=project).execute()\n\n build_id = response['metadata']['build']['id']\n status = response['metadata']['build']['status']\n bot.log(\"build %s: %s\" % (build_id, status))\n\n start = time.time()\n while status not in ['COMPLETE', 'FAILURE', 'SUCCESS']:\n time.sleep(15)\n response = self._build_service.projects().builds().get(id=build_id, \n projectId=project).execute()\n\n build_id = response['id']\n status = response['status']\n bot.log(\"build %s: %s\" % (build_id, status))\n\n end = time.time()\n bot.log('Total build time: %s seconds' % (round(end - start, 2)))\n \n # If successful, update blob metadata and visibility\n if status == 'SUCCESS':\n\n # Does the user want to keep the container private?\n env = 'SREGISTRY_GOOGLE_STORAGE_PRIVATE'\n blob = bucket.blob(response['artifacts']['objects']['paths'][0])\n \n # Make Public, if desired\n if self._get_and_update_setting(env) == None:\n blob.make_public()\n response['public_url'] = blob.public_url\n\n # Add the metadata directly to the object\n update_blob_metadata(blob, response, config, bucket, names)\n response['media_link'] = blob.media_link\n response['size'] = blob.size\n response['file_hash'] = blob.md5_hash\n\n return response", "response": "run a build, meaning creating a build. 
Retry if there is failure"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update_blob_metadata(blob, response, config, bucket, names):\n '''a specific function to take a blob, along with a SUCCESS response\n from Google build, the original config, and update the blob \n metadata with the artifact file name, dependencies, and image hash.\n '''\n\n manifest = os.path.basename(response['results']['artifactManifest'])\n manifest = json.loads(bucket.blob(manifest).download_as_string())\n\n metadata = {'file_hash': manifest['file_hash'][0]['file_hash'][0]['value'],\n 'artifactManifest': response['results']['artifactManifest'],\n 'location': manifest['location'],\n 'storageSourceBucket': config['source']['storageSource']['bucket'],\n 'storageSourceObject': config['source']['storageSource']['object'],\n 'buildCommand': ' '.join(config['steps'][0]['args']),\n 'builder': config['steps'][0]['name'],\n 'media_link': blob.media_link,\n 'self_link': blob.self_link,\n 'size': blob.size,\n 'name': names['tag_uri'],\n 'type': \"container\"} # identifier that the blob is a container\n\n blob.metadata = metadata \n blob._properties['metadata'] = metadata\n blob.patch()", "response": "a specific function to take a blob along with a SUCCESS response and update the blob s metadata with the artifact file name dependencies and image hash."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nquery a Singularity registry for a list of images.", "response": "def search(self, query=None, args=None):\n '''query a Singularity registry for a list of images. \n If query is None, collections are listed. 
\n\n EXAMPLE QUERIES: \n '''\n\n # You can optionally better parse the image uri (query), but not\n # necessary\n # names = parse_image_name(remove_uri(query))\n\n if query is not None:\n\n # Here you might do a function that is a general list\n # Note that this means adding the function Client in __init__\n return self._container_query(query)\n\n\n # or default to listing (searching) all things.\n return self._search_all()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_image_name(image_name,\n tag=None,\n version=None, \n defaults=True, \n ext=\"sif\",\n default_collection=\"library\",\n default_tag=\"latest\",\n base=None,\n lowercase=True):\n\n '''return a collection and repo name and tag\n for an image file.\n \n Parameters\n =========\n image_name: a user provided string indicating a collection,\n image, and optionally a tag.\n tag: optionally specify tag as its own argument\n over-rides parsed image tag\n defaults: use defaults \"latest\" for tag and \"library\"\n for collection. \n base: if defined, remove from image_name, appropriate if the\n user gave a registry url base that isn't part of namespace.\n lowercase: turn entire URI to lowercase (default is True)\n '''\n\n # Save the original string\n original = image_name\n \n if base is not None:\n image_name = image_name.replace(base,'').strip('/')\n\n # If a file is provided, remove extension\n image_name = re.sub('[.](img|simg|sif)','', image_name)\n\n # Parse the provided name\n uri_regexes = [ _reduced_uri,\n _default_uri,\n _docker_uri ]\n\n for r in uri_regexes:\n match = r.match(image_name)\n if match:\n break\n\n if not match:\n bot.exit('Could not parse image \"%s\"! Exiting.' 
% image)\n\n # Get matches\n registry = match.group('registry')\n collection = match.group('collection')\n repo_name = match.group('repo')\n repo_tag = match.group('tag')\n version = match.group('version')\n \n # A repo_name is required\n assert(repo_name)\n\n # If a collection isn't provided\n collection = set_default(collection, default_collection, defaults)\n repo_tag = set_default(repo_tag, default_tag, defaults)\n\n # The collection, name must be all lowercase\n if lowercase:\n collection = collection.lower().rstrip('/')\n repo_name = repo_name.lower()\n repo_tag = repo_tag.lower()\n else:\n collection = collection.rstrip('/')\n\n if version != None:\n version = version.lower()\n \n # Piece together the uri base\n if registry == None:\n uri = \"%s/%s\" % (collection, repo_name) \n else:\n uri = \"%s/%s/%s\" % (registry, collection, repo_name) \n\n url = uri\n\n # Tag is defined\n if repo_tag != None:\n uri = \"%s-%s\" % (uri, repo_tag)\n tag_uri = \"%s:%s\" % (url, repo_tag) \n\n # Version is defined\n storage_version = None\n if version is not None:\n uri = \"%s@%s\" % (uri, version)\n tag_uri = \"%s@%s\" % (tag_uri, version) \n storage_version = \"%s@%s.%s\" % (tag_uri, version, ext)\n\n # A second storage URI honors the tag (:) separator\n\n storage = \"%s.%s\" %(uri, ext)\n storage_uri = \"%s.%s\" %(tag_uri, ext)\n result = {\"collection\": collection,\n \"original\": original,\n \"registry\": registry,\n \"image\": repo_name,\n \"url\": url,\n \"tag\": repo_tag,\n \"version\": version,\n \"storage\": storage,\n \"storage_uri\": storage_uri,\n \"storage_version\": storage_version or storage_uri,\n \"tag_uri\": tag_uri,\n \"uri\": uri}\n\n return result", "response": "parse an image name and return a collection repo name and tag for an image file."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef format_container_name(name, special_characters=None):\n '''format_container_name will take a name supplied by the 
user,\n remove all special characters (except for those defined by \"special-characters\"\n and return the new image name.\n '''\n if special_characters is None:\n special_characters = []\n return ''.join(e.lower()\n for e in name if e.isalnum() or e in special_characters)", "response": "Format a container name."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_uri(image):\n '''get the uri for an image, if within acceptable\n \n Parameters\n ==========\n image: the image uri, in the format :///:\n\n '''\n # Ensure we have a string\n image = image or ''\n\n # Find uri prefix, including ://\n regexp = re.compile('^.+://')\n uri = regexp.match(image)\n\n if uri is not None:\n uri = (uri.group().lower()\n .replace('_','-')\n .replace('://',''))\n \n accepted_uris = ['aws',\n 'docker',\n 'http', 'https', # Must be allowed for pull\n 'dropbox',\n 'gitlab',\n 'globus',\n 'google-build',\n 'google-storage',\n 'google-drive',\n 'hub',\n 'nvidia', \n 'registry',\n 's3', \n 'swift']\n\n # Allow for Singularity compatability\n if \"shub\" in uri: uri = \"hub\"\n\n if uri not in accepted_uris:\n bot.warning('%s is not a recognized uri.' % uri)\n uri = None\n\n return uri", "response": "get the uri for an image if within acceptable\nATTRIBS"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef pull(self, images, file_name=None, save=True, **kwargs):\n '''pull an image from storage using Swift. The image is found based on the\n storage uri\n \n Parameters\n ==========\n images: refers to the uri given by the user to pull in the format\n /. You should have an API that is able to \n retrieve a container based on parsing this uri.\n file_name: the user's requested name for the file. 
It can \n optionally be None if the user wants a default.\n save: if True, you should save the container to the database\n using self.add()\n \n Returns\n =======\n finished: a single container path, or list of paths\n\n '''\n force = False\n if \"force\" in kwargs:\n force = kwargs['force']\n\n if not isinstance(images, list):\n images = [images]\n\n bot.debug('Execution of PULL for %s images' % len(images))\n\n # If used internally we want to return a list to the user.\n finished = []\n for image in images:\n\n names = parse_image_name(remove_uri(image))\n\n # First try to get the collection\n collection = self._get_collection(names['collection'])\n if collection is None:\n bot.error('Collection %s does not exist.' % names['collection'])\n\n # Show the user collections he/she does have access to\n collections = self.get_collections()\n if collections:\n bot.info('Collections available to you: \\n%s' %'\\n'.join(collections))\n sys.exit(1)\n\n # Determine if the container exists in storage\n image_name = os.path.basename(names['storage'])\n \n try:\n obj_tuple = self.conn.get_object(names['collection'], image_name)\n except ClientException:\n bot.exit('%s does not exist.' % names['storage'])\n\n # Give etag as version if version not defined\n if names['version'] == None:\n names['version'] = obj_tuple[0]['etag']\n \n # If the user didn't provide a file, make one based on the names\n if file_name is None:\n file_name = self._get_storage_name(names)\n\n # If the file already exists and force is False\n if os.path.exists(file_name) and force is False:\n bot.error('Image exists! 
Remove first, or use --force to overwrite')\n sys.exit(1) \n\n # Write to file\n with open(file_name, 'wb') as filey:\n filey.write(obj_tuple[1])\n\n # If we save to storage, the uri is the dropbox_path\n if save is True:\n\n names.update(obj_tuple[0])\n container = self.add(image_path = file_name,\n image_uri = names['uri'],\n metadata = names)\n\n # When the container is created, this is the path to the image\n image_file = container.image\n\n if os.path.exists(image_file):\n bot.debug('Retrieved image file %s' %image_file)\n bot.custom(prefix=\"Success!\", message=image_file)\n finished.append(image_file)\n\n else:\n bot.error('%s does not exist. Try sregistry search to see images.' % path)\n\n if len(finished) == 1:\n finished = finished[0]\n return finished", "response": "pull an image from storage using Swift"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\npushing an image to Google Cloud Drive", "response": "def push(self, path, name, tag=None):\n '''push an image to Google Cloud Drive, meaning uploading it\n \n path: should correspond to an absolte image path (or derive it)\n name: should be the complete uri that the user has requested to push.\n tag: should correspond with an image tag. This is provided to mirror Docker\n '''\n # The root of the drive for containers (the parent folder)\n parent = self._get_or_create_folder(self._base)\n\n image = None\n path = os.path.abspath(path)\n bot.debug(\"PUSH %s\" % path)\n\n if not os.path.exists(path):\n bot.error('%s does not exist.' 
%path)\n sys.exit(1)\n\n names = parse_image_name(remove_uri(name),tag=tag)\n if names['version'] is None:\n version = get_image_hash(path)\n names = parse_image_name(remove_uri(name), tag=tag, version=version)\n\n # Update metadata with names, flatten to only include labels\n metadata = self.get_metadata(path, names=names)\n metadata = metadata['data']\n metadata.update(names)\n metadata.update(metadata['attributes']['labels'])\n del metadata['attributes']\n\n file_metadata = {\n 'name': names['storage'],\n 'mimeType' : 'application/octet-stream',\n 'parents': [parent['id']],\n 'properties': metadata\n }\n\n media = MediaFileUpload(path,resumable=True)\n try:\n bot.spinner.start()\n image = self._service.files().create(body=file_metadata,\n media_body=media,\n fields='id').execute()\n\n # Add a thumbnail!\n thumbnail = get_thumbnail()\n\n with open(thumbnail, \"rb\") as f:\n body = { \"contentHints\": { \n \"thumbnail\": { \"image\": base64.urlsafe_b64encode(f.read()).decode('utf8'),\n \"mimeType\": \"image/png\" }\n }}\n image = self._service.files().update(fileId=image['id'],\n body = body).execute()\n \n bot.spinner.stop()\n print(image['name'])\n\n except HttpError:\n bot.error('Error uploading %s' %path)\n pass\n\n return image"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _set_base(self, zone=None):\n '''set the API base or default to use Docker Hub. 
The user is able\n to set the base, api version, and protocol via a settings file\n of environment variables:\n '''\n if hasattr(self.aws._client_config, 'region_name'):\n zone = self.aws._client_config.region_name\n\n aws_id = self._required_get_and_update('SREGISTRY_AWS_ID')\n aws_zone = self._required_get_and_update('SREGISTRY_AWS_ZONE', zone)\n version = self._get_setting('SREGISTRY_AWS_VERSION', 'v2')\n base = self._get_setting('SREGISTRY_AWS_BASE')\n\n if base is None:\n base = \"%s.dkr.ecr.%s.amazonaws.com\" % (aws_id, aws_zone)\n\n nohttps = self._get_setting('SREGISTRY_AWS_NOHTTPS')\n if nohttps is None:\n nohttps = \"https://\"\n else:\n nohttps = \"http://\"\n\n # :///\n self.base = \"%s%s/%s\" %(nohttps, base.strip('/'), version)", "response": "set the base or default to use Docker Hub."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _update_secrets(self):\n '''update secrets will take a secrets credential file\n either located at .sregistry or the environment variable\n SREGISTRY_CLIENT_SECRETS and update the current client \n secrets as well as the associated API base. For the case of\n using Docker Hub, if we find a .docker secrets file, we update\n from there.\n '''\n bot.debug('Creating aws client...')\n try:\n from awscli.clidriver import create_clidriver\n except:\n bot.exit('Please install pip install sregistry[aws]')\n\n driver = create_clidriver()\n self.aws = driver.session.create_client('ecr')", "response": "update secrets will take a secrets credential file located at. 
sregistry or the environment variable SREGISTRY_CLIENT_SECRETS SREGISTRY_CLIENT_SECRETSS and update the current client \n secrets with the associated API base."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_logging_level():\n '''get_logging_level will configure a logging to standard out based on the user's\n selected level, which should be in an environment variable called\n MESSAGELEVEL. if MESSAGELEVEL is not set, the maximum level\n (5) is assumed (all messages).\n '''\n level = os.environ.get(\"MESSAGELEVEL\", INFO)\n\n # User knows logging levels and set one\n if isinstance(level, int):\n return level\n\n # Otherwise it's a string\n if level == \"CRITICAL\":\n return CRITICAL\n elif level == \"ABORT\":\n return ABORT\n elif level == \"ERROR\":\n return ERROR\n elif level == \"WARNING\":\n return WARNING\n elif level == \"LOG\":\n return LOG\n elif level == \"INFO\":\n return INFO\n elif level == \"QUIET\":\n return QUIET\n elif level.startswith(\"VERBOSE\"):\n return VERBOSE3\n elif level == \"LOG\":\n return LOG\n elif level == \"DEBUG\":\n return DEBUG\n\n return level", "response": "This function returns the logging level for the current user."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndetermining if a level should print to stderr includes all levels but INFO and QUIET", "response": "def emitError(self, level):\n '''determine if a level should print to\n stderr, includes all levels but INFO and QUIET'''\n if level in [ABORT,\n ERROR,\n WARNING,\n VERBOSE,\n VERBOSE1,\n VERBOSE2,\n VERBOSE3,\n DEBUG]:\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nemit is the main function to print the message optionally with a prefix :param level: the level of the message :param message: the message to print :param prefix: a prefix for the message", "response": "def emit(self, level, message, prefix=None, 
color=None):\n '''emit is the main function to print the message\n optionally with a prefix\n :param level: the level of the message\n :param message: the message to print\n :param prefix: a prefix for the message\n '''\n if color is None:\n color = level\n\n if prefix is not None:\n prefix = self.addColor(color, \"%s \" % (prefix))\n else:\n prefix = \"\"\n message = self.addColor(color, message)\n\n # Add the prefix\n message = \"%s%s\" % (prefix, message)\n\n if not message.endswith('\\n'):\n message = \"%s\\n\" % message\n\n # If the level is quiet, only print to error\n if self.level == QUIET:\n pass\n\n # Otherwise if in range print to stdout and stderr\n elif self.isEnabledFor(level):\n if self.emitError(level):\n self.write(self.errorStream, message)\n else:\n self.write(self.outputStream, message)\n\n # Add all log messages to history\n self.history.append(message)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwriting will write a message to a stream", "response": "def write(self, stream, message):\n '''write will write a message to a stream,\n first checking the encoding\n '''\n if isinstance(message, bytes):\n message = message.decode('utf-8')\n stream.write(message)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_logs(self, join_newline=True):\n ''''get_logs will return the complete history, joined by newline\n (default) or as is.\n '''\n if join_newline:\n return '\\n'.join(self.history)\n return self.history", "response": "get_logs will return the complete history of the assessment"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nprinting a table of entries.", "response": "def table(self, rows, col_width=2):\n '''table will print a table of entries. If the rows is \n a dictionary, the keys are interpreted as column names. 
if\n not, a numbered list is used.\n '''\n\n labels = [str(x) for x in range(1,len(rows)+1)]\n if isinstance(rows, dict):\n labels = list(rows.keys())\n rows = list(rows.values())\n\n for row in rows: \n label = labels.pop(0)\n label = label.ljust(col_width)\n message = \"\\t\".join(row)\n self.custom(prefix=label,\n message=message)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef push(self, path, name, tag=None):\n '''push an image to Singularity Registry\n \n path: should correspond to an absolute image path (or derive it)\n name: should be the complete uri that the user has requested to push.\n tag: should correspond with an image tag. This is provided to mirror Docker\n '''\n path = os.path.abspath(path)\n bot.debug(\"PUSH %s\" % path)\n\n if not os.path.exists(path):\n bot.error('%s does not exist.' %path)\n sys.exit(1)\n\n # This returns a data structure with collection, container, based on uri\n names = parse_image_name(remove_uri(name),tag=tag)\n\n # use Singularity client, if exists, to inspect to extract metadata\n metadata = self.get_metadata(path, names=names)\n\n # If you want a spinner\n bot.spinner.start()\n # do your push request here. Generally you want to except a KeyboardInterrupt\n # and give the user a status from the response\n bot.spinner.stop()", "response": "push an image to Singularity registry"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\npushing an image to a Globus endpoint", "response": "def push(self, path, name, tag=None):\n '''push an image to Globus endpoint. 
In this case, the name is the\n globus endpoint id and path.\n\n --name :/path/for/image\n\n '''\n\n # Split the name into endpoint and rest\n\n endpoint, remote = self._parse_endpoint_name(name)\n\n path = os.path.abspath(path)\n image = os.path.basename(path)\n bot.debug(\"PUSH %s\" % path)\n\n # Flatten image uri into image name\n\n q = parse_image_name(image)\n\n if not os.path.exists(path):\n bot.error('%s does not exist.' %path)\n sys.exit(1)\n\n # Ensure we have a transfer client\n if not hasattr(self, 'transfer_client'):\n self._init_transfer_client()\n\n # The user must have a personal endpoint\n\n endpoints = self._get_endpoints()\n\n if len(endpoints['my-endpoints']) == 0:\n bot.error('You must have a personal endpoint to transfer the container')\n sys.exit(1) \n\n # Take the first endpoint that is active\n\n source_endpoint = None\n for eid,contender in endpoints['my-endpoints'].items():\n if contender['gcp_connected'] is True:\n source_endpoint = contender\n break\n\n # Exit if none are active, required!\n\n if source_endpoint is None:\n bot.error('No activated local endpoints online! 
Go online to transfer')\n sys.exit(1)\n\n\n # The destination endpoint should have an .singularity/shub folder set\n self._create_endpoint_cache(endpoint)\n\n # SREGISTRY_STORAGE must be an endpoint\n # if the image isn't already there, add it first\n\n added = self.add(image_path=path, \n image_uri=q['uri'],\n copy=True)\n \n label = \"Singularity Registry Transfer for %s\" %added.name\n tdata = globus_sdk.TransferData(self.transfer_client, \n source_endpoint['id'],\n endpoint,\n label=label,\n sync_level=\"checksum\")\n image = \".singularity/shub/%s\" %image\n tdata.add_item(added.image, image)\n bot.info('Requesting transfer from local %s to %s:%s' %(SREGISTRY_STORAGE,\n endpoint, image))\n transfer_result = self.transfer_client.submit_transfer(tdata)\n bot.info(transfer_result['message'])\n return transfer_result"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns a default template for some function in sregistry", "response": "def get_template(name):\n '''return a default template for some function in sregistry\n If there is no template, None is returned.\n\n Parameters\n ==========\n name: the name of the template to retrieve\n\n '''\n name = name.lower()\n templates = dict()\n\n templates['tarinfo'] = {\"gid\": 0,\n \"uid\": 0,\n \"uname\": \"root\",\n \"gname\": \"root\",\n \"mode\": 493}\n\n if name in templates:\n bot.debug(\"Found template for %s\" % (name))\n return templates[name]\n else:\n bot.warning(\"Cannot find template %s\" % (name))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef download_layers(self, repo_name, digest=None, destination=None):\n ''' download layers is a wrapper to do the following for a client loaded\n with a manifest for an image:\n \n 1. use the manifests to retrieve list of digests (get_digests)\n 2. 
atomically download the list to destination (get_layers)\n\n This function uses the MultiProcess client to download layers\n at the same time.\n '''\n from sregistry.main.workers import Workers\n from sregistry.main.workers.aws import download_task\n\n # Obtain list of digets, and destination for download\n self._get_manifest(repo_name, digest)\n digests = self._get_digests(repo_name, digest)\n destination = self._get_download_cache(destination)\n\n # Create multiprocess download client\n workers = Workers()\n\n # Download each layer atomically\n tasks = []\n layers = []\n\n # Start with a fresh token\n self._update_token()\n\n for digest in digests:\n\n targz = \"%s/%s.tar.gz\" % (destination, digest['digest'])\n url = '%s/%s/blobs/%s' % (self.base, repo_name, digest['digest'])\n \n # Only download if not in cache already\n if not os.path.exists(targz):\n tasks.append((url, self.headers, targz))\n layers.append(targz)\n\n # Download layers with multiprocess workers\n if len(tasks) > 0:\n\n download_layers = workers.run(func=download_task,\n tasks=tasks)\n\n return layers, url", "response": "Download layers from the cache"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the manifest via the aws client", "response": "def get_manifest(self, repo_name, tag):\n '''return the image manifest via the aws client, saved in self.manifest\n '''\n\n image = None\n repo = self.aws.describe_images(repositoryName=repo_name)\n if 'imageDetails' in repo:\n for contender in repo.get('imageDetails'):\n if tag in contender['imageTags']:\n image = contender\n break\n\n # if the image isn't found, we need to exit\n if image is None:\n bot.exit('Cannot find %s:%s, is the uri correct?' 
%(repo_name, tag))\n\n digest = image['imageDigest']\n digests = self.aws.batch_get_image(repositoryName=repo_name, \n imageIds=[{\"imageDigest\": digest,\n \"imageTag\": tag}])\n\n self.manifest = json.loads(digests['images'][0]['imageManifest'])\n return self.manifest"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_digests(self, repo_name, tag):\n '''\n return a list of layers from a manifest.\n The function is intended to work with both version\n 1 and 2 of the schema. All layers (including redundant)\n are returned. By default, we try version 2 first,\n then fall back to version 1.\n\n For version 1 manifests: extraction is reversed\n\n Parameters\n ==========\n manifest: the manifest to read_layers from\n\n '''\n if not hasattr(self, 'manifest'):\n bot.error('Please retrieve manifest for the image first.')\n sys.exit(1)\n\n # version 2 manifest here!\n return self.manifest['layers']", "response": "get_digests is a function that returns a list of layers from a manifest."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef prepare_metadata(metadata):\n '''prepare a key/value list of metadata for the request. 
The metadata\n object that comes in is only parsed one level.\n '''\n pairs = {\n 'metadata': {\n 'items': [{\n 'key': 'client',\n 'value': 'sregistry'\n }\n ]\n }\n }\n for key,val in metadata.items():\n if not isinstance(val,dict) and not isinstance(val,list):\n pairs['metadata']['items'].append({'key':key,'value':val})\n elif isinstance(val,dict): \n for k,v in val.items():\n if not isinstance(v,dict) and not isinstance(v,list):\n pairs['metadata']['items'].append({'key':k,'value':v})\n\n return pairs", "response": "prepare a key - value list of metadata for the request."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_build_template(name=None, manager='apt'):\n '''get a particular build template, by default we return templates\n that are based on package managers.\n\n Parameters\n ==========\n name: the full path of the template file to use.\n manager: the package manager to use in the template (yum or apt)\n\n '''\n base = get_installdir()\n if name is None:\n name = \"%s/main/templates/build/singularity-builder-%s.sh\" %(base,\n manager)\n\n if os.path.exists(name):\n bot.debug(\"Found template %s\" %name)\n return ''.join(read_file(name)) \n\n bot.warning(\"Template %s not found.\" %name)", "response": "get a particular build template by default we return templates\n that are based on package managers."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nextract metadata using Singularity inspect if the executable is found and return the names of the image", "response": "def get_metadata(self, image_file, names={}):\n '''extract metadata using Singularity inspect, if the executable is found.\n If not, return a reasonable default (the parsed image name)\n\n Parameters\n ==========\n image_file: the full path to a Singularity image\n names: optional, an extracted or otherwise created dictionary of\n variables for the image, likely from utils.parse_image_name\n\n 
'''\n metadata = dict()\n\n # We can't return anything without image_file or names\n if image_file is not None:\n if not os.path.exists(image_file):\n bot.error('Cannot find %s.' %image_file)\n return names or metadata\n\n # The user provided a file, but no names\n if not names:\n names = parse_image_name(remove_uri(image_file))\n\n # Look for the Singularity Executable\n singularity = which('singularity')['message']\n\n # Inspect the image, or return names only\n if os.path.exists(singularity) and image_file is not None:\n from spython.main import Client as Singularity\n\n # We try and inspect, but not required (wont work within Docker)\n try:\n Singularity.quiet = True\n updates = Singularity.inspect(image=image_file)\n except:\n bot.warning('Inspect command not supported, metadata not included.')\n updates = None\n\n # Try loading the metadata\n if updates is not None:\n try:\n updates = json.loads(updates)\n metadata.update(updates)\n except:\n pass\n\n metadata.update(names)\n return metadata"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\npulling an image from a docker hub", "response": "def _pull(self, \n file_name, \n names, \n save=True, \n force=False, \n uri=\"docker://\", \n **kwargs):\n\n '''pull an image from a docker hub. This is a (less than ideal) workaround\n that actually does the following:\n\n - creates a sandbox folder\n - adds docker layers, metadata folder, and custom metadata to it\n - converts to a squashfs image with build\n\n the docker manifests are stored with registry metadata.\n \n Parameters\n ==========\n images: refers to the uri given by the user to pull in the format\n /. You should have an API that is able to \n retrieve a container based on parsing this uri.\n file_name: the user's requested name for the file. 
It can \n optionally be None if the user wants a default.\n save: if True, you should save the container to the database\n using self.add()\n \n Returns\n =======\n finished: a single container path, or list of paths\n '''\n\n # Use Singularity to build the image, based on user preference\n if file_name is None:\n file_name = self._get_storage_name(names)\n\n # Determine if the user already has the image\n if os.path.exists(file_name) and force is False:\n bot.error('Image exists! Remove first, or use --force to overwrite')\n sys.exit(1)\n\n digest = names['version'] or names['tag']\n\n\n # Build from sandbox, prefix with sandbox\n sandbox = get_tmpdir(prefix=\"sregistry-sandbox\")\n\n # First effort, get image via Sregistry\n layers = self._download_layers(names['url'], digest)\n\n # This is the url where the manifests were obtained\n url = self._get_manifest_selfLink(names['url'], digest)\n\n # Add environment to the layers\n envtar = self._get_environment_tar()\n layers = [envtar] + layers\n\n # Create singularity image from an empty folder\n for layer in layers:\n bot.info('Exploding %s' %layer)\n result = extract_tar(layer, sandbox, handle_whiteout=True)\n if result['return_code'] != 0:\n bot.error(result['message'])\n sys.exit(1) \n\n sudo = kwargs.get('sudo', False)\n\n # Build from a sandbox (recipe) into the image_file (squashfs)\n image_file = Singularity.build(image=file_name,\n recipe=sandbox,\n sudo=sudo)\n\n # Fall back to using Singularity\n if image_file is None:\n bot.info('Downloading with native Singularity, please wait...')\n image = image.replace('docker://', uri)\n image_file = Singularity.pull(image, pull_folder=sandbox)\n\n # Save to local storage\n if save is True:\n\n # Did we get the manifests?\n manifests = {}\n if hasattr(self, 'manifests'):\n manifests = self.manifests\n\n container = self.add(image_path = image_file,\n image_uri = names['uri'],\n metadata = manifests,\n url = url)\n\n # When the container is created, this is the path 
to the image\n image_file = container.image\n\n if os.path.exists(image_file):\n bot.debug('Retrieved image file %s' %image_file)\n bot.custom(prefix=\"Success!\", message=image_file)\n\n # Clean up sandbox\n shutil.rmtree(sandbox)\n\n return image_file"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nupdating secrets will take a secrets credential file located at. sregistry or the environment variable SREGISTRY_CLIENT_SECRETS and update the current client secrets as well as the associated API base.", "response": "def _update_secrets(self):\n '''update secrets will take a secrets credential file\n either located at .sregistry or the environment variable\n SREGISTRY_CLIENT_SECRETS and update the current client \n secrets as well as the associated API base. This is where you\n should do any customization of the secrets file, or using\n it to update your client, if needed.\n '''\n # Get a setting for client myclient and some variable name VAR. \n # returns None if not set\n setting = self._get_setting('SREGISTRY_MYCLIENT_VAR')\n\n # Get (and if found in environment (1) settings (2) update the variable\n # It will still return None if not set\n setting = self._get_and_update_setting('SREGISTRY_MYCLIENT_VAR')\n\n # If you have a setting that is required and not found, you should exit.\n\n # Here is how to read all client secrets\n self.secrets = read_client_secrets()\n \n # If you don't want to use the shared settings file, you have your own.\n # Here is how to get if the user has a cache for you enabled, this\n # returns a path (enabled) or None (disabled) that you should honor\n # You can use this as a file path or folder and for both cases, you\n # need to create the file or folder\n if self._credential_cache is not None:\n bot.info(\"credential cache set to %s\" %self._credential_cache)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates a repr string for the given class.", "response": 
"def _make_repr(class_name, *args, **kwargs):\n \"\"\"\n Generate a repr string.\n\n Positional arguments should be the positional arguments used to\n construct the class. Keyword arguments should consist of tuples of\n the attribute value and default. If the value is the default, then\n it won't be rendered in the output.\n\n Here's an example::\n\n def __repr__(self):\n return make_repr('MyClass', 'foo', name=(self.name, None))\n\n The output of this would be something like ``MyClass('foo',\n name='Will')``.\n\n \"\"\"\n arguments = [repr(arg) for arg in args]\n arguments.extend(\n \"{}={!r}\".format(name, value)\n for name, (value, default) in sorted(kwargs.items())\n if value != default\n )\n return \"{}({})\".format(class_name, \", \".join(arguments))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ntranslate S3 errors to FSErrors.", "response": "def s3errors(path):\n \"\"\"Translate S3 errors to FSErrors.\"\"\"\n try:\n yield\n except ClientError as error:\n _error = error.response.get(\"Error\", {})\n error_code = _error.get(\"Code\", None)\n response_meta = error.response.get(\"ResponseMetadata\", {})\n http_status = response_meta.get(\"HTTPStatusCode\", 200)\n error_msg = _error.get(\"Message\", None)\n if error_code == \"NoSuchBucket\":\n raise errors.ResourceError(path, exc=error, msg=error_msg)\n if http_status == 404:\n raise errors.ResourceNotFound(path)\n elif http_status == 403:\n raise errors.PermissionDenied(path=path, msg=error_msg)\n else:\n raise errors.OperationFailed(path=path, exc=error)\n except SSLError as error:\n raise errors.OperationFailed(path, exc=error)\n except EndpointConnectionError as error:\n raise errors.RemoteConnectionError(path, exc=error, msg=\"{}\".format(error))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a S3File backed with a temporary file.", "response": "def factory(cls, filename, mode, on_close):\n \"\"\"Create a S3File backed with a 
temporary file.\"\"\"\n _temp_file = tempfile.TemporaryFile()\n proxy = cls(_temp_file, filename, mode, on_close=on_close)\n return proxy"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef gravatar_url(user_or_email, size=GRAVATAR_DEFAULT_SIZE):\n if hasattr(user_or_email, 'email'):\n email = user_or_email.email\n else:\n email = user_or_email\n\n try:\n return escape(get_gravatar_url(email=email, size=size))\n except:\n return ''", "response": "Builds a gravatar url from a user or email."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef gravatar(user_or_email, size=GRAVATAR_DEFAULT_SIZE, alt_text='', css_class='gravatar'):\n if hasattr(user_or_email, 'email'):\n email = user_or_email.email\n else:\n email = user_or_email\n\n try:\n url = escape(get_gravatar_url(email=email, size=size))\n except:\n return ''\n\n return mark_safe(\n '\"{alt}\"'.format(\n css_class=css_class, src=url, width=size, height=size, alt=alt_text\n )\n )", "response": "Builds an image tag from an email address."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_gravatar_url(email, size=GRAVATAR_DEFAULT_SIZE, default=GRAVATAR_DEFAULT_IMAGE,\n rating=GRAVATAR_DEFAULT_RATING, secure=GRAVATAR_DEFAULT_SECURE):\n \"\"\"\n Builds a url to a gravatar from an email address.\n\n :param email: The email to fetch the gravatar for\n :param size: The size (in pixels) of the gravatar to fetch\n :param default: What type of default image to use if the gravatar does not exist\n :param rating: Used to filter the allowed gravatar ratings\n :param secure: If True use https, otherwise plain http\n \"\"\"\n if secure:\n url_base = GRAVATAR_SECURE_URL\n else:\n url_base = GRAVATAR_URL\n\n # Calculate the email hash\n email_hash = calculate_gravatar_hash(email)\n\n # Build querystring\n query_string = urlencode({\n 
's': str(size),\n 'd': default,\n 'r': rating,\n })\n\n # Build url\n url = '{base}avatar/{hash}.jpg?{qs}'.format(base=url_base,\n hash=email_hash, qs=query_string)\n\n return url", "response": "Builds a url to a gravatar from an email address."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn True if the user has a gravatar False otherwise.", "response": "def has_gravatar(email):\n \"\"\"\n Returns True if the user has a gravatar, False if otherwise\n \"\"\"\n # Request a 404 response if the gravatar does not exist\n url = get_gravatar_url(email, default=GRAVATAR_DEFAULT_IMAGE_404)\n\n # Verify an OK response was received\n try:\n request = Request(url)\n request.get_method = lambda: 'HEAD'\n return 200 == urlopen(request).code\n except (HTTPError, URLError):\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_gravatar_profile_url(email, secure=GRAVATAR_DEFAULT_SECURE):\n if secure:\n url_base = GRAVATAR_SECURE_URL\n else:\n url_base = GRAVATAR_URL\n\n # Calculate the email hash\n email_hash = calculate_gravatar_hash(email)\n\n # Build url\n url = '{base}{hash}'.format(base=url_base, hash=email_hash)\n\n return url", "response": "Builds a url to a gravatar profile from an email address."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef graph_coloring_qubo(graph, k):\n\n K = nx.complete_graph(k)\n g1 = nx.cartesian_product(nx.create_empty_copy(graph), K)\n g2 = nx.cartesian_product(graph, nx.create_empty_copy(K))\n return nx.compose(g1, g2)", "response": "Returns a QUBO for k - coloring a graph A."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef chimera_blocks(M=16, N=16, L=4):\n for x in xrange(M):\n for y in xrange(N):\n for u in (0, 1):\n yield tuple((x, y, u, k) for k in xrange(L))", "response": "Generator for a chimera block 
quotient"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef chimera_block_quotient(G, blocks):\n from networkx import Graph\n from itertools import product\n\n BG = Graph()\n blockid = {}\n for i, b in enumerate(blocks):\n BG.add_node(i)\n if not b or not all(G.has_node(x) for x in b):\n continue\n for q in b:\n if q in blockid:\n raise(RuntimeError, \"two blocks overlap\")\n blockid[q] = i\n\n for q, u in blockid.items():\n ublock = blocks[u]\n for p in G[q]:\n if p not in blockid:\n continue\n v = blockid[p]\n if BG.has_edge(u, v) or u == v:\n continue\n vblock = blocks[v]\n\n if ublock[0][2] == vblock[0][2]:\n block_edges = zip(ublock, vblock)\n else:\n block_edges = product(ublock, vblock)\n\n if all(G.has_edge(x, y) for x, y in block_edges):\n BG.add_edge(u, v)\n\n return BG", "response": "This function extracts the blocks from a networkx graph and returns a new block - quotient graph according to the acceptability\n functions block_good and eblock_good functions."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nproduce an embedding in target_graph suitable to check if source_graph is 4-colorable. More generally, if target_graph is a (M,N,L) Chimera subgraph, the test is for L-colorability. This depends heavily upon the Chimera structure Inputs: source_graph, target_graph: networkx graphs M,N,L: integers defining the base chimera topology Outputs: emb: a dictionary mapping (v,i)", "response": "def embed_with_quotient(source_graph, target_graph, M=16, N=16, L=4, **args):\n \"\"\"\n Produce an embedding in target_graph suitable to\n check if source_graph is 4-colorable. More generally,\n if target_graph is a (M,N,L) Chimera subgraph, the\n test is for L-colorability. 
This depends heavily upon\n the Chimera structure\n\n Inputs:\n source_graph, target_graph: networkx graphs\n M,N,L: integers defining the base chimera topology\n\n Outputs:\n emb: a dictionary mapping (v,i)\n\n \"\"\"\n from random import sample\n blocks = list(chimera_blocks(M, N, L))\n\n BG = chimera_block_quotient(target_graph, blocks)\n\n ublocks = {block: (block[0][2], i)\n for (i, block) in enumerate(blocks) if BG.has_node(i)}\n source_e = list(source_graph.edges())\n source_n = {x for e in source_e for x in e}\n fabric_e = list(BG.edges())\n\n # Construct the hints:\n # Goal: each source node must be connected to one horizontal block and one\n # vertical block (by Chimera structure, each source node will\n # contain a full (horizontal and vertical) unit cell\n # Construction:\n # 0. for each source node `z`, construct two dummy nodes (z,0) and (z,1)\n # in both the source and target graphs used by the embedder\n # 1. fix the embedding `(z,u) -> (z,u)` for all dummy nodes\n # 2. for each target block `i` with orientation `u`, and for each source\n # node `z`, add the target edge `((z,u), i)`\n # 3. for each source node `z` and each orientation `u`, add the source\n # edge `((z,u), z)`\n\n fix_chains = {}\n for z in source_n:\n for u in (0, 1):\n source_e.append(((z, u), z))\n fix_chains[z, u] = [(z, u)]\n for u, i in ublocks.values():\n fabric_e.append(((z, u), i))\n\n # first, grab a few embeddings in the quotient graph. 
this is super fast\n embs = filter(None, [find_embedding(source_e, fabric_e,\n fixed_chains=fix_chains,\n chainlength_patience=0,\n **args) for _ in range(10)])\n\n # select the best-looking candidate so far\n emb = min(embs, key=lambda e: sorted((len(c)\n for c in e.values()), reverse=True))\n\n # work down the chainlengths in our embeding\n for _ in range(10):\n emb = find_embedding(source_e, fabric_e,\n fixed_chains=fix_chains,\n initial_chains=emb,\n chainlength_patience=3,\n skip_initialization=True,\n **args)\n\n # next, translate the block-embedding to a qubit-embedding\n newemb = {}\n for v in source_n:\n for k in range(L):\n newemb[v, k] = [blocks[i][k] for i in emb[v]]\n\n return newemb"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a set of resonance forms as SMILES strings given a SMILES string.", "response": "def enumerate_resonance_smiles(smiles):\n \"\"\"Return a set of resonance forms as SMILES strings, given a SMILES string.\n\n :param smiles: A SMILES string.\n :returns: A set containing SMILES strings for every possible resonance form.\n :rtype: set of strings.\n \"\"\"\n mol = Chem.MolFromSmiles(smiles)\n #Chem.SanitizeMol(mol) # MolFromSmiles does Sanitize by default\n mesomers = ResonanceEnumerator().enumerate(mol)\n return {Chem.MolToSmiles(m, isomericSmiles=True) for m in mesomers}"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nenumerates all possible resonance forms and return them as a list.", "response": "def enumerate(self, mol):\n \"\"\"Enumerate all possible resonance forms and return them as a list.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :return: A list of all possible resonance forms of the molecule.\n :rtype: list of rdkit.Chem.rdchem.Mol\n \"\"\"\n flags = 0\n if self.kekule_all:\n flags = flags | Chem.KEKULE_ALL\n if self.allow_incomplete_octets:\n flags = flags | Chem.ALLOW_INCOMPLETE_OCTETS\n if 
self.allow_charge_separation:\n flags = flags | Chem.ALLOW_CHARGE_SEPARATION\n if self.unconstrained_anions:\n flags = flags | Chem.UNCONSTRAINED_ANIONS\n if self.unconstrained_cations:\n flags = flags | Chem.UNCONSTRAINED_CATIONS\n results = []\n for result in Chem.ResonanceMolSupplier(mol, flags=flags, maxStructs=self.max_structures):\n # This seems necessary? ResonanceMolSupplier only does a partial sanitization\n Chem.SanitizeMol(result)\n results.append(result)\n return results"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef normalize(self, mol):\n log.debug('Running Normalizer')\n # Normalize each fragment separately to get around quirky RunReactants behaviour\n fragments = []\n for fragment in Chem.GetMolFrags(mol, asMols=True):\n fragments.append(self._normalize_fragment(fragment))\n # Join normalized fragments into a single molecule again\n outmol = fragments.pop()\n for fragment in fragments:\n outmol = Chem.CombineMols(outmol, fragment)\n Chem.SanitizeMol(outmol)\n return outmol", "response": "Normalizes a molecule by applying a series of normalization transforms to correct functional groups and recombine charges."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef canonicalize(self, mol):\n # TODO: Overload the mol parameter to pass a list of pre-enumerated tautomers\n tautomers = self._enumerate_tautomers(mol)\n if len(tautomers) == 1:\n return tautomers[0]\n # Calculate score for each tautomer\n highest = None\n for t in tautomers:\n smiles = Chem.MolToSmiles(t, isomericSmiles=True)\n log.debug('Tautomer: %s', smiles)\n score = 0\n # Add aromatic ring scores\n ssr = Chem.GetSymmSSSR(t)\n for ring in ssr:\n btypes = {t.GetBondBetweenAtoms(*pair).GetBondType() for pair in pairwise(ring)}\n elements = {t.GetAtomWithIdx(idx).GetAtomicNum() for idx in ring}\n if btypes == {BondType.AROMATIC}:\n log.debug('Score +100 (aromatic ring)')\n 
score += 100\n if elements == {6}:\n log.debug('Score +150 (carbocyclic aromatic ring)')\n score += 150\n # Add SMARTS scores\n for tscore in self.scores:\n for match in t.GetSubstructMatches(tscore.smarts):\n log.debug('Score %+d (%s)', tscore.score, tscore.name)\n score += tscore.score\n # Add (P,S,Se,Te)-H scores\n for atom in t.GetAtoms():\n if atom.GetAtomicNum() in {15, 16, 34, 52}:\n hs = atom.GetTotalNumHs()\n if hs:\n log.debug('Score %+d (%s-H bonds)', -hs, atom.GetSymbol())\n score -= hs\n # Set as highest if score higher or if score equal and smiles comes first alphabetically\n if not highest or highest['score'] < score or (highest['score'] == score and smiles < highest['smiles']):\n log.debug('New highest tautomer: %s (%s)', smiles, score)\n highest = {'smiles': smiles, 'tautomer': t, 'score': score}\n return highest['tautomer']", "response": "Return a canonical tautomer by enumerating and scoring all possible tautomers."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef enumerate(self, mol):\n smiles = Chem.MolToSmiles(mol, isomericSmiles=True)\n tautomers = {smiles: copy.deepcopy(mol)}\n # Create a kekulized form of the molecule to match the SMARTS against\n kekulized = copy.deepcopy(mol)\n Chem.Kekulize(kekulized)\n kekulized = {smiles: kekulized}\n done = set()\n while len(tautomers) < self.max_tautomers:\n for tsmiles in sorted(tautomers):\n if tsmiles in done:\n continue\n for transform in self.transforms:\n for match in kekulized[tsmiles].GetSubstructMatches(transform.tautomer):\n # log.debug('Matched rule: %s to %s for %s', transform.name, tsmiles, match)\n # Create a copy of in the input molecule so we can modify it\n # Use kekule form so bonds are explicitly single/double instead of aromatic\n product = copy.deepcopy(kekulized[tsmiles])\n # Remove a hydrogen from the first matched atom and add one to the last\n first = product.GetAtomWithIdx(match[0])\n last = 
product.GetAtomWithIdx(match[-1])\n # log.debug('%s: H%s -> H%s' % (first.GetSymbol(), first.GetTotalNumHs(), first.GetTotalNumHs() - 1))\n # log.debug('%s: H%s -> H%s' % (last.GetSymbol(), last.GetTotalNumHs(), last.GetTotalNumHs() + 1))\n first.SetNumExplicitHs(max(0, first.GetTotalNumHs() - 1))\n last.SetNumExplicitHs(last.GetTotalNumHs() + 1)\n # Remove any implicit hydrogens from the first and last atoms now we have set the count explicitly\n first.SetNoImplicit(True)\n last.SetNoImplicit(True)\n # Adjust bond orders\n for bi, pair in enumerate(pairwise(match)):\n if transform.bonds:\n # Set the resulting bond types as manually specified in the transform\n # log.debug('%s-%s: %s -> %s' % (product.GetAtomWithIdx(pair[0]).GetSymbol(), product.GetAtomWithIdx(pair[1]).GetSymbol(), product.GetBondBetweenAtoms(*pair).GetBondType(), transform.bonds[bi]))\n product.GetBondBetweenAtoms(*pair).SetBondType(transform.bonds[bi])\n else:\n # If no manually specified bond types, just swap single and double bonds\n current_bond_type = product.GetBondBetweenAtoms(*pair).GetBondType()\n product.GetBondBetweenAtoms(*pair).SetBondType(BondType.DOUBLE if current_bond_type == BondType.SINGLE else BondType.SINGLE)\n # log.debug('%s-%s: %s -> %s' % (product.GetAtomWithIdx(pair[0]).GetSymbol(), product.GetAtomWithIdx(pair[1]).GetSymbol(), current_bond_type, product.GetBondBetweenAtoms(*pair).GetBondType()))\n # Adjust charges\n if transform.charges:\n for ci, idx in enumerate(match):\n atom = product.GetAtomWithIdx(idx)\n # log.debug('%s: C%s -> C%s' % (atom.GetSymbol(), atom.GetFormalCharge(), atom.GetFormalCharge() + transform.charges[ci]))\n atom.SetFormalCharge(atom.GetFormalCharge() + transform.charges[ci])\n try:\n Chem.SanitizeMol(product)\n smiles = Chem.MolToSmiles(product, isomericSmiles=True)\n log.debug('Applied rule: %s to %s', transform.name, tsmiles)\n if smiles not in tautomers:\n log.debug('New tautomer produced: %s' % smiles)\n kekulized_product = 
copy.deepcopy(product)\n Chem.Kekulize(kekulized_product)\n tautomers[smiles] = product\n kekulized[smiles] = kekulized_product\n else:\n log.debug('Previous tautomer produced again: %s' % smiles)\n except ValueError:\n log.debug('ValueError Applying rule: %s', transform.name)\n done.add(tsmiles)\n if len(tautomers) == len(done):\n break\n else:\n log.warning('Tautomer enumeration stopped at maximum %s', self.max_tautomers)\n # Clean up stereochemistry\n for tautomer in tautomers.values():\n Chem.AssignStereochemistry(tautomer, force=True, cleanIt=True)\n for bond in tautomer.GetBonds():\n if bond.GetBondType() == BondType.DOUBLE and bond.GetStereo() > BondStereo.STEREOANY:\n begin = bond.GetBeginAtomIdx()\n end = bond.GetEndAtomIdx()\n for othertautomer in tautomers.values():\n if not othertautomer.GetBondBetweenAtoms(begin, end).GetBondType() == BondType.DOUBLE:\n neighbours = tautomer.GetAtomWithIdx(begin).GetBonds() + tautomer.GetAtomWithIdx(end).GetBonds()\n for otherbond in neighbours:\n if otherbond.GetBondDir() in {BondDir.ENDUPRIGHT, BondDir.ENDDOWNRIGHT}:\n otherbond.SetBondDir(BondDir.NONE)\n Chem.AssignStereochemistry(tautomer, force=True, cleanIt=True)\n log.debug('Removed stereochemistry from unfixed double bond')\n break\n return list(tautomers.values())", "response": "Enumerate all possible tautomers and return them as a list."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate_smiles(smiles):\n # Skip sanitize as standardize does this anyway\n mol = Chem.MolFromSmiles(smiles)\n logs = Validator().validate(mol)\n return logs", "response": "Validate a single SMILES string using default validations."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ndisconnect all metal atoms from any metal and returns the original molecule with metals disconnected.", "response": "def disconnect(self, mol):\n \"\"\"Break covalent bonds between metals and organic 
atoms under certain conditions.\n\n The algorithm works as follows:\n\n - Disconnect N, O, F from any metal.\n - Disconnect other non-metals from transition metals + Al (but not Hg, Ga, Ge, In, Sn, As, Tl, Pb, Bi, Po).\n - For every bond broken, adjust the charges of the begin and end atoms accordingly.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :return: The molecule with metals disconnected.\n :rtype: rdkit.Chem.rdchem.Mol\n \"\"\"\n log.debug('Running MetalDisconnector')\n # Remove bonds that match SMARTS\n for smarts in [self._metal_nof, self._metal_non]:\n pairs = mol.GetSubstructMatches(smarts)\n rwmol = Chem.RWMol(mol)\n orders = []\n for i, j in pairs:\n # TODO: Could get the valence contributions of the bond instead of GetBondTypeAsDouble?\n orders.append(int(mol.GetBondBetweenAtoms(i, j).GetBondTypeAsDouble()))\n rwmol.RemoveBond(i, j)\n # Adjust neighbouring charges accordingly\n mol = rwmol.GetMol()\n for n, (i, j) in enumerate(pairs):\n chg = orders[n]\n atom1 = mol.GetAtomWithIdx(i)\n atom1.SetFormalCharge(atom1.GetFormalCharge() + chg)\n atom2 = mol.GetAtomWithIdx(j)\n atom2.SetFormalCharge(atom2.GetFormalCharge() - chg)\n log.info('Removed covalent bond between %s and %s', atom1.GetSymbol(), atom2.GetSymbol())\n Chem.SanitizeMol(mol)\n return mol"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef standardize_smiles(smiles):\n # Skip sanitize as standardize does this anyway\n mol = Chem.MolFromSmiles(smiles, sanitize=False)\n mol = Standardizer().standardize(mol)\n return Chem.MolToSmiles(mol, isomericSmiles=True)", "response": "Return a standardized canonical SMILES string given a SMILES string."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a set of tautomers as SMILES strings given a SMILES string.", "response": "def enumerate_tautomers_smiles(smiles):\n \"\"\"Return a set of tautomers as SMILES strings, 
given a SMILES string.\n\n :param smiles: A SMILES string.\n :returns: A set containing SMILES strings for every possible tautomer.\n :rtype: set of strings.\n \"\"\"\n # Skip sanitize as standardize does this anyway\n mol = Chem.MolFromSmiles(smiles, sanitize=False)\n mol = Standardizer().standardize(mol)\n tautomers = TautomerEnumerator().enumerate(mol)\n return {Chem.MolToSmiles(m, isomericSmiles=True) for m in tautomers}"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef canonicalize_tautomer_smiles(smiles):\n # Skip sanitize as standardize does this anyway\n mol = Chem.MolFromSmiles(smiles, sanitize=False)\n mol = Standardizer().standardize(mol)\n tautomer = TautomerCanonicalizer().canonicalize(mol)\n return Chem.MolToSmiles(tautomer, isomericSmiles=True)", "response": "Return a standardized tautomer SMILES string given a SMILES string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef standardize(self, mol):\n mol = copy.deepcopy(mol)\n Chem.SanitizeMol(mol)\n mol = Chem.RemoveHs(mol)\n mol = self.disconnect_metals(mol)\n mol = self.normalize(mol)\n mol = self.reionize(mol)\n Chem.AssignStereochemistry(mol, force=True, cleanIt=True)\n # TODO: Check this removes symmetric stereocenters\n return mol", "response": "Return a standardized version of the given molecule."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef tautomer_parent(self, mol, skip_standardize=False):\n if not skip_standardize:\n mol = self.standardize(mol)\n tautomer = self.canonicalize_tautomer(mol)\n tautomer = self.standardize(tautomer)\n return tautomer", "response": "Return the tautomer parent of a given molecule."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the fragment parent of a given molecule.", "response": "def fragment_parent(self, mol, skip_standardize=False):\n 
\"\"\"Return the fragment parent of a given molecule.\n\n The fragment parent is the largest organic covalent unit in the molecule.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The fragment parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol\n \"\"\"\n if not skip_standardize:\n mol = self.standardize(mol)\n # TODO: Consider applying FragmentRemover first to remove salts, solvents?\n fragment = self.largest_fragment(mol)\n return fragment"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef stereo_parent(self, mol, skip_standardize=False):\n if not skip_standardize:\n mol = self.standardize(mol)\n else:\n mol = copy.deepcopy(mol)\n Chem.RemoveStereochemistry(mol)\n return mol", "response": "Returns the stereo parent of a given molecule."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the isotope parent of a given molecule.", "response": "def isotope_parent(self, mol, skip_standardize=False):\n \"\"\"Return the isotope parent of a given molecule.\n\n The isotope parent has all atoms replaced with the most abundant isotope for that element.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The isotope parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol\n \"\"\"\n if not skip_standardize:\n mol = self.standardize(mol)\n else:\n mol = copy.deepcopy(mol)\n # Replace isotopes with common weight\n for atom in mol.GetAtoms():\n atom.SetIsotope(0)\n return mol"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the charge parent of a given molecule.", "response": "def charge_parent(self, mol, skip_standardize=False):\n \"\"\"Return the charge parent of a given molecule.\n\n The charge parent is the 
uncharged version of the fragment parent.\n\n :param mol: The input molecule.\n :type mol: rdkit.Chem.rdchem.Mol\n :param bool skip_standardize: Set to True if mol has already been standardized.\n :returns: The charge parent molecule.\n :rtype: rdkit.Chem.rdchem.Mol\n \"\"\"\n # TODO: All ionized acids and bases should be neutralised.\n if not skip_standardize:\n mol = self.standardize(mol)\n fragment = self.fragment_parent(mol, skip_standardize=True)\n if fragment:\n uncharged = self.uncharge(fragment)\n # During final standardization, the Reionizer ensures any remaining charges are in the right places\n uncharged = self.standardize(uncharged)\n return uncharged"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef super_parent(self, mol, skip_standardize=False):\n if not skip_standardize:\n mol = self.standardize(mol)\n # We don't need to get fragment parent, because the charge parent is the largest fragment\n mol = self.charge_parent(mol, skip_standardize=True)\n mol = self.isotope_parent(mol, skip_standardize=True)\n mol = self.stereo_parent(mol, skip_standardize=True)\n mol = self.tautomer_parent(mol, skip_standardize=True)\n mol = self.standardize(mol)\n return mol", "response": "Return the super parent of a given molecule."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a canonicalized version of the TautomerCanonicalizer.", "response": "def canonicalize_tautomer(self):\n \"\"\"\n :returns: A callable :class:`~molvs.tautomer.TautomerCanonicalizer` instance.\n \"\"\"\n return TautomerCanonicalizer(transforms=self.tautomer_transforms, scores=self.tautomer_scores,\n max_tautomers=self.max_tautomers)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef reionize(self, mol):\n log.debug('Running Reionizer')\n\n start_charge = Chem.GetFormalCharge(mol)\n\n # Apply forced charge corrections\n 
for cc in self.charge_corrections:\n for match in mol.GetSubstructMatches(cc.smarts):\n atom = mol.GetAtomWithIdx(match[0])\n log.info('Applying charge correction %s (%s %+d)', cc.name, atom.GetSymbol(), cc.charge)\n atom.SetFormalCharge(cc.charge)\n\n current_charge = Chem.GetFormalCharge(mol)\n charge_diff = Chem.GetFormalCharge(mol) - start_charge\n # If molecule is now neutral, assume everything is now fixed\n # But otherwise, if charge has become more positive, look for additional protonated acid groups to ionize\n if not current_charge == 0:\n while charge_diff > 0:\n ppos, poccur = self._strongest_protonated(mol)\n if ppos is None:\n break\n log.info('Ionizing %s to balance previous charge corrections', self.acid_base_pairs[ppos].name)\n patom = mol.GetAtomWithIdx(poccur[-1])\n patom.SetFormalCharge(patom.GetFormalCharge() - 1)\n if patom.GetNumExplicitHs() > 0:\n patom.SetNumExplicitHs(patom.GetNumExplicitHs() - 1)\n # else:\n patom.UpdatePropertyCache()\n charge_diff -= 1\n\n already_moved = set()\n while True:\n ppos, poccur = self._strongest_protonated(mol)\n ipos, ioccur = self._weakest_ionized(mol)\n if ioccur and poccur and ppos < ipos:\n if poccur[-1] == ioccur[-1]:\n # Bad! 
H wouldn't be moved, resulting in infinite loop.\n log.warning('Aborted reionization due to unexpected situation')\n break\n\n key = tuple(sorted([poccur[-1], ioccur[-1]]))\n if key in already_moved:\n log.warning('Aborting reionization to avoid infinite loop due to it being ambiguous where to put a Hydrogen')\n break\n already_moved.add(key)\n\n log.info('Moved proton from %s to %s', self.acid_base_pairs[ppos].name, self.acid_base_pairs[ipos].name)\n\n # Remove hydrogen from strongest protonated\n patom = mol.GetAtomWithIdx(poccur[-1])\n patom.SetFormalCharge(patom.GetFormalCharge() - 1)\n # If no implicit Hs to autoremove, and at least 1 explicit H to remove, reduce explicit count by 1\n if patom.GetNumImplicitHs() == 0 and patom.GetNumExplicitHs() > 0:\n patom.SetNumExplicitHs(patom.GetNumExplicitHs() - 1)\n # TODO: Remove any chiral label on patom?\n patom.UpdatePropertyCache()\n\n # Add hydrogen to weakest ionized\n iatom = mol.GetAtomWithIdx(ioccur[-1])\n iatom.SetFormalCharge(iatom.GetFormalCharge() + 1)\n # Increase explicit H count if no implicit, or aromatic N or P, or non default valence state\n if (iatom.GetNoImplicit() or\n ((patom.GetAtomicNum() == 7 or patom.GetAtomicNum() == 15) and patom.GetIsAromatic()) or\n iatom.GetTotalValence() not in list(Chem.GetPeriodicTable().GetValenceList(iatom.GetAtomicNum()))):\n iatom.SetNumExplicitHs(iatom.GetNumExplicitHs() + 1)\n iatom.UpdatePropertyCache()\n else:\n break\n\n # TODO: Canonical ionization position if multiple equivalent positions?\n\n Chem.SanitizeMol(mol)\n return mol", "response": "Enforce charges on certain atoms then perform competitive reionization."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nneutralizes the molecule by adding hydrogen to positive charges and removing hydrogen from negative charges.", "response": "def uncharge(self, mol):\n \"\"\"Neutralize molecule by adding/removing hydrogens.\n\n :param mol: The molecule to uncharge.\n :type 
mol: rdkit.Chem.rdchem.Mol\n :return: The uncharged molecule.\n :rtype: rdkit.Chem.rdchem.Mol\n \"\"\"\n log.debug('Running Uncharger')\n mol = copy.deepcopy(mol)\n\n # Neutralize positive charges\n pos_remainder = 0\n neg_count = 0\n for atom in mol.GetAtoms():\n # Remove hydrogen from positive atoms and reduce formal charge until neutral or no more hydrogens\n while atom.GetFormalCharge() > 0 and atom.GetNumExplicitHs() > 0:\n atom.SetNumExplicitHs(atom.GetNumExplicitHs() - 1)\n atom.SetFormalCharge(atom.GetFormalCharge() - 1)\n log.info('Removed positive charge')\n chg = atom.GetFormalCharge()\n if chg > 0:\n # Record number of non-neutralizable positive charges\n pos_remainder += chg\n elif chg < 0:\n # Record total number of negative charges\n neg_count += -chg\n\n # Choose negative charges to leave in order to balance non-neutralizable positive charges\n neg_skip = self._get_neg_skip(mol, pos_remainder)\n\n # Neutralize remaining negative charges\n for atom in mol.GetAtoms():\n if atom.GetIdx() in neg_skip:\n continue\n # Make sure to stop when neg_count <= pos_remainder, as it is possible that neg_skip is not large enough\n while atom.GetFormalCharge() < 0 and neg_count > pos_remainder:\n atom.SetNumExplicitHs(atom.GetNumExplicitHs() + 1)\n atom.SetFormalCharge(atom.GetFormalCharge() + 1)\n neg_count -= 1\n log.info('Removed negative charge')\n return mol"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets negatively charged atoms to skip up to pos_count.", "response": "def _get_neg_skip(self, mol, pos_count):\n \"\"\"Get negatively charged atoms to skip (up to pos_count).\"\"\"\n neg_skip = set()\n if pos_count:\n # Get negative oxygens in charge-separated nitro groups TODO: Any other special cases to skip?\n for occurrence in mol.GetSubstructMatches(self.nitro):\n neg_skip.add(occurrence[-1])\n if len(neg_skip) >= pos_count:\n return neg_skip\n # Get strongest ionized acids\n for 
position, pair in enumerate(self.acid_base_pairs):\n for occurrence in mol.GetSubstructMatches(pair.base):\n neg_skip.add(occurrence[-1])\n if len(neg_skip) >= pos_count:\n return neg_skip\n return neg_skip"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nremove the fragments from the molecule.", "response": "def remove(self, mol):\n \"\"\"Return the molecule with specified fragments removed.\n\n :param mol: The molecule to remove fragments from.\n :type mol: rdkit.Chem.rdchem.Mol\n :return: The molecule with fragments removed.\n :rtype: rdkit.Chem.rdchem.Mol\n \"\"\"\n log.debug('Running FragmentRemover')\n # Iterate FragmentPatterns and remove matching fragments\n for frag in self.fragments:\n # If nothing is left or leave_last and only one fragment, end here\n if mol.GetNumAtoms() == 0 or (self.leave_last and len(Chem.GetMolFrags(mol)) <= 1):\n break\n # Apply removal for this FragmentPattern\n removed = Chem.DeleteSubstructs(mol, frag.smarts, onlyFrags=True)\n if not mol.GetNumAtoms() == removed.GetNumAtoms():\n log.info('Removed fragment: %s', frag.name)\n if self.leave_last and removed.GetNumAtoms() == 0:\n # All the remaining fragments match this pattern - leave them all\n break\n mol = removed\n return mol"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the largest covalent unit from the given molecule.", "response": "def choose(self, mol):\n \"\"\"Return the largest covalent unit.\n\n The largest fragment is determined by number of atoms (including hydrogens). 
Ties are broken by taking the\n fragment with the higher molecular weight, and then by taking the first alphabetically by SMILES if needed.\n\n :param mol: The molecule to choose the largest fragment from.\n :type mol: rdkit.Chem.rdchem.Mol\n :return: The largest fragment.\n :rtype: rdkit.Chem.rdchem.Mol\n \"\"\"\n log.debug('Running LargestFragmentChooser')\n # TODO: Alternatively allow a list of fragments to be passed as the mol parameter\n fragments = Chem.GetMolFrags(mol, asMols=True)\n largest = None\n for f in fragments:\n smiles = Chem.MolToSmiles(f, isomericSmiles=True)\n log.debug('Fragment: %s', smiles)\n organic = is_organic(f)\n if self.prefer_organic:\n # Skip this fragment if not organic and we already have an organic fragment as the largest so far\n if largest and largest['organic'] and not organic:\n continue\n # Reset largest if it wasn't organic and this fragment is organic\n if largest and organic and not largest['organic']:\n largest = None\n # Count atoms\n atoms = 0\n for a in f.GetAtoms():\n atoms += 1 + a.GetTotalNumHs()\n # Skip this fragment if fewer atoms than the largest\n if largest and atoms < largest['atoms']:\n continue\n # Skip this fragment if equal number of atoms but weight is lower\n weight = rdMolDescriptors.CalcExactMolWt(f)\n if largest and atoms == largest['atoms'] and weight < largest['weight']:\n continue\n # Skip this fragment if equal atoms and equal weight but smiles comes last alphabetically\n if largest and atoms == largest['atoms'] and weight == largest['weight'] and smiles > largest['smiles']:\n continue\n # Otherwise this is the largest so far\n log.debug('New largest fragment: %s (%s)', smiles, atoms)\n largest = {'smiles': smiles, 'fragment': f, 'atoms': atoms, 'weight': weight, 'organic': organic}\n return largest['fragment']"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nintegrates an IVP problem of the van der Pol oscillator.", "response": "def integrate_ivp(u0=1.0, 
v0=0.0, mu=1.0, tend=10.0, dt0=1e-8, nt=0,\n nsteps=600, t0=0.0, atol=1e-8, rtol=1e-8, plot=False,\n savefig='None', method='bdf', dpi=100, verbose=False):\n \"\"\"\n Example program integrating an IVP problem of van der Pol oscillator\n \"\"\"\n f, j = get_f_and_j(mu)\n if nt > 1:\n tout = np.linspace(t0, tend, nt)\n yout, nfo = integrate_predefined(\n f, j, [u0, v0], tout, dt0, atol, rtol, nsteps=nsteps,\n check_indexing=False, method=method)\n else:\n tout, yout, nfo = integrate_adaptive(\n f, j, [u0, v0], t0, tend, dt0, atol, rtol, nsteps=nsteps,\n check_indexing=False, method=method) # dfdt[:] also for len == 1\n if verbose:\n print(nfo)\n if plot:\n import matplotlib.pyplot as plt\n plt.plot(tout, yout[:, 1], 'g--')\n plt.plot(tout, yout[:, 0], 'k-', linewidth=2)\n if savefig == 'None':\n plt.show()\n else:\n plt.savefig(savefig, dpi=dpi)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nintegrates an adaptive system of ordinary differential equations.", "response": "def integrate_adaptive(rhs, jac, y0, x0, xend, atol, rtol, dx0=.0,\n dx_min=.0, dx_max=.0, nsteps=500, method=None, nderiv=0,\n roots=None, nroots=0, return_on_root=False,\n check_callable=False, check_indexing=False,\n **kwargs):\n \"\"\" Integrates a system of ordinary differential equations.\n\n Solves the initial value problem (IVP) defined by the user supplied\n arguments. The solver chooses at what values of the independent variable\n results should be reported.\n\n Parameters\n ----------\n rhs : callable\n Function with signature f(t, y, fout) which modifies fout *inplace*.\n jac : callable\n Function with signature either jac(t, y, jmat_out, dfdx_out) for\n dense/banded jacobians, or jac(t, y, data, colptrs, rowvals) for\n sparse (CSC) jacobians. 
``jac`` should modify ``jmat_out``, ``dfdx_out``\n (dense, banded) or (``data``, ``colptrs``, ``rowvals``) *inplace*.\n (see also ``lband``, ``uband``, ``nnz``)\n y0 : array_like\n Initial values of the dependent variables.\n x0 : float\n Initial value of the independent variable.\n xend : float\n Stopping value for the independent variable.\n dx0 : float\n Initial step-size.\n atol : float\n Absolute tolerance.\n rtol : float\n Relative tolerance.\n dx_min : float\n Minimum step (default: 0.0).\n dx_max : float\n Maximum step (default: 0.0).\n nsteps : int\n Maximum number of steps (default: 500).\n method : str\n One of: 'adams' or 'bdf' (default: 'bdf')\n nderiv : int\n Number of derivatives (default: 0).\n roots : callback\n With signature ``roots(x, yarr[:ny], out[:nroots]) -> None``.\n nroots : int\n Number of root functions in roots.\n return_on_root : bool\n Exit early (on first found root).\n check_callable : bool\n Perform signature sanity checks on ``rhs`` and ``jac``.\n check_indexing : bool\n Perform item setting sanity checks on ``rhs`` and ``jac``.\n \*\*kwargs:\n 'lband' : int\n Number of lower bands.\n Indexing: ``banded[row_i - col_i + uband, col_i]``.\n 'uband' : int\n Number of upper bands.\n Indexing: ``banded[row_i - col_i + uband, col_i]``.\n 'iter_type' : str (default: 'default')\n One of: 'default', 'functional', 'newton'\n 'linear_solver': str (default: 'default')\n One of: 'default', 'dense', 'banded', 'gmres',\n 'gmres_classic', 'bicgstab', 'tfqmr'\n 'return_on_error' : bool\n Returns on error without raising an exception (with ``'success'==False``).\n 'autorestart' : int\n Useful for autonomous systems where conditions change during integration.\n Will restart the integration with ``x==0``. 
Maximum number of steps is then\n given by ``autorestart * nsteps``.\n 'record_rhs_xvals' : bool\n When True: will return x values for rhs calls in ``info['rhs_xvals']``.\n 'record_jac_xvals' : bool\n When True will return x values for jac calls in ``info['jac_xvals']``.\n 'record_order' : bool\n When True will return used time stepper order in ``info['orders']``.\n 'record_fpe' : bool\n When True will return observed floating point errors in ``info['fpes']``. (see ``fpes``)\n 'record_steps' : bool\n When True will return stepsizes taken in ``info['steps']``.\n 'dx0cb' : callable\n Callback for calculating dx0 (make sure to pass ``dx0==0.0`` to enable).\n Signature: ``f(x, y[:]) -> float``.\n 'dx_max_cb' : callable\n Callback for calculating dx_max.\n Signature: ``f(x, y[:]) -> float``.\n 'autonomous_exprs' : bool\n Whether expressions contain the independent variable. If not, autorestart\n is allowed to shift the independent variable to zero at restart.\n 'nnz' : int\n Maximum number of nonzero entries in the sparse (CSC) jacobian (default: -1).\n Must set ``nnz >= 0`` and ``linear_solver`` to 'klu' to enable use of sparse\n ``jac`` signature.\n 'jtimes' : callable\n Function with signature f(v, Jv, t, y, fy) to calculate the product of the\n Jacobian evaluated at t, y with a vector v. 
Should modify Jv *inplace*.\n For use with linear solvers 'gmres', 'gmres_classic', 'bicgstab', 'tfqmr'.\n 'ew_ele' : bool\n Whether to return error_weights, estimated_local_errors in info dict.\n 'constraints': array\n Per component constraints 0.0: no constraint, 1.0: >=0, -1.0: <=0, 2.0: >0.0, -2.0: <0.0.\n\n Returns\n -------\n (xout, yout, info):\n xout: 1-dimensional array of values for the independent variable\n yout: 2-dimensional array of the dependent variables (axis 1) for\n values corresponding to xout (axis 0).\n info: Dictionary with information about the integration.\n\n \"\"\"\n # Sanity checks to reduce risk of having a segfault:\n lband, uband = kwargs.get('lband', None), kwargs.get('uband', None)\n nnz = kwargs.get('nnz', None)\n _check_jac_type(lband=lband, uband=uband, nnz=nnz)\n\n if check_callable:\n _check_callable(rhs, jac, x0, y0, lband, uband, nnz)\n\n if check_indexing:\n _check_indexing(rhs, jac, x0, y0, lband, uband, nnz)\n\n return adaptive(rhs, jac, np.ascontiguousarray(y0, dtype=np.float64), x0, xend,\n atol, rtol, method or ('adams' if jac is None else 'bdf'),\n nsteps, dx0, dx_min, dx_max, nderiv=nderiv, roots=roots, nroots=nroots,\n return_on_root=return_on_root, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef integrate_predefined(rhs, jac, y0, xout, atol, rtol, jac_type=\"dense\",\n dx0=.0, dx_min=.0, dx_max=.0, nsteps=500, method=None,\n nderiv=0, roots=None, nroots=0, check_callable=False,\n check_indexing=False, **kwargs):\n \"\"\" Integrates a system of ordinary differential equations.\n\n Solves the initial value problem (IVP) defined by the user supplied\n arguments. 
The user chooses at what values of the independent variable\n results should be reported.\n\n Parameters\n ----------\n rhs : callable\n Function with signature f(t, y, fout) which modifies fout *inplace*.\n jac : callable\n Function with signature either jac(t, y, jmat_out, dfdx_out) for\n dense/banded jacobians, or jac(t, y, data, colptrs, rowvals) for\n sparse (CSC) jacobians. ``jac`` should modify ``jmat_out``, ``dfdx_out``\n (dense, banded) or (``data``, ``colptrs``, ``rowvals``) *inplace*.\n (see also ``lband``, ``uband``, ``nnz``)\n y0 : array_like\n Initial values of the dependent variables.\n xout : array_like\n Values of the independent variable.\n dx0 : float\n Initial step-size.\n atol : float\n Absolute tolerance.\n rtol : float\n Relative tolerance.\n dx_min : float\n Minimum step (default: 0.0).\n dx_max : float\n Maximum step (default: 0.0).\n nsteps : int\n Maximum number of steps (default: 500).\n method : str\n One of: 'adams' or 'bdf' (default: 'bdf').\n nderiv : int\n Number of derivatives (default: 0).\n roots : callback (default: None)\n With signature ``roots(x, yarr[:ny], out[:nroots]) -> None``,\n see info['root_indices'], note that xout is unaffected.\n nroots : int (default: 0)\n Number of root functions in roots.\n check_callable : bool (default: False)\n Perform signature sanity checks on ``rhs`` and ``jac``.\n check_indexing : bool (default: False)\n Perform item setting sanity checks on ``rhs`` and ``jac``.\n \*\*kwargs:\n 'lband' : int\n Number of lower bands.\n Indexing: ``banded[row_i - col_i + uband, col_i]``.\n 'uband' : int\n Number of upper bands.\n Indexing: ``banded[row_i - col_i + uband, col_i]``.\n 'iter_type' : str (default: 'default')\n One of: 'default', 'functional', 'newton'.\n 'linear_solver' : str (default: 'default')\n One of: 'default', 'dense', 'banded', 'gmres',\n 'gmres_classic', 'bicgstab', 'tfqmr', 'klu'.\n 'return_on_error' : bool\n Returns on error without raising an exception (with 
``'success'==False``).\n 'autorestart' : int\n Useful for autonomous systems where conditions change during integration.\n Will restart the integration with ``x==0``. Maximum number of steps is then\n given by ``2**autorestart * nsteps``.\n 'record_rhs_xvals' : bool\n When True: will return x values for rhs calls in ``info['rhs_xvals']``.\n 'record_jac_xvals' : bool\n When True will return x values for jac calls in ``info['jac_xvals']``.\n 'record_order' : bool\n When True will return used time stepper order in ``info['orders']``.\n 'record_fpe' : bool\n When True will return observed floating point errors in ``info['fpes']``. (see ``fpes``)\n 'record_steps' : bool\n When True will return stepsizes taken in ``info['steps']``.\n 'dx0cb': callable\n Callback for calculating dx0 (make sure to pass ``dx0==0.0`` to enable).\n Signature: ``f(x, y[:]) -> float``.\n 'dx_max_cb' : callable\n Callback for calculating dx_max.\n Signature: ``f(x, y[:]) -> float``.\n 'autonomous_exprs' : bool\n Whether expressions contain the independent variable. If not, autorestart\n is allowed to shift the independent variable to zero at restart.\n 'nnz' : int\n Maximum number of nonzero entries in the sparse (CSC) jacobian (default: -1).\n Must set ``nnz >= 0`` and ``linear_solver`` to 'klu' to enable use of sparse\n ``jac`` signature.\n 'jtimes' : callable\n Function with signature f(v, Jv, t, y, fy) to calculate the product of the\n Jacobian evaluated at t, y with a vector v. 
Should modify Jv *inplace*.\n For use with linear solvers 'gmres', 'gmres_classic', 'bicgstab', 'tfqmr'.\n 'ew_ele' : bool\n Whether to return error_weights, estimated_local_errors in info dict.\n 'constraints': array\n Per component constraints 0.0: no constraint, 1.0: >=0, -1.0: <=0, 2.0: >0.0, -2.0: <0.0.\n\n Returns\n -------\n (yout, info):\n yout: 2-dimensional array of the dependent variables (axis 1) for\n values corresponding to xout (axis 0)\n info: Dictionary with information about the integration.\n\n \"\"\"\n # Sanity checks to reduce risk of having a segfault:\n x0 = xout[0]\n lband, uband = kwargs.get('lband', None), kwargs.get('uband', None)\n nnz = kwargs.get('nnz', None)\n _check_jac_type(lband=lband, uband=uband, nnz=nnz)\n\n if check_callable:\n _check_callable(rhs, jac, x0, y0, lband, uband, nnz)\n\n if check_indexing:\n _check_indexing(rhs, jac, x0, y0, lband, uband, nnz)\n\n return predefined(\n rhs, jac,\n np.ascontiguousarray(y0, dtype=np.float64),\n np.ascontiguousarray(xout, dtype=np.float64),\n atol, rtol, method or ('adams' if jac is None else 'bdf'),\n nsteps, dx0, dx_min, dx_max, nderiv=nderiv, roots=roots,\n nroots=nroots, **kwargs)", "response": "Integrates a system of ordinary differential equations with the given initial values."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the statistics from the given organization with the given credentials. Will not retrieve data if the file exists and force is not set to True.", "response": "def get_stats(self, username='', password='', organization='llnl',\n force=True, repo_type='public'):\n \"\"\"\n Retrieves the statistics from the given organization with the given\n credentials. Will not retrieve data if the file exists and force hasn't been\n set to True. 
This is to save GH API requests.\n \"\"\"\n date = str(datetime.date.today())\n file_path = ('../github_stats_output/' + date[:4] + '/' + date[:7] + '/'\n + date + '.csv')\n if force or not os.path.isfile(file_path):\n my_github.login(username, password)\n calls_beginning = self.logged_in_gh.ratelimit_remaining + 1\n print 'Rate Limit: ' + str(calls_beginning)\n my_github.get_org(organization)\n count_members = my_github.get_mems_of_org()\n count_teams = my_github.get_teams_of_org()\n my_github.repos(repo_type=repo_type, organization=organization)\n #Write JSON\n my_github.write_org_json(dict_to_write=self.members_json,\n path_ending_type='members', is_list=True)\n my_github.write_org_json(dict_to_write=\n {'singleton': self.org_retrieved.to_json()},\n path_ending_type='organization')\n my_github.write_org_json(dict_to_write=self.teams_json,\n path_ending_type='teams', is_list=True)\n\n my_github.write_repo_json(dict_to_write=self.repos_json,\n path_ending_type='repo')\n my_github.write_repo_json(dict_to_write=self.contributors_json,\n path_ending_type='contributors', is_list=True)\n my_github.write_repo_json(dict_to_write=self.pull_requests_json,\n path_ending_type='pull-requests', is_list=True)\n my_github.write_repo_json(dict_to_write=self.issues_json,\n path_ending_type='issues', is_list=True)\n my_github.write_repo_json(dict_to_write=self.languages_json,\n path_ending_type='languages', is_dict=True)\n my_github.write_repo_json(dict_to_write=self.commits_json,\n path_ending_type='commits', is_list=True)\n #Write CSV\n my_github.write_to_file(file_path,\n date,\n organization,\n count_members,\n count_teams)\n calls_remaining = self.logged_in_gh.ratelimit_remaining\n calls_used = calls_beginning - calls_remaining\n print ('Rate Limit Remaining: ' + str(calls_remaining) + '\\nUsed '\n + str(calls_used) + ' API calls.')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_mems_of_org(self):\n 
print 'Getting members.'\n counter = 0\n for member in self.org_retrieved.iter_members():\n self.members_json[member.id] = member.to_json()\n counter += 1\n return counter", "response": "Returns the number of members of the organization."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the number of teams of the organization.", "response": "def get_teams_of_org(self):\n \"\"\"\n Retrieves the number of teams of the organization.\n \"\"\"\n print 'Getting teams.'\n counter = 0\n for team in self.org_retrieved.iter_teams():\n self.teams_json[team.id] = team.to_json()\n counter += 1\n return counter"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef repos(self, repo_type='public', organization='llnl'):\n print 'Getting repos.'\n for repo in self.org_retrieved.iter_repos(type=repo_type):\n #JSON\n json = repo.to_json()\n self.repos_json[repo.name] = json\n #CSV\n temp_repo = my_repo.My_Repo()\n temp_repo.name = repo.full_name\n self.total_repos += 1\n temp_repo.contributors = my_github.get_total_contributors(repo)\n self.total_contributors += temp_repo.contributors\n temp_repo.forks = repo.forks_count\n self.total_forks += temp_repo.forks\n temp_repo.stargazers = repo.stargazers\n self.total_stars += temp_repo.stargazers\n temp_repo.pull_requests_open, temp_repo.pull_requests_closed = \\\n my_github.get_pull_reqs(repo)\n temp_repo.pull_requests = (temp_repo.pull_requests_open\n + temp_repo.pull_requests_closed)\n self.total_pull_reqs += temp_repo.pull_requests_open\n self.total_pull_reqs += temp_repo.pull_requests_closed\n self.total_pull_reqs_open += temp_repo.pull_requests_open\n self.total_pull_reqs_closed += temp_repo.pull_requests_closed\n temp_repo.open_issues = repo.open_issues_count\n self.total_open_issues += temp_repo.open_issues\n temp_repo.closed_issues = my_github.get_issues(repo, organization=organization)\n temp_repo.issues = temp_repo.closed_issues + 
temp_repo.open_issues\n self.total_closed_issues += temp_repo.closed_issues\n self.total_issues += temp_repo.issues\n my_github.get_languages(repo, temp_repo)\n temp_repo.readme = my_github.get_readme(repo)\n #temp_repo.license = my_github.get_license(repo)\n temp_repo.commits = self.get_commits(repo=repo, organization=organization)\n self.total_commits += temp_repo.commits\n self.all_repos.append(temp_repo)", "response": "This method extracts the info about the repos of the current organization."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the number of contributors to a repo in the organization.", "response": "def get_total_contributors(self, repo):\n \"\"\"\n Retrieves the number of contributors to a repo in the organization.\n Also adds to unique contributor list.\n \"\"\"\n repo_contributors = 0\n for contributor in repo.iter_contributors():\n repo_contributors += 1\n self.unique_contributors[contributor.id].append(repo.name)\n self.contributors_json[repo.name].append(contributor.to_json())\n return repo_contributors"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_pull_reqs(self, repo):\n pull_reqs_open = 0\n pull_reqs_closed = 0\n for pull_request in repo.iter_pulls(state='all'):\n self.pull_requests_json[repo.name].append(pull_request.to_json())\n if pull_request.closed_at is not None:\n pull_reqs_closed += 1\n else:\n pull_reqs_open += 1\n return pull_reqs_open, pull_reqs_closed", "response": "Retrieves the number of pull requests on a repo in the organization."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_issues(self, repo, organization='llnl'):\n #JSON\n path = ('../github-data/' + organization + '/' + repo.name + '/issues')\n is_only_today = False\n if not os.path.exists(path): #no previous path, get all issues\n all_issues = repo.iter_issues(state='all')\n 
is_only_today = True\n else:\n files = os.listdir(path)\n date = str(files[-1][:-5])\n if date == str(datetime.date.today()):\n #most recent date is actually today, get previous most recent date\n if len(files) > 2:\n date = str(files[-2][:-5])\n else:\n #This means there is only one file, today. Retrieve every issue\n all_issues = repo.iter_issues(state='all')\n is_only_today = True\n if not is_only_today:#there's a previous saved JSON that's not today\n all_issues = repo.iter_issues(since=date, state='all')\n for issue in all_issues:\n self.issues_json[repo.name].append(issue.to_json())\n #CSV\n closed_issues = 0\n for issue in repo.iter_issues(state='closed'):\n if issue is not None:\n closed_issues += 1\n return closed_issues", "response": "Retrieves the number of closed issues in a repository."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_languages(self, repo, temp_repo):\n try:\n self.languages[repo.language] += 1\n except KeyError:\n count = self.languages[repo.language] = 1\n for repo_languages in repo.iter_languages():\n self.languages_json[repo.name][repo_languages[0]] = repo_languages[1]\n for language in repo_languages:\n if isinstance(language, basestring):#is language\n temp_repo.languages.append(language)\n self.previous_language = language\n else:#record size bytes of language\n try:\n self.languages_size[self.previous_language] += \\\n language\n except KeyError:\n size = self.languages_size[self.previous_language] \\\n = language", "response": "Retrieves the languages used in the repository and increments the respective\n counts of those languages."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking to see if the given repo has a readme. MD means it has a correct readme recognized by GitHub; 
MISS means no readme was found.", "response": "def get_readme(self, repo):\n \"\"\"\n Checks to see if the given repo has a ReadMe. MD means it has a correct\n Readme recognized by GitHub.\n \"\"\"\n readme_contents = repo.readme()\n if readme_contents is not None:\n self.total_readmes += 1\n return 'MD'\n if self.search_limit >= 28:\n print 'Hit search limit. Sleeping for 60 sec.'\n time.sleep(60)\n self.search_limit = 0\n self.search_limit += 1\n search_results = self.logged_in_gh.search_code('readme'\n + ' in:path repo:' + repo.full_name)\n try:\n for result in search_results:\n path = result.path[1:]\n if '/' not in path and 'readme' in path.lower():\n self.total_readmes += 1\n return path\n return 'MISS'\n except (github3.models.GitHubError, StopIteration) as e:\n return 'MISS'"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking to see if the given repo has a top level LICENSE file. Returns the path of the LICENSE file.", "response": "def get_license(self, repo):\n \"\"\"\n Checks to see if the given repo has a top level LICENSE file.\n \"\"\"\n if self.search_limit >= 28:\n print 'Hit search limit. Sleeping for 60 sec.'\n time.sleep(60)\n self.search_limit = 0\n self.search_limit += 1\n search_results = self.logged_in_gh.search_code('license'\n + ' in:path repo:' + repo.full_name)\n try:\n for result in search_results:\n path = result.path[1:]\n if '/' not in path and 'license' in path.lower():\n self.total_licenses += 1\n return path\n return 'MISS'\n except (StopIteration) as e:\n return 'MISS'"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the number of commits to a repo in the organization.", "response": "def get_commits(self, repo, organization='llnl'):\n \"\"\"\n Retrieves the number of commits to a repo in the organization. If it is\n the first time getting commits for a repo, it will get all commits and\n save them to JSON. 
If there are previous commits saved, it will only get\n commits that have not been saved to disk since the last date of commits.\n \"\"\"\n #JSON\n path = ('../github-data/' + organization + '/' + repo.name + '/commits')\n is_only_today = False\n if not os.path.exists(path): #no previous path, get all commits\n all_commits = repo.iter_commits()\n is_only_today = True\n else:\n files = os.listdir(path)\n date = str(files[-1][:-5])\n if date == str(datetime.date.today()):\n #most recent date is actually today, get previous most recent date\n if len(files) > 2:\n date = str(files[-2][:-5])\n else:\n #This means there is only one file, today. Retrieve every commit\n all_commits = repo.iter_commits()\n is_only_today = True\n if not is_only_today:#there's a previous saved JSON that's not today\n all_commits = repo.iter_commits(since=date)\n for commit in all_commits:\n self.commits_json[repo.name].append(commit.to_json())\n #for csv\n count = 0\n for commit in repo.iter_commits():\n count += 1\n return count"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef write_org_json(self, date=(datetime.date.today()),\n organization='llnl',dict_to_write={}, path_ending_type='',\n is_list=False):\n \"\"\"\n Writes stats from the organization to JSON.\n \"\"\"\n path = ('../github-data/' + organization + '-org/'\n + path_ending_type + '/' + str(date) + '.json')\n self.checkDir(path)\n with open(path, 'w') as out_clear:#clear old data\n out_clear.close()\n with open(path, 'a') as out:\n if is_list:#used for list of items\n out.write('[')\n for item in dict_to_write:\n out.write(json.dumps(dict_to_write[item], sort_keys=True,\n indent=4, separators=(',', ': ')) + ',')\n out.seek(-1, os.SEEK_END)#kill last comma\n out.truncate()\n if is_list:\n out.write(']')\n out.close()", "response": "Writes stats from the organization to JSON."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where 
it\nwrites repo specific data to JSON.", "response": "def write_repo_json(self, date=(datetime.date.today()),\n organization='llnl', dict_to_write={}, path_ending_type='',\n is_list=False, is_dict=False):\n \"\"\"\n #Writes repo specific data to JSON.\n \"\"\"\n for repo in dict_to_write:\n path = ('../github-data/' + organization + '/' + repo + '/' +\n path_ending_type + '/' + str(date) + '.json')\n self.checkDir(path)\n with open(path, 'w') as out:\n if is_list:\n out.write('[')\n for value in dict_to_write[repo]:\n if is_dict:\n for inner_dict in value:\n out.write(json.dumps(inner_dict, sort_keys=True,\n indent=4, separators=(',', ': ')) + ',')\n else:\n out.write(json.dumps(value, sort_keys=True,\n indent=4, separators=(',', ': ')) + ',')\n out.seek(-1, os.SEEK_END)#kill last comma\n out.truncate()\n out.write(']')\n else:\n out.write(json.dumps(dict_to_write[repo], sort_keys=True,\n indent=4, separators=(',', ': ')))\n out.close()"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nwrite the current information to a file.", "response": "def write_to_file(self, file_path='', date=str(datetime.date.today()),\n organization='N/A', members=0, teams=0):\n \"\"\"\n Writes the current organization information to file (csv).\n \"\"\"\n self.checkDir(file_path)\n with open(file_path, 'w+') as output:\n output.write('date,organization,members,teams,unique_contributors,'\n + 'repository,contributors,forks,stargazers,pull_requests,'\n + 'open_issues,has_readme,has_license,languages,pull_requests_open,'\n + 'pull_requests_closed,commits,closed_issues,issues\\n' + date + ','\n + organization + ',' + str(members) + ',' + str(teams) + ','\n + str(len(self.unique_contributors)) + '\\n')\n for repo in self.all_repos:\n output.write(',,,,,' + repo.name + ',' + str(repo.contributors)\n + ',' + str(repo.forks) + ','\n + str(repo.stargazers) + ',' + str(repo.pull_requests) + ','\n + str(repo.open_issues) + ',' + str(repo.readme) + ','\n + 
str(repo.license) + ',' + ' '.join(sorted(repo.languages))\n + ',' + str(repo.pull_requests_open) + ','\n + str(repo.pull_requests_closed) + ',' + str(repo.commits)\n + ',' + str(repo.closed_issues) + ',' + str(repo.issues)\n + '\\n')\n output.write(',,,,total,' + str(self.total_repos) + ','\n + str(self.total_contributors) + ','\n + str(self.total_forks) + ',' + str(self.total_stars) + ','\n + str(self.total_pull_reqs) + ',' + str(self.total_open_issues)\n + ',' + str(self.total_readmes) + ',' + str(self.total_licenses)\n + ',,' + str(self.total_pull_reqs_open) + ','\n + str(self.total_pull_reqs_closed) + ','\n + str(self.total_commits) + ',' + str(self.total_closed_issues)\n + ',' + str(self.total_issues))\n output.close()\n #Update total\n self.write_totals(file_path=\"../github_stats_output/total.csv\", date=date,\n organization=organization, members=members, teams=teams)\n #Update language sizes\n self.write_languages(file_path='../github_stats_output/languages.csv',\n date=date)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef write_totals(self, file_path='', date=str(datetime.date.today()),\n organization='N/A', members=0, teams=0):\n \"\"\"\n Updates the total.csv file with current data.\n \"\"\"\n\n total_exists = os.path.isfile(file_path)\n with open(file_path, 'a') as out_total:\n if not total_exists:\n out_total.write('date,organization,repos,members,teams,'\n + 'unique_contributors,total_contributors,forks,'\n + 'stargazers,pull_requests,open_issues,has_readme,'\n + 'has_license,pull_requests_open,pull_requests_closed,'\n + 'commits,id,closed_issues,issues\\n')\n self.delete_last_line(date=date, file_path=file_path)\n out_total.close()\n with open(file_path, 'r') as file_read:\n row_count = sum(1 for row in file_read) - 1\n file_read.close()\n with open(file_path, 'a') as out_total:\n out_total.write(date + ',' + organization + ','\n + str(self.total_repos) + ',' + str(members) + ',' + str(teams)\n + ',' 
+ str(len(self.unique_contributors)) + ','\n + str(self.total_contributors) + ',' + str(self.total_forks)\n + ',' + str(self.total_stars) + ',' + str(self.total_pull_reqs)\n + ',' + str(self.total_open_issues) + ','\n + str(self.total_readmes) + ',' + str(self.total_licenses) + ','\n + str(self.total_pull_reqs_open) + ','\n + str(self.total_pull_reqs_closed) + ','\n + str(self.total_commits) + ',' + str(row_count) + ','\n + str(self.total_closed_issues) + ',' + str(self.total_issues)\n + '\\n')\n out_total.close()", "response": "Updates the total. csv file with current data."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nupdate languages. csv file with current data.", "response": "def write_languages(self, file_path='',date=str(datetime.date.today())):\n \"\"\"\n Updates languages.csv file with current data.\n \"\"\"\n self.remove_date(file_path=file_path, date=date)\n languages_exists = os.path.isfile(file_path)\n with open(file_path, 'a') as out_languages:\n if not languages_exists:\n out_languages.write('date,language,count,size,size_log\\n')\n languages_sorted = sorted(self.languages_size)\n #self.delete_last_line(date=date, file_path=file_path)\n for language in languages_sorted:\n try:\n out_languages.write(date + ',' + language + ','\n + str(self.languages[language]) + ','\n + str(self.languages_size[language]) + ','\n + str(math.log10(int(self.languages_size[language])))\n + '\\n')\n except (TypeError, KeyError) as e:\n out_languages.write(date + ',' + language + ','\n + str(0) + ','\n + str(self.languages_size[language]) + ','\n + str(math.log10(int(self.languages_size[language])))\n + '\\n')"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef checkDir(self, file_path=''):\n if not os.path.exists(os.path.dirname(file_path)):\n try:\n os.makedirs(os.path.dirname(file_path))\n except OSError as e:\n if e.errno != errno.EEXIST:\n raise", "response": "Checks 
if a directory exists. If not, creates one with the specified file_path."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nremove all rows of the associated date from the given csv file.", "response": "def remove_date(self, file_path='', date=str(datetime.date.today())):\n \"\"\"\n Removes all rows of the associated date from the given csv file.\n Defaults to today.\n \"\"\"\n languages_exists = os.path.isfile(file_path)\n if languages_exists:\n with open(file_path, 'rb') as inp, open('temp.csv', 'wb') as out:\n writer = csv.writer(out)\n for row in csv.reader(inp):\n if row[0] != date:\n writer.writerow(row)\n inp.close()\n out.close()\n os.remove(file_path)\n os.rename(\"temp.csv\",file_path)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete_last_line(self, file_path='', date=str(datetime.date.today())):\n deleted_line = False\n if os.path.isfile(file_path):\n with open(file_path, 'r+') as file:\n reader = csv.reader(file, delimiter=',')\n for row in reader:\n if date == row[0]:\n file.seek(0, os.SEEK_END)\n pos = file.tell() - 1\n while pos > 0 and file.read(1) != \"\\n\":\n pos -= 1\n file.seek(pos, os.SEEK_SET)\n if pos > 0:\n file.seek(pos, os.SEEK_SET)\n file.truncate()\n deleted_line = True\n break\n if deleted_line: file.write('\\n')\n file.close()", "response": "Deletes the last line of the given csv file if a row with the given date is found."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef gov_orgs():\n us_gov_github_orgs = set()\n\n gov_orgs = requests.get('https://government.github.com/organizations.json').json()\n\n us_gov_github_orgs.update(gov_orgs['governments']['U.S. Federal'])\n us_gov_github_orgs.update(gov_orgs['governments']['U.S. Military and Intelligence'])\n us_gov_github_orgs.update(gov_orgs['research']['U.S. 
Research Labs'])\n\n return list(us_gov_github_orgs)", "response": "Returns a list of the names of US Government GitHub organizations"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a github3. py session connected to GitHub. com", "response": "def create_session(token=None):\n \"\"\"\n Create a github3.py session connected to GitHub.com\n\n If token is not provided, will attempt to use the GITHUB_API_TOKEN\n environment variable if present.\n \"\"\"\n if token is None:\n token = os.environ.get('GITHUB_API_TOKEN', None)\n\n gh_session = github3.login(token=token)\n\n if gh_session is None:\n raise RuntimeError('Invalid or missing GITHUB_API_TOKEN')\n\n return gh_session"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a github3. py session for a GitHub Enterprise instance.", "response": "def create_enterprise_session(url, token=None):\n \"\"\"\n Create a github3.py session for a GitHub Enterprise instance\n\n If token is not provided, will attempt to use the GITHUB_API_TOKEN\n environment variable if present.\n \"\"\"\n\n gh_session = github3.enterprise_login(url=url, token=token)\n\n if gh_session is None:\n msg = 'Unable to connect to GitHub Enterprise (%s) with provided token.'\n raise RuntimeError(msg, url)\n\n return gh_session"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _check_api_limits(gh_session, api_required=250, sleep_time=15):\n api_rates = gh_session.rate_limit()\n\n api_remaining = api_rates['rate']['remaining']\n api_reset = api_rates['rate']['reset']\n logger.debug('Rate Limit - %d requests remaining', api_remaining)\n\n if api_remaining > api_required:\n return\n\n now_time = time.time()\n time_to_reset = int(api_reset - now_time)\n logger.warn('Rate Limit Depleted - Sleeping for %d seconds', time_to_reset)\n\n while now_time < api_reset:\n time.sleep(10)\n now_time = 
time.time()\n\n return", "response": "Simplified check for API limits\n If necessary sleep in place waiting for API to reset before returning."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconnecting to a GitHub session", "response": "def connect(url='https://github.com', token=None):\n \"\"\"\n Create a GitHub session for making requests\n \"\"\"\n\n gh_session = None\n if url == 'https://github.com':\n gh_session = create_session(token)\n else:\n gh_session = create_enterprise_session(url, token)\n\n if gh_session is None:\n msg = 'Unable to connect to (%s) with provided token.'\n raise RuntimeError(msg, url)\n\n logger.info('Connected to: %s', url)\n\n return gh_session"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nyields GitHub3. py repo objects for the provided organizations and repositories.", "response": "def query_repos(gh_session, orgs=None, repos=None, public_only=True):\n \"\"\"\n Yields GitHub3.py repo objects for provided orgs and repo names\n\n If orgs and repos are BOTH empty, execute special mode of getting ALL\n repositories from the GitHub Server.\n\n If public_only is True, will return only those repos that are marked as\n public. 
Set this to false to return all organizations that the session has\n permissions to access.\n \"\"\"\n\n if orgs is None:\n orgs = []\n if repos is None:\n repos = []\n if public_only:\n privacy = 'public'\n else:\n privacy = 'all'\n\n _check_api_limits(gh_session, 10)\n\n for org_name in orgs:\n org = gh_session.organization(org_name)\n num_repos = org.public_repos_count\n\n _check_api_limits(gh_session, _num_requests_needed(num_repos))\n\n for repo in org.repositories(type=privacy):\n _check_api_limits(gh_session, 10)\n yield repo\n\n for repo_name in repos:\n _check_api_limits(gh_session, 10)\n org, name = repo_name.split('/')\n yield gh_session.repository(org, name)\n\n if not (orgs or repos):\n for repo in gh_session.all_repositories():\n yield repo"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the stats for the users of the given organization.", "response": "def get_stats(self, username='', password='', organization='llnl', force=True):\n \"\"\"\n Retrieves the traffic for the users of the given organization.\n Requires organization admin credentials token to access the data.\n \"\"\"\n date = str(datetime.date.today())\n stargazers_file_path = ('../github_stats_output/stargazers.csv')\n if force or not os.path.isfile(stargazers_file_path):\n my_github.login(username, password)\n calls_beginning = self.logged_in_gh.ratelimit_remaining + 1\n print 'Rate Limit: ' + str(calls_beginning)\n my_github.get_org(organization)\n my_github.get_repos()\n my_github.write_to_file(file_path=stargazers_file_path)\n #my_github.write_to_file(file_path=stargazers_file_path)\n calls_remaining = self.logged_in_gh.ratelimit_remaining\n calls_used = calls_beginning - calls_remaining\n print ('Rate Limit Remaining: ' + str(calls_remaining) + '\\nUsed '\n + str(calls_used) + ' API calls.')"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget an organization via given org name.", "response": "def get_org(self, 
organization_name=''):\n \"\"\"\n Retrieves an organization via given org name. If given\n empty string, prompts user for an org name.\n \"\"\"\n self.organization_name = organization_name\n if(organization_name == ''):\n self.organization_name = raw_input('Organization: ')\n print 'Getting organization.'\n self.org_retrieved = self.logged_in_gh.organization(organization_name)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the repos for the organization and builds the URL and headers for the organization and calculates the timestamps of stargazers.", "response": "def get_repos(self):\n \"\"\"\n Gets the repos for the organization and builds the URL/headers for\n getting timestamps of stargazers.\n \"\"\"\n print 'Getting repos.'\n #Uses the developer API. Note this could change.\n\n headers = {'Accept': 'application/vnd.github.v3.star+json', 'Authorization': 'token ' + self.token}\n temp_count = 0\n for repo in self.org_retrieved.iter_repos():\n temp_count += 1\n url = ('https://api.github.com/repos/' + self.organization_name + '/' + repo.name)\n self.repos[repo.name] = self.get_stargazers(url=url, headers=headers)\n self.calc_stargazers(start_count=650)\n print 'total count: \\t' + str(self.total_count)\n print str(temp_count) + ' repos'"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_stargazers(self, url, headers={}):\n url = url + '/stargazers?per_page=100&page=%s'\n page = 1\n gazers = []\n\n json_data = requests.get(url % page, headers=headers).json()\n while json_data:\n gazers.extend(json_data)\n page += 1\n json_data = requests.get(url % page, headers=headers).json()\n return gazers", "response": "Return a list of the stargazers of a GitHub repo"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwrites the data to a file.", "response": "def write_to_file(self, file_path='', 
date=(datetime.date.today()),\n organization='llnl'):\n \"\"\"\n Writes stargazers data to file.\n \"\"\"\n with open(file_path, 'w+') as out:\n out.write('date,organization,stargazers\\n')\n sorted_stargazers = sorted(self.stargazers)#sort based on lowercase\n for star in sorted_stargazers:\n out.write(star + ',' + str(self.stargazers[star]) + '\\n')\n out.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef from_github3(klass, repository, labor_hours=True):\n if not isinstance(repository, github3.repos.repo._Repository):\n raise TypeError('Repository must be a github3 Repository object')\n\n logger.info('Processing: %s', repository.full_name)\n\n project = klass()\n\n logger.debug('GitHub3: repository=%s', repository)\n\n # -- REQUIRED FIELDS --\n\n project['name'] = repository.name\n project['repositoryURL'] = repository.git_url\n project['description'] = repository.description\n\n try:\n repo_license = repository.license()\n except github3.exceptions.NotFoundError:\n logger.debug('no license found for repo=%s', repository)\n repo_license = None\n\n if repo_license:\n license = repo_license.license\n if license:\n logger.debug('license spdx=%s; url=%s', license.spdx_id, license.url)\n if license.url is None:\n project['permissions']['licenses'] = [{\"name\": license.spdx_id}]\n else:\n project['permissions']['licenses'] = [{\"URL\": license.url, \"name\": license.spdx_id}]\n else:\n project['permissions']['licenses'] = None\n\n public_server = repository.html_url.startswith('https://github.com')\n if not repository.private and public_server:\n project['permissions']['usageType'] = 'openSource'\n elif date_parse(repository.created_at) < POLICY_START_DATE:\n project['permissions']['usageType'] = 'exemptByPolicyDate'\n\n if labor_hours:\n project['laborHours'] = labor_hours_from_url(project['repositoryURL'])\n else:\n project['laborHours'] = 0\n\n project['tags'] = ['github']\n old_accept 
= repository.session.headers['Accept']\n repository.session.headers['Accept'] = 'application/vnd.github.mercy-preview+json'\n topics = repository._get(repository.url + '/topics').json()\n project['tags'].extend(topics.get('names', []))\n repository.session.headers['Accept'] = old_accept\n\n # Hacky way to get an Organization object back with GitHub3.py >= 1.2.0\n owner_url = repository.owner.url\n owner_api_response = repository._get(owner_url)\n organization = repository._json(owner_api_response, 200)\n project['contact']['email'] = organization['email']\n project['contact']['URL'] = organization['html_url']\n\n # -- OPTIONAL FIELDS --\n\n # project['version'] = ''\n\n project['organization'] = organization['name']\n\n # TODO: Currently, can't be an empty string, see: https://github.com/GSA/code-gov-web/issues/370\n project['status'] = 'Development'\n\n project['vcs'] = 'git'\n\n project['homepageURL'] = repository.html_url\n\n project['downloadURL'] = repository.downloads_url\n\n project['languages'] = [l for l, _ in repository.languages()]\n\n # project['partners'] = []\n\n # project['relatedCode'] = []\n\n # project['reusedCode'] = []\n\n # date: [object] A date object describing the release.\n # created: [string] The date the release was originally created, in YYYY-MM-DD or ISO 8601 format.\n # lastModified: [string] The date the release was modified, in YYYY-MM-DD or ISO 8601 format.\n # metadataLastUpdated: [string] The date the metadata of the release was last updated, in YYYY-MM-DD or ISO 8601 format.\n try:\n created_at = repository.created_at.date()\n except AttributeError:\n created_at = date_parse(repository.created_at).date()\n try:\n updated_at = repository.updated_at.date()\n except AttributeError:\n updated_at = date_parse(repository.updated_at).date()\n\n project['date'] = {\n 'created': created_at.isoformat(),\n 'lastModified': updated_at.isoformat(),\n 'metadataLastUpdated': '',\n }\n\n _prune_dict_null_str(project)\n\n return project", 
"response": "Create CodeGovProject object from github3 Repository object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate CodeGovProject object from GitLab Repository object.", "response": "def from_gitlab(klass, repository, labor_hours=True):\n \"\"\"\n Create CodeGovProject object from GitLab Repository\n \"\"\"\n if not isinstance(repository, gitlab.v4.objects.Project):\n raise TypeError('Repository must be a gitlab Repository object')\n\n project = klass()\n\n logger.debug(\n 'GitLab: repository_id=%d path_with_namespace=%s',\n repository.id,\n repository.path_with_namespace,\n )\n\n # -- REQUIRED FIELDS --\n\n project['name'] = repository.name\n project['repositoryURL'] = repository.http_url_to_repo\n project['description'] = repository.description\n\n # TODO: Update licenses from GitLab API\n project['permissions']['licenses'] = None\n\n web_url = repository.web_url\n public_server = web_url.startswith('https://gitlab.com')\n\n if repository.visibility in ('public') and public_server:\n project['permissions']['usageType'] = 'openSource'\n elif date_parse(repository.created_at) < POLICY_START_DATE:\n project['permissions']['usageType'] = 'exemptByPolicyDate'\n\n if labor_hours:\n project['laborHours'] = labor_hours_from_url(project['repositoryURL'])\n else:\n project['laborHours'] = 0\n\n project['tags'] = ['gitlab'] + repository.tag_list\n\n project['contact'] = {\n 'email': '',\n 'URL': web_url,\n }\n\n # -- OPTIONAL FIELDS --\n\n # project['version'] = ''\n\n project['organization'] = repository.namespace['name']\n\n # TODO: Currently, can't be an empty string, see: https://github.com/GSA/code-gov-web/issues/370\n project['status'] = 'Development'\n\n project['vcs'] = 'git'\n\n project['homepageURL'] = repository.web_url\n\n api_url = repository.manager.gitlab._url\n archive_suffix = '/projects/%s/repository/archive' % repository.get_id()\n project['downloadURL'] = api_url + archive_suffix\n\n # project['languages'] 
= [l for l, _ in repository.languages()]\n # project['partners'] = []\n # project['relatedCode'] = []\n # project['reusedCode'] = []\n\n project['date'] = {\n 'created': date_parse(repository.created_at).date().isoformat(),\n 'lastModified': date_parse(repository.last_activity_at).date().isoformat(),\n 'metadataLastUpdated': '',\n }\n\n _prune_dict_null_str(project)\n\n return project"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a new instance of the given class from a stashy repository.", "response": "def from_stashy(klass, repository, labor_hours=True):\n \"\"\"\n Handles crafting Code.gov Project for Bitbucket Server repositories\n \"\"\"\n # if not isinstance(repository, stashy.repos.Repository):\n # raise TypeError('Repository must be a stashy Repository object')\n if not isinstance(repository, dict):\n raise TypeError('Repository must be a dict')\n\n project = klass()\n\n logger.debug(\n 'Stashy: project_key=%s repository_slug=%s',\n repository['project']['key'],\n repository['slug'],\n )\n\n # -- REQUIRED FIELDS --\n\n project['name'] = repository['name']\n\n clone_urls = [clone['href'] for clone in repository['links']['clone']]\n for url in clone_urls:\n # Only rely on SSH Urls for repository urls\n if url.startswith('ssh://'):\n project['repositoryURL'] = url\n break\n\n description = repository['project'].get('description', '')\n if description:\n project['description'] = 'Project description: %s' % description\n\n project['permissions']['licenses'] = None\n\n web_url = repository['links']['self'][0]['href']\n public_server = web_url.startswith('https://bitbucket.org')\n if repository['public'] and public_server:\n project['permissions']['usageType'] = 'openSource'\n\n if labor_hours:\n project['laborHours'] = labor_hours_from_url(project['repositoryURL'])\n else:\n project['laborHours'] = 0\n\n project['tags'] = ['bitbucket']\n\n project['contact']['email'] = ''\n project['contact']['URL'] = 
repository['links']['self'][0]['href']\n\n # -- OPTIONAL FIELDS --\n\n # project['version'] = ''\n\n # project['organization'] = organization.name\n\n # TODO: Currently, can't be an empty string, see: https://github.com/GSA/code-gov-web/issues/370\n project['status'] = 'Development'\n\n project['vcs'] = repository['scmId']\n\n project['homepageURL'] = repository['links']['self'][0]['href']\n\n # project['downloadURL'] =\n\n # project['languages'] =\n\n # project['partners'] = []\n\n # project['relatedCode'] = []\n\n # project['reusedCode'] = []\n\n # date: [object] A date object describing the release.\n # created: [string] The date the release was originally created, in YYYY-MM-DD or ISO 8601 format.\n # lastModified: [string] The date the release was modified, in YYYY-MM-DD or ISO 8601 format.\n # metadataLastUpdated: [string] The date the metadata of the release was last updated, in YYYY-MM-DD or ISO 8601 format.\n # project['date'] = {\n # 'created': repository.pushed_at.isoformat(),\n # 'lastModified': repository.updated_at.isoformat(),\n # 'metadataLastUpdated': '',\n # }\n\n _prune_dict_null_str(project)\n\n return project"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates CodeGovProject object from DOE CODE record.", "response": "def from_doecode(klass, record):\n \"\"\"\n Create CodeGovProject object from DOE CODE record\n\n Handles crafting Code.gov Project\n \"\"\"\n if not isinstance(record, dict):\n raise TypeError('`record` must be a dict')\n\n project = klass()\n\n # -- REQUIRED FIELDS --\n\n project['name'] = record['software_title']\n logger.debug('DOE CODE: software_title=\"%s\"', record['software_title'])\n\n link = record.get('repository_link', '')\n if not link:\n link = record.get('landing_page')\n logger.warning('DOE CODE: No repositoryURL, using landing_page: %s', link)\n\n project['repositoryURL'] = link\n\n project['description'] = record['description']\n\n licenses = set(record['licenses'])\n 
licenses.discard(None)\n logger.debug('DOE CODE: licenses=%s', licenses)\n\n license_objects = []\n if 'Other' in licenses:\n licenses.remove('Other')\n license_objects = [{\n 'name': 'Other',\n 'URL': record['proprietary_url']\n }]\n\n if licenses:\n license_objects.extend([_license_obj(license) for license in licenses])\n\n project['permissions']['licenses'] = license_objects\n\n if record['open_source']:\n usage_type = 'openSource'\n else:\n usage_type = 'exemptByLaw'\n project['permissions']['exemptionText'] = 'This source code is restricted by patent and / or intellectual property law.'\n\n project['permissions']['usageType'] = usage_type\n\n # TODO: Compute from git repo\n project['laborHours'] = 0\n\n project['tags'] = ['DOE CODE']\n lab_name = record.get('lab_display_name')\n if lab_name is not None:\n project['tags'].append(lab_name)\n\n project['contact']['email'] = record['owner']\n # project['contact']['URL'] = ''\n # project['contact']['name'] = ''\n # project['contact']['phone'] = ''\n\n # -- OPTIONAL FIELDS --\n\n if 'version_number' in record and record['version_number']:\n project['version'] = record['version_number']\n\n if lab_name is not None:\n project['organization'] = lab_name\n\n # Currently, can't be an empty string, see: https://github.com/GSA/code-gov-web/issues/370\n status = record.get('ever_announced')\n if status is None:\n raise ValueError('DOE CODE: Unable to determine \"ever_announced\" value!')\n elif status:\n status = 'Production'\n else:\n status = 'Development'\n\n project['status'] = status\n\n vcs = None\n link = project['repositoryURL']\n if 'github.com' in link:\n vcs = 'git'\n if vcs is None:\n logger.debug('DOE CODE: Unable to determine vcs for: name=\"%s\", repositoryURL=%s', project['name'], link)\n vcs = ''\n if vcs:\n project['vcs'] = vcs\n\n url = record.get('landing_page', '')\n if url:\n project['homepageURL'] = url\n\n # record['downloadURL'] = ''\n\n # self['disclaimerText'] = ''\n\n # self['disclaimerURL'] = 
''\n\n if 'programming_languages' in record:\n project['languages'] = record['programming_languages']\n\n # self['partners'] = []\n # TODO: Look into using record['contributing_organizations']\n\n # self['relatedCode'] = []\n\n # self['reusedCode'] = []\n\n # date: [object] A date object describing the release.\n # created: [string] The date the release was originally created, in YYYY-MM-DD or ISO 8601 format.\n # lastModified: [string] The date the release was modified, in YYYY-MM-DD or ISO 8601 format.\n # metadataLastUpdated: [string] The date the metadata of the release was last updated, in YYYY-MM-DD or ISO 8601 format.\n if 'date_record_added' in record and 'date_record_updated' in record:\n project['date'] = {\n 'created': record['date_record_added'],\n # 'lastModified': '',\n 'metadataLastUpdated': record['date_record_updated']\n }\n\n return project"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef from_tfs(klass, tfs_project, labor_hours=True):\n project = klass()\n project_web_url = ''\n\n # -- REQUIRED FIELDS --\n project['name'] = tfs_project.projectInfo.name\n\n if 'web' in tfs_project.projectInfo._links.additional_properties:\n if 'href' in tfs_project.projectInfo._links.additional_properties['web']:\n # URL Encodes spaces that are in the Project Name for the Project Web URL\n project_web_url = requote_uri(tfs_project.projectInfo._links.additional_properties['web']['href'])\n\n project['repositoryURL'] = project_web_url\n\n project['homepageURL'] = project_web_url\n\n project['description'] = tfs_project.projectInfo.description\n\n project['vcs'] = 'TFS/AzureDevOps'\n\n project['permissions']['licenses'] = None\n\n project['tags'] = []\n\n if labor_hours:\n logger.debug('Sorry labor hour calculation not currently supported.')\n # project['laborHours'] = labor_hours_from_url(project['repositoryURL'])\n else:\n project['laborHours'] = 0\n\n if tfs_project.projectCreateInfo.last_update_time 
< POLICY_START_DATE:\n project['permissions']['usageType'] = 'exemptByPolicyDate'\n else:\n project['permissions']['usageType'] = 'exemptByAgencyMission'\n project['permissions']['exemptionText'] = 'This source code resides on a private server and has not been properly evaluated for releaseability.'\n\n project['contact'] = {\n 'email': '',\n 'URL': project_web_url\n }\n\n project['date'] = {\n 'lastModified': tfs_project.projectLastUpdateInfo.last_update_time.date().isoformat(),\n 'created': tfs_project.projectCreateInfo.last_update_time.date().isoformat(),\n 'metadataLastUpdated': ''\n }\n\n _prune_dict_null_str(project)\n\n return project", "response": "Creates CodeGovProject object from a TFS instance."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef process_config(config):\n\n agency = config.get('agency', 'UNKNOWN')\n logger.debug('Agency: %s', agency)\n\n method = config.get('method', 'other')\n logger.debug('Inventory Method: %s', method)\n\n compute_labor_hours = config.get('compute_labor_hours', True)\n\n if config.get('contact_email', None) is None:\n # A default contact email is required to handle the (frequent) case\n # where a project / repository has no available contact email.\n logger.warning('Config file should contain a \"contact_email\"')\n\n logger.debug('Creating inventory from config: %s', config)\n code_gov_metadata = Metadata(agency, method)\n\n # Parse config for GitHub repositories\n github_instances = config.get('GitHub', [])\n if config.get('github_gov_orgs', False):\n github_instances.append({\n 'url': 'https://github.com',\n 'orgs': gov_orgs(),\n })\n for instance in github_instances:\n url = instance.get('url', 'https://github.com')\n orgs = instance.get('orgs', [])\n repos = instance.get('repos', [])\n public_only = instance.get('public_only', True)\n excluded = instance.get('exclude', [])\n token = instance.get('token', None)\n\n gh_session = github.connect(url, 
token)\n\n for repo in github.query_repos(gh_session, orgs, repos, public_only):\n if repo.owner.login in excluded or repo.full_name in excluded:\n logger.info('Excluding: %s', repo.full_name)\n continue\n\n code_gov_project = Project.from_github3(repo, labor_hours=compute_labor_hours)\n code_gov_metadata['releases'].append(code_gov_project)\n\n # Parse config for GitLab repositories\n gitlab_instances = config.get('GitLab', [])\n for instance in gitlab_instances:\n url = instance.get('url')\n # orgs = instance.get('orgs', [])\n repos = instance.get('repos', [])\n # public_only = instance.get('public_only', True)\n excluded = instance.get('exclude', [])\n token = instance.get('token', None)\n\n gl_session = gitlab.connect(url, token)\n\n for repo in gitlab.query_repos(gl_session, repos):\n namespace = repo.namespace['path']\n path_with_namespace = repo.path_with_namespace\n if namespace in excluded or path_with_namespace in excluded:\n logger.info('Excluding: %s', repo.path_with_namespace)\n continue\n\n code_gov_project = Project.from_gitlab(repo, labor_hours=compute_labor_hours)\n code_gov_metadata['releases'].append(code_gov_project)\n\n # Parse config for Bitbucket repositories\n bitbucket_instances = config.get('Bitbucket', [])\n for instance in bitbucket_instances:\n url = instance.get('url')\n # orgs = instance.get('orgs', None)\n # public_only = instance.get('public_only', True)\n # token = instance.get('token', None)\n username = instance.get('username')\n password = instance.get('password')\n excluded = instance.get('exclude', [])\n\n bb_session = bitbucket.connect(url, username, password)\n\n for repo in bitbucket.all_repos(bb_session):\n project = repo['project']['key']\n project_repo = '%s/%s' % (project, repo['slug'])\n if project in excluded or project_repo in excluded:\n logger.info('Excluding: %s', project_repo)\n continue\n\n code_gov_project = Project.from_stashy(repo, labor_hours=compute_labor_hours)\n 
code_gov_metadata['releases'].append(code_gov_project)\n\n # Parse config for TFS repositories\n tfs_instances = config.get('TFS', [])\n for instance in tfs_instances:\n url = instance.get('url')\n token = instance.get('token', None)\n\n projects = tfs.get_projects_metadata(url, token)\n for project in projects:\n code_gov_project = Project.from_tfs(project, labor_hours=compute_labor_hours)\n code_gov_metadata['releases'].append(code_gov_project)\n\n # Handle parsing of DOE CODE records\n\n doecode_config = config.get('DOE CODE', {})\n doecode_json = doecode_config.get('json', None)\n doecode_url = doecode_config.get('url', None)\n doecode_key = doecode_config.get('api_key', None)\n\n for record in doecode.process(doecode_json, doecode_url, doecode_key):\n code_gov_project = Project.from_doecode(record)\n code_gov_metadata['releases'].append(code_gov_project)\n\n return code_gov_metadata", "response": "Process a Scraper config file and return a Code. gov Metadata file"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef force_attributes(metadata, config):\n\n organization = config.get('organization', '')\n logger.debug('Organization: %s', organization)\n\n contact_email = config.get('contact_email')\n logger.debug('Contact Email: %s', contact_email)\n\n permissions = config.get('permissions', {})\n default_usage = permissions.get('usageType', '')\n default_exemption_text = permissions.get('exemptionText', '')\n logger.debug('Default usageType: %s', default_usage)\n logger.debug('Default exemptionText: %s', default_exemption_text)\n\n # Force certain fields\n if organization:\n logger.debug('Forcing Organization to: %s', organization)\n\n if contact_email:\n logger.debug('Forcing Contact Email to: %s', contact_email)\n\n for release in metadata['releases']:\n if organization:\n release['organization'] = organization\n\n if contact_email:\n release['contact']['email'] = contact_email\n\n if 'licenses' not in 
release['permissions']:\n release['permissions']['licenses'] = None\n\n if 'description' not in release:\n release['description'] = 'No description available...'\n\n if 'usageType' not in release['permissions']:\n release['permissions']['usageType'] = default_usage\n release['permissions']['exemptionText'] = default_exemption_text\n\n return metadata", "response": "Force certain fields in the Code.gov Metadata json"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the stats for the given user and organization.", "response": "def get_stats(self, username='', password='', organization='llnl', force=True):\n \"\"\"\n Retrieves the traffic for the users of the given organization.\n Requires organization admin credentials token to access the data.\n \"\"\"\n date = str(datetime.date.today())\n referrers_file_path = ('../github_stats_output/referrers.csv')\n views_file_path = ('../github_stats_output/views.csv')\n clones_file_path = ('../github_stats_output/clones.csv')\n if force or not os.path.isfile(referrers_file_path):\n my_github.login(username, password)\n calls_beginning = self.logged_in_gh.ratelimit_remaining + 1\n print('Rate Limit: ' + str(calls_beginning))\n my_github.get_org(organization)\n my_github.get_traffic()\n views_row_count = my_github.check_data_redundancy(file_path=views_file_path,\n dict_to_check=self.views)\n clones_row_count = my_github.check_data_redundancy(file_path=clones_file_path,\n dict_to_check=self.clones)\n my_github.write_to_file(referrers_file_path=referrers_file_path,\n views_file_path=views_file_path,\n clones_file_path=clones_file_path,\n views_row_count=views_row_count,\n clones_row_count=clones_row_count)\n my_github.write_json(dict_to_write=self.referrers_json,\n path_ending_type='traffic_popular_referrers')\n my_github.write_json(dict_to_write=self.views_json,\n path_ending_type='traffic_views')\n my_github.write_json(dict_to_write=self.clones_json,\n path_ending_type='traffic_clones')\n 
my_github.write_json(dict_to_write=self.releases_json,\n path_ending_type='releases')\n calls_remaining = self.logged_in_gh.ratelimit_remaining\n calls_used = calls_beginning - calls_remaining\n print('Rate Limit Remaining: ' + str(calls_remaining) + '\\nUsed '\n + str(calls_used) + ' API calls.')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_traffic(self):\n print('Getting traffic.')\n #Uses the developer API. Note this could change.\n headers = {'Accept': 'application/vnd.github.spiderman-preview', 'Authorization': 'token ' + self.token}\n headers_release = {'Authorization': 'token ' + self.token}\n for repo in self.org_retrieved.iter_repos(type='public'):\n url = ('https://api.github.com/repos/' + self.organization_name\n + '/' + repo.name)\n self.get_referrers(url=url, headers=headers, repo_name=repo.name)\n self.get_paths(url=url, headers=headers)\n self.get_data(url=url, headers=headers, dict_to_store=self.views,\n type='views', repo_name=repo.name)\n self.get_data(url=url, headers=headers, dict_to_store=self.clones,\n type='clones', repo_name=repo.name)\n self.get_releases(url=url, headers=headers_release, repo_name=repo.name)", "response": "Get the traffic for the repositories of the organization."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_releases(self, url='', headers={}, repo_name=''):\n url_releases = (url + '/releases')\n r = requests.get(url_releases, headers=headers)\n self.releases_json[repo_name] = r.json()", "response": "Retrieves the releases for the given repo in JSON."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_referrers(self, url='', headers={}, repo_name=''):\n #JSON\n url_referrers = (url + '/traffic/popular/referrers')\n r1 = requests.get(url_referrers, headers=headers)\n referrers_json = r1.json()\n self.referrers_json[repo_name] = 
referrers_json\n #CSV\n for referrer in referrers_json:\n ref_name = referrer['referrer']\n try:\n tuple_in = (referrer['count'], referrer['uniques'])#curr vals\n tuple = (self.referrers[ref_name][0] + tuple_in[0],#cal new vals\n self.referrers[ref_name][1] + tuple_in[1])\n self.referrers[ref_name] = tuple#record new vals\n except KeyError:\n tuple = self.referrers[ref_name] = (referrer['count'],\n referrer['uniques'])\n self.referrers_lower[ref_name.lower()] = ref_name", "response": "Retrieves the total referrers and unique referrers of all repos in url and stores them in a dict."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_data(self, url='',headers={}, date=str(datetime.date.today()),\n dict_to_store={}, type='', repo_name=''):\n \"\"\"\n Retrieves data from json and stores it in the supplied dict. Accepts\n 'clones' or 'views' as type.\n \"\"\"\n #JSON\n url = (url + '/traffic/' + type)\n r3 = requests.get(url, headers=headers)\n json = r3.json()\n if type == 'views':\n self.views_json[repo_name] = json\n elif type == 'clones':\n self.clones_json[repo_name] = json\n #CSV\n for day in json[type]:\n timestamp_seconds = day['timestamp']/1000\n try:\n date_timestamp = datetime.datetime.utcfromtimestamp(\n timestamp_seconds).strftime('%Y-%m-%d')\n #do not add todays date, some views might not be recorded yet\n if date_timestamp != date:\n tuple_in = (day['count'], day['uniques'])\n tuple = (dict_to_store[timestamp_seconds][0] + tuple_in[0],\n dict_to_store[timestamp_seconds][1] + tuple_in[1])\n dict_to_store[timestamp_seconds] = tuple\n except KeyError:\n tuple = dict_to_store[timestamp_seconds] = (day['count'],\n day['uniques'])", "response": "Retrieves data from json and stores it in the supplied dict. 
Accepts type = views or clones as type."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef write_json(self, date=(datetime.date.today()),\n organization='llnl',dict_to_write={}, path_ending_type=''):\n \"\"\"\n Writes all traffic data to file in JSON form.\n \"\"\"\n for repo in dict_to_write:\n if len(dict_to_write[repo]) != 0:#don't need to write out empty lists\n path = ('../github-data/' + organization + '/' + repo + '/' +\n path_ending_type + '/' + str(date) + '.json')\n self.checkDir(path)\n with open(path, 'w') as out:\n out.write(json.dumps(dict_to_write[repo], sort_keys=True,\n indent=4, separators=(',', ': ')))\n out.close()", "response": "Writes all traffic data to file in JSON form."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write_to_file(self, referrers_file_path='', views_file_path='',\n clones_file_path='', date=(datetime.date.today()), organization='llnl',\n views_row_count=0, clones_row_count=0):\n \"\"\"\n Writes all traffic data to file.\n \"\"\"\n self.write_referrers_to_file(file_path=referrers_file_path)\n self.write_data_to_file(file_path=views_file_path,\n dict_to_write=self.views, name='views',\n row_count=views_row_count)\n self.write_data_to_file(file_path=clones_file_path,\n dict_to_write=self.clones, name='clones',\n row_count=clones_row_count)", "response": "Writes all traffic data to file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef check_data_redundancy(self, file_path='', dict_to_check={}):\n count = 0\n exists = os.path.isfile(file_path)\n previous_dates = {}\n if exists:\n with open(file_path, 'r') as input:\n input.readline()#skip header line\n for row in csv.reader(input):\n timestamp = calendar.timegm(time.strptime(row[0],\n '%Y-%m-%d'))\n if timestamp in dict_to_check:#our date is already recorded\n del dict_to_check[timestamp]\n #calc current id max\n 
count += 1\n input.close()\n return count", "response": "Checks the given csv file against the json data scraped for the given dict."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwrites given dict to file.", "response": "def write_data_to_file(self, file_path='', date=str(datetime.date.today()),\n organization='llnl',dict_to_write={}, name='', row_count=0):\n \"\"\"\n Writes given dict to file.\n \"\"\"\n exists = os.path.isfile(file_path)\n with open(file_path, 'a') as out:\n if not exists:\n out.write('date,organization,' + name + ',unique_' + name\n + ',id\\n')\n sorted_dict = sorted(dict_to_write)\n for day in sorted_dict:\n day_formatted = datetime.datetime.utcfromtimestamp(\n day ).strftime('%Y-%m-%d')\n out.write(day_formatted + ',' + organization + ','\n + str(dict_to_write[day][0]) + ',' + str(dict_to_write[day][1])\n + ',' + str(row_count) + '\\n')\n row_count += 1"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef write_referrers_to_file(self, file_path='',\n date=str(datetime.date.today()), organization='llnl'):\n \"\"\"\n Writes the referrers data to file.\n \"\"\"\n self.remove_date(file_path=file_path, date=date)\n referrers_exists = os.path.isfile(file_path)\n with open(file_path, 'a') as out:\n if not referrers_exists:\n out.write('date,organization,referrer,count,count_log,uniques,'\n + 'uniques_logged\\n')\n sorted_referrers = sorted(self.referrers_lower)#sort based on lowercase\n for referrer in sorted_referrers:\n ref_name = self.referrers_lower[referrer]#grab real name from\n count = self.referrers[ref_name][0]\n uniques = self.referrers[ref_name][1]\n if count == 1:#so we don't display 0 for count of 1\n count = 1.5\n if uniques == 1:\n uniques = 1.5\n count_logged = math.log(count)\n uniques_logged = math.log(uniques)\n out.write(date + ',' + organization + ','\n + ref_name + ',' + str(count) + ',' + str(count_logged) + ','\n + str(uniques) + 
',' + str(uniques_logged) + '\\n')\n out.close()", "response": "Writes the referrers data to file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting a DOE CODE .json file into DOE CODE projects, yielding DOE CODE records.", "response": "def process_json(filename):\n \"\"\"\n Converts a DOE CODE .json file into DOE CODE projects\n Yields DOE CODE records from a DOE CODE .json file\n \"\"\"\n\n logger.debug('Processing DOE CODE json: %s', filename)\n\n doecode_json = json.load(open(filename))\n\n for record in doecode_json['records']:\n yield record"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nyielding DOE CODE records from a DOE CODE .json URL response", "response": "def process_url(url, key):\n \"\"\"\n Yields DOE CODE records from a DOE CODE .json URL response\n Converts a DOE CODE API .json URL response into DOE CODE projects\n \"\"\"\n\n logger.debug('Fetching DOE CODE JSON: %s', url)\n\n if key is None:\n raise ValueError('DOE CODE API Key value is missing!')\n\n response = requests.get(url, headers={\"Authorization\": \"Basic \" + key})\n doecode_json = response.json()\n\n for record in doecode_json['records']:\n yield record"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nyielding DOE CODE records based on input sources.", "response": "def process(filename=None, url=None, key=None):\n \"\"\"\n Yields DOE CODE records based on provided input sources\n\n param:\n filename (str): Path to a DOE CODE .json file\n url (str): URL for a DOE CODE server json file\n key (str): API Key for connecting to DOE CODE server\n \"\"\"\n\n if filename is not None:\n yield from process_json(filename)\n elif url and key:\n yield from process_url(url, key)"} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef login(self, username='', password=''):\n try:\n\n token = ''\n id = ''\n if not os.path.isfile('CREDENTIALS_FILE'):\n if(username == '' or password == ''):\n username = input('Username: ')\n password = getpass.getpass('Password: ')\n note = 'GitHub Organization Stats App'\n note_url = 'http://software.llnl.gov/'\n scopes = ['user', 'repo']\n auth = github3.authorize(username, password, scopes, note,\n note_url, two_factor_callback=self.prompt_2fa)\n token = auth.token\n id = auth.id\n with open('CREDENTIALS_FILE', 'w+') as fd:\n fd.write(token + '\\n')\n fd.write(str(id))\n fd.close()\n else:\n with open('CREDENTIALS_FILE', 'r') as fd:\n token = fd.readline().strip()\n id = fd.readline().strip()\n fd.close()\n print(\"Logging in.\")\n self.logged_in_gh = github3.login(token=token, two_factor_callback=self.prompt_2fa)\n self.logged_in_gh.user().to_json()\n except (ValueError, AttributeError, github3.models.GitHubError) as e:\n print('Bad credentials. Try again.')\n self.login()", "response": "Logs in and sets the Github object via given credentials."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_mems_of_org(self):\n print('Getting members\\' emails.')\n for member in self.org_retrieved.iter_members():\n login = member.to_json()['login']\n user_email = self.logged_in_gh.user(login).to_json()['email']\n if user_email is not None:\n self.emails[login] = user_email\n else:#user has no public email\n self.emails[login] = 'none'\n #used for sorting regardless of case\n self.logins_lower[login.lower()] = login", "response": "Get the emails of the members of the organization."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nwrite the user emails to file.", "response": "def write_to_file(self, file_path=''):\n \"\"\"\n Writes the user emails to file.\n \"\"\"\n with open(file_path, 'w+') as out:\n out.write('user, email\\n')\n sorted_names = sorted(self.logins_lower)#sort based on lowercase\n for login in sorted_names:\n out.write(self.logins_lower[login] + ','\n + self.emails[self.logins_lower[login]] + '\\n')\n out.close()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef connect(url, username, password):\n\n bb_session = stashy.connect(url, username, password)\n\n logger.info('Connected to: %s as %s', url, username)\n\n return bb_session", "response": "Connect to a Bitbucket server and return a connected Bitbucket session"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_stargazers(url, session=None):\n headers = {'Accept': 'application/vnd.github.v3.star+json'}\n url = url + '?per_page=100&page=%s'\n page = 1\n gazers = []\n\n json_data = github.get(url % page, headers=headers).json()\n\n #{rel: url for url, rel in LINK_REGEX.findall(r.headers['Link'])}\n while json_data:\n gazers.extend(json_data)\n page += 1\n json_data = github.get(url % page, headers=headers).json()\n\n return gazers", "response": "Return a list of the stargazers of a GitHub repo"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nconnecting to a GitLab API.", "response": "def connect(url='https://gitlab.com', token=None):\n \"\"\"\n Return a connected GitLab session\n\n ``token`` should be a ``private_token`` from Gitlab\n \"\"\"\n\n if token is None:\n token = os.environ.get('GITLAB_API_TOKEN', None)\n\n gl_session = gitlab.Gitlab(url, token)\n\n try:\n gl_session.version()\n except (gitlab.exceptions.GitlabAuthenticationError):\n raise RuntimeError('Invalid or missing GITLAB_API_TOKEN')\n\n logger.info('Connected to: %s', url)\n\n return gl_session"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef query_repos(gl_session, repos=None):\n\n if repos is None:\n repos = []\n\n for repo in repos:\n yield gl_session.projects.get(repo)\n\n if not repos:\n for project in gl_session.projects.list(as_list=False):\n yield project", "response": "Yields Gitlab project objects for all projects in repos"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngiven a Git repository URL returns the number of lines of code based on cloc", "response": "def git_repo_to_sloc(url):\n \"\"\"\n Given a Git repository URL, returns number of lines of code based on cloc\n\n Reference:\n - cloc: https://github.com/AlDanial/cloc\n - https://www.omg.org/spec/AFP/\n - Another potential way to calculate effort\n\n Sample cloc output:\n {\n \"header\": {\n \"cloc_url\": \"github.com/AlDanial/cloc\",\n \"cloc_version\": \"1.74\",\n \"elapsed_seconds\": 0.195950984954834,\n \"n_files\": 27,\n \"n_lines\": 2435,\n \"files_per_second\": 137.78956000769,\n \"lines_per_second\": 12426.5769858787\n },\n \"C++\": {\n \"nFiles\": 7,\n \"blank\": 121,\n \"comment\": 
314,\n \"code\": 371\n },\n \"C/C++ Header\": {\n \"nFiles\": 8,\n \"blank\": 107,\n \"comment\": 604,\n \"code\": 191\n },\n \"CMake\": {\n \"nFiles\": 11,\n \"blank\": 49,\n \"comment\": 465,\n \"code\": 165\n },\n \"Markdown\": {\n \"nFiles\": 1,\n \"blank\": 18,\n \"comment\": 0,\n \"code\": 30\n },\n \"SUM\": {\n \"blank\": 295,\n \"comment\": 1383,\n \"code\": 757,\n \"nFiles\": 27\n }\n }\n \"\"\"\n\n with tempfile.TemporaryDirectory() as tmp_dir:\n logger.debug('Cloning: url=%s tmp_dir=%s', url, tmp_dir)\n\n tmp_clone = os.path.join(tmp_dir, 'clone-dir')\n\n cmd = ['git', 'clone', '--depth=1', url, tmp_clone]\n execute(cmd)\n\n cmd = ['cloc', '--json', tmp_clone]\n out, _ = execute(cmd)\n\n try:\n json_start = out.find('{\"header\"')\n json_blob = out[json_start:].replace('\\\\n', '').replace('\\'', '')\n cloc_json = json.loads(json_blob)\n sloc = cloc_json['SUM']['code']\n except json.decoder.JSONDecodeError:\n logger.debug('Error Decoding: url=%s, out=%s', url, out)\n sloc = 0\n\n logger.debug('SLOC: url=%s, sloc=%d', url, sloc)\n\n return sloc"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncomputing the labor hours given a count of source lines of code.", "response": "def compute_labor_hours(sloc, month_hours='cocomo_book'):\n \"\"\"\n Compute the labor hours, given a count of source lines of code\n\n The intention is to use the COCOMO II model to compute this value.\n\n References:\n - http://csse.usc.edu/tools/cocomoii.php\n - http://docs.python-guide.org/en/latest/scenarios/scrape/\n \"\"\"\n # Calculation of hours in a month\n if month_hours == 'hours_per_year':\n # Use number of working hours in a year:\n # (40 Hours / week) * (52 weeks / year) / (12 months / year) ~= 173.33\n HOURS_PER_PERSON_MONTH = 40.0 * 52 / 12\n else:\n # Use value from COCOMO II Book (month_hours=='cocomo_book'):\n # Reference: https://dl.acm.org/citation.cfm?id=557000\n # This is the value used by the Code.gov team:\n # 
https://github.com/GSA/code-gov/blob/master/LABOR_HOUR_CALC.md\n HOURS_PER_PERSON_MONTH = 152.0\n\n cocomo_url = 'http://csse.usc.edu/tools/cocomoii.php'\n page = requests.post(cocomo_url, data={'new_size': sloc})\n\n try:\n person_months = float(EFFORT_REGEX.search(page.text).group(1))\n except AttributeError:\n logger.error('Unable to find Person Months in page text: sloc=%s', sloc)\n # If there is no match, .search(..) returns None\n person_months = 0\n\n labor_hours = person_months * HOURS_PER_PERSON_MONTH\n logger.debug('sloc=%d labor_hours=%d', sloc, labor_hours)\n\n return labor_hours"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _prune_dict_null_str(dictionary):\n for key, value in list(dictionary.items()):\n if value is None or str(value) == '':\n del dictionary[key]\n\n if isinstance(value, dict):\n dictionary[key] = _prune_dict_null_str(dictionary[key])\n\n return dictionary", "response": "Remove None or empty string values from dictionary items\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _readGQL(self, filePath, verbose=False):\n if not os.path.isfile(filePath):\n raise RuntimeError(\"Query file '%s' does not exist.\" % (filePath))\n lastModified = os.path.getmtime(filePath)\n absPath = os.path.abspath(filePath)\n if absPath == self.__queryPath and lastModified == self.__queryTimestamp:\n _vPrint(verbose, \"Using cached query '%s'\" % (os.path.basename(self.__queryPath)))\n query_in = self.__query\n else:\n _vPrint(verbose, \"Reading '%s' ... 
\" % (filePath), end=\"\", flush=True)\n with open(filePath, \"r\") as q:\n # Strip all comments and newlines.\n query_in = re.sub(r'#.*(\\n|\\Z)', '\\n', q.read())\n # Condense extra whitespace.\n query_in = re.sub(r'\\s+', ' ', query_in)\n # Remove any leading or trailing whitespace.\n query_in = re.sub(r'(\\A\\s+)|(\\s+\\Z)', '', query_in)\n _vPrint(verbose, \"File read!\")\n self.__queryPath = absPath\n self.__queryTimestamp = lastModified\n self.__query = query_in\n return query_in", "response": "Read a pretty formatted GraphQL query file into a one - line string."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsubmitting a GitHub GraphQL query from a file.", "response": "def queryGitHubFromFile(self, filePath, gitvars={}, verbosity=0, **kwargs):\n \"\"\"Submit a GitHub GraphQL query from a file.\n\n Can only be used with GraphQL queries.\n For REST queries, see the 'queryGitHub' method.\n\n Args:\n filePath (str): A relative or absolute path to a file containing\n a GraphQL query.\n File may use comments and multi-line formatting.\n .. 
_GitHub GraphQL Explorer:\n https://developer.github.com/v4/explorer/\n gitvars (Optional[Dict]): All query variables.\n Defaults to empty.\n GraphQL Only.\n verbosity (Optional[int]): Changes output verbosity levels.\n If < 0, all extra printouts are suppressed.\n If == 0, normal print statements are displayed.\n If > 0, additional status print statements are displayed.\n Defaults to 0.\n **kwargs: Keyword arguments for the 'queryGitHub' method.\n\n Returns:\n Dict: A JSON style dictionary.\n\n \"\"\"\n gitquery = self._readGQL(filePath, verbose=(verbosity >= 0))\n return self.queryGitHub(gitquery, gitvars=gitvars, verbosity=verbosity, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef queryGitHub(self, gitquery, gitvars={}, verbosity=0, paginate=False, cursorVar=None, keysToList=[], rest=False, requestCount=0, pageNum=0):\n requestCount += 1\n if pageNum < 0: # no negative page numbers\n pageNum = 0\n pageNum += 1\n\n if paginate:\n _vPrint((verbosity >= 0), \"Page %d\" % (pageNum))\n _vPrint((verbosity >= 0), \"Sending %s query...\" % (\"REST\" if rest else \"GraphQL\"))\n response = self._submitQuery(gitquery, gitvars=gitvars, verbose=(verbosity > 0), rest=rest)\n _vPrint((verbosity >= 0), \"Checking response...\")\n _vPrint((verbosity >= 0), response[\"headDict\"][\"http\"])\n statusNum = response[\"statusNum\"]\n\n # Decrement page count before error checks to properly reflect any repeated queries\n pageNum -= 1\n\n # Make sure the query limit didn't run out\n try:\n apiStatus = {\n \"limit\": int(response[\"headDict\"][\"X-RateLimit-Limit\"]),\n \"remaining\": int(response[\"headDict\"][\"X-RateLimit-Remaining\"]),\n \"reset\": int(response[\"headDict\"][\"X-RateLimit-Reset\"])\n }\n _vPrint((verbosity >= 0), \"API Status %s\" % (json.dumps(apiStatus)))\n if not apiStatus[\"remaining\"] > 0:\n _vPrint((verbosity >= 0), \"API usage limit reached during query.\")\n 
self._awaitReset(apiStatus[\"reset\"])\n _vPrint((verbosity >= 0), \"Repeating query...\")\n return self.queryGitHub(gitquery, gitvars=gitvars, verbosity=verbosity, paginate=paginate, cursorVar=cursorVar, keysToList=keysToList, rest=rest, requestCount=(requestCount - 1), pageNum=pageNum)\n except KeyError:\n # Handles error cases that don't return X-RateLimit data\n _vPrint((verbosity >= 0), \"Failed to check API Status.\")\n\n # Check for accepted but not yet processed, usually due to un-cached data\n if statusNum == 202:\n if requestCount >= self.maxRetry:\n raise RuntimeError(\"Query attempted but failed %d times.\\n%s\\n%s\" % (self.maxRetry, response[\"headDict\"][\"http\"], response[\"result\"]))\n else:\n self._countdown(self.__retryDelay, printString=\"Query accepted but not yet processed. Trying again in %*dsec...\", verbose=(verbosity >= 0))\n return self.queryGitHub(gitquery, gitvars=gitvars, verbosity=verbosity, paginate=paginate, cursorVar=cursorVar, keysToList=keysToList, rest=rest, requestCount=requestCount, pageNum=pageNum)\n # Check for server error responses\n if statusNum == 502 or statusNum == 503:\n if requestCount >= self.maxRetry:\n raise RuntimeError(\"Query attempted but failed %d times.\\n%s\\n%s\" % (self.maxRetry, response[\"headDict\"][\"http\"], response[\"result\"]))\n else:\n self._countdown(self.__retryDelay, printString=\"Server error. Trying again in %*dsec...\", verbose=(verbosity >= 0))\n return self.queryGitHub(gitquery, gitvars=gitvars, verbosity=verbosity, paginate=paginate, cursorVar=cursorVar, keysToList=keysToList, rest=rest, requestCount=requestCount, pageNum=pageNum)\n # Check for other error responses\n if statusNum >= 400 or statusNum == 204:\n raise RuntimeError(\"Request got an Error response.\\n%s\\n%s\" % (response[\"headDict\"][\"http\"], response[\"result\"]))\n\n _vPrint((verbosity >= 0), \"Data received!\")\n outObj = json.loads(response[\"result\"])\n\n # Check for GraphQL API errors (e.g. 
repo not found)\n if not rest and \"errors\" in outObj:\n if requestCount >= self.maxRetry:\n raise RuntimeError(\"Query attempted but failed %d times.\\n%s\\n%s\" % (self.maxRetry, response[\"headDict\"][\"http\"], response[\"result\"]))\n elif len(outObj[\"errors\"]) == 1 and len(outObj[\"errors\"][0]) == 1:\n # Poorly defined error type, usually intermittent, try again.\n _vPrint((verbosity >= 0), \"GraphQL API error.\\n%s\" % (json.dumps(outObj[\"errors\"])))\n self._countdown(self.__retryDelay, printString=\"Unknown API error. Trying again in %*dsec...\", verbose=(verbosity >= 0))\n return self.queryGitHub(gitquery, gitvars=gitvars, verbosity=verbosity, paginate=paginate, cursorVar=cursorVar, keysToList=keysToList, rest=rest, requestCount=requestCount, pageNum=pageNum)\n else:\n raise RuntimeError(\"GraphQL API error.\\n%s\" % (json.dumps(outObj[\"errors\"])))\n\n # Re-increment page count before the next page query\n pageNum += 1\n\n # Pagination\n if paginate:\n if rest and response[\"linkDict\"]:\n if \"next\" in response[\"linkDict\"]:\n nextObj = self.queryGitHub(response[\"linkDict\"][\"next\"], gitvars=gitvars, verbosity=verbosity, paginate=paginate, cursorVar=cursorVar, keysToList=keysToList, rest=rest, requestCount=0, pageNum=pageNum)\n outObj.extend(nextObj)\n elif not rest:\n if not cursorVar:\n raise ValueError(\"Must specify argument 'cursorVar' to use GraphQL auto-pagination.\")\n if not len(keysToList) > 0:\n raise ValueError(\"Must specify argument 'keysToList' as a non-empty list to use GraphQL auto-pagination.\")\n aPage = outObj\n for key in keysToList[0:-1]:\n aPage = aPage[key]\n gitvars[cursorVar] = aPage[\"pageInfo\"][\"endCursor\"]\n if aPage[\"pageInfo\"][\"hasNextPage\"]:\n nextObj = self.queryGitHub(gitquery, gitvars=gitvars, verbosity=verbosity, paginate=paginate, cursorVar=cursorVar, keysToList=keysToList, rest=rest, requestCount=0, pageNum=pageNum)\n newPage = nextObj\n for key in keysToList[0:-1]:\n newPage = newPage[key]\n 
aPage[keysToList[-1]].extend(newPage[keysToList[-1]])\n aPage.pop(\"pageInfo\", None)\n\n return outObj", "response": "Submit a GitHub GraphQL or REST query, handling rate limits, retries, and pagination."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _submitQuery(self, gitquery, gitvars={}, verbose=False, rest=False):\n errOut = DEVNULL if not verbose else None\n authhead = 'Authorization: bearer ' + self.__githubApiToken\n\n bashcurl = 'curl -iH TMPauthhead -X POST -d TMPgitquery https://api.github.com/graphql' if not rest \\\n else 'curl -iH TMPauthhead https://api.github.com' + gitquery\n bashcurl_list = bashcurl.split()\n bashcurl_list[2] = authhead\n if not rest:\n gitqueryJSON = json.dumps({'query': gitquery, 'variables': json.dumps(gitvars)})\n bashcurl_list[6] = gitqueryJSON\n\n fullResponse = check_output(bashcurl_list, stderr=errOut).decode()\n _vPrint(verbose, \"\\n\" + fullResponse)\n fullResponse = fullResponse.split('\\r\\n\\r\\n')\n heads = fullResponse[0].split('\\r\\n')\n if len(fullResponse) > 1:\n result = fullResponse[1]\n else:\n result = \"\"\n http = heads[0].split()\n statusNum = int(http[1])\n\n # Parse headers into a useful dictionary\n headDict = {}\n headDict[\"http\"] = heads[0]\n for header in heads[1:]:\n h = header.split(': ')\n headDict[h[0]] = h[1]\n\n # Parse any Link headers even further\n linkDict = None\n if \"Link\" in headDict:\n linkProperties = headDict[\"Link\"].split(', ')\n propDict = {}\n for item in linkProperties:\n divided = re.split(r'; rel=\"|\"', item)\n propDict[divided[2]] = divided[1]\n linkDict = propDict\n\n return {'statusNum': statusNum, 'headDict': headDict, 'linkDict': linkDict, 'result': result}", "response": "Send a curl request to GitHub."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwaits until the given UTC timestamp is reached.", "response": "def _awaitReset(self, utcTimeStamp, verbose=True):\n \"\"\"Wait until the given UTC 
timestamp.\n\n Args:\n utcTimeStamp (int): A UTC format timestamp.\n verbose (Optional[bool]): If False, all extra printouts will be\n suppressed. Defaults to True.\n\n \"\"\"\n resetTime = pytz.utc.localize(datetime.utcfromtimestamp(utcTimeStamp))\n _vPrint(verbose, \"--- Current Timestamp\")\n _vPrint(verbose, \" %s\" % (time.strftime('%c')))\n now = pytz.utc.localize(datetime.utcnow())\n waitTime = round((resetTime - now).total_seconds()) + 1\n _vPrint(verbose, \"--- Current UTC Timestamp\")\n _vPrint(verbose, \" %s\" % (now.strftime('%c')))\n _vPrint(verbose, \"--- GITHUB NEEDS A BREAK Until UTC Timestamp\")\n _vPrint(verbose, \" %s\" % (resetTime.strftime('%c')))\n self._countdown(waitTime, printString=\"--- Waiting %*d seconds...\", verbose=verbose)\n _vPrint(verbose, \"--- READY!\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _countdown(self, waitTime=0, printString=\"Waiting %*d seconds...\", verbose=True):\n if waitTime <= 0:\n waitTime = self.__retryDelay\n for remaining in range(waitTime, 0, -1):\n _vPrint(verbose, \"\\r\" + printString % (len(str(waitTime)), remaining), end=\"\", flush=True)\n time.sleep(1)\n if verbose:\n _vPrint(verbose, \"\\r\" + printString % (len(str(waitTime)), 0))", "response": "Makes a pretty countdown."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nloads a JSON data file into the internal data dictionary.", "response": "def fileLoad(self, filePath=None, updatePath=True):\n \"\"\"Load a JSON data file into the internal JSON data dictionary.\n\n Current internal data will be overwritten.\n If no file path is provided, the stored data file path will be used.\n\n Args:\n filePath (Optional[str]): A relative or absolute path to a\n '.json' file. Defaults to None.\n updatePath (Optional[bool]): Specifies whether or not to update\n the stored data file path. 
Defaults to True.\n\n \"\"\"\n if not filePath:\n filePath = self.filePath\n if not os.path.isfile(filePath):\n raise FileNotFoundError(\"Data file '%s' does not exist.\" % (filePath))\n else:\n print(\"Importing existing data file '%s' ... \" % (filePath), end=\"\", flush=True)\n with open(filePath, \"r\") as q:\n data_raw = q.read()\n print(\"Imported!\")\n self.data = json.loads(data_raw)\n if updatePath:\n self.filePath = filePath"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fileSave(self, filePath=None, updatePath=False):\n if not filePath:\n filePath = self.filePath\n if not os.path.isfile(filePath):\n print(\"Data file '%s' does not exist, will create new file.\" % (filePath))\n if not os.path.exists(os.path.split(filePath)[0]):\n os.makedirs(os.path.split(filePath)[0])\n dataJsonString = json.dumps(self.data, indent=4, sort_keys=True)\n print(\"Writing to file '%s' ... \" % (filePath), end=\"\", flush=True)\n with open(filePath, \"w\") as fileout:\n fileout.write(dataJsonString)\n print(\"Wrote file!\")\n if updatePath:\n self.filePath = filePath", "response": "Writes the internal JSON data dictionary to a file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_tfs_connection(url, token):\n if token is None:\n token = os.environ.get('TFS_API_TOKEN', None)\n\n tfs_credentials = BasicAuthentication('', token)\n tfs_connection = VssConnection(base_url=url, creds=tfs_credentials)\n return tfs_connection", "response": "Creates the TFS Connection Context"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef create_tfs_project_analysis_client(url, token=None):\n if token is None:\n token = os.environ.get('TFS_API_TOKEN', None)\n\n tfs_connection = create_tfs_connection(url, token)\n project_analysis_client = 
tfs_connection.get_client('vsts.project_analysis.v4_1.project_analysis_client.ProjectAnalysisClient')\n\n if project_analysis_client is None:\n msg = 'Unable to connect to TFS Enterprise (%s) with provided token.'\n raise RuntimeError(msg, url)\n\n return project_analysis_client", "response": "Create a project analysis client."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_tfs_core_client(url, token=None):\n if token is None:\n token = os.environ.get('TFS_API_TOKEN', None)\n\n tfs_connection = create_tfs_connection(url, token)\n tfs_client = tfs_connection.get_client('vsts.core.v4_1.core_client.CoreClient')\n\n if tfs_client is None:\n msg = 'Unable to connect to TFS Enterprise (%s) with provided token.'\n raise RuntimeError(msg, url)\n\n return tfs_client", "response": "Create a core_client.py client for a Team Foundation Server Enterprise connection instance."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_tfs_git_client(url, token=None):\n if token is None:\n token = os.environ.get('TFS_API_TOKEN', None)\n\n tfs_connection = create_tfs_connection(url, token)\n tfs_git_client = tfs_connection.get_client('vsts.git.v4_1.git_client.GitClient')\n\n if tfs_git_client is None:\n msg = 'Unable to create TFS Git Client, failed to connect to TFS Enterprise (%s) with provided token.'\n raise RuntimeError(msg, url)\n\n return tfs_git_client", "response": "Creates a TFS Git Client to pull Git repo info from TFS Enterprise"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a TFS TFVC Client to pull TFVC repo info", "response": "def create_tfs_tfvc_client(url, token=None):\n \"\"\"\n Creates a TFS TFVC Client to pull TFVC repo info\n \"\"\"\n if token is None:\n token = os.environ.get('TFS_API_TOKEN', None)\n\n tfs_connection = create_tfs_connection(url, token)\n tfs_tfvc_client = 
tfs_connection.get_client('vsts.tfvc.v4_1.tfvc_client.TfvcClient')\n\n if tfs_tfvc_client is None:\n msg = 'Unable to create TFS TFVC Client, failed to connect to TFS Enterprise (%s) with provided token.'\n raise RuntimeError(msg, url)\n\n return tfs_tfvc_client"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_all_projects(url, token, top=HARD_CODED_TOP):\n project_list = []\n tfs_client = create_tfs_core_client(url, token)\n\n collections = tfs_client.get_project_collections(top=top)\n\n for collection in collections:\n collection_client = create_tfs_core_client('{url}/{collection_name}'.format(url=url, collection_name=collection.name), token)\n\n logger.debug('Retrieving Projects for Project Collection: {collection_name}'.format(collection_name=collection.name))\n # Retrieves all projects in the project collection\n projects = collection_client.get_projects(top=HARD_CODED_TOP)\n # get_projects only gets the project references, have to call get_project_history_entries to get last update info for projects\n # Only calling this once per collection as it's an expensive API call, will refactor later if there is a better API call to use\n collection_history_list = collection_client.get_project_history_entries()\n for project in projects:\n\n # get_projects only gets team project ref objects,\n # have to call get_project to get the team project object which includes the TFS Web Url for the project\n logger.debug('Retrieving Team Project for Project: {project_name}'.format(project_name=project.name))\n projectInfo = collection_client.get_project(project.id, True, True)\n\n tfsProject = TFSProject(projectInfo, collection)\n\n logger.debug('Retrieving Last Updated and Created Info for Project: {project_name}'.format(project_name=project.name))\n tfsProject.projectLastUpdateInfo = get_project_last_update_time(collection_history_list, project.id)\n tfsProject.projectCreateInfo = 
get_project_create_time(collection_history_list, project.id)\n project_list.append(tfsProject)\n\n return project_list", "response": "Returns a list of all projects with their collection info from the server. Currently limited functionality to only return the first 1000 projects."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_git_repos(url, token, collection, project):\n git_client = create_tfs_git_client('{url}/{collection_name}'.format(url=url, collection_name=collection.name), token)\n logger.debug('Retrieving Git Repos for Project: {project_name}'.format(project_name=project.name))\n return git_client.get_repositories(project.id)", "response": "Returns a list of all git repos for the supplied project within the supplied collection"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_tfvc_repos(url, token, collection, project):\n branch_list = []\n tfvc_client = create_tfs_tfvc_client('{url}/{collection_name}'.format(url=url, collection_name=collection.name), token)\n\n logger.debug('Retrieving Tfvc Branches for Project: {project_name}'.format(project_name=project.name))\n branches = tfvc_client.get_branches(project.id, True, True, False, True)\n if branches:\n branch_list.extend(branches)\n else:\n logger.debug('No Tfvc Branches in Project: {project_name}'.format(project_name=project.name))\n\n return branch_list", "response": "Returns a list of all Tfvc Branches for the supplied project within the supplied collection"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_year_commits(self, username='', password='', organization='llnl', force=True):\n date = str(datetime.date.today())\n file_path = ('year_commits.csv')\n if force or not os.path.isfile(file_path):\n my_github.login(username, password)\n calls_beginning = self.logged_in_gh.ratelimit_remaining + 1\n print('Rate 
Limit: ' + str(calls_beginning))\n my_github.get_org(organization)\n my_github.repos(building_stats=True)\n print(\"Letting GitHub build statistics.\")\n time.sleep(30)\n print(\"Trying again.\")\n my_github.repos(building_stats=False)\n my_github.calc_total_commits(starting_commits=35163)\n my_github.write_to_file()\n calls_remaining = self.logged_in_gh.ratelimit_remaining\n calls_used = calls_beginning - calls_remaining\n print('Rate Limit Remaining: ' + str(calls_remaining) + '\\nUsed '\n + str(calls_used) + ' API calls.')", "response": "Gets the last year of commits and prints them to file."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef repos(self, building_stats=False):\n print('Getting repos.')\n for repo in self.org_retrieved.iter_repos():\n for activity in repo.iter_commit_activity():\n if not building_stats:\n self.commits_dict_list.append(activity)", "response": "Get the last year of commits for the organization and store them in the commits_dict_list."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncalculating the cumulative number of commits for each week of the last year.", "response": "def calc_total_commits(self, starting_commits=0):\n \"\"\"\n Uses the weekly commits and traverses back through the last\n year, each week subtracting the weekly commits and storing them. 
It\n needs an initial starting commits number, which should be taken from\n the most up-to-date number from github_stats.py output.\n \"\"\"\n for week_of_commits in self.commits_dict_list:\n try:\n self.commits[week_of_commits['week']] -= week_of_commits['total']\n except KeyError:\n self.commits[week_of_commits['week']] = -week_of_commits['total']\n self.sorted_weeks = sorted(self.commits)\n\n # reverse because lower numbered weeks are older in time.\n # we traverse from most recent to oldest\n for week in reversed(self.sorted_weeks):\n self.commits[week] = self.commits[week] + starting_commits\n starting_commits = self.commits[week]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nwrites the weeks with associated commits to file.", "response": "def write_to_file(self):\n \"\"\"\n Writes the weeks with associated commits to file.\n \"\"\"\n with open('../github_stats_output/last_year_commits.csv', 'w+') as output:\n output.write('date,organization,repos,members,teams,'\n + 'unique_contributors,total_contributors,forks,'\n + 'stargazers,pull_requests,open_issues,has_readme,'\n + 'has_license,pull_requests_open,pull_requests_closed,'\n + 'commits\\n')\n # no reverse this time to print oldest first\n previous_commits = 0\n for week in self.sorted_weeks:\n if str(self.commits[week]) != previous_commits:  # delete dups\n week_formatted = datetime.datetime.utcfromtimestamp(\n week).strftime('%Y-%m-%d')\n output.write(week_formatted\n + ',llnl,0,0,0,0,0,0,0,0,0,0,0,0,0,'\n + str(self.commits[week]) + '\\n')\n previous_commits = str(self.commits[week])"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_metrics(thing, extra=''):\n thing = thing or ''\n\n if not isinstance(thing, str):\n # If it's not a str, it's either a class or an instance. 
Handle\n # accordingly.\n if type(thing) == type:\n thing = '%s.%s' % (thing.__module__, thing.__name__)\n else:\n thing = '%s.%s' % (\n thing.__class__.__module__, thing.__class__.__name__\n )\n\n if extra:\n thing = '%s.%s' % (thing, extra)\n\n return MetricsInterface(thing)", "response": "Return a MetricsInterface instance with specified name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef timing(self, stat, value, tags=None):\n full_stat = self._full_stat(stat)\n for backend in _get_metrics_backends():\n backend.timing(full_stat, value=value, tags=tags)", "response": "Record a timing value for a statistical distribution."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmeasures a timing for statistical distribution.", "response": "def timing(self, stat, value, tags=None):\n \"\"\"Measure a timing for statistical distribution.\"\"\"\n self.client.timing(stat=stat, delta=value)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmeasuring a timing for statistical distribution.", "response": "def timing(self, stat, value, tags=None):\n \"\"\"Measure a timing for statistical distribution.\"\"\"\n self.client.timing(metric=stat, value=value, tags=tags)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef histogram(self, stat, value, tags=None):\n self.client.histogram(metric=stat, value=value, tags=tags)", "response": "Measure a value for statistical distribution."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate a tag for use with the tag backends.", "response": "def generate_tag(key, value=None):\n \"\"\"Generate a tag for use with the tag backends.\n\n The key and value (if there is one) are sanitized according to the\n following rules:\n\n 1. 
after the first character, all characters must be alphanumeric,\n underscore, minus, period, or slash--invalid characters are converted\n to \"_\"\n 2. lowercase\n\n If a value is provided, the final tag is `key:value`.\n\n The final tag must start with a letter. If it doesn't, an \"a\" is prepended.\n\n The final tag is truncated to 200 characters.\n\n If the final tag is \"device\", \"host\", or \"source\", then a \"_\" will be\n appended to the end.\n\n :arg str key: the key to use\n :arg str value: the value (if any)\n\n :returns: the final tag\n\n Examples:\n\n >>> generate_tag('yellow')\n 'yellow'\n >>> generate_tag('rule', 'is_yellow')\n 'rule:is_yellow'\n\n Example with ``incr``:\n\n >>> import markus\n >>> mymetrics = markus.get_metrics(__name__)\n\n >>> mymetrics.incr('somekey', value=1,\n ... tags=[generate_tag('rule', 'is_yellow')])\n\n \"\"\"\n # Verify the types\n if not isinstance(key, six.string_types):\n raise ValueError('key must be a string type, but got %r instead' % key)\n\n if not isinstance(value, six.string_types + (NONE_TYPE,)):\n raise ValueError('value must be None or a string type, but got %r instead' % value)\n\n # Sanitize the key\n key = BAD_TAG_CHAR_REGEXP.sub('_', key).strip()\n\n # Build the tag\n if value is None or not value.strip():\n tag = key\n else:\n value = BAD_TAG_CHAR_REGEXP.sub('_', value).strip()\n tag = '%s:%s' % (key, value)\n\n if tag and not tag[0].isalpha():\n tag = 'a' + tag\n\n # Lowercase and truncate\n tag = tag.lower()[:200]\n\n # Add _ if it's a reserved word\n if tag in ['device', 'host', 'source']:\n tag = tag + '_'\n\n return tag"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nrolls up stats and logs them.", "response": "def rollup(self):\n \"\"\"Roll up stats and log them.\"\"\"\n now = time.time()\n if now < self.next_rollup:\n return\n\n self.next_rollup = now + self.flush_interval\n\n for key, values in sorted(self.incr_stats.items()):\n self.logger.info(\n '%s 
INCR %s: count:%d|rate:%d/%d',\n self.leader,\n key,\n len(values),\n sum(values),\n self.flush_interval\n )\n self.incr_stats[key] = []\n\n for key, values in sorted(self.gauge_stats.items()):\n if values:\n self.logger.info(\n '%s GAUGE %s: count:%d|current:%s|min:%s|max:%s',\n self.leader,\n key,\n len(values),\n values[-1],\n min(values),\n max(values),\n )\n else:\n self.logger.info('%s (gauge) %s: no data', self.leader, key)\n\n self.gauge_stats[key] = []\n\n for key, values in sorted(self.histogram_stats.items()):\n if values:\n self.logger.info(\n (\n '%s HISTOGRAM %s: '\n 'count:%d|min:%.2f|avg:%.2f|median:%.2f|ninety-five:%.2f|max:%.2f'\n ),\n self.leader,\n key,\n len(values),\n min(values),\n statistics.mean(values),\n statistics.median(values),\n values[int(len(values) * 95 / 100)],\n max(values)\n )\n else:\n self.logger.info('%s (histogram) %s: no data', self.leader, key)\n\n self.histogram_stats[key] = []"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef timing(self, stat, value, tags=None):\n self.histogram(stat, value, tags)", "response": "Measure a timing for statistical distribution."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmeasuring a value for statistical distribution.", "response": "def histogram(self, stat, value, tags=None):\n \"\"\"Measure a value for statistical distribution.\"\"\"\n self.rollup()\n\n # FIXME(willkg): what to do with tags?\n self.histogram_stats.setdefault(stat, []).append(value)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef order_enum(field, members):\n members = list(members)\n\n return Case(\n *(When(**{field: member, 'then': i})\n for i, member in enumerate(members)),\n default=len(members),\n output_field=IntegerField())", "response": "Return an annotation value that can be used to sort an EnumChoiceField."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of 
the following Python 3 code\ndef from_db_value(self, value, expression, connection, context):\n if value is None:\n return value\n return self.enum[value]", "response": "Convert a string from the database into an Enum"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert a string from a form into an Enum value.", "response": "def to_python(self, value):\n \"\"\"\n Convert a string from a form into an Enum value.\n \"\"\"\n if value is None:\n return value\n if isinstance(value, self.enum):\n return value\n return self.enum[value]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nconverting an Enum value into a string for the database", "response": "def get_prep_value(self, value):\n \"\"\"\n Convert an Enum value into a string for the database\n \"\"\"\n if value is None:\n return None\n if isinstance(value, self.enum):\n return value.name\n raise ValueError(\"Unknown value {value!r} of type {cls}\".format(\n value=value, cls=type(value)))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef count_relations(w0):\n\n root_w0_relations = set(chain.from_iterable(relations[t.index, :].indices for t in w0.root))\n flexing_w0_relations = set(chain.from_iterable(relations[t.index, :].indices for t in w0.flexing))\n\n def f(w1):\n root_w1 = set(t.index for t in w1.root)\n flexing_w1 = set(t.index for t in w1.flexing)\n\n count = [root_w0_relations.intersection(root_w1),\n flexing_w0_relations.intersection(flexing_w1),\n root_w0_relations.intersection(flexing_w1) | flexing_w0_relations.intersection(root_w1)]\n\n if any(count):\n return max((1,2,3), key=lambda i: len(count[i - 1]))\n else:\n return 0\n return f", "response": "count_relations returns a function that counts the number of relations between two root and flexing terms"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef t_parse(self, s):\n # self.root 
= None\n # self.path = s\n with self.lock:\n try:\n return self.parser.parse(s, lexer=self.lexer, debug=False)\n except CannotParse as e:\n e.s = s\n raise e", "response": "Parses the input string and returns a reference to the created AST's root"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the context path of the p.", "response": "def p_ctx_path(self, p):\n \"\"\" ctx_path : ctx_coords\"\"\"\n if len(p[1]) == 1:\n p[0] = p[1][0]\n else:\n p[0] = ContextPath(p[1])"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the context coordinates of the multiplicative path.", "response": "def p_ctx_coords(self, p):\n \"\"\" ctx_coords : multiplicative_path\n | ctx_coords COLON multiplicative_path\"\"\"\n if len(p) == 2:\n p[0] = [p[1]]\n else:\n p[0] = p[1] + [p[3]]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncollects additive paths and coordinates into a product list.", "response": "def p_product(self, p):\n \"\"\" product : additive_path_p\n | coordinate\n | product additive_path_p\n | product coordinate\"\"\"\n if len(p) == 2:\n p[0] = [p[1]]\n else:\n p[0] = p[1] + [p[2]]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef p_coordinate(self, p):\n\n if len(p) == 2:\n p[0] = Coordinate(p[1])\n else:\n p[0] = Coordinate(p[1], int(p[2]))", "response": "coordinate is a tuple of 2-element tuples where the first element is the name of the target and the second is the index of the target."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ndisplays the terms added and removed between two versions", "response": "def diff(version0, version1):\n \"\"\"\n Display the terms added and removed between two versions\n :param version0:\n :param version1:\n :return:\n \"\"\"\n version0.load()\n version1.load()\n\n deleted = set(version0.terms) - set(version1.terms)\n added = 
set(version1.terms) - set(version0.terms)\n\n print(\"====\\n\\tfrom: {0}\".format(str(version0)))\n print(\"\\n\".join((\"-{0} -- {1}\".format(str(d), version0.translations['en'][d]) for d in deleted)))\n\n print(\"====\\n\\tto: {0}\".format(str(version1)))\n print(\"\\n\".join((\"+{0} -- {1}\".format(str(d), version1.translations['en'][d]) for d in added)))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load(self):\n if self.loaded:\n return\n\n file_name = \"%s.json\" % str(self)\n file = os.path.join(VERSIONS_FOLDER, file_name)\n\n if not os.path.isfile(file):\n DICTIONARY_BUCKET_URL = get_configuration().get('VERSIONS', 'versionsurl')\n url = urllib.parse.urljoin(DICTIONARY_BUCKET_URL, file_name)\n logger.log(logging.INFO, \"Downloading dictionary %s at %s\" % (file_name, url))\n\n urlretrieve(url, file)\n\n with open(file, 'r') as fp:\n self.__setstate__(json.load(fp))", "response": "Download the dictionary version and cache the retrieved file."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _rotate_sc_additive(s):\n\n if isinstance(s, AdditiveScript):\n return AdditiveScript([_rotate_sc(_s) for _s in s])\n else:\n return _rotate_sc(s)", "response": "rotate the Sc additive script"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npromoting and split a sequence of tokens into a sequence of tokens.", "response": "def _promote_and_split(s):\n \"\"\"\n E:F:.O:M:.t.- => E:.-F:.O:M:.-t.-\u2018\n E:F:.M:M:.l.- => E:.-F:.M:M:.-l.-\u2018\n \"\"\"\n subst, attr, mode = s\n subst0, subst1, _mode = subst\n assert isinstance(_mode, NullScript)\n\n return m(m(m(subst0)) ,m(m(subst1), attr) ,m(mode))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _transfer_substance(s):\n subst, attr, mode = s\n attr0, attr1, attr2 = attr\n assert isinstance(attr1, NullScript) 
and isinstance(attr2, NullScript)\n subst, subst1, subst2 = subst\n assert isinstance(subst1, NullScript) and isinstance(subst2, NullScript)\n\n subst0, subst1, subst2 = subst\n assert isinstance(subst2, NullScript)\n return m(m(m(subst0)), m(m(subst1), attr0), mode)", "response": "Transfer a substance into a single substance."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd a mode t to the sequence.", "response": "def _add_mode_t(s):\n \"\"\"\nO:O:.O:O:.- => O:O:.O:O:.t.-\n \"\"\"\n subst, attr, mode = s\n assert isinstance(mode, NullScript)\n return m(subst, attr, script('t.'))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _insert_f_additive(s):\n subst, attr, mode = s\n assert isinstance(mode, NullScript)\n\n if isinstance(subst, AdditiveScript):\n subst = AdditiveScript([_insert_attr_f(_s) for _s in subst])\n else:\n subst = _insert_attr_f(subst)\n\n return m(subst ,attr)", "response": "Insert a new record into the tree."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ntranslating the root paradigms in key in argument with the function in value", "response": "def translate_script(to_translate):\n \"\"\"\n translate the root paradigms in key in argument, with the function in value\n :param to_translate:\n :return:\n \"\"\"\n version = DictionaryVersion(latest_dictionary_version())\n version.load()\n to_remove = []\n to_add = {\n 'terms': [],\n 'roots': [],\n 'inhibitions': {},\n 'translations': {l: {} for l in LANGUAGES}\n }\n\n for root, func in to_translate.items():\n root = script(root)\n terms = list(filter(lambda s: s in root, map(script, version.terms)))\n\n new_root = func(root)\n new_terms = [func(s) for s in terms]\n\n to_add['terms'].extend(map(str, new_terms))\n to_add['roots'].append(str(new_root))\n to_add['inhibitions'].update({str(new_root): version.inhibitions[root]})\n for l in LANGUAGES:\n to_add['translations'][l].update({str(func(s)): 
version.translations[l][s] for s in terms})\n\n to_remove.extend(map(str, terms))\n\n return create_dictionary_version(version, add=to_add, remove=to_remove)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef translate_mouvements_et_milieux(s):\n subst, attr, mode = s\n assert isinstance(mode, NullScript)\n\n if isinstance(subst, AdditiveScript):\n subst = AdditiveScript([_remove_attr_f(_s) for _s in subst])\n else:\n subst = _remove_attr_f(subst)\n\n return m(subst, attr)", "response": "Translate a string of Mouvements et Milieux into a string of Milieux."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntranslate competence en curr data into a single data structure.", "response": "def translate_competence_en_curr_data(s):\n \"\"\"M:.-O:.-'M:.-wa.e.-'t.-x.-s.y.-', => t.-x.-s.y.-' wa.e.-', M:M:.-',O:.-',_\"\"\"\n subst, attr, mode = s\n attr_s, attr_a, attr_m = attr\n assert isinstance(attr_m, NullScript)\n\n subst_s, subst_a, subst_m = subst\n assert isinstance(subst_m, NullScript)\n first_M = subst_s.children[0].children[0]\n\n return m(m(mode, m(attr_a)), m(m(m(m(first_M, attr_s.children[0].children[0])))), m(m(subst_a)))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntranslates noetic text into a single noetic text.", "response": "def translate_noetic(s):\n \"\"\"M:.O:.-O:.O:.-B:.T:.n.-' => s.M:O:.O:O:.-\"\"\"\n subst, attr, mode = s\n return m(script('s.'),\n m(subst.children[0].children[0], subst.children[1].children[0]),\n m(attr.children[0].children[0], attr.children[1].children[0]))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef translate_tisse_intl_col(s):\n subst, attr, mode = s\n return m(m(subst), m(attr), script(\"s.o.-k.o.-'\"))", "response": "Translate TISSE intl column into a single column."} {"SOURCE": "codesearchnet", "instruction": 
"Given the following Python 3 function, write the documentation\ndef translate_formes_visuelles(s):\n\n def set_bSU_subst(s):\n subst, attr, mode = s\n return m(script(\"b.-S:.U:.-'\"), attr, mode)\n\n if isinstance(s, AdditiveScript):\n return AdditiveScript([set_bSU_subst(i) for i in s.children])\n else:\n return set_bSU_subst(s)", "response": "Translate formes visuelles into a single page."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ntranslates an eCOS system intl column into a script tag.", "response": "def translate_ecosystem_intl_col(s):\n \"\"\"O:.M:.- => s.o.-k.o.-'M:O:.-',\"\"\"\n subst, attr, mode = s\n\n return m(script(\"s.o.-k.o.-'\"), m(m(m(attr.children[0], subst.children[0]))))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ntranslating the ecosystem intl column into a string.", "response": "def translate_ecosystem_intl_col_tern(s):\n \"\"\"O:.M:.-M:.-' => s.o.-k.o.-\u2018M:O:.-\u2018,M:.-',_\"\"\"\n subst, attr, mode = s\n\n return m(translate_ecosystem_intl_col(subst), m(m(attr)))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef parse(self, s):\n with self.lock:\n try:\n return self.parser.parse(s, lexer=self.lexer)\n except InvalidIEMLObjectArgument as e:\n raise CannotParse(s, str(e))\n except CannotParse as e:\n e.s = s\n raise e", "response": "Parses the input string and returns a reference to the created AST s root"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef p_literal_list(self, p):\n\n if len(p) == 3:\n p[0] = p[1] + [p[2][1:-1]]\n else:\n p[0] = [p[1][1:-1]]", "response": "literal_list : literal_list LITERAL\n | LITERAL"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef p_word(self, p):\n try:\n term = self._get_term(p[1 if len(p) == 2 else 2])\n except TermNotFoundInDictionary as e:\n 
raise CannotParse(self._ieml, str(e))\n\n if len(p) == 5:\n p[0] = Word(term, literals=p[4])\n else:\n p[0] = Word(term)", "response": "word | word | word | literal_list"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef p_proposition_sum(self, p):\n\n # closed_proposition_list : closed_proposition_list closed_proposition\n # | closed_proposition\"\"\"\n if len(p) == 4:\n p[0] = p[1] + [p[3]]\n elif len(p) == 3:\n p[0] = p[1] + [p[2]]\n else:\n p[0] = [p[1]]", "response": "P p = list of propositions"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef p_fact(self, p):\n if len(p) == 4:\n p[0] = Fact(p[2])\n else:\n p[0] = Fact(p[2], literals=p[4])", "response": "Grammar rule for a fact: builds a Fact from the parsed closed propositions, with optional literals."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef p_closed_proposition_list(self, p):\n if len(p) == 2:\n p[0] = [p[1]]\n else:\n p[0] = p[1] + [p[4]]", "response": "SLASH SLASH closed_proposition | closed_proposition"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\npack a list of parser or tuple of factorisation into a single Script", "response": "def pack_factorisation(facto_list):\n \"\"\"\n :param facto_list: list of parser or tuple of factorisation\n :return:\n \"\"\"\n _sum = []\n for f in facto_list:\n if isinstance(f, Script):\n _sum.append(f)\n else:\n # tuple of factorisation\n _sum.append(MultiplicativeScript(children=(pack_factorisation(l_f) for l_f in f)))\n\n if len(_sum) == 1:\n return _sum[0]\n else:\n return 
AdditiveScript(children=_sum)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef connexity(self):\n return np.matrix(sum(self.relations.values()).todense(), dtype=bool)", "response": "A boolean matrix that contains the number of relation terms that are connected to the current instance."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nresolves a path to a set of resources.", "response": "def _resolve_path(obj, path):\n \"\"\"path is a mul of coord or a coord\"\"\"\n if obj.__class__ not in path.context.accept:\n result = set()\n for ctx in path.context.accept:\n result |= {e for u in obj[ctx] for e in _resolve_path(u, path)}\n\n return result\n\n if isinstance(obj, Text):\n if path.index is not None:\n return {obj.children[path.index]}\n\n return set(obj.children)\n\n if isinstance(obj, (Fact, Theory)):\n return _resolve_path_tree_graph(obj.tree_graph, path)\n\n if isinstance(obj, Topic):\n if path.kind == 'r':\n if path.index is not None:\n return {obj.root[path.index]}\n return set(obj.root)\n else:\n if path.index is not None:\n return {obj.flexing[path.index]}\n return set(obj.flexing)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nresolving the context of the rules and builds the ieml element.", "response": "def _resolve_ctx(rules):\n \"\"\"\n Resolve the context of the rules (the type of this element), and building the ieml element.\n :param rules:\n :return:\n \"\"\"\n if not rules:\n raise ResolveError(\"Missing node definition.\")\n\n # if rules == [(None, e)] --> e\n if len(rules) == 1 and rules[0][0] is None:\n return rules[0][1]\n\n if any(r[0] is None for r in rules):\n raise ResolveError(\"Multiple definition, multiple ieml object provided for the same node.\")\n\n if any(not isinstance(r[0], Path) for r in rules):\n raise ResolveError(\"Must have only path instance.\")\n\n # resolve all the possible 
types for this element\n r0 = rules[0]\n types = _inferred_types(*r0)\n for r in rules[1:]:\n types = types.intersection(_inferred_types(*r))\n\n if not types:\n raise ResolveError(\"No definition, no type inferred on rules list.\")\n\n if len(types) > 1:\n raise ResolveError(\"Multiple definition, multiple type inferred on rules list.\")\n\n type = next(types.__iter__())\n\n if type == Topic:\n error, deps = _build_deps_topic(rules)\n if error:\n return\n\n flexing = None\n if deps['f']:\n flexing = deps['f']\n if not deps['r']:\n raise ResolveError(\"No root for the topic node.\")\n\n return topic(deps['r'], flexing)\n\n if type == Text:\n error, deps = _build_deps_text(rules)\n if error:\n return\n\n return text(deps)\n\n if type in (Theory, Fact):\n error, deps = _build_deps_tree_graph(rules)\n if error:\n return\n\n if type == Fact:\n clauses = []\n for s, a, m in deps:\n clauses.append((s, a, m))\n return fact(clauses)\n else:\n clauses = []\n for s, a, m in deps:\n clauses.append((s, a, m))\n return theory(clauses)\n\n raise ResolveError(\"Invalid type inferred %s\"%type.__name__)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef project_usls_on_dictionary(usls, allowed_terms=None):\n\n cells_to_usls = defaultdict(set)\n tables = set()\n\n for u in usls:\n for t in u.objects(Term):\n for c in t.singular_sequences:\n # This is the first time we meet the cell c\n if not cells_to_usls[c]:\n tables.update(c.relations.contained)\n\n cells_to_usls[c].add(u)\n\n if allowed_terms:\n allowed_terms = set(allowed_terms)\n tables = tables & allowed_terms\n cells_to_usls = {c: l for c, l in cells_to_usls.items() if c in allowed_terms}\n\n tables_to_usls = {\n table: list(set(u for c in table.singular_sequences for u in cells_to_usls[c]))\n for table in tables if not isinstance(table, TableSet)\n }\n\n return tables_to_usls", "response": "projects a list of usl. Term objects into a dictionary of usl. 
TableSet objects"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef project_usl_with_data(usls_data, metric=None):\n\n projection = project_usls_on_dictionary(usls_data)\n all_terms = set(c for u in usls_data for t in u.objects(Term) for c in t.singular_sequences)\n if metric is None:\n metric = lambda e: len(e['posts']) * len(all_terms.intersection(e['table'].singular_sequences))\n\n return sorted(({\n 'table': table,\n 'usls': usls,\n 'posts': list(set(chain.from_iterable(usls_data[u] for u in usls)))\n } for table, usls in projection.items()), key=metric, reverse=True)", "response": "project a list of USLs into a single list of USLs"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef p_script_lvl_0(self, p):\n if p[1] == 'E':\n p[0] = NullScript(layer=0)\n elif p[1] in REMARKABLE_ADDITION:\n p[0] = AdditiveScript(character=p[1])\n else:\n p[0] = MultiplicativeScript(character=p[1])", "response": "script_lvl_0 : PRIMITIVE LAYER0_MARK\n | REMARKABLE_ADDITION LAYER0_MARK"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef p_sum_lvl_0(self, p):\n if len(p) == 4:\n p[3].append(p[1])\n p[0] = p[3]\n else:\n p[0] = [p[1]]", "response": "A helper function for adding sum_lvl_0 to the list of p."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef p_script_lvl_1(self, p):\n if isinstance(p[1], AdditiveScript):\n if len(p) == 3:\n p[0] = MultiplicativeScript(substance=p[1])\n elif len(p) == 4:\n p[0] = MultiplicativeScript(substance=p[1],\n attribute=p[2])\n else:\n p[0] = MultiplicativeScript(substance=p[1],\n attribute=p[2],\n mode=p[3])\n else:\n p[0] = MultiplicativeScript(character=p[1])", "response": "script_lvl_1 : additive_script_lvl_0 LAYER1_MARK\n | additive_script_lvl_0 additive_script_lvl_0 LAYER1_MARK\n | additive_script_lvl_0 
additive_script_lvl_0 additive_script_lvl_0 LAYER1_MARK\n | REMARKABLE_MULTIPLICATION LAYER1_MARK"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef square_order_matrix(usl_list):\n usl_list = list(usl_list)\n indexes = {\n u: i for i, u in enumerate(usl_list)\n }\n\n order_mat = np.zeros(shape=(len(usl_list), len(usl_list)), dtype=int)\n\n for u in usl_list:\n sorted_list = QuerySort(u).sort(collection=usl_list)\n for i, u_s in enumerate(sorted_list):\n order_mat[indexes[u], indexes[u_s]] = i\n\n return order_mat", "response": "Compute the ordering of a list of usls from each usl and return the matrix m s. t. order_mat."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef accept_script(self, script):\n if isinstance(self.parent, TableSet):\n return False, False\n\n tables = [table for table in self.script.tables_script if table in script]\n\n if len(tables) >= 1 and {ss for t in tables for ss in t.singular_sequences} == set(script.singular_sequences):\n return True, False\n\n return False, False", "response": "Returns True when the term is a subset of this script tables."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _build_pools(self):\n if self.level >= Topic:\n # words\n self.topics_pool = set(self.topic() for i in range(self.pool_size))\n\n if self.level >= Fact:\n # sentences\n self.facts_pool = set(self.fact() for i in range(self.pool_size))\n\n if self.level >= Theory:\n self.theories_pool = set(self.theory() for i in range(self.pool_size))\n\n if self.level >= Text:\n self.propositions_pool = set(chain.from_iterable((self.topics_pool, self.facts_pool, self.theories_pool)))", "response": "Build the pool of all the terms in the database."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the mean value.", "response": "def mean(self):\n 
\"\"\"Returns the mean value.\"\"\"\n if self.counter.value > 0:\n return self.sum.value / self.counter.value\n return 0.0"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef variance(self):\n if self.counter.value <= 1:\n return 0.0\n return self.var.value[1] / (self.counter.value - 1)", "response": "Returns variance of the current class"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nrecord an event with the meter. By default it will record one event.", "response": "def mark(self, value=1):\n \"\"\"Record an event with the meter. By default it will record one event.\n\n :param value: number of event to record\n \"\"\"\n self.counter += value\n self.m1_rate.update(value)\n self.m5_rate.update(value)\n self.m15_rate.update(value)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef mean_rate(self):\n if self.counter.value == 0:\n return 0.0\n else:\n elapsed = time() - self.start_time\n return self.counter.value / elapsed", "response": "Returns the mean rate of the events since the start of the process."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrecord an event with the derive. 
.", "response": "def mark(self, value=1):\n \"\"\"Record an event with the derive.\n\n :param value: counter value to record\n \"\"\"\n last = self.last.get_and_set(value)\n if last <= value:\n value = value - last\n super(Derive, self).mark(value)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nwrapping to make map function behave the same on Py2 and Py3.", "response": "def mmap(func, iterable):\n \"\"\"Wrapper to make map() behave the same on Py2 and Py3.\"\"\"\n\n if sys.version_info[0] > 2:\n return [i for i in map(func, iterable)]\n else:\n return map(func, iterable)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsend metric and its snapshot.", "response": "def send_metric(self, name, metric):\n \"\"\"Send metric and its snapshot.\"\"\"\n config = SERIALIZER_CONFIG[class_name(metric)]\n\n mmap(\n self._buffered_send_metric,\n self.serialize_metric(\n metric,\n name,\n config['keys'],\n config['serialized_type']\n )\n )\n\n if hasattr(metric, 'snapshot') and config.get('snapshot_keys'):\n mmap(\n self._buffered_send_metric,\n self.serialize_metric(\n metric.snapshot,\n name,\n config['snapshot_keys'],\n config['serialized_type']\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nserializing and send available measures of a metric.", "response": "def serialize_metric(self, metric, m_name, keys, m_type):\n \"\"\"Serialize and send available measures of a metric.\"\"\"\n\n return [\n self.format_metric_string(m_name, getattr(metric, key), m_type)\n for key in keys\n ]"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef format_metric_string(self, name, value, m_type):\n\n # NOTE(romcheg): This serialized metric template is based on\n # statsd's documentation.\n template = '{name}:{value}|{m_type}\\n'\n\n if self.prefix:\n name = \"{prefix}.{m_name}\".format(prefix=self.prefix, m_name=name)\n\n return 
template.format(name=name, value=value, m_type=m_type)", "response": "Compose a statsd compatible string for a metric s measurement."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a metric to the buffered buffer.", "response": "def _buffered_send_metric(self, metric_str):\n \"\"\"Add a metric to the buffer.\"\"\"\n\n self.batch_count += 1\n\n self.batch_buffer += metric_str\n\n # NOTE(romcheg): Send metrics if the number of metrics in the buffer\n # has reached the threshold for sending.\n if self.batch_count >= self.batch_size:\n self._send()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get(self, section, option, **kwargs):\n try:\n ret = super(ExactOnlineConfig, self).get(section, option, **kwargs)\n except (NoOptionError, NoSectionError):\n raise MissingSetting(option, section)\n\n return ret", "response": "Get method that raises MissingSetting if the value was unset."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef set(self, section, option, value):\n try:\n super(ExactOnlineConfig, self).set(section, option, value)\n except NoSectionError:\n self.add_section(section)\n super(ExactOnlineConfig, self).set(section, option, value)\n\n # Save automatically!\n self.save()", "response": "Set the value of an option in the section."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert a JSON string into a list of objects.", "response": "def _json_safe(data):\n \"\"\"\n json.loads wants an unistr in Python3. 
Convert it.\n \"\"\"\n if not hasattr(data, 'encode'):\n try:\n data = data.decode('utf-8')\n except UnicodeDecodeError:\n raise ValueError(\n 'Expected valid UTF8 for JSON data, got %r' % (data,))\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef http_put(url, data=None, opt=opt_default):\n return _http_request(url, method='PUT', data=_marshalled(data), opt=opt)", "response": "Shortcut for urlopen + read"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef connect(self):\n \"Connect to a host on a given (SSL) port.\"\n sock = socket.create_connection((self.host, self.port),\n self.timeout, self.source_address)\n if self._tunnel_host:\n self.sock = sock\n self._tunnel()\n\n # Python 2.7.9+\n if create_default_context:\n # Newer python will use the \"right\" cacert file automatically. So\n # the default of None can safely be passed along.\n ctx = create_default_context(cafile=self.cacert_file)\n sock = ctx.wrap_socket(sock, server_hostname=self.host)\n else:\n # Take the supplied file, or FALLBACK_CACERT_FILE if nothing\n # was supplied.\n cacert_file = self.cacert_file or FALLBACK_CACERT_FILE\n sock = ssl.wrap_socket(sock, ca_certs=cacert_file,\n cert_reqs=ssl.CERT_REQUIRED)\n\n self.sock = sock", "response": "Connect to a host on a given ( SSL ) port."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nbases method to fetch values and set defaults in case they don t exist.", "response": "def get_or_set_default(self, section, option, value):\n \"\"\"\n Base method to fetch values and to set defaults in case they\n don't exist.\n \"\"\"\n try:\n ret = self.get(section, option)\n except MissingSetting:\n self.set(section, option, value)\n ret = value\n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconverting set of human codes and to a 
dict of code to exactonline guid mappings.", "response": "def get_ledger_code_to_guid_map(self, codes):\n \"\"\"\n Convert set of human codes and to a dict of code to exactonline\n guid mappings.\n\n Example::\n\n ret = inv.get_ledger_code_to_guid_map(['1234', '5555'])\n ret == {'1234': '',\n '5555': ''}\n \"\"\"\n if codes:\n codes = set(str(i) for i in codes)\n ledger_ids = self._api.ledgeraccounts.filter(code__in=codes)\n ret = dict((str(i['Code']), i['ID']) for i in ledger_ids)\n found = set(ret.keys())\n missing = (codes - found)\n if missing:\n raise UnknownLedgerCodes(missing)\n return ret\n return {}"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_vatcode_for_ledger_line(self, ledger_line):\n # Exact accepts receiving 'VATPercentage', but only when it is\n # higher than 0. Possibly because we have more than one match\n # for 0%? So, we'll have to fetch the right VATCode instead.\n vat_percentage = ledger_line['vat_percentage']\n\n if vat_percentage == 0:\n vatcode = '0 ' # FIXME: hardcoded.. fetch from API?\n elif vat_percentage == 21:\n vatcode = '2 ' # FIXME: hardcoded.. 
fetch from API?\n else:\n raise NotImplementedError('Unknown VAT: %s' % (vat_percentage,))\n\n return vatcode", "response": "Get the VATCode for the specified ledger line."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the current division and return a dictionary of divisions so the user can select the right one.", "response": "def get_divisions(self):\n \"\"\"\n Get the \"current\" division and return a dictionary of divisions\n so the user can select the right one.\n \"\"\"\n ret = self.rest(GET('v1/current/Me?$select=CurrentDivision'))\n current_division = ret[0]['CurrentDivision']\n assert isinstance(current_division, int)\n\n urlbase = 'v1/%d/' % (current_division,)\n resource = urljoin(urlbase, 'hrm/Divisions?$select=Code,Description')\n ret = self.rest(GET(resource))\n\n choices = dict((i['Code'], i['Description']) for i in ret)\n return choices, current_division"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef set_division(self, division):\n try:\n division = int(division)\n except (TypeError, ValueError):\n raise V1DivisionError('Supplied division %r is not a number' %\n (division,))\n\n urlbase = 'v1/%d/' % (division,)\n resource = urljoin(\n urlbase,\n \"crm/Accounts?$select=ID&$filter=Name+eq+'DOES_NOT_EXIST'\")\n try:\n self.rest(GET(resource))\n except AssertionError:\n raise V1DivisionError('Invalid division %r according to server' %\n (division,))\n\n self.storage.set_division(division)", "response": "Set the current division."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning a dictionary of ExactOnline invoice numbers to foreign invoice numbers.", "response": "def map_exact2foreign_invoice_numbers(self, exact_invoice_numbers=None):\n \"\"\"\n Optionally supply a list of ExactOnline invoice numbers.\n\n Returns a dictionary of ExactOnline invoice numbers to foreign\n (YourRef) invoice 
numbers.\n \"\"\"\n # Quick, select all. Not the most nice to the server though.\n if exact_invoice_numbers is None:\n ret = self.filter(select='InvoiceNumber,YourRef')\n return dict((i['InvoiceNumber'], i['YourRef']) for i in ret)\n\n # Slower, select what we want to know. More work for us.\n exact_to_foreign_map = {}\n\n # Do it in batches. If we append 300 InvoiceNumbers at once, we\n # get a 12kB URI. (If the list is empty, we skip the entire\n # forloop and correctly return the empty dict.)\n exact_invoice_numbers = list(set(exact_invoice_numbers)) # unique\n for offset in range(0, len(exact_invoice_numbers), 40):\n batch = exact_invoice_numbers[offset:(offset + 40)]\n filter_ = ' or '.join(\n 'InvoiceNumber eq %s' % (i,) for i in batch)\n assert filter_ # if filter was empty, we'd get all!\n ret = self.filter(filter=filter_, select='InvoiceNumber,YourRef')\n exact_to_foreign_map.update(\n dict((i['InvoiceNumber'], i['YourRef']) for i in ret))\n\n # Any values we missed?\n for exact_invoice_number in exact_invoice_numbers:\n if exact_invoice_number not in exact_to_foreign_map:\n exact_to_foreign_map[exact_invoice_number] = None\n\n return exact_to_foreign_map"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a dictionary of your invoice numbers to ExactInvoiceNumbers.", "response": "def map_foreign2exact_invoice_numbers(self, foreign_invoice_numbers=None):\n \"\"\"\n Optionally supply a list of foreign (your) invoice numbers.\n\n Returns a dictionary of your invoice numbers (YourRef) to Exact\n Online invoice numbers.\n \"\"\"\n # Quick, select all. Not the most nice to the server though.\n if foreign_invoice_numbers is None:\n ret = self.filter(select='InvoiceNumber,YourRef')\n return dict((i['YourRef'], i['InvoiceNumber']) for i in ret)\n\n # Slower, select what we want to know. More work for us.\n foreign_to_exact_map = {}\n\n # Do it in batches. If we append 300 InvoiceNumbers at once, we\n # get a 12kB URI. 
(If the list is empty, we skip the entire\n # forloop and correctly return the empty dict.)\n foreign_invoice_numbers = list(set(foreign_invoice_numbers)) # unique\n for offset in range(0, len(foreign_invoice_numbers), 40):\n batch = foreign_invoice_numbers[offset:(offset + 40)]\n filter_ = ' or '.join(\n 'YourRef eq %s' % (self._remote_invoice_number(i),)\n for i in batch)\n assert filter_ # if filter was empty, we'd get all!\n ret = self.filter(filter=filter_, select='InvoiceNumber,YourRef')\n foreign_to_exact_map.update(\n dict((i['YourRef'], i['InvoiceNumber']) for i in ret))\n\n # Any values we missed?\n for foreign_invoice_number in foreign_invoice_numbers:\n if foreign_invoice_number not in foreign_to_exact_map:\n foreign_to_exact_map[foreign_invoice_number] = None\n\n return foreign_to_exact_map"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nfiltering by relation_id and duedate.", "response": "def filter(self, relation_id=None, duedate__lt=None, duedate__gte=None,\n **kwargs):\n \"\"\"\n A common query would be duedate__lt=date(2015, 1, 1) to get all\n Receivables that are due in 2014 and earlier.\n \"\"\"\n if relation_id is not None:\n # Filter by (relation) account_id. 
There doesn't seem to be\n # any reason to prefer\n # 'read/financial/ReceivablesListByAccount?accountId=X' over\n # this.\n relation_id = self._remote_guid(relation_id)\n self._filter_append(kwargs, u'AccountId eq %s' % (relation_id,))\n\n if duedate__lt is not None:\n # Not sure what the AgeGroup means in\n # ReceivablesListByAgeGroup, but we can certainly do\n # without.\n duedate__lt = self._remote_datetime(duedate__lt)\n self._filter_append(kwargs, u'DueDate lt %s' % (duedate__lt,))\n\n if duedate__gte is not None:\n # Not sure what the AgeGroup means in\n # ReceivablesListByAgeGroup, but we can certainly do\n # without.\n duedate__gte = self._remote_datetime(duedate__gte)\n self._filter_append(kwargs, u'DueDate ge %s' % (duedate__gte,))\n\n return super(Receivables, self).filter(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate the 11745 Sudoku clauses and return them as a list.", "response": "def sudoku_clauses():\n \"\"\"\n Create the (11745) Sudoku clauses, and return them as a list.\n Note that these clauses are *independent* of the particular\n Sudoku puzzle at hand.\n \"\"\"\n res = []\n # for all cells, ensure that the each cell:\n for i in range(1, 10):\n for j in range(1, 10):\n # denotes (at least) one of the 9 digits (1 clause)\n res.append([v(i, j, d) for d in range(1, 10)])\n # does not denote two different digits at once (36 clauses)\n for d in range(1, 10):\n for dp in range(d + 1, 10):\n res.append([-v(i, j, d), -v(i, j, dp)])\n\n def valid(cells):\n # Append 324 clauses, corresponding to 9 cells, to the result.\n # The 9 cells are represented by a list tuples. 
The new clauses\n # ensure that the cells contain distinct values.\n for i, xi in enumerate(cells):\n for j, xj in enumerate(cells):\n if i < j:\n for d in range(1, 10):\n res.append([-v(xi[0], xi[1], d), -v(xj[0], xj[1], d)])\n\n # ensure rows and columns have distinct values\n for i in range(1, 10):\n valid([(i, j) for j in range(1, 10)])\n valid([(j, i) for j in range(1, 10)])\n # ensure 3x3 sub-grids \"regions\" have distinct values\n for i in 1, 4, 7:\n for j in 1, 4 ,7:\n valid([(i + k % 3, j + k // 3) for k in range(9)])\n\n assert len(res) == 81 * (1 + 36) + 27 * 324\n return res"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsolving a Sudoku grid inplace", "response": "def solve(grid):\n \"\"\"\n solve a Sudoku grid inplace\n \"\"\"\n clauses = sudoku_clauses()\n for i in range(1, 10):\n for j in range(1, 10):\n d = grid[i - 1][j - 1]\n # For each digit already known, a clause (with one literal).\n # Note:\n # We could also remove all variables for the known cells\n # altogether (which would be more efficient). 
However, for\n # the sake of simplicity, we decided not to do that.\n if d:\n clauses.append([v(i, j, d)])\n\n # solve the SAT problem\n sol = set(pycosat.solve(clauses))\n\n def read_cell(i, j):\n # return the digit of cell i, j according to the solution\n for d in range(1, 10):\n if v(i, j, d) in sol:\n return d\n\n for i in range(1, 10):\n for j in range(1, 10):\n grid[i - 1][j - 1] = read_cell(i, j)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef view(injector):\n\n handler = create_handler(View, injector)\n apply_http_methods(handler, injector)\n return injector.let(as_view=handler.as_view)", "response": "Create Django class - based view from injector class."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate Django form processing class - based view from injector class.", "response": "def form_view(injector):\n \"\"\"Create Django form processing class-based view from injector class.\"\"\"\n\n handler = create_handler(FormView, injector)\n apply_form_methods(handler, injector)\n return injector.let(as_view=handler.as_view)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates Flask method based dispatching view from injector class.", "response": "def method_view(injector):\n \"\"\"Create Flask method based dispatching view from injector class.\"\"\"\n\n handler = create_handler(MethodView)\n apply_http_methods(handler, injector)\n return injector.let(as_view=handler.as_view)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates DRF class - based API view from injector class.", "response": "def api_view(injector):\n \"\"\"Create DRF class-based API view from injector class.\"\"\"\n\n handler = create_handler(APIView, injector)\n apply_http_methods(handler, injector)\n apply_api_view_methods(handler, injector)\n return injector.let(as_view=handler.as_view)"} 
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate DRF generic class - based API view from injector class.", "response": "def generic_api_view(injector):\n \"\"\"Create DRF generic class-based API view from injector class.\"\"\"\n\n handler = create_handler(GenericAPIView, injector)\n apply_http_methods(handler, injector)\n apply_api_view_methods(handler, injector)\n apply_generic_api_view_methods(handler, injector)\n return injector.let(as_view=handler.as_view)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef model_view_set(injector):\n\n handler = create_handler(ModelViewSet, injector)\n apply_api_view_methods(handler, injector)\n apply_generic_api_view_methods(handler, injector)\n apply_model_view_set_methods(handler, injector)\n return injector.let(as_viewset=lambda: handler)", "response": "Create DRF model view set from injector class."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncall by the event loop when the file descriptor is ready for reading.", "response": "def _read_ready(self):\n \"\"\"Called by the event loop whenever the fd is ready for reading.\"\"\"\n\n try:\n data = os.read(self._fileno, self.max_size)\n except InterruptedError:\n # No worries ;)\n pass\n except OSError as exc:\n # Some OS-level problem, crash.\n self._fatal_error(exc, \"Fatal read error on file descriptor read\")\n else:\n if data:\n self._protocol.data_received(data)\n else:\n # We reached end-of-file.\n if self._loop.get_debug():\n logger.info(\"%r was closed by the kernel\", self)\n self._closing = False\n self.pause_reading()\n self._loop.call_soon(self._protocol.eof_received)\n self._loop.call_soon(self._call_connection_lost, None)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef pause_reading(self):\n self._loop.remove_reader(self._fileno)\n self._active = False", "response": "Public API pause reading the 
transport."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef resume_reading(self):\n self._loop.add_reader(self._fileno, self._read_ready)\n self._active = True", "response": "Public API : resume reading."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef watch(self, path, flags, *, alias=None):\n if alias is None:\n alias = path\n if alias in self.requests:\n raise ValueError(\"A watch request is already scheduled for alias %s\" % alias)\n self.requests[alias] = (path, flags)\n if self._fd is not None:\n # We've started, register the watch immediately.\n self._setup_watch(alias, path, flags)", "response": "Add a new watching rule."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nstop watching a given rule.", "response": "def unwatch(self, alias):\n \"\"\"Stop watching a given rule.\"\"\"\n if alias not in self.descriptors:\n raise ValueError(\"Unknown watch alias %s; current set is %r\" % (alias, list(self.descriptors.keys())))\n wd = self.descriptors[alias]\n errno = LibC.inotify_rm_watch(self._fd, wd)\n if errno != 0:\n raise IOError(\"Failed to close watcher %d: errno=%d\" % (wd, errno))\n del self.descriptors[alias]\n del self.requests[alias]\n del self.aliases[wd]"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nstarting the watcher registering new watches if any.", "response": "def setup(self, loop):\n \"\"\"Start the watcher, registering new watches if any.\"\"\"\n self._loop = loop\n\n self._fd = LibC.inotify_init()\n for alias, (path, flags) in self.requests.items():\n self._setup_watch(alias, path, flags)\n\n # We pass ownership of the fd to the transport; it will close it.\n self._stream, self._transport = yield from aioutils.stream_from_fd(self._fd, loop)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef 
get_event(self):\n while True:\n prefix = yield from self._stream.readexactly(PREFIX.size)\n if prefix == b'':\n # We got closed, return None.\n return\n wd, flags, cookie, length = PREFIX.unpack(prefix)\n path = yield from self._stream.readexactly(length)\n\n # All async performed, time to look at the event's content.\n if wd not in self.aliases:\n # Event for a removed watch, skip it.\n continue\n\n decoded_path = struct.unpack('%ds' % length, path)[0].rstrip(b'\\x00').decode('utf-8')\n return Event(\n flags=flags,\n cookie=cookie,\n name=decoded_path,\n alias=self.aliases[wd],\n )", "response": "Fetch an event.\n\n This coroutine will swallow events for removed watches."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nresponding to nsqd that this message has been successfully processed.", "response": "def finish(self):\n \"\"\"\n Respond to ``nsqd`` that you've processed this message successfully (or would like\n to silently discard it).\n \"\"\"\n assert not self._has_responded\n self._has_responded = True\n self.trigger(event.FINISH, message=self)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nresponding to nsqd that this message successfully processed and would like it to be requeued.", "response": "def requeue(self, **kwargs):\n \"\"\"\n Respond to ``nsqd`` that you've failed to process this message successfully (and would\n like it to be requeued).\n\n :param backoff: whether or not :class:`nsq.Reader` should apply backoff handling\n :type backoff: bool\n\n :param delay: the amount of time (in seconds) that this message should be delayed\n if -1 it will be calculated based on # of attempts\n :type delay: int\n \"\"\"\n\n # convert delay to time_ms for fixing\n # https://github.com/nsqio/pynsq/issues/71 and maintaining\n # backward compatibility\n if 'delay' in kwargs and isinstance(kwargs['delay'], int) and kwargs['delay'] >= 0:\n kwargs['time_ms'] = kwargs['delay'] * 1000\n\n assert not 
self._has_responded\n self._has_responded = True\n self.trigger(event.REQUEUE, message=self, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrespond to nsqd with a TOUCH event.", "response": "def touch(self):\n \"\"\"\n Respond to ``nsqd`` that you need more time to process the message.\n \"\"\"\n assert not self._has_responded\n self.trigger(event.TOUCH, message=self)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef run():\n signal.signal(signal.SIGTERM, _handle_term_signal)\n signal.signal(signal.SIGINT, _handle_term_signal)\n tornado.ioloop.IOLoop.instance().start()", "response": "Starts any instantiated :class:`nsq.Reader` or :class:`nsq.Writer`."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef success(self):\n if self.interval == 0.0:\n return\n self.short_interval -= self.short_unit\n self.long_interval -= self.long_unit\n self.short_interval = max(self.short_interval, Decimal(0))\n self.long_interval = max(self.long_interval, Decimal(0))\n self.update_interval()", "response": "Update the timer to reflect a successful call"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nupdating the timer to reflect a failed call", "response": "def failure(self):\n \"\"\"Update the timer to reflect a failed call\"\"\"\n self.short_interval += self.short_unit\n self.long_interval += self.long_unit\n self.short_interval = min(self.short_interval, self.max_short_timer)\n self.long_interval = min(self.long_interval, self.max_long_timer)\n self.update_interval()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _utf8_params(params):\n assert isinstance(params, dict)\n encoded_params = []\n for k, v in params.items():\n if v is None:\n continue\n if isinstance(v, integer_types + (float,)):\n v = str(v)\n if 
isinstance(v, (list, tuple)):\n v = [to_bytes(x) for x in v]\n else:\n v = to_bytes(v)\n encoded_params.append((k, v))\n return dict(encoded_params)", "response": "Encode a dictionary of URL parameters as UTF-8."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef close(self):\n for conn in self.conns.values():\n conn.close()\n\n self.redist_periodic.stop()\n if self.query_periodic is not None:\n self.query_periodic.stop()", "response": "Closes all connections and stops all periodic callbacks."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef is_starved(self):\n for conn in itervalues(self.conns):\n if conn.in_flight > 0 and conn.in_flight >= (conn.last_rdy * 0.85):\n return True\n return False", "response": "Returns True if any connection has in-flight messages close to its last RDY count, i.e. the reader is starved for messages."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a connection to the nsqd server.", "response": "def connect_to_nsqd(self, host, port):\n \"\"\"\n Adds a connection to ``nsqd`` at the specified address.\n\n :param host: the address to connect to\n :param port: the port to connect to\n \"\"\"\n assert isinstance(host, string_types)\n assert isinstance(port, int)\n\n conn = AsyncConn(host, port, **self.conn_kwargs)\n conn.on('identify', self._on_connection_identify)\n conn.on('identify_response', self._on_connection_identify_response)\n conn.on('auth', self._on_connection_auth)\n conn.on('auth_response', self._on_connection_auth_response)\n conn.on('error', self._on_connection_error)\n conn.on('close', self._on_connection_close)\n conn.on('ready', self._on_connection_ready)\n conn.on('message', self._on_message)\n conn.on('heartbeat', self._on_heartbeat)\n conn.on('backoff', functools.partial(self._on_backoff_resume, success=False))\n conn.on('resume', 
functools.partial(self._on_backoff_resume, success=True))\n conn.on('continue', functools.partial(self._on_backoff_resume, success=None))\n\n if conn.id in self.conns:\n return\n\n # only attempt to re-connect once every 10s per destination\n # this throttles reconnects to failed endpoints\n now = time.time()\n last_connect_attempt = self.connection_attempts.get(conn.id)\n if last_connect_attempt and last_connect_attempt > now - 10:\n return\n self.connection_attempts[conn.id] = now\n\n logger.info('[%s:%s] connecting to nsqd', conn.id, self.name)\n conn.connect()\n\n return conn"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ntriggering a query of the configured nsq_lookupd_http_addresses.", "response": "def query_lookupd(self):\n \"\"\"\n Trigger a query of the configured ``nsq_lookupd_http_addresses``.\n \"\"\"\n endpoint = self.lookupd_http_addresses[self.lookupd_query_index]\n self.lookupd_query_index = (self.lookupd_query_index + 1) % len(self.lookupd_http_addresses)\n\n # urlsplit() is faulty if scheme not present\n if '://' not in endpoint:\n endpoint = 'http://' + endpoint\n\n scheme, netloc, path, query, fragment = urlparse.urlsplit(endpoint)\n\n if not path or path == \"/\":\n path = \"/lookup\"\n\n params = parse_qs(query)\n params['topic'] = self.topic\n query = urlencode(_utf8_params(params), doseq=1)\n lookupd_url = urlparse.urlunsplit((scheme, netloc, path, query, fragment))\n\n req = tornado.httpclient.HTTPRequest(\n lookupd_url, method='GET',\n headers={'Accept': 'application/vnd.nsq; version=1.0'},\n connect_timeout=self.lookupd_connect_timeout,\n request_timeout=self.lookupd_request_timeout)\n callback = functools.partial(self._finish_query_lookupd, lookupd_url=lookupd_url)\n self.http_client.fetch(req, callback=callback)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalls when a message has been received after max_tries times.", "response": "def giving_up(self, 
message):\n \"\"\"\n Called when a message has been received where ``msg.attempts > max_tries``\n\n This is useful to subclass and override to perform a task (such as writing to disk, etc.)\n\n :param message: the :class:`nsq.Message` received\n \"\"\"\n logger.warning('[%s] giving up on message %s after %d tries (max:%d) %r',\n self.name, message.id, message.attempts, self.max_tries, message.body)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef on(self, name, callback):\n assert callable(callback), 'callback is not callable'\n if callback in self.__listeners[name]:\n raise DuplicateListenerError\n self.__listeners[name].append(callback)", "response": "Add a callback to be invoked when an event is triggered."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nstopping listening for the named event via the specified callback.", "response": "def off(self, name, callback):\n \"\"\"\n Stop listening for the named event via the specified callback.\n\n :param name: the name of the event\n :type name: string\n\n :param callback: the callback that was originally used\n :type callback: callable\n \"\"\"\n if callback not in self.__listeners[name]:\n raise InvalidListenerError\n self.__listeners[name].remove(callback)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nexecute the callbacks for the specified event with the specified arguments.", "response": "def trigger(self, name, *args, **kwargs):\n \"\"\"\n Execute the callbacks for the listeners on the specified event with the\n supplied arguments.\n\n All extra arguments are passed through to each callback.\n\n :param name: the name of the event\n :type name: string\n \"\"\"\n for ev in self.__listeners[name]:\n ev(*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef pub(self, topic, msg, callback=None):\n 
self._pub('pub', topic, msg, callback=callback)", "response": "publish a message to nsq"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef mpub(self, topic, msg, callback=None):\n if isinstance(msg, bytes_types):\n msg = [msg]\n assert isinstance(msg, (list, set, tuple))\n\n self._pub('mpub', topic, msg, callback=callback)", "response": "publish multiple messages to nsq in one command (efficiently)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef dpub(self, topic, delay_ms, msg, callback=None):\n self._pub('dpub', topic, msg, delay_ms, callback=callback)", "response": "publish a message to nsq with a delivery delay (deferred publish)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef score_function(self, x, W):\n # need refector\n\n '''\n Score function to calculate score\n '''\n\n if (self.svm_kernel == 'polynomial_kernel' or self.svm_kernel == 'gaussian_kernel' or self.svm_kernel == 'soft_polynomial_kernel' or self.svm_kernel == 'soft_gaussian_kernel'):\n x = x[1:]\n '''\n original_X = self.train_X[:, 1:]\n score = 0\n for i in range(len(self.sv_alpha)):\n if (self.svm_kernel == 'polynomial_kernel' or self.svm_kernel == 'soft_polynomial_kernel'):\n score += self.sv_alpha[i] * self.sv_Y[i] * utility.Kernel.polynomial_kernel(self, original_X[self.sv_index[i]], x)\n elif (self.svm_kernel == 'gaussian_kernel' or self.svm_kernel == 'soft_gaussian_kernel'):\n score += self.sv_alpha[i] * self.sv_Y[i] * utility.Kernel.gaussian_kernel(self, original_X[self.sv_index[i]], x)\n score = np.sign(score + self.sv_avg_b)\n '''\n score = np.sign(np.sum(self.sv_alpha * self.sv_Y * utility.Kernel.kernel_matrix_xX(self, x, self.sv_X)) + self.sv_avg_b)\n else:\n score = np.sign(np.inner(x, W))\n\n return score", "response": "Score function that calculates the score of a single data point."} {"SOURCE": "codesearchnet", 
"instruction": "How would you explain what the following Python 3 function does\ndef score_function(self, x, W):\n\n '''\n Score function to calculate score\n '''\n\n score = super(BinaryClassifier, self).score_function(x, W)\n if score >= 0.5:\n score = 1.0\n else:\n score = -1.0\n\n return score", "response": "Score function to calculate score\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ntraining Pocket Perceptron Learning Algorithm from f x = WX find best h = WX similar to f x = WX", "response": "def train(self):\n\n '''\n Train Pocket Perceptron Learning Algorithm\n From f(x) = WX\n Find best h(x) = WX similar to f(x)\n Output W\n '''\n\n if (self.status != 'init'):\n print(\"Please load train data and init W first.\")\n return self.W\n\n self.status = 'train'\n\n new_W = self.W\n\n self.temp_avg_error = self.calculate_avg_error(self.train_X, self.train_Y, new_W)\n\n for _ in range(self.updates):\n if (self.loop_mode is 'naive_cycle'):\n data_check_order = range(self.data_num)\n elif (self.loop_mode is 'random'):\n data_check_order = range(self.data_num)\n data_check_order = random.sample(data_check_order, self.data_num)\n else:\n data_check_order = range(self.data_num)\n data_check_order = random.sample(data_check_order, self.data_num)\n for i in data_check_order:\n\n if self.error_function(self.score_function(self.train_X[i], new_W), self.train_Y[i]):\n self.tune_times += 1\n new_W = new_W + self.step_alpha * (self.train_Y[i] * self.train_X[i])\n new_avg_error = self.calculate_avg_error(self.train_X, self.train_Y, new_W)\n if new_avg_error < self.temp_avg_error:\n self.put_in_pocket_times += 1\n self.temp_avg_error = new_avg_error\n self.W = new_W\n break\n\n return self.W"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef svm_score(self, x):\n\n x = x[1:]\n\n '''\n original_X = self.svm_processor.train_X[:, 1:]\n score = 0\n for i in range(len(self.svm_processor.sv_alpha)):\n 
score += self.svm_processor.sv_alpha[i] * self.svm_processor.sv_Y[i] * utility.Kernel.gaussian_kernel(self, original_X[self.svm_processor.sv_index[i]], x)\n score = score + self.svm_processor.sv_avg_b\n '''\n\n score = np.sum(self.svm_processor.sv_alpha * self.svm_processor.sv_Y * utility.Kernel.kernel_matrix_xX(self, x, self.svm_processor.sv_X)) + self.svm_processor.sv_avg_b\n\n return score", "response": "Compute the score of the SVM."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef train(self):\n\n '''\n Train Linear Regression Algorithm\n From f(x) = WX\n Find best h(x) = WX similar to f(x)\n Output W\n '''\n\n if (self.status != 'init'):\n print(\"Please load train data and init W first.\")\n return self.W\n\n self.status = 'train'\n\n self.xpsedo = self.calculate_psedo_X(self.train_X)\n self.W = np.dot(self.xpsedo, self.train_Y)\n\n return self.W", "response": "Train Linear Regression Algorithm"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nscores function to calculate score of the feature in the base set", "response": "def score_function(self, x, W):\n # need refector\n\n '''\n Score function to calculate score\n '''\n\n score = self.sign * np.sign(x[self.feature_index] - self.theta)\n\n return score"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntrains Perceptron Learning Algorithm from f x = WX self. train_X self. train_Y self. train_Y self. 
output_W", "response": "def train(self):\n\n '''\n Train Perceptron Learning Algorithm\n From f(x) = WX\n Find best h(x) = WX similar to f(x)\n Output W\n '''\n\n if (self.status != 'init'):\n print(\"Please load train data and init W first.\")\n return self.W\n\n self.status = 'train'\n\n if (self.loop_mode is 'random'):\n data_check_order = range(self.data_num)\n data_check_order = random.sample(data_check_order, self.data_num)\n elif (self.loop_mode is 'naive_cycle'):\n data_check_order = range(self.data_num)\n else:\n data_check_order = range(self.data_num)\n\n self.tune_times = 0\n k = 0\n flag = True\n\n while True:\n if (self.tune_times > (2 * self.data_num)):\n print(\"Dataset not linear separable.\")\n break\n\n if k == self.data_num:\n if flag:\n break\n k = 0\n flag = True\n\n point_wise_i = data_check_order[k]\n\n if self.error_function(self.score_function(self.train_X[point_wise_i], self.W), self.train_Y[point_wise_i]):\n flag = False\n self.tune_times += 1\n self.W = self.W + self.step_alpha * (self.train_Y[point_wise_i] * self.train_X[point_wise_i])\n k += 1\n\n return self.W"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncomputes the kernel matrix for the given svm model.", "response": "def kernel_matrix(svm_model, original_X):\n\n if (svm_model.svm_kernel == 'polynomial_kernel' or svm_model.svm_kernel == 'soft_polynomial_kernel'):\n K = (svm_model.zeta + svm_model.gamma * np.dot(original_X, original_X.T)) ** svm_model.Q\n elif (svm_model.svm_kernel == 'gaussian_kernel' or svm_model.svm_kernel == 'soft_gaussian_kernel'):\n pairwise_dists = squareform(pdist(original_X, 'euclidean'))\n K = np.exp(-svm_model.gamma * (pairwise_dists ** 2))\n\n '''\n K = np.zeros((svm_model.data_num, svm_model.data_num))\n\n for i in range(svm_model.data_num):\n for j in range(svm_model.data_num):\n if (svm_model.svm_kernel == 'polynomial_kernel' or svm_model.svm_kernel == 'soft_polynomial_kernel'):\n K[i, j] = 
Kernel.polynomial_kernel(svm_model, original_X[i], original_X[j])\n elif (svm_model.svm_kernel == 'gaussian_kernel' or svm_model.svm_kernel == 'soft_gaussian_kernel'):\n K[i, j] = Kernel.gaussian_kernel(svm_model, original_X[i], original_X[j])\n '''\n\n return K"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef kernel_matrix_xX(svm_model, original_x, original_X):\n\n if (svm_model.svm_kernel == 'polynomial_kernel' or svm_model.svm_kernel == 'soft_polynomial_kernel'):\n K = (svm_model.zeta + svm_model.gamma * np.dot(original_x, original_X.T)) ** svm_model.Q\n elif (svm_model.svm_kernel == 'gaussian_kernel' or svm_model.svm_kernel == 'soft_gaussian_kernel'):\n K = np.exp(-svm_model.gamma * (cdist(original_X, np.atleast_2d(original_x), 'euclidean').T ** 2)).ravel()\n\n '''\n K = np.zeros((svm_model.data_num, svm_model.data_num))\n\n for i in range(svm_model.data_num):\n for j in range(svm_model.data_num):\n if (svm_model.svm_kernel == 'polynomial_kernel' or svm_model.svm_kernel == 'soft_polynomial_kernel'):\n K[i, j] = Kernel.polynomial_kernel(svm_model, original_x, original_X[j])\n elif (svm_model.svm_kernel == 'gaussian_kernel' or svm_model.svm_kernel == 'soft_gaussian_kernel'):\n K[i, j] = Kernel.gaussian_kernel(svm_model, original_x, original_X[j])\n '''\n\n return K", "response": "Compute the kernel matrix for the given original x and X."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset feature transform mode and degree of data", "response": "def set_feature_transform(self, mode='polynomial', degree=1):\n\n '''\n Transform data feature to high level\n '''\n\n if self.status != 'load_train_data':\n print(\"Please load train data first.\")\n return self.train_X\n\n self.feature_transform_mode = mode\n self.feature_transform_degree = degree\n\n self.train_X = self.train_X[:, 1:]\n\n self.train_X = utility.DatasetLoader.feature_transform(\n self.train_X,\n 
self.feature_transform_mode,\n self.feature_transform_degree\n )\n\n return self.train_X"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmake prediction input test data and output the prediction", "response": "def prediction(self, input_data='', mode='test_data'):\n\n '''\n Make prediction\n input test data\n output the prediction\n '''\n\n prediction = {}\n\n if (self.status != 'train'):\n print(\"Please load train data and init W then train the W first.\")\n return prediction\n\n if (input_data == ''):\n print(\"Please input test data for prediction.\")\n return prediction\n\n if mode == 'future_data':\n data = input_data.split()\n input_data_x = [float(v) for v in data]\n input_data_x = utility.DatasetLoader.feature_transform(\n np.array(input_data_x).reshape(1, -1),\n self.feature_transform_mode,\n self.feature_transform_degree\n )\n input_data_x = np.ravel(input_data_x)\n prediction = self.score_function(input_data_x, self.W)\n return {\"input_data_x\": input_data_x, \"input_data_y\": None, \"prediction\": prediction}\n else:\n data = input_data.split()\n input_data_x = [float(v) for v in data[:-1]]\n input_data_x = utility.DatasetLoader.feature_transform(\n np.array(input_data_x).reshape(1, -1),\n self.feature_transform_mode,\n self.feature_transform_degree\n )\n input_data_x = np.ravel(input_data_x)\n input_data_y = float(data[-1])\n prediction = self.score_function(input_data_x, self.W)\n return {\"input_data_x\": input_data_x, \"input_data_y\": input_data_y, \"prediction\": prediction}"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget theta of the logarithm of the logarithm of the logarithm", "response": "def theta(self, s):\n\n '''\n Theta sigmoid function\n '''\n\n s = np.where(s < -709, -709, s)\n\n return 1 / (1 + np.exp((-1) * s))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nscoring function to calculate the score of a set of 
items in a set of items", "response": "def score_function(self, x, W):\n # need refector\n\n '''\n Score function to calculate score\n '''\n\n score = self.theta(np.inner(x, W))\n\n return score"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nparses a single Trimmomatic log file and stores some statistics in an OrderedDict.", "response": "def parse_log(log_file):\n \"\"\"Retrieves some statistics from a single Trimmomatic log file.\n\n This function parses Trimmomatic's log file and stores some trimming\n statistics in an :py:class:`OrderedDict` object. This object contains\n the following keys:\n\n - ``clean_len``: Total length after trimming.\n - ``total_trim``: Total trimmed base pairs.\n - ``total_trim_perc``: Total trimmed base pairs in percentage.\n - ``5trim``: Total base pairs trimmed at 5' end.\n - ``3trim``: Total base pairs trimmed at 3' end.\n\n Parameters\n ----------\n log_file : str\n Path to trimmomatic log file.\n\n Returns\n -------\n x : :py:class:`OrderedDict`\n Object storing the trimming statistics.\n\n \"\"\"\n\n template = OrderedDict([\n # Total length after trimming\n (\"clean_len\", 0),\n # Total trimmed base pairs\n (\"total_trim\", 0),\n # Total trimmed base pairs in percentage\n (\"total_trim_perc\", 0),\n # Total trimmed at 5' end\n (\"5trim\", 0),\n # Total trimmed at 3' end\n (\"3trim\", 0),\n # Bad reads (completely trimmed)\n (\"bad_reads\", 0)\n ])\n\n with open(log_file) as fh:\n\n for line in fh:\n # This will split the log fields into:\n # 0. read length after trimming\n # 1. amount trimmed from the start\n # 2. last surviving base\n # 3. 
amount trimmed from the end\n fields = [int(x) for x in line.strip().split()[-4:]]\n\n if not fields[0]:\n template[\"bad_reads\"] += 1\n\n template[\"5trim\"] += fields[1]\n template[\"3trim\"] += fields[3]\n template[\"total_trim\"] += fields[1] + fields[3]\n template[\"clean_len\"] += fields[0]\n\n total_len = template[\"clean_len\"] + template[\"total_trim\"]\n\n if total_len:\n template[\"total_trim_perc\"] = round(\n (template[\"total_trim\"] / total_len) * 100, 2)\n else:\n template[\"total_trim_perc\"] = 0\n\n return template"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef clean_up(fastq_pairs, clear):\n\n # Find unpaired fastq files\n unpaired_fastq = [f for f in os.listdir(\".\")\n if f.endswith(\"_U.fastq.gz\")]\n\n # Remove unpaired fastq files, if any\n for fpath in unpaired_fastq:\n os.remove(fpath)\n\n # Expected output to assess whether it is safe to remove temporary input\n expected_out = [f for f in os.listdir(\".\") if f.endswith(\"_trim.fastq.gz\")]\n\n if clear == \"true\" and len(expected_out) == 2:\n for fq in fastq_pairs:\n # Get real path of fastq files, following symlinks\n rp = os.path.realpath(fq)\n logger.debug(\"Removing temporary fastq file path: {}\".format(rp))\n if re.match(\".*/work/.{2}/.{30}/.*\", rp):\n os.remove(rp)", "response": "Cleans the working directory of unwanted temporary files"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef merge_default_adapters():\n\n default_adapters = [os.path.join(ADAPTERS_PATH, x) for x in\n os.listdir(ADAPTERS_PATH)]\n filepath = os.path.join(os.getcwd(), \"default_adapters.fasta\")\n\n with open(filepath, \"w\") as fh, \\\n fileinput.input(default_adapters) as in_fh:\n for line in in_fh:\n fh.write(\"{}{}\".format(line, \"\\\\n\"))\n\n return filepath", "response": "Merges the default adapters file in the trimmomatic adapters directory\n Returns the merged adapters file."} {"SOURCE": 
"codesearchnet", "instruction": "Write a Python 3 function that\nparses the samtools depth file and creates 3 dictionaries that will store the mean coverage for each sample and the number of reads aligned for each plasmid.", "response": "def depth_file_reader(depth_file):\n \"\"\"\n Function that parses the samtools depth file and creates 3 dictionaries\n that will be useful to make the outputs of this script, both the tabular\n file and the json file that may be imported by pATLAS\n\n Parameters\n ----------\n depth_file: textIO\n the path to depth file for each sample\n\n Returns\n -------\n depth_dic_coverage: dict\n dictionary with the coverage per position for each plasmid\n \"\"\"\n\n # dict to store the mean coverage for each reference\n depth_dic_coverage = {}\n\n for line in depth_file:\n tab_split = line.split() # split by any white space\n reference = \"_\".join(tab_split[0].strip().split(\"_\")[0:3]) # store\n # only the gi for the reference\n position = tab_split[1]\n num_reads_align = float(tab_split[2].rstrip())\n\n if reference not in depth_dic_coverage:\n depth_dic_coverage[reference] = {}\n\n depth_dic_coverage[reference][position] = num_reads_align\n\n logger.info(\"Finished parsing depth file.\")\n depth_file.close()\n\n logger.debug(\"Size of dict_cov: {} kb\".format(\n asizeof(depth_dic_coverage)/1024))\n\n return depth_dic_coverage"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nhandles the inputs required to parse depth files from bowtie and dumps a dict to a json file that can be imported into pATLAS. Parameters ---------- depth_file: str the path to depth file for each sample json_dict: str the file that contains the dictionary with keys and values for accessions and their respective lengths cutoff: str the cutoff used to trim the unwanted matches for the minimum coverage results from mapping. This value may range between 0 and 1. 
sample_id: str the id of the sample being parsed", "response": "def main(depth_file, json_dict, cutoff, sample_id):\n \"\"\"\n Function that handles the inputs required to parse depth files from bowtie\n and dumps a dict to a json file that can be imported into pATLAS.\n\n Parameters\n ----------\n depth_file: str\n the path to depth file for each sample\n json_dict: str\n the file that contains the dictionary with keys and values for\n accessions\n and their respective lengths\n cutoff: str\n the cutoff used to trim the unwanted matches for the minimum coverage\n results from mapping. This value may range between 0 and 1.\n sample_id: str\n the id of the sample being parsed\n\n \"\"\"\n\n # check for the appropriate value for the cutoff value for coverage results\n logger.debug(\"Cutoff value: {}. Type: {}\".format(cutoff, type(cutoff)))\n try:\n cutoff_val = float(cutoff)\n if cutoff_val < 0.4:\n logger.warning(\"This cutoff value will generate a high volume of \"\n \"plot data. Therefore '.report.json' can be too big\")\n except ValueError:\n logger.error(\"Cutoff value should be a string such as: '0.6'. \"\n \"The outputted value: {}. Make sure to provide an \"\n \"appropriate value for --cov_cutoff\".format(cutoff))\n sys.exit(1)\n\n # loads dict from file, this file is provided in docker image\n\n plasmid_length = json.load(open(json_dict))\n if plasmid_length:\n logger.info(\"Loaded dictionary of plasmid lengths\")\n else:\n logger.error(\"Something went wrong and plasmid lengths dictionary\"\n \"could not be loaded. 
Check if process received this\"\n \"param successfully.\")\n sys.exit(1)\n\n # read depth file\n depth_file_in = open(depth_file)\n\n # first reads the depth file and generates dictionaries to handle the input\n # to a simpler format\n logger.info(\"Reading depth file and creating dictionary to dump.\")\n depth_dic_coverage = depth_file_reader(depth_file_in)\n percentage_bases_covered, dict_cov = generate_jsons(depth_dic_coverage,\n plasmid_length,\n cutoff_val)\n\n if percentage_bases_covered and dict_cov:\n logger.info(\"percentage_bases_covered length: {}\".format(\n str(len(percentage_bases_covered))))\n logger.info(\"dict_cov length: {}\".format(str(len(dict_cov))))\n else:\n logger.error(\"Both dicts that dump to JSON file or .report.json are \"\n \"empty.\")\n\n # then dump do file\n logger.info(\"Dumping to {}\".format(\"{}_mapping.json\".format(depth_file)))\n with open(\"{}_mapping.json\".format(depth_file), \"w\") as output_json:\n output_json.write(json.dumps(percentage_bases_covered))\n\n json_dic = {\n \"tableRow\": [{\n \"sample\": sample_id,\n \"data\": [{\n \"header\": \"Mapping\",\n \"table\": \"plasmids\",\n \"patlas_mapping\": percentage_bases_covered,\n \"value\": len(percentage_bases_covered)\n }]\n }],\n \"sample\": sample_id,\n \"patlas_mapping\": percentage_bases_covered,\n \"plotData\": [{\n \"sample\": sample_id,\n \"data\": {\n \"patlasMappingSliding\": dict_cov\n },\n }]\n }\n\n logger.debug(\"Size of dict_cov: {} kb\".format(asizeof(json_dic)/1024))\n logger.info(\"Writing to .report.json\")\n with open(\".report.json\", \"w\") as json_report:\n json_report.write(json.dumps(json_dic, separators=(\",\", \":\")))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the path to the appropriate jinja template file based on the template argument.", "response": "def _set_template(self, template):\n \"\"\"Sets the path to the appropriate jinja template file\n\n When a Process instance is initialized, this 
method will fetch\n the location of the appropriate template file, based on the\n ``template`` argument. It will raise an exception if the template\n file is not found. Otherwise, it will set the\n :py:attr:`Process.template_path` attribute.\n \"\"\"\n\n # Set template directory\n tpl_dir = join(dirname(abspath(__file__)), \"templates\")\n\n # Set template file path\n tpl_path = join(tpl_dir, template + \".nf\")\n\n if not os.path.exists(tpl_path):\n raise eh.ProcessError(\n \"Template {} does not exist\".format(tpl_path))\n\n self._template_path = join(tpl_dir, template + \".nf\")"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nset the main channel names based on the provided input and output channel suffixes. This is performed when connecting processes.", "response": "def set_main_channel_names(self, input_suffix, output_suffix, lane):\n \"\"\"Sets the main channel names based on the provided input and\n output channel suffixes. This is performed when connecting processes.\n\n Parameters\n ----------\n input_suffix : str\n Suffix added to the input channel. Should be based on the lane\n and an arbitrary unique id\n output_suffix : str\n Suffix added to the output channel. Should be based on the lane\n and an arbitrary unique id\n lane : int\n Sets the lane of the process.\n \"\"\"\n\n self.input_channel = \"{}_in_{}\".format(self.template, input_suffix)\n self.output_channel = \"{}_out_{}\".format(self.template, output_suffix)\n self.lane = lane"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the main raw channel for the process with the specified input channel and input type.", "response": "def get_user_channel(self, input_channel, input_type=None):\n \"\"\"Returns the main raw channel for the process\n\n Provided with at least a channel name, this method returns the raw\n channel name and specification (the nextflow string definition)\n for the process. 
By default, it will fork from the raw input of\n the process' :attr:`~Process.input_type` attribute. However, this\n behaviour can be overridden by providing the ``input_type`` argument.\n\n If the specified or inferred input type exists in the\n :attr:`~Process.RAW_MAPPING` dictionary, the channel info dictionary\n will be retrieved along with the specified input channel. Otherwise,\n it will return None.\n\n An example of the returned dictionary is::\n\n {\"input_channel\": \"myChannel\",\n \"params\": \"fastq\",\n \"channel\": \"IN_fastq_raw\",\n \"channel_str\":\"IN_fastq_raw = Channel.fromFilePairs(params.fastq)\"\n }\n\n Returns\n -------\n dict or None\n Dictionary with the complete raw channel info. None if no\n channel is found.\n \"\"\"\n\n res = {\"input_channel\": input_channel}\n\n itype = input_type if input_type else self.input_type\n\n if itype in self.RAW_MAPPING:\n\n channel_info = self.RAW_MAPPING[itype]\n\n return {**res, **channel_info}"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef render(template, context):\n\n path, filename = os.path.split(template)\n\n return jinja2.Environment(\n loader=jinja2.FileSystemLoader(path or './')\n ).get_template(filename).render(context)", "response": "Wrapper to the jinja2 render method from a template file."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef template_str(self):\n\n if not self._context:\n raise eh.ProcessError(\"Channels must be setup first using the \"\n \"set_channels method\")\n\n logger.debug(\"Setting context for template {}: {}\".format(\n self.template, self._context\n ))\n\n x = self.render(self._template_path, self._context)\n return x", "response": "Property that returns a populated template string"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_channels(self, **kwargs):\n\n if not self.pid:\n 
self.pid = \"{}_{}\".format(self.lane, kwargs.get(\"pid\"))\n\n for i in self.status_channels:\n if i.startswith(\"STATUS_\"):\n self.status_strs.append(\"{}_{}\".format(i, self.pid))\n else:\n self.status_strs.append(\"STATUS_{}_{}\".format(i, self.pid))\n\n if self.main_forks:\n logger.debug(\"Setting main fork channels: {}\".format(\n self.main_forks))\n operator = \"set\" if len(self.main_forks) == 1 else \"into\"\n self.forks = [\"\\n{}.{}{{ {} }}\\n\".format(\n self.output_channel, operator, \";\".join(self.main_forks))]\n\n self._context = {**kwargs, **{\"input_channel\": self.input_channel,\n \"output_channel\": self.output_channel,\n \"template\": self.template,\n \"forks\": \"\\n\".join(self.forks),\n \"pid\": self.pid}}", "response": "This method sets the main channels for the process."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupdate the forks attribute with the sink channel destination", "response": "def update_main_forks(self, sink):\n \"\"\"Updates the forks attribute with the sink channel destination\n\n Parameters\n ----------\n sink : str\n Channel onto which the main input will be forked to\n\n \"\"\"\n\n if not self.main_forks:\n self.main_forks = [self.output_channel]\n self.output_channel = \"_{}\".format(self.output_channel)\n self.main_forks.append(sink)\n\n # fork_lst = self.forks + self.main_forks\n operator = \"set\" if len(self.main_forks) == 1 else \"into\"\n self.forks = [\"\\n{}.{}{{ {} }}\\n\".format(\n self.output_channel, operator, \";\".join(self.main_forks))]\n\n self._context = {**self._context,\n **{\"forks\": \"\".join(self.forks),\n \"output_channel\": self.output_channel}}"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_secondary_channel(self, source, channel_list):\n\n logger.debug(\"Setting secondary channel for source '{}': {}\".format(\n source, channel_list))\n\n source = \"{}_{}\".format(source, 
self.pid)\n\n # Removes possible duplicate channels, when the fork is terminal\n channel_list = sorted(list(set(channel_list)))\n\n # When there is only one channel to fork into, use the 'set' operator\n # instead of 'into'\n op = \"set\" if len(channel_list) == 1 else \"into\"\n self.forks.append(\"\\n{}.{}{{ {} }}\\n\".format(\n source, op, \";\".join(channel_list)))\n\n logger.debug(\"Setting forks attribute to: {}\".format(self.forks))\n self._context = {**self._context, **{\"forks\": \"\\n\".join(self.forks)}}", "response": "This method allows a given source to be forked into one or more channels and sets those forks in the process. forks attribute to be set to the new ones."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef update_attributes(self, attr_dict):\n\n # Update directives\n # Allowed attributes to write\n valid_directives = [\"pid\", \"ignore_type\", \"ignore_pid\", \"extra_input\",\n \"group\", \"input_type\"]\n\n for attribute, val in attr_dict.items():\n\n # If the attribute has a valid directive key, update that\n # directive\n if attribute in valid_directives and hasattr(self, attribute):\n setattr(self, attribute, val)\n\n # The params attribute is special, in the sense that it provides\n # information for the self.params attribute.\n elif attribute == \"params\":\n for name, value in val.items():\n if name in self.params:\n self.params[name][\"default\"] = value\n else:\n raise eh.ProcessError(\n \"The parameter name '{}' does not exist for \"\n \"component '{}'\".format(name, self.template))\n\n else:\n for p in self.directives:\n self.directives[p][attribute] = val", "response": "Updates the attributes of the process that have been set in the class attribute."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_compiler_channels(self, channel_list, operator=\"mix\"):\n\n if not channel_list:\n raise eh.ProcessError(\"At 
least one status channel must be \"\n \"provided to include this process in the \"\n \"pipeline\")\n\n if len(channel_list) == 1:\n logger.debug(\"Setting only one status channel: {}\".format(\n channel_list[0]))\n self._context = {\"compile_channels\": channel_list[0]}\n\n else:\n\n first_status = channel_list[0]\n\n if operator == \"mix\":\n lst = \",\".join(channel_list[1:])\n\n s = \"{}.mix({})\".format(first_status, lst)\n\n elif operator == \"join\":\n\n s = first_status\n for ch in channel_list[1:]:\n s += \".join({})\".format(ch)\n\n s += \".map{ ot -> [ ot[0], ot[1..-1] ] }\"\n\n logger.debug(\"Status channel string: {}\".format(s))\n\n self._context = {\"compile_channels\": s}", "response": "This method will set the compiler channels for the status process."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef set_raw_inputs(self, raw_input):\n\n logger.debug(\"Setting raw inputs using raw input dict: {}\".format(\n raw_input))\n\n primary_inputs = []\n\n for input_type, el in raw_input.items():\n\n primary_inputs.append(el[\"channel_str\"])\n\n # Update the process' parameters with the raw input\n raw_channel = self.RAW_MAPPING[input_type]\n self.params[input_type] = {\n \"default\": raw_channel[\"default_value\"],\n \"description\": raw_channel[\"description\"]\n }\n\n op = \"set\" if len(el[\"raw_forks\"]) == 1 else \"into\"\n\n self.forks.append(\"\\n{}.{}{{ {} }}\\n\".format(\n el[\"channel\"], op, \";\".join(el[\"raw_forks\"])\n ))\n\n logger.debug(\"Setting raw inputs: {}\".format(primary_inputs))\n logger.debug(\"Setting forks attribute to: {}\".format(self.forks))\n self._context = {**self._context,\n **{\"forks\": \"\\n\".join(self.forks),\n \"main_inputs\": \"\\n\".join(primary_inputs)}}", "response": "Sets the main input channels of the pipeline and their forks."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_secondary_inputs(self, 
channel_dict):\n\n logger.debug(\"Setting secondary inputs: {}\".format(channel_dict))\n\n secondary_input_str = \"\\n\".join(list(channel_dict.values()))\n self._context = {**self._context,\n **{\"secondary_inputs\": secondary_input_str}}", "response": "Adds secondary inputs to the start of the pipeline file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the initial definition of the extra input channels.", "response": "def set_extra_inputs(self, channel_dict):\n \"\"\"Sets the initial definition of the extra input channels.\n\n The ``channel_dict`` argument should contain the input type and\n destination channel of each parameter (which is the key)::\n\n channel_dict = {\n \"param1\": {\n \"input_type\": \"fasta\"\n \"channels\": [\"abricate_2_3\", \"chewbbaca_3_4\"]\n }\n }\n\n Parameters\n ----------\n channel_dict : dict\n Dictionary with the extra_input parameter as key, and a dictionary\n as a value with the input_type and destination channels\n \"\"\"\n\n extra_inputs = []\n\n for param, info in channel_dict.items():\n\n # Update the process' parameters with the raw input\n raw_channel = self.RAW_MAPPING[info[\"input_type\"]]\n self.params[param] = {\n \"default\": raw_channel[\"default_value\"],\n \"description\": raw_channel[\"description\"]\n }\n\n channel_name = \"IN_{}_extraInput\".format(param)\n channel_str = self.RAW_MAPPING[info[\"input_type\"]][\"channel_str\"]\n extra_inputs.append(\"{} = {}\".format(channel_name,\n channel_str.format(param)))\n\n op = \"set\" if len(info[\"channels\"]) == 1 else \"into\"\n extra_inputs.append(\"{}.{}{{ {} }}\".format(\n channel_name, op, \";\".join(info[\"channels\"])))\n\n self._context = {\n **self._context,\n **{\"extra_inputs\": \"\\n\".join(extra_inputs)}\n }"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nattempt to retrieve the coverage value from the header string.", "response": "def _parse_coverage(header_str):\n \"\"\"Attempts to 
retrieve the coverage value from the header string.\n\n It splits the header by \"_\" and then screens the list backwards in\n search of the first float value. This will be interpreted as the\n coverage value. If it cannot find a float value, it returns None.\n This search methodology is based on the strings of assemblers\n like spades and skesa that put the mean kmer coverage for each\n contig in its corresponding fasta header.\n\n Parameters\n ----------\n header_str : str\n String\n\n Returns\n -------\n float or None\n The coverage value for the contig. None if it cannot find the\n value in the provide string.\n \"\"\"\n\n cov = None\n for i in header_str.split(\"_\")[::-1]:\n try:\n cov = float(i)\n break\n except ValueError:\n continue\n\n return cov"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse an assembly file and populate the self. contigs attribute with the data for each contig in the assembly.", "response": "def _parse_assembly(self, assembly_file):\n \"\"\"Parse an assembly fasta file.\n\n This is a Fasta parsing method that populates the\n :py:attr:`~Assembly.contigs` attribute with data for each contig in the\n assembly.\n\n The insertion of data on the self.contigs is done by the\n :py:meth:`Assembly._populate_contigs` method, which also calculates\n GC content and proportions.\n\n Parameters\n ----------\n assembly_file : str\n Path to the assembly fasta file.\n\n \"\"\"\n\n # Temporary storage of sequence data\n seq_temp = []\n # Id counter for contig that will serve as key in self.contigs\n contig_id = 0\n # Initialize kmer coverage and header\n cov, header = None, None\n\n with open(assembly_file) as fh:\n\n logger.debug(\"Starting iteration of assembly file: {}\".format(\n assembly_file))\n for line in fh:\n # Skip empty lines\n if not line.strip():\n continue\n else:\n # Remove whitespace surrounding line for further processing\n line = line.strip()\n\n if line.startswith(\">\"):\n # If a sequence has 
already been populated, save the\n # previous contig information\n if seq_temp:\n # Use join() to convert string list into the full\n # contig string. This is generally much more efficient\n # than successively concatenating strings.\n seq = \"\".join(seq_temp)\n\n logger.debug(\"Populating contig with contig_id '{}', \"\n \"header '{}' and cov '{}'\".format(\n contig_id, header, cov))\n self._populate_contigs(contig_id, header, cov, seq)\n\n # Reset temporary sequence storage\n seq_temp = []\n contig_id += 1\n\n header = line[1:]\n cov = self._parse_coverage(line)\n\n else:\n seq_temp.append(line)\n\n # Populate last contig entry\n logger.debug(\"Populating contig with contig_id '{}', \"\n \"header '{}' and cov '{}'\".format(\n contig_id, header, cov))\n seq = \"\".join(seq_temp)\n self._populate_contigs(contig_id, header, cov, seq)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npopulating the self. contigs attribute with the data from a single contig.", "response": "def _populate_contigs(self, contig_id, header, cov, sequence):\n \"\"\" Inserts data from a single contig into\\\n :py:attr:`~Assembly.contigs`.\n\n By providing a contig id, the original header, the coverage that\n is parsed from the header and the sequence, this method will\n populate the :py:attr:`~Assembly.contigs` attribute.\n\n Parameters\n ----------\n contig_id : int\n Arbitrary unique contig identifier.\n header : str\n Original header of the current contig.\n cov : float\n The contig coverage, parsed from the fasta header\n sequence : str\n The complete sequence of the contig.\n\n \"\"\"\n\n # Get AT/GC/N counts and proportions.\n # Note that self._get_gc_content returns a dictionary with the\n # information on the GC/AT/N counts and proportions. 
This makes it\n # much easier to add to the contigs attribute using the ** notation.\n gc_kwargs = self._get_gc_content(sequence, len(sequence))\n logger.debug(\"Populate GC content with: {}\".format(gc_kwargs))\n\n self.contigs[contig_id] = {\n \"header\": header,\n \"sequence\": sequence,\n \"length\": len(sequence),\n \"kmer_cov\": cov,\n **gc_kwargs\n }"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the GC content and proportions.", "response": "def _get_gc_content(sequence, length):\n \"\"\"Get GC content and proportions.\n\n Parameters\n ----------\n sequence : str\n The complete sequence of the contig.\n length : int\n The length of the sequence contig.\n\n Returns\n -------\n x : dict\n Dictionary with the at/gc/n counts and proportions\n\n \"\"\"\n\n # Get AT/GC/N counts\n at = sum(map(sequence.count, [\"A\", \"T\"]))\n gc = sum(map(sequence.count, [\"G\", \"C\"]))\n n = length - (at + gc)\n\n # Get AT/GC/N proportions\n at_prop = at / length\n gc_prop = gc / length\n n_prop = n / length\n\n return {\"at\": at, \"gc\": gc, \"n\": n,\n \"at_prop\": at_prop, \"gc_prop\": gc_prop, \"n_prop\": n_prop}"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef filter_contigs(self, *comparisons):\n\n # Reset list of filtered ids\n self.filtered_ids = []\n self.report = {}\n\n gc_filters = [\n [\"gc_prop\", \">=\", self.min_gc],\n [\"gc_prop\", \"<=\", 1 - self.min_gc]\n ]\n\n self.filters = list(comparisons) + gc_filters\n\n logger.debug(\"Filtering contigs using filters: {}\".format(\n self.filters))\n\n for contig_id, contig in self.contigs.items():\n for key, op, value in list(comparisons) + gc_filters:\n if not self._test_truth(contig[key], op, value):\n self.filtered_ids.append(contig_id)\n self.report[contig_id] = \"{}/{}/{}\".format(key,\n contig[key],\n value)\n break\n else:\n self.report[contig_id] = \"pass\"", "response": "Filter the contigs of the assembly according to the user 
provided \\ comparisons."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the length of the assembly without the filtered contigs.", "response": "def get_assembly_length(self):\n \"\"\"Returns the length of the assembly, without the filtered contigs.\n\n Returns\n -------\n x : int\n Total length of the assembly.\n\n \"\"\"\n\n return sum(\n [vals[\"length\"] for contig_id, vals in self.contigs.items()\n if contig_id not in self.filtered_ids])"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwrites the assembly to a new file.", "response": "def write_assembly(self, output_file, filtered=True):\n \"\"\"Writes the assembly to a new file.\n\n The ``filtered`` option controls whether the new assembly will be\n filtered or not.\n\n Parameters\n ----------\n output_file : str\n Name of the output assembly file.\n filtered : bool\n If ``True``, does not include filtered ids.\n \"\"\"\n\n logger.debug(\"Writing the filtered assembly into: {}\".format(\n output_file))\n with open(output_file, \"w\") as fh:\n\n for contig_id, contig in self.contigs.items():\n if contig_id not in self.filtered_ids and filtered:\n fh.write(\">{}_{}\\\\n{}\\\\n\".format(self.sample,\n contig[\"header\"],\n contig[\"sequence\"]))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write_report(self, output_file):\n\n logger.debug(\"Writing the assembly report into: {}\".format(\n output_file))\n with open(output_file, \"w\") as fh:\n\n for contig_id, vals in self.report.items():\n fh.write(\"{}, {}\\\\n\".format(contig_id, vals))", "response": "Writes a report with the test results for the current assembly."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef guess_process(query_str, process_map):\n\n save_list = []\n # loops between the processes available in process_map\n for process in 
process_map:\n similarity = SequenceMatcher(None, process, query_str)\n # checks if similarity between the process and the query string is\n # higher than 50%\n if similarity.ratio() > 0.5:\n save_list.append(process)\n\n # checks if any process is stored in save_list\n if save_list:\n logger.info(colored_print(\n \"Maybe you meant:\\n\\t{}\".format(\"\\n\\t\".join(save_list)), \"white\"))\n\n logger.info(colored_print(\"Hint: check the available processes by using \"\n \"the '-l' or '-L' flag.\", \"white\"))", "response": "Function to guess the most similar process for a given string in the sequence of processes."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef fork_procs_insanity_check(p_string):\n\n # Check for the absence of processes in one of the branches of the fork\n # ['|)' and '(|'] and for the existence of a process before starting a fork\n # (in an inner fork) ['|('].\n if FORK_TOKEN + LANE_TOKEN in p_string or \\\n LANE_TOKEN + CLOSE_TOKEN in p_string or \\\n LANE_TOKEN + FORK_TOKEN in p_string:\n raise SanityError(\"There must be a process between the fork \"\n \"start character '(' or end ')' and the separator of \"\n \"processes character '|'\")", "response": "Checks if the pipeline string contains a process between the fork start and end characters and the separator of the fork."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef inner_fork_insanity_checks(pipeline_string):\n\n # first lets get all forks to a list.\n list_of_forks = [] # stores forks\n left_indexes = [] # stores indexes of left brackets\n\n # iterate through the string looking for '(' and ')'.\n for pos, char in enumerate(pipeline_string):\n if char == FORK_TOKEN:\n # saves pos to left_indexes list\n left_indexes.append(pos)\n elif char == CLOSE_TOKEN and len(left_indexes) > 0:\n # saves fork to list_of_forks\n list_of_forks.append(pipeline_string[left_indexes[-1] + 1: 
pos])\n # removes last bracket from left_indexes list\n left_indexes = left_indexes[:-1]\n\n # sort list in descending order of number of forks\n list_of_forks.sort(key=lambda x: x.count(FORK_TOKEN), reverse=True)\n\n # Now, we can iterate through list_of_forks and check for errors in each\n # fork\n for fork in list_of_forks:\n # remove inner forks for these checks since each fork has its own entry\n # in list_of_forks. Note that each fork is now sorted in descending\n # order which enables to remove sequentially the string for the fork\n # potentially with more inner forks\n for subfork in list_of_forks:\n # checks if subfork is contained in fork and if they are different,\n # avoiding to remove itself\n if subfork in list_of_forks and subfork != fork:\n # removes inner forks. Note that string has no spaces\n fork_simplified = fork.replace(\"({})\".format(subfork), \"\")\n else:\n fork_simplified = fork\n\n # Checks if there is no fork separator character '|' within each fork\n if not len(fork_simplified.split(LANE_TOKEN)) > 1:\n raise SanityError(\"One of the forks doesn't have '|' \"\n \"separator between the processes to fork. 
This is\"\n \" the prime suspect: '({})'\".format(fork))", "response": "This function performs two sanity checks in the pipeline string."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwraps that performs all sanity checks on the pipeline string.", "response": "def insanity_checks(pipeline_str):\n \"\"\"Wrapper that performs all sanity checks on the pipeline string\n\n Parameters\n ----------\n pipeline_str : str\n String with the pipeline definition\n \"\"\"\n\n # Gets rid of all spaces in string\n p_string = pipeline_str.replace(\" \", \"\").strip()\n\n # some of the check functions use the pipeline_str as the user provided but\n # the majority uses the parsed p_string.\n checks = [\n [p_string, [\n empty_tasks,\n brackets_but_no_lanes,\n brackets_insanity_check,\n lane_char_insanity_check,\n final_char_insanity_check,\n fork_procs_insanity_check,\n start_proc_insanity_check,\n late_proc_insanity_check\n ]],\n [pipeline_str, [\n inner_fork_insanity_checks\n ]]\n ]\n\n # executes sanity checks in pipeline string before parsing it.\n for param, func_list in checks:\n for func in func_list:\n func(param)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef parse_pipeline(pipeline_str):\n\n if os.path.exists(pipeline_str):\n logger.debug(\"Found pipeline file: {}\".format(pipeline_str))\n with open(pipeline_str) as fh:\n pipeline_str = \"\".join([x.strip() for x in fh.readlines()])\n\n logger.info(colored_print(\"Resulting pipeline string:\\n\"))\n logger.info(colored_print(pipeline_str + \"\\n\"))\n\n # Perform pipeline insanity checks\n insanity_checks(pipeline_str)\n\n logger.debug(\"Parsing pipeline string: {}\".format(pipeline_str))\n\n pipeline_links = []\n lane = 1\n\n # Add unique identifiers to each process to allow a correct connection\n # between forks with same processes\n pipeline_str_modified, identifiers_to_tags = add_unique_identifiers(\n 
pipeline_str)\n\n # Get number of forks in the pipeline\n nforks = pipeline_str_modified.count(FORK_TOKEN)\n logger.debug(\"Found {} fork(s)\".format(nforks))\n\n # If there are no forks, connect the pipeline as purely linear\n if not nforks:\n logger.debug(\"Detected linear pipeline string : {}\".format(\n pipeline_str))\n linear_pipeline = [\"__init__\"] + pipeline_str_modified.split()\n pipeline_links.extend(linear_connection(linear_pipeline, lane))\n # Removes unique identifiers used for correctly assign fork parents with\n # a possible same process name\n pipeline_links = remove_unique_identifiers(identifiers_to_tags,\n pipeline_links)\n return pipeline_links\n\n for i in range(nforks):\n\n logger.debug(\"Processing fork {} in lane {}\".format(i, lane))\n # Split the pipeline at each fork start position. fields[-1] will\n # hold the process after the fork. fields[-2] will hold the processes\n # before the fork.\n fields = pipeline_str_modified.split(FORK_TOKEN, i + 1)\n\n # Get the processes before the fork. This may be empty when the\n # fork is at the beginning of the pipeline.\n previous_process = fields[-2].split(LANE_TOKEN)[-1].split()\n logger.debug(\"Previous processes string: {}\".format(fields[-2]))\n logger.debug(\"Previous processes list: {}\".format(previous_process))\n # Get lanes after the fork\n next_lanes = get_lanes(fields[-1])\n logger.debug(\"Next lanes object: {}\".format(next_lanes))\n # Get the immediate targets of the fork\n fork_sink = [x[0] for x in next_lanes]\n logger.debug(\"The fork sinks into the processes: {}\".format(fork_sink))\n\n # The first fork is a special case, where the processes before AND\n # after the fork (until the start of another fork) are added to\n # the ``pipeline_links`` variable. Otherwise, only the processes\n # after the fork will be added\n if i == 0:\n # If there are no previous process, the fork is at the beginning\n # of the pipeline string. 
In this case, inject the special\n                # \"init\" process.\n                if not previous_process:\n                    previous_process = [\"__init__\"]\n                    lane = 0\n                else:\n                    previous_process = [\"__init__\"] + previous_process\n\n                # Add the linear modules before the fork\n                pipeline_links.extend(\n                    linear_connection(previous_process, lane))\n\n            fork_source = previous_process[-1]\n            logger.debug(\"Fork source is set to: {}\".format(fork_source))\n            fork_lane = get_source_lane(previous_process, pipeline_links)\n            logger.debug(\"Fork lane is set to: {}\".format(fork_lane))\n            # Add the forking modules\n            pipeline_links.extend(\n                fork_connection(fork_source, fork_sink, fork_lane, lane))\n            # Add the linear connections in the subsequent lanes\n            pipeline_links.extend(\n                linear_lane_connection(next_lanes, lane))\n\n            lane += len(fork_sink)\n\n    pipeline_links = remove_unique_identifiers(identifiers_to_tags,\n                                               pipeline_links)\n    return pipeline_links", "response": "Parses a string into a list of dictionaries with the connections between processes and the forks between processes."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_source_lane(fork_process, pipeline_list):\n\n    fork_source = fork_process[-1]\n    fork_sig = [x for x in fork_process if x != \"__init__\"]\n\n    for position, p in enumerate(pipeline_list[::-1]):\n\n        if p[\"output\"][\"process\"] == fork_source:\n\n            lane = p[\"output\"][\"lane\"]\n            logger.debug(\"Possible source match found in position {} in lane\"\n                         \" {}\".format(position, lane))\n            lane_sequence = [x[\"output\"][\"process\"] for x in pipeline_list\n                             if x[\"output\"][\"lane\"] == lane]\n            logger.debug(\"Testing lane sequence '{}' against fork signature\"\n                         \" '{}'\".format(lane_sequence, fork_sig))\n            if lane_sequence == fork_sig:\n                return p[\"output\"][\"lane\"]\n\n    return 0", "response": "Returns the lane of the process where the fork originates, by testing each candidate lane's process sequence against the fork signature. Returns 0 when no matching lane is found."} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 script for\ngiving a raw pipeline string get a list of lanes from that pipeline.", "response": "def get_lanes(lanes_str):\n \"\"\"From a raw pipeline string, get a list of lanes from the start\n of the current fork.\n\n When the pipeline is being parsed, it will be split at every fork\n position. The string at the right of the fork position will be provided\n to this function. It's job is to retrieve the lanes that result\n from that fork, ignoring any nested forks.\n\n Parameters\n ----------\n lanes_str : str\n Pipeline string after a fork split\n\n Returns\n -------\n lanes : list\n List of lists, with the list of processes for each lane\n\n \"\"\"\n\n logger.debug(\"Parsing lanes from raw string: {}\".format(lanes_str))\n\n # Temporarily stores the lanes string after removal of nested forks\n parsed_lanes = \"\"\n # Flag used to determined whether the cursor is inside or outside the\n # right fork\n infork = 0\n for i in lanes_str:\n\n # Nested fork started\n if i == FORK_TOKEN:\n infork += 1\n # Nested fork stopped\n if i == CLOSE_TOKEN:\n infork -= 1\n\n if infork < 0:\n break\n\n # Save only when in the right fork\n if infork == 0:\n # Ignore forking syntax tokens\n if i not in [FORK_TOKEN, CLOSE_TOKEN]:\n parsed_lanes += i\n\n return [x.split() for x in parsed_lanes.split(LANE_TOKEN)]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef linear_connection(plist, lane):\n\n logger.debug(\n \"Establishing linear connection with processes: {}\".format(plist))\n\n res = []\n previous = None\n\n for p in plist:\n # Skip first process\n if not previous:\n previous = p\n continue\n\n res.append({\n \"input\": {\n \"process\": previous,\n \"lane\": lane\n },\n \"output\": {\n \"process\": p,\n \"lane\": lane\n }\n })\n previous = p\n\n return res", "response": "Connects a linear list of processes into a list of dictionaries with the links between the processes and the 
corresponding lane."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef fork_connection(source, sink, source_lane, lane):\n\n    logger.debug(\"Establishing forking of source '{}' into processes\"\n                 \" '{}'. Source lane set to '{}' and lane set to '{}'\".format(\n                    source, sink, source_lane, lane))\n\n    res = []\n    # Increase the lane counter for the first lane\n    lane_counter = lane + 1\n\n    for p in sink:\n        res.append({\n            \"input\": {\n                \"process\": source,\n                \"lane\": source_lane\n            },\n            \"output\": {\n                \"process\": p,\n                \"lane\": lane_counter\n            }\n        })\n        lane_counter += 1\n\n    return res", "response": "Creates the connections between the fork source process and each process in the sink, assigning every sink process to a new lane."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn the pipeline string with unique identifiers and a dictionary with references between the unique keys and the original values.", "response": "def add_unique_identifiers(pipeline_str):\n    \"\"\"Returns the pipeline string with unique identifiers and a dictionary with\n    references between the unique keys and the original values\n\n    Parameters\n    ----------\n    pipeline_str : str\n        Pipeline string\n\n    Returns\n    -------\n    str\n        Pipeline string with unique identifiers\n    dict\n        Match between process unique values and original names\n    \"\"\"\n\n    # Add space at beginning and end of pipeline to allow regex mapping of final\n    # process in linear pipelines\n    pipeline_str_modified = \" {} \".format(pipeline_str)\n\n    # Regex to get all process names. 
Catch all words without spaces and that\n # are not fork tokens or pipes\n reg_find_proc = r\"[^\\s{}{}{}]+\".format(LANE_TOKEN, FORK_TOKEN, CLOSE_TOKEN)\n process_names = re.findall(reg_find_proc, pipeline_str_modified)\n\n identifiers_to_tags = {}\n \"\"\"\n dict: Matches new process names (identifiers) with original process \n names\n \"\"\"\n\n new_process_names = []\n \"\"\"\n list: New process names used to replace in the pipeline string\n \"\"\"\n\n # Assigns the new process names by appending a numeric id at the end of\n # the process name\n for index, val in enumerate(process_names):\n if \"=\" in val:\n parts = val.split(\"=\")\n new_id = \"{}_{}={}\".format(parts[0], index, parts[1])\n else:\n new_id = \"{}_{}\".format(val, index)\n\n # add new process with id\n new_process_names.append(new_id)\n # makes a match between new process name and original process name\n identifiers_to_tags[new_id] = val\n\n # Add space between forks, pipes and the process names for the replace\n # regex to work\n match_result = lambda match: \" {} \".format(match.group())\n\n # force to add a space between each token so that regex modification can\n # be applied\n find = r'[{}{}{}]+'.format(FORK_TOKEN, LANE_TOKEN, CLOSE_TOKEN)\n pipeline_str_modified = re.sub(find, match_result, pipeline_str_modified)\n\n # Replace original process names by the unique identifiers\n for index, val in enumerate(process_names):\n # regex to replace process names with non assigned process ids\n # escape characters are required to match to the dict keys\n # (identifiers_to_tags), since python keys with escape characters\n # must be escaped\n find = r'{}[^_]'.format(val).replace(\"\\\\\", \"\\\\\\\\\")\n pipeline_str_modified = re.sub(find, new_process_names[index] + \" \",\n pipeline_str_modified, 1)\n\n return pipeline_str_modified, identifiers_to_tags"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef 
remove_unique_identifiers(identifiers_to_tags, pipeline_links):\n\n    # Replaces the unique identifiers by the original process names\n    for index, val in enumerate(pipeline_links):\n        if val[\"input\"][\"process\"] != \"__init__\":\n            val[\"input\"][\"process\"] = identifiers_to_tags[\n                val[\"input\"][\"process\"]]\n        if val[\"output\"][\"process\"] != \"__init__\":\n            val[\"output\"][\"process\"] = identifiers_to_tags[\n                val[\"output\"][\"process\"]]\n\n    return pipeline_links", "response": "Replaces the unique identifiers in the pipeline links list with the original process names."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _check_required_files(self):\n\n        if not os.path.exists(self.trace_file):\n            raise eh.InspectionError(\"The provided trace file could not be \"\n                                     \"opened: {}\".format(self.trace_file))\n\n        if not os.path.exists(self.log_file):\n            raise eh.InspectionError(\"The .nextflow.log files could not be \"\n                                     \"opened. 
Are you sure you are in a \"\n                                     \"nextflow project directory?\")", "response": "Checks whether the trace and log files are available."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nparse the trace file header and return the positions of each column key.", "response": "def _header_mapping(header):\n        \"\"\"Parses the trace file header and retrieves the positions of each\n        column key.\n\n        Parameters\n        ----------\n        header : str\n            The header line of nextflow's trace file\n\n        Returns\n        -------\n        dict\n            Mapping the column ID to its position (e.g.: {\"tag\":2})\n        \"\"\"\n\n        return dict(\n            (x.strip(), pos) for pos, x in enumerate(header.split(\"\\t\"))\n        )"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _expand_path(hash_str):\n\n        try:\n            first_hash, second_hash = hash_str.split(\"/\")\n            first_hash_path = join(abspath(\"work\"), first_hash)\n\n            for l in os.listdir(first_hash_path):\n                if l.startswith(second_hash):\n                    return join(first_hash_path, l)\n        except FileNotFoundError:\n            return None", "response": "Expands the hash string of a process into a full\n        working directory."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _hms(s):\n\n        if s == \"-\":\n            return 0\n\n        if s.endswith(\"ms\"):\n            return float(s.rstrip(\"ms\")) / 1000\n\n        fields = list(map(float, re.split(\"[dhms]\", s)[:-1]))\n        if len(fields) == 4:\n            return fields[0] * 24 * 3600 + fields[1] * 3600 + fields[2] * 60 +\\\n                fields[3]\n        if len(fields) == 3:\n            return fields[0] * 3600 + fields[1] * 60 + fields[2]\n        elif len(fields) == 2:\n            return fields[0] * 60 + fields[1]\n        else:\n            return fields[0]", "response": "Converts a time string with d/h/m/s (or ms) components into the corresponding number of seconds, as a float."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _size_coverter(s):\n\n        if s.upper().endswith(\"KB\"):\n            return float(s.rstrip(\"KB\")) / 1024\n\n        
elif s.upper().endswith(\" B\"):\n return float(s.rstrip(\"B\")) / 1024 / 1024\n\n elif s.upper().endswith(\"MB\"):\n return float(s.rstrip(\"MB\"))\n\n elif s.upper().endswith(\"GB\"):\n return float(s.rstrip(\"GB\")) * 1024\n\n elif s.upper().endswith(\"TB\"):\n return float(s.rstrip(\"TB\")) * 1024 * 1024\n\n else:\n return float(s)", "response": "Converts size string into megabytes\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_pipeline_processes(self):\n\n with open(self.log_file) as fh:\n\n for line in fh:\n if re.match(\".*Creating operator.*\", line):\n # Retrieves the process name from the string\n match = re.match(\".*Creating operator > (.*) --\", line)\n process = match.group(1)\n\n if any([process.startswith(x) for x in self._blacklist]):\n continue\n\n if process not in self.skip_processes:\n self.processes[match.group(1)] = {\n \"barrier\": \"W\",\n \"submitted\": set(),\n \"finished\": set(),\n \"failed\": set(),\n \"retry\": set(),\n \"cpus\": None,\n \"memory\": None\n }\n self.process_tags[process] = {}\n\n # Retrieves the pipeline name from the string\n if re.match(\".*Launching `.*` \\[.*\\] \", line):\n tag_match = re.match(\".*Launching `.*` \\[(.*)\\] \", line)\n self.pipeline_tag = tag_match.group(1) if tag_match else \\\n \"?\"\n name_match = re.match(\".*Launching `(.*)` \\[.*\\] \", line)\n self.pipeline_name = name_match.group(1) if name_match \\\n else \"?\"\n\n self.content_lines = len(self.processes)", "response": "Parses the. nextflow. log file and extracts the complete list of processes that are currently being launched and populates the self. 
processes attribute with the process name and pipeline name."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nclears the inspect attributes when re - executing a pipeline", "response": "def _clear_inspect(self):\n \"\"\"Clears inspect attributes when re-executing a pipeline\"\"\"\n\n self.trace_info = defaultdict(list)\n self.process_tags = {}\n self.process_stats = {}\n self.samples = []\n self.stored_ids = []\n self.stored_log_ids = []\n self.time_start = None\n self.time_stop = None\n self.execution_command = None\n self.nextflow_version = None\n self.abort_cause = None\n self._c = 0\n # Clean up of tag running status\n for p in self.processes.values():\n p[\"barrier\"] = \"W\"\n for i in [\"submitted\", \"finished\", \"failed\", \"retry\"]:\n p[i] = set()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _update_pipeline_status(self):\n\n with open(self.log_file) as fh:\n\n try:\n first_line = next(fh)\n except:\n raise eh.InspectionError(\"Could not read .nextflow.log file. 
Is file empty?\")\n            time_str = \" \".join(first_line.split()[:2])\n            self.time_start = time_str\n\n            if not self.execution_command:\n                try:\n                    self.execution_command = re.match(\n                        \".*nextflow run (.*)\", first_line).group(1)\n                except AttributeError:\n                    self.execution_command = \"Unknown\"\n\n            for line in fh:\n\n                if \"DEBUG nextflow.cli.CmdRun\" in line:\n                    if not self.nextflow_version:\n                        try:\n                            vline = next(fh)\n                            self.nextflow_version = re.match(\n                                \".*Version: (.*)\", vline).group(1)\n                        except AttributeError:\n                            self.nextflow_version = \"Unknown\"\n\n                if \"Session aborted\" in line:\n                    self.run_status = \"aborted\"\n                    # Get abort cause\n                    try:\n                        self.abort_cause = re.match(\n                            \".*Cause: (.*)\", line).group(1)\n                    except AttributeError:\n                        self.abort_cause = \"Unknown\"\n                    # Get time of pipeline stop\n                    time_str = \" \".join(line.split()[:2])\n                    self.time_stop = time_str\n                    self.send = True\n                    return\n                if \"Execution complete -- Goodbye\" in line:\n                    self.run_status = \"complete\"\n                    # Get time of pipeline stop\n                    time_str = \" \".join(line.split()[:2])\n                    self.time_stop = time_str\n                    self.send = True\n                    return\n\n        if self.run_status not in [\"running\", \"\"]:\n            self._clear_inspect()\n            # Take a break to allow nextflow to restart before refreshing\n            # pipeline processes\n            sleep(5)\n            self._get_pipeline_processes()\n\n        self.run_status = \"running\"", "response": "Parses the .nextflow.log file for signatures of pipeline status. 
It sets the run_status attribute of the current instance of the class."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _update_tag_status(self, process, vals):\n\n        good_status = [\"COMPLETED\", \"CACHED\"]\n\n        # Update status of each process\n        for v in list(vals)[::-1]:\n            p = self.processes[process]\n            tag = v[\"tag\"]\n\n            # If the process/tag is in the submitted list, move it to the\n            # complete or failed list\n            if tag in p[\"submitted\"]:\n                p[\"submitted\"].remove(tag)\n                if v[\"status\"] in good_status:\n                    p[\"finished\"].add(tag)\n                elif v[\"status\"] == \"FAILED\":\n                    if not v[\"work_dir\"]:\n                        v[\"work_dir\"] = \"\"\n                    self.process_tags[process][tag][\"log\"] = \\\n                        self._retrieve_log(join(v[\"work_dir\"], \".command.log\"))\n                    p[\"failed\"].add(tag)\n\n            # If the process/tag is in the retry list and it completed\n            # successfully, remove it from the retry and fail lists. Otherwise\n            # maintain it in the retry/failed lists\n            elif tag in p[\"retry\"]:\n                if v[\"status\"] in good_status:\n                    p[\"retry\"].remove(tag)\n                    p[\"failed\"].remove(tag)\n                    del self.process_tags[process][tag][\"log\"]\n                elif self.run_status == \"aborted\":\n                    p[\"retry\"].remove(tag)\n\n            elif v[\"status\"] in good_status:\n                p[\"finished\"].add(tag)\n\n            # Filter tags without a successful status.\n            if v[\"status\"] not in good_status:\n                if v[\"tag\"] in list(p[\"submitted\"]) + list(p[\"finished\"]):\n                    vals.remove(v)\n\n        return vals", "response": "Updates the status of the process and tag combination."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _update_barrier_status(self):\n\n        with open(self.log_file) as fh:\n\n            for line in fh:\n\n                # Exit barrier update after session abort signal\n                if \"Session aborted\" in line:\n                    return\n\n                if \"<<< barrier arrive\" in line:\n                    # Retrieve process name from string\n                    process_m = re.match(\".*process: (.*)\\)\", line)\n                    if process_m:\n                        
process = process_m.group(1)\n # Updates process channel to complete\n if process in self.processes:\n self.processes[process][\"barrier\"] = \"C\"", "response": "Updates the status of the process channels to complete."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _update_trace_info(self, fields, hm):\n\n process = fields[hm[\"process\"]]\n\n if process not in self.processes:\n return\n\n # Get information from a single line of trace file\n info = dict((column, fields[pos]) for column, pos in hm.items())\n\n # The headers that will be used to populate the process\n process_tag_headers = [\"realtime\", \"rss\", \"rchar\", \"wchar\"]\n for h in process_tag_headers:\n\n # In the rare occasion the tag is parsed first in the trace\n # file than the log file, add the new tag.\n if info[\"tag\"] not in self.process_tags[process]:\n # If the 'start' tag is present in the trace, use that\n # information. If not, it will be parsed in the log file.\n try:\n timestart = info[\"start\"].split()[1]\n except KeyError:\n timestart = None\n self.process_tags[process][info[\"tag\"]] = {\n \"workdir\": self._expand_path(info[\"hash\"]),\n \"start\": timestart\n }\n\n if h in info and info[\"tag\"] != \"-\":\n if h != \"realtime\" and info[h] != \"-\":\n self.process_tags[process][info[\"tag\"]][h] = \\\n round(self._size_coverter(info[h]), 2)\n else:\n self.process_tags[process][info[\"tag\"]][h] = info[h]\n\n # Set allocated cpu and memory information to process\n if \"cpus\" in info and not self.processes[process][\"cpus\"]:\n self.processes[process][\"cpus\"] = info[\"cpus\"]\n if \"memory\" in info and not self.processes[process][\"memory\"]:\n try:\n self.processes[process][\"memory\"] = self._size_coverter(\n info[\"memory\"])\n except ValueError:\n self.processes[process][\"memory\"] = None\n\n if info[\"hash\"] in self.stored_ids:\n return\n\n # If the task hash code is provided, expand it to the work directory\n # and add a 
new entry\n        if \"hash\" in info:\n            hs = info[\"hash\"]\n            info[\"work_dir\"] = self._expand_path(hs)\n\n        if \"tag\" in info:\n            tag = info[\"tag\"]\n            if tag != \"-\" and tag not in self.samples and \\\n                    tag.split()[0] not in self.samples:\n                self.samples.append(tag)\n\n        self.trace_info[process].append(info)\n        self.stored_ids.append(info[\"hash\"])", "response": "Parses a single line of the trace file and updates the trace_info and process_tags attributes."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nupdate the resources info in the processes dictionary.", "response": "def _update_process_resources(self, process, vals):\n        \"\"\"Updates the resources info in :attr:`processes` dictionary.\n        \"\"\"\n\n        resources = [\"cpus\"]\n\n        for r in resources:\n            if not self.processes[process][r]:\n                try:\n                    self.processes[process][r] = vals[0][\"cpus\"]\n                # When the trace column is not present\n                except KeyError:\n                    pass"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nparses the cpu load from the number of cpus and its usage percentage and returns the cpu/hour measure.", "response": "def _cpu_load_parser(self, cpus, cpu_per, t):\n        \"\"\"Parses the cpu load from the number of cpus and its usage\n        percentage and returns the cpu/hour measure\n\n        Parameters\n        ----------\n        cpus : str\n            Number of cpus allocated.\n        cpu_per : str\n            Percentage of cpu load measured (e.g.: 200,5%).\n        t : str\n            The time string can be something like '20s', '1m30s' or '300ms'.\n        \"\"\"\n\n        try:\n            _cpus = float(cpus)\n            _cpu_per = float(cpu_per.replace(\",\", \".\").replace(\"%\", \"\"))\n            hours = self._hms(t) / 60 / 24\n\n            return ((_cpu_per / (100 * _cpus)) * _cpus) * hours\n\n        except ValueError:\n            return 0"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _assess_resource_warnings(self, process, vals):\n\n        cpu_warnings = {}\n        mem_warnings = {}\n\n        for i in vals:\n            try:\n                expected_load = 
float(i[\"cpus\"]) * 100\n                cpu_load = float(i[\"%cpu\"].replace(\",\", \".\").replace(\"%\", \"\"))\n\n                # Flag tags whose cpu load deviates more than 10% from the\n                # allocation\n                if not expected_load * 0.9 < cpu_load < expected_load * 1.10:\n                    cpu_warnings[i[\"tag\"]] = {\n                        \"expected\": expected_load,\n                        \"value\": cpu_load\n                    }\n            except (ValueError, KeyError):\n                pass\n\n            try:\n                rss = self._size_coverter(i[\"rss\"])\n                mem_allocated = self._size_coverter(i[\"memory\"])\n\n                if rss > mem_allocated * 1.10:\n                    mem_warnings[i[\"tag\"]] = {\n                        \"expected\": mem_allocated,\n                        \"value\": rss\n                    }\n            except (ValueError, KeyError):\n                pass\n\n        return cpu_warnings, mem_warnings", "response": "Assesses whether the cpu load or memory usage of each task deviates from the allocated resources, returning cpu and memory warnings."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _update_process_stats(self):\n\n        good_status = [\"COMPLETED\", \"CACHED\"]\n\n        for process, vals in self.trace_info.items():\n\n            # Update submission status of tags for each process\n            vals = self._update_tag_status(process, vals)\n\n            # Update process resources\n            self._update_process_resources(process, vals)\n\n            self.process_stats[process] = {}\n\n            inst = self.process_stats[process]\n\n            # Get number of completed samples\n            inst[\"completed\"] = \"{}\".format(\n                len([x for x in vals if x[\"status\"] in good_status]))\n\n            # Get average time\n            try:\n                time_array = [self._hms(x[\"realtime\"]) for x in vals]\n                mean_time = round(sum(time_array) / len(time_array), 1)\n                mean_time_str = strftime('%H:%M:%S', gmtime(mean_time))\n                inst[\"realtime\"] = mean_time_str\n            # When the realtime column is not present\n            except KeyError:\n                inst[\"realtime\"] = \"-\"\n\n            # Get cumulative cpu/hours\n            try:\n                cpu_hours = [self._cpu_load_parser(\n                    x[\"cpus\"], x[\"%cpu\"], x[\"realtime\"]) for x in vals]\n                inst[\"cpuhour\"] = round(sum(cpu_hours), 2)\n            # When the realtime, cpus or %cpus column are not present\n            except KeyError:\n                inst[\"cpuhour\"] = \"-\"\n\n            # Assess resource warnings\n            
inst[\"cpu_warnings\"], inst[\"mem_warnings\"] = \\\n self._assess_resource_warnings(process, vals)\n\n # Get maximum memory\n try:\n rss_values = [self._size_coverter(x[\"rss\"]) for x in vals\n if x[\"rss\"] != \"-\"]\n if rss_values:\n max_rss = round(max(rss_values))\n rss_str = self._size_compress(max_rss)\n else:\n rss_str = \"-\"\n inst[\"maxmem\"] = rss_str\n except KeyError:\n inst[\"maxmem\"] = \"-\"\n\n # Get read size\n try:\n rchar_values = [self._size_coverter(x[\"rchar\"]) for x in vals\n if x[\"rchar\"] != \"-\"]\n if rchar_values:\n avg_rchar = round(sum(rchar_values) / len(rchar_values))\n rchar_str = self._size_compress(avg_rchar)\n else:\n rchar_str = \"-\"\n except KeyError:\n rchar_str = \"-\"\n inst[\"avgread\"] = rchar_str\n\n # Get write size\n try:\n wchar_values = [self._size_coverter(x[\"wchar\"]) for x in vals\n if x[\"wchar\"] != \"-\"]\n if wchar_values:\n avg_wchar = round(sum(wchar_values) / len(wchar_values))\n wchar_str = self._size_compress(avg_wchar)\n else:\n wchar_str = \"-\"\n except KeyError:\n wchar_str = \"-\"\n inst[\"avgwrite\"] = wchar_str", "response": "Updates the process stats with the information from the processes\n ."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef log_parser(self):\n\n # Check the timestamp of the log file. Only proceed with the parsing\n # if it changed from the previous time.\n size_stamp = os.path.getsize(self.log_file)\n self.log_retry = 0\n if size_stamp and size_stamp == self.log_sizestamp:\n return\n else:\n logger.debug(\"Updating log size stamp to: {}\".format(size_stamp))\n self.log_sizestamp = size_stamp\n\n # Regular expression to catch four groups:\n # 1. Start timestamp\n # 2. Work directory hash\n # 3. Process name\n # 4. 
Tag name\n r = \".* (.*) \\[.*\\].*\\[(.*)\\].*process > (.*) \\((.*)\\).*\"\n\n with open(self.log_file) as fh:\n\n for line in fh:\n if \"Submitted process >\" in line or \\\n \"Re-submitted process >\" in line or \\\n \"Cached process >\" in line:\n m = re.match(r, line)\n if not m:\n continue\n\n time_start = m.group(1)\n workdir = m.group(2)\n process = m.group(3)\n tag = m.group(4)\n\n # Skip if this line has already been parsed\n if time_start + tag not in self.stored_log_ids:\n self.stored_log_ids.append(time_start + tag)\n else:\n continue\n\n # For first time processes\n if process not in self.processes:\n continue\n p = self.processes[process]\n\n # Skip is process/tag combination has finished or is retrying\n if tag in list(p[\"finished\"]) + list(p[\"retry\"]):\n continue\n\n # Update failed process/tags when they have been re-submitted\n if tag in list(p[\"failed\"]) and \\\n \"Re-submitted process >\" in line:\n p[\"retry\"].add(tag)\n self.send = True\n continue\n\n # Set process barrier to running. Check for barrier status\n # are performed at the end of the trace parsing in the\n # _update_barrier_status method.\n p[\"barrier\"] = \"R\"\n if tag not in p[\"submitted\"]:\n p[\"submitted\"].add(tag)\n # Update the process_tags attribute with the new tag.\n # Update only when the tag does not exist. This may rarely\n # occur when the tag is parsed first in the trace file\n if tag not in self.process_tags[process]:\n self.process_tags[process][tag] = {\n \"workdir\": self._expand_path(workdir),\n \"start\": time_start\n }\n self.send = True\n # When the tag is filled in the trace file parsing,\n # the timestamp may not be present in the trace. 
In\n                    # those cases, fill that information here.\n                    elif not self.process_tags[process][tag][\"start\"]:\n                        self.process_tags[process][tag][\"start\"] = time_start\n                        self.send = True\n\n        self._update_pipeline_status()", "response": "Method that parses the nextflow log file once and updates the submitted number of samples for each process."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update_inspection(self):\n\n        try:\n            self.log_parser()\n        except (FileNotFoundError, StopIteration) as e:\n            logger.debug(\"ERROR: \" + str(sys.exc_info()[0]))\n            self.log_retry += 1\n            if self.log_retry == self.MAX_RETRIES:\n                raise e\n        try:\n            self.trace_parser()\n        except (FileNotFoundError, StopIteration) as e:\n            logger.debug(\"ERROR: \" + str(sys.exc_info()[0]))\n            self.trace_retry += 1\n            if self.trace_retry == self.MAX_RETRIES:\n                raise e", "response": "Wrapper method that calls the appropriate main updating methods of the inspection class."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndisplaying the default pipeline inspection overview", "response": "def display_overview(self):\n        \"\"\"Displays the default pipeline inspection overview\n        \"\"\"\n\n        stay_alive = True\n\n        self.screen = curses.initscr()\n\n        self.screen.keypad(True)\n        self.screen.nodelay(-1)\n        curses.cbreak()\n        curses.noecho()\n        curses.start_color()\n\n        self.screen_lines = self.screen.getmaxyx()[0]\n        # self.screen_width = self.screen.getmaxyx()[1]\n\n        try:\n            while stay_alive:\n\n                # Provide functionality to certain keybindings\n                self._curses_keybindings()\n                # Updates main inspector attributes\n                self.update_inspection()\n                # Display curses interface\n                self.flush_overview()\n\n                sleep(self.refresh_rate)\n        except FileNotFoundError:\n            sys.stderr.write(colored_print(\n                \"ERROR: nextflow log and/or trace files are no longer \"\n                \"reachable!\", \"red_bold\"))\n        except Exception as e:\n            sys.stderr.write(str(e))\n        finally:\n            curses.nocbreak()\n            
self.screen.keypad(0)\n curses.echo()\n curses.endwin()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _updown(self, direction):\n\n if direction == \"up\" and self.top_line != 0:\n self.top_line -= 1\n elif direction == \"down\" and \\\n self.screen.getmaxyx()[0] + self.top_line\\\n <= self.content_lines + 3:\n self.top_line += 1", "response": "Provides curses scroll functionality."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _rightleft(self, direction):\n\n if direction == \"left\" and self.padding != 0:\n self.padding -= 1\n\n if direction == \"right\" and \\\n self.screen.getmaxyx()[1] + self.padding < self.max_width:\n self.padding += 1", "response": "Provides curses horizontal padding"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef flush_overview(self):\n\n colors = {\n \"W\": 1,\n \"R\": 2,\n \"C\": 3\n }\n\n pc = {\n \"running\": 3,\n \"complete\": 3,\n \"aborted\": 4,\n \"error\": 4\n }\n\n curses.init_pair(1, curses.COLOR_WHITE, curses.COLOR_BLACK)\n curses.init_pair(2, curses.COLOR_BLUE, curses.COLOR_BLACK)\n curses.init_pair(3, curses.COLOR_GREEN, curses.COLOR_BLACK)\n curses.init_pair(4, curses.COLOR_MAGENTA, curses.COLOR_BLACK)\n\n # self.screen.erase()\n\n height, width = self.screen.getmaxyx()\n win = curses.newpad(height, 2000)\n\n # Add static header\n header = \"Pipeline [{}] inspection at {}. 
Status: \".format(\n self.pipeline_tag, strftime(\"%Y-%m-%d %H:%M:%S\", gmtime()))\n\n win.addstr(0, 0, header)\n win.addstr(0, len(header), self.run_status,\n curses.color_pair(pc[self.run_status]))\n submission_str = \"{0:23.23} {1:23.23} {2:23.23} {3:23.23}\".format(\n \"Running: {}\".format(\n sum([len(x[\"submitted\"]) for x in self.processes.values()])\n ),\n \"Failed: {}\".format(\n sum([len(x[\"failed\"]) for x in self.processes.values()])\n ),\n \"Retrying: {}\".format(\n sum([len(x[\"retry\"]) for x in self.processes.values()])\n ),\n \"Completed: {}\".format(\n sum([len(x[\"finished\"]) for x in self.processes.values()])\n )\n )\n\n win.addstr(\n 1, 0, submission_str, curses.color_pair(1)\n )\n\n headers = [\"\", \"Process\", \"Running\", \"Complete\", \"Error\",\n \"Avg Time\", \"Max Mem\", \"Avg Read\", \"Avg Write\"]\n header_str = \"{0: ^1} \" \\\n \"{1: ^25} \" \\\n \"{2: ^7} \" \\\n \"{3: ^7} \" \\\n \"{4: ^7} \" \\\n \"{5: ^10} \" \\\n \"{6: ^10} \" \\\n \"{7: ^10} \" \\\n \"{8: ^10} \".format(*headers)\n self.max_width = len(header_str)\n win.addstr(3, 0, header_str, curses.A_UNDERLINE | curses.A_REVERSE)\n\n # Get display size\n top = self.top_line\n bottom = self.screen_lines - 4 + self.top_line\n\n # Fetch process information\n for p, process in enumerate(\n list(self.processes.keys())[top:bottom]):\n\n if process not in self.process_stats:\n vals = [\"-\"] * 8\n txt_fmt = curses.A_NORMAL\n else:\n ref = self.process_stats[process]\n vals = [ref[\"completed\"],\n len(self.processes[process][\"failed\"]),\n ref[\"realtime\"],\n ref[\"maxmem\"], ref[\"avgread\"],\n ref[\"avgwrite\"]]\n txt_fmt = curses.A_BOLD\n\n proc = self.processes[process]\n if proc[\"retry\"]:\n completed = \"{}({})\".format(len(proc[\"submitted\"]),\n len(proc[\"retry\"]))\n else:\n completed = \"{}\".format(len(proc[\"submitted\"]))\n\n win.addstr(\n 4 + p, 0, \"{0: ^1} \"\n \"{1:25.25} \"\n \"{2: ^7} \"\n \"{3: ^7} \"\n \"{4: ^7} \"\n \"{5: ^10} \"\n \"{6: ^10} \"\n 
\"{7: ^10} \"\n \"{8: ^10} \".format(\n proc[\"barrier\"],\n process,\n completed,\n *vals),\n curses.color_pair(colors[proc[\"barrier\"]]) | txt_fmt)\n\n win.clrtoeol()\n win.refresh(0, self.padding, 0, 0, height-1, width-1)", "response": "Displays the default overview of the pipeline execution in the\n terminal, built from the instance attributes."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_log_lines(self, n=300):\n\n with open(self.log_file) as fh:\n last_lines = fh.readlines()[-n:]\n\n return last_lines", "response": "Returns a list with the last n lines of the nextflow log file"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _prepare_static_info(self):\n\n pipeline_files = {}\n\n with open(join(self.workdir, self.pipeline_name)) as fh:\n pipeline_files[\"pipelineFile\"] = fh.readlines()\n\n nf_config = join(self.workdir, \"nextflow.config\")\n if os.path.exists(nf_config):\n with open(nf_config) as fh:\n pipeline_files[\"configFile\"] = fh.readlines()\n\n # Check for specific flowcraft configuration files\n configs = {\n \"params.config\": \"paramsFile\",\n \"resources.config\": \"resourcesFile\",\n \"containers.config\": \"containersFile\",\n \"user.config\": \"userFile\",\n }\n for config, key in configs.items():\n cfile = join(self.workdir, config)\n if os.path.exists(cfile):\n with open(cfile) as fh:\n pipeline_files[key] = fh.readlines()\n\n return pipeline_files", "response": "Prepares the first batch of information to be sent in the first POST request, 
containing the contents of the pipeline file and of any configuration files. Returns a dictionary mapping each file key to its contents."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _dag_file_to_dict(self):\n try:\n dag_file = open(os.path.join(self.workdir, \".treeDag.json\"))\n dag_json = json.load(dag_file)\n except (FileNotFoundError, json.decoder.JSONDecodeError):\n logger.warning(colored_print(\n \"WARNING: dotfile named .treeDag.json not found or corrupted\",\n \"red_bold\"))\n dag_json = {}\n\n return dag_json", "response": "Opens the .treeDag.json dotfile in the current working directory and returns its contents as a dictionary, which is used in the POST requests made through the method _establish_connection"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the hash of the nextflow file", "response": "def _get_run_hash(self):\n \"\"\"Gets the hash of the nextflow file\"\"\"\n\n # Get name and path of the pipeline from the log file\n pipeline_path = get_nextflow_filepath(self.log_file)\n\n # Get hash from the entire pipeline file\n pipeline_hash = hashlib.md5()\n with open(pipeline_path, \"rb\") as fh:\n for chunk in iter(lambda: fh.read(4096), b\"\"):\n pipeline_hash.update(chunk)\n # Get hash from the current working dir and hostname\n workdir = self.workdir.encode(\"utf8\")\n hostname = socket.gethostname().encode(\"utf8\")\n hardware_addr = str(uuid.getnode()).encode(\"utf8\")\n dir_hash = hashlib.md5(workdir + hostname + hardware_addr)\n\n return pipeline_hash.hexdigest() + dir_hash.hexdigest()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the nextflow file path from the nextflow log file. It searches for the nextflow run command throughout the file. 
Parameters ---------- log_file : str Path for the .nextflow.log file Returns ------- str Path for the nextflow file", "response": "def get_nextflow_filepath(log_file):\n \"\"\"Gets the nextflow file path from the nextflow log file. It searches for\n the nextflow run command throughout the file.\n\n Parameters\n ----------\n log_file : str\n Path for the .nextflow.log file\n\n Returns\n -------\n str\n Path for the nextflow file\n \"\"\"\n\n with open(log_file) as fh:\n # Searches for the first occurrence of the nextflow pipeline\n # file name in the .nextflow.log file\n while True:\n line = fh.readline()\n if not line:\n # Reached the end of the file without finding the command\n raise eh.LogError(\"Nextflow command path could not be found - Is \"\n \".nextflow.log empty?\")\n try:\n # Regex supports absolute paths and relative paths\n pipeline_path = re.match(r\".*\\s(.*\\.nf).*\", line) \\\n .group(1)\n return pipeline_path\n except AttributeError:\n continue"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef main(sample_id, trace_file, workdir):\n\n # Determine the path of the stored JSON for the sample_id\n stats_suffix = \".stats.json\"\n stats_path = join(workdir, sample_id + stats_suffix)\n trace_path = join(workdir, trace_file)\n\n logger.info(\"Starting pipeline status routine\")\n\n logger.debug(\"Checking for previous pipeline status data\")\n stats_array = get_previous_stats(stats_path)\n logger.info(\"Stats JSON object set to : {}\".format(stats_array))\n\n # Search for this substring in the tags field. 
Only lines with this\n # tag will be processed for the reports\n tag = \" getStats\"\n logger.debug(\"Tag variable set to: {}\".format(tag))\n\n logger.info(\"Starting parsing of trace file: {}\".format(trace_path))\n with open(trace_path) as fh:\n\n header = next(fh).strip().split()\n logger.debug(\"Header set to: {}\".format(header))\n\n for line in fh:\n fields = line.strip().split(\"\\t\")\n # Check if tag substring is in the tag field of the nextflow trace\n if tag in fields[2] and fields[3] == \"COMPLETED\":\n logger.debug(\n \"Parsing trace line with COMPLETED status: {}\".format(\n line))\n current_json = get_json_info(fields, header)\n\n stats_array[fields[0]] = current_json\n else:\n logger.debug(\n \"Ignoring trace line without COMPLETED status\"\n \" or stats specific tag: {}\".format(\n line))\n\n with open(stats_path, \"w\") as fh, open(\".report.json\", \"w\") as rfh:\n fh.write(json.dumps(stats_array, separators=(\",\", \":\")))\n rfh.write(json.dumps(stats_array, separators=(\",\", \":\")))", "response": "This function parses a nextflow trace file and writes a JSON report with the relevant information."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nbrewing a given list of processes according to the recipe and return the final pipeline string.", "response": "def brew_innuendo(args):\n \"\"\"Brews a given list of processes according to the recipe\n\n Parameters\n ----------\n args : argparse.Namespace\n The arguments passed through argparser that will be used to check\n the recipe, tasks and brew the process\n\n Returns\n -------\n str\n The final pipeline string, ready for the engine.\n \"\"\"\n\n # Create recipe class instance\n automatic_pipeline = Innuendo()\n\n if not args.tasks:\n input_processes = \" \".join(\n automatic_pipeline.process_descriptions.keys())\n else:\n input_processes = args.tasks\n\n # Validate the provided pipeline processes\n validated = 
automatic_pipeline.validate_pipeline(input_processes)\n if not validated:\n sys.exit(1)\n # Get the final pipeline string\n pipeline_string = automatic_pipeline.run_auto_pipeline(input_processes)\n\n return pipeline_string"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a pipeline string ready for parsing and processing a recipe.", "response": "def brew_recipe(recipe_name):\n \"\"\"Returns a pipeline string from a recipe name.\n\n Parameters\n ----------\n recipe_name : str\n Name of the recipe. Must match the name attribute in one of the classes\n defined in :mod:`flowcraft.generator.recipes`\n\n Returns\n -------\n str\n Pipeline string ready for parsing and processing by flowcraft engine\n \"\"\"\n\n # This will iterate over all modules included in the recipes subpackage\n # It will return the module finder and the module name, along with the\n # correct prefix\n prefix = \"{}.\".format(recipes.__name__)\n for importer, modname, _ in pkgutil.iter_modules(recipes.__path__, prefix):\n\n # Import the current module\n _module = importer.find_module(modname).load_module(modname)\n\n # Fetch all available classes in module\n _recipe_classes = [cls for cls in _module.__dict__.values() if\n isinstance(cls, type)]\n\n # Iterate over each Recipe class, and check for a match with the\n # provided recipe name.\n for cls in _recipe_classes:\n # Create instance of class to allow fetching the name attribute\n recipe_cls = cls()\n if getattr(recipe_cls, \"name\", None) == recipe_name:\n return recipe_cls.brew()\n\n logger.error(\n colored_print(\"Recipe name '{}' does not exist.\".format(recipe_name))\n )\n sys.exit(1)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef validate_pipeline(pipeline_string):\n if \"(\" in pipeline_string or \")\" in pipeline_string or \"|\" in \\\n pipeline_string:\n logger.error(\n colored_print(\"Please provide a valid task list!\", \"red_bold\")\n )\n 
return False\n\n return True", "response": "Validates the pipeline string by checking that it does not contain the forbidden characters '(', ')' or '|'."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nbuilds the upstream pipeline of the current process and returns the resulting pipeline fragment.", "response": "def build_upstream(self, process_descriptions, task, all_tasks,\n task_pipeline,\n count_forks, total_tasks, forks):\n \"\"\"Builds the upstream pipeline of the current process\n\n Checks for the upstream processes to the current process and\n adds them to the current pipeline fragment if they were provided in\n the process list.\n\n Parameters\n ----------\n process_descriptions : dict\n Information on each process's input, output and whether it is forkable\n task : str\n Current process\n all_tasks : list\n A list of all provided processes\n task_pipeline : list\n Current pipeline fragment\n count_forks : int\n Current number of forks\n total_tasks : str\n All space separated processes\n forks : list\n Current forks\n Returns\n -------\n list : resulting pipeline fragment\n \"\"\"\n if task in process_descriptions:\n if process_descriptions[task][1] is not None:\n if len(process_descriptions[task][1].split(\"|\")) > 1:\n local_forks = process_descriptions[task][1].split(\"|\")\n\n # Produces a new pipeline fragment for each forkable\n # process\n for local_fork in local_forks:\n if local_fork in total_tasks:\n count_forks += 1\n task_pipeline.insert(\n 0,\n process_descriptions[task][1]\n )\n self.define_pipeline_string(\n process_descriptions,\n local_fork,\n False,\n True,\n count_forks,\n total_tasks,\n forks\n )\n\n return task_pipeline\n else:\n # Adds the process to the pipeline fragment in case it is\n # provided in the task list\n if process_descriptions[task][1] in total_tasks:\n task_pipeline.insert(\n 0,\n 
process_descriptions[task][1].split(\"|\")[0]\n )\n\n # Proceeds building upstream until the input for a\n # process is None\n self.build_upstream(\n process_descriptions,\n process_descriptions[task][1].split(\"|\")[0],\n all_tasks,\n task_pipeline,\n count_forks,\n total_tasks,\n forks\n )\n else:\n logger.error(\n colored_print(\"{} not in provided protocols as \"\n \"input for {}\".format(\n process_descriptions[task][1], task), \"red_bold\"\n )\n )\n\n sys.exit()\n\n return task_pipeline\n else:\n return task_pipeline"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nbuilding the downstream pipeline of the current process and returns the list of the new pipeline fragments.", "response": "def build_downstream(self, process_descriptions, task, all_tasks,\n task_pipeline,\n count_forks, total_tasks, forks):\n \"\"\"Builds the downstream pipeline of the current process\n\n Checks for the downstream processes to the current process and\n adds them to the current pipeline fragment.\n\n Parameters\n ----------\n process_descriptions : dict\n Information of processes input, output and if is forkable\n task : str\n Current process\n all_tasks : list\n A list of all provided processes\n task_pipeline : list\n Current pipeline fragment\n count_forks : int\n Current number of forks\n total_tasks : str\n All space separated processes\n forks : list\n Current forks\n Returns\n -------\n list : resulting pipeline fragment\n \"\"\"\n\n if task in process_descriptions:\n if process_descriptions[task][2] is not None:\n if len(process_descriptions[task][2].split(\"|\")) > 1:\n local_forks = process_descriptions[task][2].split(\"|\")\n\n # Adds the process to the pipeline fragment downstream\n # and defines a new pipeline fragment for each fork.\n # Those will only look for downstream processes\n for local_fork in local_forks:\n if local_fork in total_tasks:\n count_forks += 1\n task_pipeline.append(process_descriptions[task][2])\n 
self.define_pipeline_string(\n process_descriptions,\n local_fork,\n False,\n True,\n count_forks,\n total_tasks,\n forks\n )\n\n return task_pipeline\n else:\n if process_descriptions[task][2] in total_tasks:\n task_pipeline.append(process_descriptions[task][2].split(\"|\")[0])\n\n # Proceeds building downstream until the output for a\n # process is None\n self.build_downstream(\n process_descriptions,\n process_descriptions[task][2].split(\"|\")[0],\n all_tasks,\n task_pipeline,\n count_forks,\n total_tasks,\n forks\n )\n\n return task_pipeline\n else:\n return task_pipeline"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef define_pipeline_string(self, process_descriptions, tasks,\n check_upstream,\n check_downstream, count_forks, total_tasks,\n forks):\n \"\"\"Builds the possible forks and connections between the provided\n processes\n\n This method loops through all the provided tasks and builds the\n upstream and downstream pipeline if required. 
It then returns all\n possible forks that need to be merged a posteriori.\n\n Parameters\n ----------\n process_descriptions : dict\n Information on each process's input, output and whether it is forkable\n tasks : str\n Space separated processes\n check_upstream : bool\n Whether to build the upstream pipeline of the current task\n check_downstream : bool\n Whether to build the downstream pipeline of the current task\n count_forks : int\n Number of current forks\n total_tasks : str\n All space separated processes\n forks : list\n Current forks\n\n Returns\n -------\n list : List with all the possible pipeline forks\n \"\"\"\n\n tasks_array = tasks.split()\n\n for task_unsplit in tasks_array:\n task = task_unsplit.split(\"=\")[0]\n\n if task not in process_descriptions.keys():\n logger.error(\n colored_print(\n \"{} not in the possible processes\".format(task),\n \"red_bold\"\n )\n )\n\n sys.exit()\n else:\n process_split = task_unsplit.split(\"=\")\n\n if len(process_split) > 1:\n self.process_to_id[process_split[0]] = process_split[1]\n\n # Only uses the process if it is not already in the possible forks\n if not bool([x for x in forks if task in x]) and not bool([y for y in forks if process_descriptions[task][2] in y]):\n task_pipeline = []\n\n if task in process_descriptions:\n\n if check_upstream:\n task_pipeline = self.build_upstream(\n process_descriptions,\n task,\n tasks_array,\n task_pipeline,\n count_forks,\n total_tasks,\n forks\n )\n\n task_pipeline.append(task)\n\n if check_downstream:\n task_pipeline = self.build_downstream(\n process_descriptions,\n task,\n tasks_array,\n task_pipeline,\n count_forks,\n total_tasks,\n forks\n )\n\n # Adds the pipeline fragment to the list of possible forks\n forks.append(list(OrderedDict.fromkeys(task_pipeline)))\n\n # Checks for task in fork. 
Case order of input processes is reversed\n elif bool([y for y in forks if process_descriptions[task][2] in y]):\n for fork in forks:\n if task not in fork:\n try:\n dependent_index = fork.index(process_descriptions[task][2])\n fork.insert(dependent_index, task)\n except ValueError:\n continue\n\n for i in range(0, len(forks)):\n for j in range(0, len(forks[i])):\n try:\n if len(forks[i][j].split(\"|\")) > 1:\n forks[i][j] = forks[i][j].split(\"|\")\n tmp_fork = []\n for s in forks[i][j]:\n if s in total_tasks:\n tmp_fork.append(s)\n\n forks[i][j] = tmp_fork\n\n except AttributeError:\n continue\n\n return forks", "response": "Builds the possible forks and connections between the provided processes and returns a list with all the possible pipeline forks."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef build_pipeline_string(self, forks):\n\n final_forks = []\n\n for i in range(0, len(forks)):\n needs_merge = [False, 0, 0, 0, 0, \"\"]\n is_merged = False\n for i2 in range(0, len(forks[i])):\n for j in range(i, len(forks)):\n needs_merge[0] = False\n for j2 in range(0, len(forks[j])):\n try:\n j2_fork = forks[j][j2].split(\"|\")\n except AttributeError:\n j2_fork = forks[j][j2]\n\n # Gets the indexes of the forks matrix that need to\n # be merged\n if forks[i][i2] in j2_fork and (i2 == 0 or j2 == 0) and i != j:\n needs_merge[0] = True\n needs_merge[1] = i\n needs_merge[2] = i2\n needs_merge[3] = j\n needs_merge[4] = j2\n needs_merge[5] = forks[i][i2]\n\n if needs_merge[0]:\n index_merge_point = forks[needs_merge[3]][-1].index(needs_merge[5])\n\n # Merges the forks. 
If only one fork is possible,\n # that fork is neglected and it merges into a single\n # channel.\n if needs_merge[2] == 0:\n if len(forks[needs_merge[3]][-1]) < 2:\n forks[needs_merge[3]] = forks[needs_merge[3]][:-1] + forks[needs_merge[1]][::]\n else:\n forks[needs_merge[3]][-1][index_merge_point] = forks[needs_merge[1]]\n\n elif needs_merge[4] == 0:\n if len(forks[needs_merge[3]][-1]) < 2:\n forks[needs_merge[3]] = forks[needs_merge[3]][:-1] + forks[needs_merge[1]][::]\n else:\n forks[needs_merge[3]][-1][index_merge_point] = forks[needs_merge[1]]\n\n is_merged = True\n\n # Adds forks that don't need merge to the final forks\n if needs_merge[0] is not None and not is_merged:\n if bool([nf for nf in forks[i] if \"|\" in nf]):\n continue\n final_forks.append(forks[i])\n\n if len(final_forks) == 1:\n final_forks = str(final_forks[0])\n\n # Parses the string array to the flowcraft nomenclature\n pipeline_string = \" \" + str(final_forks)\\\n .replace(\"[[\", \"( \")\\\n .replace(\"]]\", \" )\")\\\n .replace(\"]\", \" |\")\\\n .replace(\", [\", \" \")\\\n .replace(\"'\", \"\")\\\n .replace(\",\", \"\")\\\n .replace(\"[\", \"\")\n\n if pipeline_string[-1] == \"|\":\n pipeline_string = pipeline_string[:-1]\n\n to_search = \" {} \"\n to_replace = \" {}={} \"\n\n # Replace only names by names + process ids\n for key, val in self.process_to_id.items():\n # Case only one process in the pipeline\n pipeline_string = pipeline_string\\\n .replace(to_search.format(key),\n to_replace.format(key, val))\n\n return pipeline_string", "response": "Parses and returns the pipeline string for the given list of forks."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_component_str(component, params=None, directives=None):\n\n final_directives = {}\n\n if directives:\n final_directives = directives\n\n if params:\n final_directives[\"params\"] = params\n\n if final_directives:\n return \"{}={}\".format(\n component, 
json.dumps(final_directives, separators=(\",\", \":\")))\n else:\n return component", "response": "Generates a string based on the provided parameters and directives and returns the component string."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nwrite a report from multiple samples.", "response": "def write_report(storage_dic, output_file, sample_id):\n \"\"\" Writes a report from multiple samples.\n\n Parameters\n ----------\n storage_dic : dict or :py:class:`OrderedDict`\n Storage containing the trimming statistics. See :py:func:`parse_log`\n for its generation.\n output_file : str\n Path where the output file will be generated.\n sample_id : str\n Id or name of the current sample.\n \"\"\"\n\n with open(output_file, \"w\") as fh, open(\".report.json\", \"w\") as json_rep:\n\n # Write header\n fh.write(\"Sample,Total length,Total trimmed,%,5end Trim,3end Trim,\"\n \"bad_reads\\\\n\")\n\n # Write contents\n for sample, vals in storage_dic.items():\n fh.write(\"{},{}\\\\n\".format(\n sample, \",\".join([str(x) for x in vals.values()])))\n\n json_dic = {\n \"tableRow\": [{\n \"sample\": sample_id,\n \"data\": [\n {\"header\": \"trimmed\",\n \"value\": vals[\"total_trim_perc\"],\n \"table\": \"qc\",\n \"columnBar\": True},\n ]\n }],\n \"plotData\": [{\n \"sample\": sample_id,\n \"data\": {\n \"sparkline\": vals[\"clean_len\"]\n }\n }],\n \"badReads\": vals[\"bad_reads\"]\n }\n json_rep.write(json.dumps(json_dic, separators=(\",\", \":\")))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fix_contig_names(assembly_path):\n\n fixed_assembly = \"fixed_assembly.fa\"\n\n with open(assembly_path) as in_fh, open(fixed_assembly, \"w\") as ou_fh:\n\n for line in in_fh:\n\n if line.startswith(\">\"):\n fixed_line = line.replace(\" \", \"_\")\n ou_fh.write(fixed_line)\n else:\n ou_fh.write(line)\n\n return fixed_assembly", "response": "Removes whitespace from the assembly contig 
names\n "} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncleans up the temporary fastq files.", "response": "def clean_up(fastq):\n \"\"\"\n Cleans the temporary fastq files. If they are symlinks, the link\n source is removed\n\n Parameters\n ----------\n fastq : list\n List of fastq files.\n \"\"\"\n\n for fq in fastq:\n # Get real path of fastq files, following symlinks\n rp = os.path.realpath(fq)\n logger.debug(\"Removing temporary fastq file path: {}\".format(rp))\n if re.match(\".*/work/.{2}/.{30}/.*\", rp):\n os.remove(rp)"}