{"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntakes a dict of keys and returns a list of tuples with materialized value paths and values.", "response": "def materialize_dict(bundle: dict, separator: str = '.') -> t.List[t.Tuple[str, t.Any]]:\n \"\"\"\n Transforms a given ``bundle`` into a *sorted* list of tuples with materialized value paths and values:\n ``('path.to.value', <value>)``. Output is ordered by depth: the deepest element first.\n\n :param bundle: a dict to materialize\n :param separator: build paths with a given separator\n :return: a depth descending and alphabetically ascending sorted list (-deep, asc), the longest first\n\n ::\n\n sample = {\n 'a': 1,\n 'aa': 1,\n 'b': {\n 'c': 1,\n 'b': 1,\n 'a': 1,\n 'aa': 1,\n 'aaa': {\n 'a': 1\n }\n }\n }\n materialize_dict(sample, '/')\n [\n ('b/aaa/a', 1),\n ('b/a', 1),\n ('b/aa', 1),\n ('b/b', 1),\n ('b/c', 1),\n ('a', 1),\n ('aa', 1)\n ]\n \"\"\"\n\n def _matkeysort(tup: t.Tuple[str, t.Any]):\n return len(tup[0].split(separator))\n\n s1 = sorted(_materialize_dict(bundle, separator=separator), key=lambda x: x[0])\n return sorted(s1, key=_matkeysort, reverse=True)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nserializes a given dictionary into a list of tuples with materialized value paths and values.", "response": "def mp_serialize_dict(\n bundle: dict,\n separator: str = '.',\n serialize: t.Optional[t.Callable] = dump_yaml,\n value_prefix: str = '::YAML::\\n') -> t.List[t.Tuple[str, bytes]]:\n \"\"\"\n Transforms a given ``bundle`` into a *sorted* list of tuples with materialized value paths and values:\n ``('path.to.value', b'<value>')``. If the ``<value>`` is not an instance of a basic type, it's serialized\n with ``serialize`` callback. 
If this value is an empty string, it's serialized anyway to enforce correct\n type if storage backend does not support saving empty strings.\n\n :param bundle: a dict to materialize\n :param separator: build paths with a given separator\n :param serialize: a method to serialize non-basic types, default is ``yaml.dump``\n :param value_prefix: a prefix for non-basic serialized types\n :return: a list of tuples ``(mat_path, b'value')``\n\n ::\n\n sample = {\n 'bool_flag': '', # flag\n 'unicode': '\u0432\u0430\u0441\u044f',\n 'none_value': None,\n 'debug': True,\n 'mixed': ['ascii', '\u044e\u043d\u0438\u043a\u043e\u0434', 1, {'d': 1}, {'b': 2}],\n 'nested': {\n 'a': {\n 'b': 2,\n 'c': b'bytes',\n }\n }\n }\n\n result = mp_serialize_dict(sample, separator='/')\n assert result == [\n ('nested/a/b', b'2'),\n ('nested/a/c', b'bytes'),\n ('bool_flag', b\"::YAML::\\\\n''\\\\n\"),\n ('debug', b'true'),\n ('mixed', b'::YAML::\\\\n- ascii\\\\n- '\n b'\"\\\\\\\\u044E\\\\\\\\u043D\\\\\\\\u0438\\\\\\\\u043A\\\\\\\\u043E\\\\\\\\u0434\"\\\\n- 1\\\\n- '\n b'{d: 1}\\\\n- {b: 2}\\\\n'),\n ('none_value', None),\n ('unicode', b'\\\\xd0\\\\xb2\\\\xd0\\\\xb0\\\\xd1\\\\x81\\\\xd1\\\\x8f')\n ]\n \"\"\"\n\n md = materialize_dict(bundle, separator=separator)\n res = []\n for path, value in md:\n # have to serialize values (value should be None or a string / binary data)\n if value is None:\n pass\n elif isinstance(value, str) and value != '':\n # check for value != '' used to armor empty string with forced serialization\n # since it can be not recognized by a storage backend\n pass\n elif isinstance(value, bytes):\n pass\n elif isinstance(value, bool):\n value = str(value).lower()\n elif isinstance(value, (int, float, Decimal)):\n value = str(value)\n else:\n value = (value_prefix + serialize(value))\n\n if isinstance(value, str):\n value = value.encode()\n\n res.append((path, value))\n\n return res"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code 
does\ndef wf(raw_str: str,\n flush: bool = True,\n prevent_completion_polluting: bool = True,\n stream: t.TextIO = sys.stdout):\n \"\"\"\n Writes a given ``raw_str`` into a ``stream``. Ignores output if ``prevent_completion_polluting`` is set and there's\n no extra ``sys.argv`` arguments present (a bash completion issue).\n\n :param raw_str: a raw string to print\n :param flush: execute ``flush()``\n :param prevent_completion_polluting: don't write anything if ``len(sys.argv) <= 1``\n :param stream: ``sys.stdout`` by default\n :return: None\n \"\"\"\n if prevent_completion_polluting and len(sys.argv) <= 1:\n return\n\n stream.write(raw_str)\n flush and hasattr(stream, 'flush') and stream.flush()", "response": "Writes a given string into a stream."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef coerce_str_to_bool(val: t.Union[str, int, bool, None], strict: bool = False) -> bool:\n if isinstance(val, str):\n val = val.lower()\n\n flag = ENV_STR_BOOL_COERCE_MAP.get(val, None)\n\n if flag is not None:\n return flag\n\n if strict:\n raise ValueError('Unsupported value for boolean flag: `%s`' % val)\n\n return bool(val)", "response": "Converts a given string representation of boolean into a boolean."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef env_bool_flag(flag_name: str, strict: bool = False, env: t.Optional[t.Dict[str, str]] = None) -> bool:\n env = env or os.environ\n sentinel = object()\n val = env.get(flag_name, sentinel)\n\n if val is sentinel:\n return False\n\n return coerce_str_to_bool(val, strict=strict)", "response": "Converts an environment variable name into a boolean."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef is_dockerized(flag_name: str = 'DOCKERIZED', strict: bool = False):\n return env_bool_flag(flag_name, strict=strict)", "response": "Reads env 
DOCKERIZED env variable as a boolean."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef is_production(flag_name: str = 'PRODUCTION', strict: bool = False):\n return env_bool_flag(flag_name, strict=strict)", "response": "Reads environment variable PRODUCTION as a boolean."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nruns the given code line by line with printing as list of lines and return variable ans.", "response": "def runcode(code):\n\t\"\"\"Run the given code line by line with printing, as list of lines, and return variable 'ans'.\"\"\"\n\tfor line in code:\n\t\tprint('# '+line)\n\t\texec(line,globals())\n\tprint('# return ans')\n\treturn ans"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef exec_all_endpoints(self, *args, **kwargs):\n results = []\n for handler in self.endpoints:\n if isinstance(handler, weakref.ref):\n handler = handler()\n if self.adapt_params:\n bind = self._adapt_call_params(handler, args, kwargs)\n res = handler(*bind.args, **bind.kwargs)\n else:\n res = handler(*args, **kwargs)\n if isinstance(res, MultipleResults):\n if res.done:\n results += res.results\n else:\n results += res._results\n elif res is not NoResult:\n results.append(res)\n return MultipleResults(results, concurrent=self.concurrent, owner=self)", "response": "Execute each passed endpoint and collect the results."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef run(self, *args, **kwargs):\n if self.fvalidation is not None:\n try:\n if self.fvalidation(*args, **kwargs) is False:\n raise ExecutionError(\"Validation returned ``False``\")\n except Exception as e:\n if __debug__:\n logger.exception(\"Validation failed\")\n else:\n logger.error(\"Validation failed\")\n raise ExecutionError(\n \"The validation of the arguments specified to 
``run()`` \"\n \"has failed\") from e\n try:\n if self.exec_wrapper is None:\n return self.exec_all_endpoints(*args, **kwargs)\n else:\n # if a exec wrapper is defined, defer notification to it,\n # a callback to execute the default notification process\n result = self.exec_wrapper(self.endpoints,\n self.exec_all_endpoints,\n *args, **kwargs)\n if inspect.isawaitable(result):\n result = pull_result(result)\n return result\n except Exception as e:\n if __debug__:\n logger.exception(\"Error while executing handlers\")\n else:\n logger.error(\"Error while executing handlers\")\n raise ExecutionError(\"Error while executing handlers\") from e", "response": "Execute all the handlers and return the result of the exec_wrapper method."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndiscovering the minimum and maximum UID number.", "response": "def login_defs():\n \"\"\"Discover the minimum and maximum UID number.\"\"\"\n uid_min = None\n uid_max = None\n login_defs_path = '/etc/login.defs'\n if os.path.exists(login_defs_path):\n with io.open(text_type(login_defs_path), encoding=text_type('utf-8')) as log_defs_file:\n login_data = log_defs_file.readlines()\n for line in login_data:\n if PY3: # pragma: no cover\n line = str(line)\n if PY2: # pragma: no cover\n line = line.encode(text_type('utf8'))\n if line[:7] == text_type('UID_MIN'):\n uid_min = int(line.split()[1].strip())\n if line[:7] == text_type('UID_MAX'):\n uid_max = int(line.split()[1].strip())\n if not uid_min: # pragma: no cover\n uid_min = DEFAULT_UID_MIN\n if not uid_max: # pragma: no cover\n uid_max = DEFAULT_UID_MAX\n return uid_min, uid_max"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef collect_static() -> bool:\n from django.core.management import execute_from_command_line\n # from django.conf import settings\n # if not os.listdir(settings.STATIC_ROOT):\n wf('Collecting static files... 
', False)\n execute_from_command_line(['./manage.py', 'collectstatic', '-c', '--noinput', '-v0'])\n wf('[+]\\n')\n return True", "response": "Runs Django collectstatic command in silent mode."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns an instance of the inner config parser for the specified endpoint.", "response": "def inner_parser(self) -> BaseParser:\n \"\"\"\n Prepares inner config parser for config stored at ``endpoint``.\n\n :return: an instance of :class:`~django_docker_helpers.config.backends.base.BaseParser`\n\n :raises config.exceptions.KVStorageValueIsEmpty: if specified ``endpoint`` does not contain a config\n \"\"\"\n if self._inner_parser is not None:\n return self._inner_parser\n\n config = self.client.get(self.endpoint)\n if not config:\n raise KVStorageValueIsEmpty('Key `{0}` does not exist or value is empty'.format(self.endpoint))\n\n config = config.decode()\n\n self._inner_parser = self.inner_parser_class(\n config=io.StringIO(config),\n path_separator=self.path_separator,\n scope=None\n )\n return self._inner_parser"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef generate_add_user_command(proposed_user=None, manage_home=None):\n command = None\n if get_platform() in ('Linux', 'OpenBSD'):\n command = '{0} {1}'.format(sudo_check(), LINUX_CMD_USERADD)\n if proposed_user.uid:\n command = '{0} -u {1}'.format(command, proposed_user.uid)\n if proposed_user.gid:\n command = '{0} -g {1}'.format(command, proposed_user.gid)\n if proposed_user.gecos:\n command = '{0} -c \\'{1}\\''.format(command, proposed_user.gecos)\n if manage_home:\n if proposed_user.home_dir:\n if os.path.exists(proposed_user.home_dir):\n command = '{0} -d {1}'.format(command, proposed_user.home_dir)\n elif not os.path.exists('/home/{0}'.format(proposed_user.name)):\n command = '{0} -m'.format(command)\n if proposed_user.shell:\n command = '{0} -s {1}'.format(command, 
proposed_user.shell)\n command = '{0} {1}'.format(command, proposed_user.name)\n elif get_platform() == 'FreeBSD': # pragma: FreeBSD\n command = '{0} {1} useradd'.format(sudo_check(), FREEBSD_CMD_PW)\n if proposed_user.uid:\n command = '{0} -u {1}'.format(command, proposed_user.uid)\n if proposed_user.gid:\n command = '{0} -g {1}'.format(command, proposed_user.gid)\n if proposed_user.gecos:\n command = '{0} -c \\'{1}\\''.format(command, proposed_user.gecos)\n if manage_home:\n if proposed_user.home_dir:\n command = '{0} -d {1}'.format(command, proposed_user.home_dir)\n else:\n command = '{0} -m'.format(command)\n if proposed_user.shell:\n command = '{0} -s {1}'.format(command, proposed_user.shell)\n command = '{0} -n {1}'.format(command, proposed_user.name)\n\n if command:\n return shlex.split(str(command))", "response": "Generate command to add a user."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ngenerate command to modify existing user.", "response": "def generate_modify_user_command(task=None, manage_home=None):\n \"\"\"Generate command to modify existing user to become the proposed user.\n\n args:\n task (dict): A proposed user and the differences between it and the existing user\n\n returns:\n list: The command string split into shell-like syntax\n \"\"\"\n name = task['proposed_user'].name\n comparison_result = task['user_comparison']['result']\n command = None\n if get_platform() in ('Linux', 'OpenBSD'):\n command = '{0} {1}'.format(sudo_check(), LINUX_CMD_USERMOD)\n if comparison_result.get('replacement_uid_value'):\n command = '{0} -u {1}'.format(command, comparison_result.get('replacement_uid_value'))\n if comparison_result.get('replacement_gid_value'):\n command = '{0} -g {1}'.format(command, comparison_result.get('replacement_gid_value'))\n if comparison_result.get('replacement_gecos_value'):\n command = '{0} -c {1}'.format(command, comparison_result.get('replacement_gecos_value'))\n if 
comparison_result.get('replacement_shell_value'):\n command = '{0} -s {1}'.format(command, comparison_result.get('replacement_shell_value'))\n if manage_home and comparison_result.get('replacement_home_dir_value'):\n command = '{0} -d {1}'.format(command, comparison_result.get('replacement_home_dir_value'))\n command = '{0} {1}'.format(command, name)\n if get_platform() == 'FreeBSD': # pragma: FreeBSD\n command = '{0} {1} usermod'.format(sudo_check(), FREEBSD_CMD_PW)\n if comparison_result.get('replacement_uid_value'):\n command = '{0} -u {1}'.format(command, comparison_result.get('replacement_uid_value'))\n if comparison_result.get('replacement_gid_value'):\n command = '{0} -g {1}'.format(command, comparison_result.get('replacement_gid_value'))\n if comparison_result.get('replacement_gecos_value'):\n command = '{0} -c {1}'.format(command, comparison_result.get('replacement_gecos_value'))\n if comparison_result.get('replacement_shell_value'):\n command = '{0} -s {1}'.format(command, comparison_result.get('replacement_shell_value'))\n if manage_home and comparison_result.get('replacement_home_dir_value'):\n command = '{0} -d {1}'.format(command, comparison_result.get('replacement_home_dir_value'))\n command = '{0} -n {1}'.format(command, name)\n if command:\n return shlex.split(str(command))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef generate_delete_user_command(username=None, manage_home=None):\n command = None\n remove_home = '-r' if manage_home else ''\n\n if get_platform() in ('Linux', 'OpenBSD'):\n command = '{0} {1} {2} {3}'.format(sudo_check(), LINUX_CMD_USERDEL, remove_home, username)\n elif get_platform() == 'FreeBSD': # pragma: FreeBSD\n command = '{0} {1} userdel {2} -n {3}'.format(sudo_check(), FREEBSD_CMD_PW, remove_home, username)\n if command:\n return shlex.split(str(command))", "response": "Generate command to delete a user."} {"SOURCE": "codesearchnet", "instruction": 
"Can you generate the documentation for the following Python 3 function\ndef compare_user(passed_user=None, user_list=None):\n # Check if user exists\n returned = user_list.describe_users(users_filter=dict(name=passed_user.name))\n replace_keys = False\n # User exists, so compare attributes\n comparison_result = dict()\n if passed_user.uid and (not returned[0].uid == passed_user.uid):\n comparison_result['uid_action'] = 'modify'\n comparison_result['current_uid_value'] = returned[0].uid\n comparison_result['replacement_uid_value'] = passed_user.uid\n if passed_user.gid and (not returned[0].gid == passed_user.gid):\n comparison_result['gid_action'] = 'modify'\n comparison_result['current_gid_value'] = returned[0].gid\n comparison_result['replacement_gid_value'] = passed_user.gid\n if passed_user.gecos and (not returned[0].gecos == passed_user.gecos):\n comparison_result['gecos_action'] = 'modify'\n comparison_result['current_gecos_value'] = returned[0].gecos\n comparison_result['replacement_gecos_value'] = passed_user.gecos\n if passed_user.home_dir and (not returned[0].home_dir == passed_user.home_dir):\n comparison_result['home_dir_action'] = 'modify'\n comparison_result['current_home_dir_value'] = returned[0].home_dir\n comparison_result['replacement_home_dir_value'] = passed_user.home_dir\n # (Re)set keys if home dir changed\n replace_keys = True\n if passed_user.shell and (not returned[0].shell == passed_user.shell):\n comparison_result['shell_action'] = 'modify'\n comparison_result['current_shell_value'] = returned[0].shell\n comparison_result['replacement_shell_value'] = passed_user.shell\n if passed_user.sudoers_entry and (not returned[0].sudoers_entry == passed_user.sudoers_entry):\n comparison_result['sudoers_entry_action'] = 'modify'\n comparison_result['current_sudoers_entry'] = returned[0].sudoers_entry\n comparison_result['replacement_sudoers_entry'] = passed_user.sudoers_entry\n # if passed_user.public_keys and (not returned[0].public_keys == 
passed_user.public_keys):\n existing_keys = returned[0].public_keys\n passed_keys = passed_user.public_keys\n # Check if existing and passed keys exist, and if so, compare\n if all((existing_keys, passed_keys)) and len(existing_keys) == len(passed_user.public_keys):\n # Compare each key, and if any differences, replace\n existing = set(key.raw for key in existing_keys)\n replacement = set(key.raw for key in passed_keys)\n if set.difference(existing, replacement):\n replace_keys = True\n # If not existing keys but keys passed set, then\n elif passed_keys and not existing_keys:\n replace_keys = True\n if replace_keys:\n comparison_result['public_keys_action'] = 'modify'\n comparison_result['current_public_keys_value'] = existing_keys\n comparison_result['replacement_public_keys_value'] = passed_keys\n return dict(state='existing', result=comparison_result, existing_user=returned)", "response": "Compare a User instance with the supplied Users list and return the differences."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef gecos(self):\n if not self._gecos:\n return None\n if self._gecos.startswith(text_type('\\'')) and self._gecos.endswith(text_type('\\'')):\n self._gecos = '\\\"{0}\\\"'.format(self._gecos[1:-1])\n return self._gecos\n elif self._gecos.startswith(text_type('\\\"')) and self._gecos.endswith(text_type('\\\"')):\n return self._gecos\n else:\n return '\\\"{0}\\\"'.format(self._gecos)", "response": "Force double quoted gecos."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn the user as a dict.", "response": "def to_dict(self):\n \"\"\" Return the user as a dict. 
\"\"\"\n public_keys = [public_key.b64encoded for public_key in self.public_keys]\n return dict(name=self.name, passwd=self.passwd, uid=self.uid, gid=self.gid, gecos=self.gecos,\n home_dir=self.home_dir, shell=self.shell, public_keys=public_keys)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ninsert an instance of User into the collection.", "response": "def insert(self, index, value):\n \"\"\"Insert an instance of User into the collection.\"\"\"\n self.check(value)\n self._user_list.insert(index, value)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nremoving User instance based on supplied user name.", "response": "def remove(self, username=None):\n \"\"\"Remove User instance based on supplied user name.\"\"\"\n self._user_list = [user for user in self._user_list if user.name != username]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a list of users matching a filter ( if provided ).", "response": "def describe_users(self, users_filter=None):\n \"\"\"Return a list of users matching a filter (if provided).\"\"\"\n user_list = Users(oktypes=User)\n for user in self._user_list:\n if users_filter and (users_filter.get('name') == user.name or users_filter.get('uid') == user.uid):\n user_list.append(user)\n return user_list"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a collection from a YAML file.", "response": "def from_yaml(cls, file_path=None):\n \"\"\"Create collection from a YAML file.\"\"\"\n try:\n import yaml\n except ImportError: # pragma: no cover\n yaml = None\n if not yaml:\n import sys\n sys.exit('PyYAML is not installed, but is required in order to parse YAML files.'\n '\\nTo install, run:\\n$ pip install PyYAML\\nor visit'\n ' http://pyyaml.org/wiki/PyYAML for instructions.')\n\n with io.open(file_path, encoding=text_type('utf-8')) as stream:\n users_yaml = yaml.safe_load(stream)\n if 
isinstance(users_yaml, dict):\n return cls.construct_user_list(raw_users=users_yaml.get('users'))\n else:\n raise ValueError('No YAML object could be decoded')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef from_json(cls, file_path=None):\n with io.open(file_path, encoding=text_type('utf-8')) as stream:\n try:\n users_json = json.load(stream)\n except ValueError:\n raise ValueError('No JSON object could be decoded')\n return cls.construct_user_list(raw_users=users_json.get('users'))", "response": "Create a collection from a JSON file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating collection from locally discovered data e. g. / etc. passwd.", "response": "def from_passwd(uid_min=None, uid_max=None):\n \"\"\"Create collection from locally discovered data, e.g. /etc/passwd.\"\"\"\n import pwd\n users = Users(oktypes=User)\n passwd_list = pwd.getpwall()\n if not uid_min:\n uid_min = UID_MIN\n if not uid_max:\n uid_max = UID_MAX\n sudoers_entries = read_sudoers()\n for pwd_entry in passwd_list:\n if uid_min <= pwd_entry.pw_uid <= uid_max:\n user = User(name=text_type(pwd_entry.pw_name),\n passwd=text_type(pwd_entry.pw_passwd),\n uid=pwd_entry.pw_uid,\n gid=pwd_entry.pw_gid,\n gecos=text_type(pwd_entry.pw_gecos),\n home_dir=text_type(pwd_entry.pw_dir),\n shell=text_type(pwd_entry.pw_shell),\n public_keys=read_authorized_keys(username=pwd_entry.pw_name),\n sudoers_entry=get_sudoers_entry(username=pwd_entry.pw_name,\n sudoers_entries=sudoers_entries))\n users.append(user)\n return users"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconstruct a list of User objects from a list of dicts.", "response": "def construct_user_list(raw_users=None):\n \"\"\"Construct a list of User objects from a list of dicts.\"\"\"\n users = Users(oktypes=User)\n for user_dict in raw_users:\n public_keys = None\n if user_dict.get('public_keys'):\n public_keys = 
[PublicKey(b64encoded=x, raw=None)\n for x in user_dict.get('public_keys')]\n users.append(User(name=user_dict.get('name'),\n passwd=user_dict.get('passwd'),\n uid=user_dict.get('uid'),\n gid=user_dict.get('gid'),\n home_dir=user_dict.get('home_dir'),\n gecos=user_dict.get('gecos'),\n shell=user_dict.get('shell'),\n public_keys=public_keys,\n sudoers_entry=user_dict.get('sudoers_entry')))\n return users"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef to_dict(self):\n users = dict(users=list())\n for user in self:\n users['users'].append(user.to_dict())\n return users", "response": "Return a dict of the users."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nwriting the users to a file.", "response": "def export(self, file_path=None, export_format=None):\n \"\"\" Write the users to a file. \"\"\"\n with io.open(file_path, mode='w', encoding=\"utf-8\") as export_file:\n if export_format == 'yaml':\n import yaml\n yaml.safe_dump(self.to_dict(), export_file, default_flow_style=False)\n elif export_format == 'json':\n export_file.write(text_type(json.dumps(self.to_dict(), ensure_ascii=False)))\n return True"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef read(path, mode='tsv'):\n ''' Helper function to read Document in TTL-TXT format (i.e. ${docname}_*.txt)\n E.g. 
read('~/data/myfile') is the same as Document('myfile', '~/data/').read()\n '''\n if mode == 'tsv':\n return TxtReader.from_path(path).read()\n elif mode == 'json':\n return read_json(path)\n else:\n raise Exception(\"Invalid mode - [{}] was provided\".format(mode))", "response": "Helper function to read a TTL - TXT file"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a tokens - concepts map", "response": "def tcmap(self):\n ''' Create a tokens-concepts map '''\n tcmap = dd(list)\n for concept in self.__concept_map.values():\n for w in concept.tokens:\n tcmap[w].append(concept)\n return tcmap"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning a generator of tokens with more than one sense.", "response": "def msw(self):\n ''' Return a generator of tokens with more than one sense. '''\n return (t for t, c in self.tcmap().items() if len(c) > 1)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the surface string that is associated with a tag object", "response": "def surface(self, tag):\n ''' Get surface string that is associated with a tag object '''\n if tag.cfrom is not None and tag.cto is not None and tag.cfrom >= 0 and tag.cto >= 0:\n return self.text[tag.cfrom:tag.cto]\n else:\n return ''"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a sentence - level tag.", "response": "def new_tag(self, label, cfrom=-1, cto=-1, tagtype='', **kwargs):\n ''' Create a sentence-level tag '''\n tag_obj = Tag(label, cfrom, cto, tagtype=tagtype, **kwargs)\n return self.add_tag(tag_obj)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_tag(self, tagtype):\n ''' Get the first tag of a particular type'''\n for tag in self.__tags:\n if tag.tagtype == tagtype:\n return tag\n return None", "response": "Get the first tag of a particular type"} {"SOURCE": "codesearchnet", "instruction": "Can 
you implement a function in Python 3 that\ngets all tags of a type", "response": "def get_tags(self, tagtype):\n ''' Get all tags of a type '''\n return [t for t in self.__tags if t.tagtype == tagtype]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd a token object into this sentence", "response": "def add_token_object(self, token):\n ''' Add a token object into this sentence '''\n token.sent = self # take ownership of given token\n self.__tokens.append(token)\n return token"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef new_concept(self, tag, clemma=\"\", tokens=None, cidx=None, **kwargs):\n ''' Create a new concept object and add it to concept list\n tokens can be a list of Token objects or token indices\n '''\n if cidx is None:\n cidx = self.new_concept_id()\n if tokens:\n tokens = (t if isinstance(t, Token) else self[t] for t in tokens)\n c = Concept(cidx=cidx, tag=tag, clemma=clemma, sent=self, tokens=tokens, **kwargs)\n return self.add_concept(c)", "response": "Create a new concept object and add it to concept list"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_concept(self, concept_obj):\n ''' Add a concept to current concept list '''\n if concept_obj is None:\n raise Exception(\"Concept object cannot be None\")\n elif concept_obj in self.__concepts:\n raise Exception(\"Concept object is already inside\")\n elif concept_obj.cidx in self.__concept_map:\n raise Exception(\"Duplicated concept ID ({})\".format(concept_obj.cidx))\n self.__concepts.append(concept_obj)\n self.__concept_map[concept_obj.cidx] = concept_obj\n concept_obj.sent = self\n return concept_obj", "response": "Add a concept to current concept list"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget concept by concept ID", "response": "def concept(self, cid, **kwargs):\n ''' Get concept by concept ID 
'''\n if cid not in self.__concept_map:\n if 'default' in kwargs:\n return kwargs['default']\n else:\n raise KeyError(\"Invalid cid\")\n else:\n return self.__concept_map[cid]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nimports a list of string as tokens", "response": "def import_tokens(self, tokens, import_hook=None, ignorecase=True):\n ''' Import a list of string as tokens '''\n text = self.text.lower() if ignorecase else self.text\n has_hooker = import_hook and callable(import_hook)\n cfrom = 0\n if self.__tokens:\n for tk in self.__tokens:\n if tk.cfrom and tk.cfrom > cfrom:\n cfrom = tk.cfrom\n for token in tokens:\n if has_hooker:\n import_hook(token)\n to_find = token.lower() if ignorecase else token\n start = text.find(to_find, cfrom)\n # stanford parser\n if to_find == '``' or to_find == \"''\":\n start_dq = text.find('\"', cfrom)\n if start_dq > -1 and (start == -1 or start > start_dq):\n to_find = '\"'\n start = start_dq\n if to_find == '`' or to_find == \"'\":\n start_dq = text.find(\"'\", cfrom)\n if start_dq > -1 and (start == -1 or start > start_dq):\n to_find = \"'\"\n start = start_dq\n if start == -1:\n raise LookupError('Cannot find token `{t}` in sent `{s}`({l}) from {i} ({p})'.format(t=token, s=self.text, l=len(self.text), i=cfrom, p=self.text[cfrom:cfrom + 20]))\n cfrom = start\n cto = cfrom + len(to_find)\n self.new_token(token, cfrom, cto)\n cfrom = cto - 1"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nbuilding a map from tagtype to list of tags", "response": "def tag_map(self):\n ''' Build a map from tagtype to list of tags '''\n tm = dd(list)\n for tag in self.__tags:\n tm[tag.tagtype].append(tag)\n return tm"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef find(self, tagtype, **kwargs):\n '''Get the first tag with a type in this token '''\n for t in self.__tags:\n if t.tagtype == tagtype:\n return t\n if 
'default' in kwargs:\n return kwargs['default']\n else:\n raise LookupError(\"Token {} is not tagged with the specified tagtype ({})\".format(self, tagtype))", "response": "Find the first tag with a type in this token"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfind all token - level tags with the specified tagtype", "response": "def find_all(self, tagtype):\n ''' Find all token-level tags with the specified tagtype '''\n return [t for t in self.__tags if t.tagtype == tagtype]"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new tag on this token", "response": "def new_tag(self, label, cfrom=None, cto=None, tagtype=None, **kwargs):\n ''' Create a new tag on this token '''\n if cfrom is None:\n cfrom = self.cfrom\n if cto is None:\n cto = self.cto\n tag = Tag(label=label, cfrom=cfrom, cto=cto, tagtype=tagtype, **kwargs)\n return self.add_tag(tag)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning the associated sentence object.", "response": "def get(self, sent_id, **kwargs):\n ''' If sent_id exists, return the associated sentence object else return default.\n If no default is provided, KeyError will be raised.'''\n if sent_id is not None and not isinstance(sent_id, int):\n sent_id = int(sent_id)\n if sent_id is None or not self.has_id(sent_id):\n if 'default' in kwargs:\n return kwargs['default']\n else:\n raise KeyError(\"Invalid sentence ID ({})\".format(sent_id))\n return self.__sent_map[sent_id]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a ttl. 
Sentence object to this document.", "response": "def add_sent(self, sent_obj):\n ''' Add a ttl.Sentence object to this document '''\n if sent_obj is None:\n raise Exception(\"Sentence object cannot be None\")\n elif sent_obj.ID is None:\n # if sentID is None, create a new ID\n sent_obj.ID = next(self.__idgen)\n elif self.has_id(sent_obj.ID):\n raise Exception(\"Sentence ID {} exists\".format(sent_obj.ID))\n self.__sent_map[sent_obj.ID] = sent_obj\n self.__sents.append(sent_obj)\n return sent_obj"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a new sentence and add it to this Document", "response": "def new_sent(self, text, ID=None, **kwargs):\n ''' Create a new sentence and add it to this Document '''\n if ID is None:\n ID = next(self.__idgen)\n return self.add_sent(Sentence(text, ID=ID, **kwargs))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef pop(self, sent_id, **kwargs):\n ''' If sent_id exists, remove and return the associated sentence object else return default.\n If no default is provided, KeyError will be raised.'''\n if sent_id is not None and not isinstance(sent_id, int):\n sent_id = int(sent_id)\n if not self.has_id(sent_id):\n if 'default' in kwargs:\n return kwargs['default']\n else:\n raise KeyError(\"Sentence ID {} does not exist\".format(sent_id))\n else:\n # sentence exists ...\n sent_obj = self.get(sent_id)\n self.__sent_map.pop(sent_id)\n self.__sents.remove(sent_obj)\n return sent_obj", "response": "Removes and returns the associated sentence object."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreading tagged doc from multiple files.", "response": "def read(self):\n ''' Read tagged doc from multiple files (sents, tokens, concepts, links, tags) '''\n warnings.warn(\"Document.read() is deprecated and will be removed in near future.\", DeprecationWarning)\n with TxtReader.from_doc(self) as reader:\n 
reader.read(self)\n return self"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread tagged doc from multiple files.", "response": "def read(self, doc=None):\n ''' Read tagged doc from multiple files (sents, tokens, concepts, links, tags) '''\n if not self.sent_stream:\n raise Exception(\"There is no sentence data stream available\")\n if doc is None:\n doc = Document(name=self.doc_name, path=self.doc_path)\n for row in self.sent_reader():\n if len(row) == 2:\n sid, text = row\n doc.new_sent(text.strip(), ID=sid)\n elif len(row) == 4:\n sid, text, flag, comment = row\n sent = doc.new_sent(text.strip(), ID=sid)\n sent.flag = flag\n sent.comment = comment\n # Read tokens if available\n if self.token_stream:\n # read all tokens first\n sent_tokens_map = dd(list)\n for token_row in self.token_reader():\n if len(token_row) == 6:\n sid, wid, token, lemma, pos, comment = token_row\n else:\n sid, wid, token, lemma, pos = token_row\n comment = ''\n sid = int(sid)\n sent_tokens_map[sid].append((token, lemma, pos.strip(), wid, comment))\n # TODO: verify wid?\n # import tokens\n for sent in doc:\n sent_tokens = sent_tokens_map[sent.ID]\n sent.import_tokens([x[0] for x in sent_tokens])\n for ((tk, lemma, pos, wid, comment), token) in zip(sent_tokens, sent.tokens):\n token.pos = pos\n token.lemma = lemma\n token.comment = comment\n # only read concepts if tokens are available\n if self.concept_stream:\n # read concepts\n for concept_row in self.concept_reader():\n if len(concept_row) == 5:\n sid, cid, clemma, tag, comment = concept_row\n else:\n sid, cid, clemma, tag = concept_row\n comment = ''\n cid = int(cid)\n doc.get(sid).new_concept(tag.strip(), clemma=clemma, cidx=cid, comment=comment)\n # only read concept-token links if tokens and concepts are available\n for sid, cid, wid in self.link_reader():\n sent = doc.get(sid)\n cid = int(cid)\n wid = int(wid.strip())\n sent.concept(cid).add_token(sent[wid])\n # read sentence level tags\n if 
self.tag_stream:\n for row in self.tag_reader():\n if len(row) == 5:\n sid, cfrom, cto, label, tagtype = row\n wid = None\n if len(row) == 6:\n sid, cfrom, cto, label, tagtype, wid = row\n if cfrom:\n cfrom = int(cfrom)\n if cto:\n cto = int(cto)\n if wid is None or wid == '':\n doc.get(sid).new_tag(label, cfrom, cto, tagtype=tagtype)\n else:\n doc.get(sid)[int(wid)].new_tag(label, cfrom, cto, tagtype=tagtype)\n return doc"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef format_page(text):\r\n width = max(map(len, text.splitlines()))\r\n page = \"+-\" + \"-\" * width + \"-+\\n\"\r\n for line in text.splitlines():\r\n page += \"| \" + line.ljust(width) + \" |\\n\"\r\n page += \"+-\" + \"-\" * width + \"-+\\n\"\r\n return page", "response": "Format the text for output adding ASCII frame around the text."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef table(text):\r\n\r\n def table_bar(col_lengths):\r\n return \"+-%s-+%s\" % (\r\n \"-+-\".join([\"-\" * length for length in col_lengths]),\r\n os.linesep,\r\n )\r\n\r\n rows = []\r\n for line in text.splitlines():\r\n rows.append([part.strip() for part in line.split(\"|\")])\r\n max_cols = max(map(len, rows))\r\n col_lengths = [0] * max_cols\r\n for row in rows:\r\n cols = len(row)\r\n if cols < max_cols:\r\n row.extend([\"\"] * (max_cols - cols))\r\n for i, col in enumerate(row):\r\n col_length = len(col)\r\n if col_length > col_lengths[i]:\r\n col_lengths[i] = col_length\r\n text = table_bar(col_lengths)\r\n for i, row in enumerate(rows):\r\n cols = []\r\n for i, col in enumerate(row):\r\n cols.append(col.ljust(col_lengths[i]))\r\n text += \"| %s |%s\" % (\" | \".join(cols), os.linesep)\r\n text += table_bar(col_lengths)\r\n return text", "response": "Format the text as a table."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nformats the text and prints it on 
stdout.", "response": "def print_page(text):\r\n \"\"\"Format the text and print it on stdout.\r\n\r\n Text is formatted by adding an ASCII frame around it and coloring the text.\r\n Colors can be added to text using color tags, for example:\r\n\r\n My [FG_BLUE]blue[NORMAL] text.\r\n My [BG_BLUE]blue background[NORMAL] text.\r\n \"\"\"\r\n color_re = re.compile(r\"\\[(?P<color>[FB]G_[A-Z_]+|NORMAL)\\]\")\r\n width = max([len(strip_colors(x)) for x in text.splitlines()])\r\n print(\"\\n\" + hbar(width))\r\n for line in text.splitlines():\r\n if line == \"[HBAR]\":\r\n print(hbar(width))\r\n continue\r\n tail = width - len(strip_colors(line))\r\n sys.stdout.write(\"| \")\r\n previous = 0\r\n end = len(line)\r\n for match in color_re.finditer(line):\r\n sys.stdout.write(line[previous : match.start()])\r\n set_color(match.groupdict()[\"color\"])\r\n previous = match.end()\r\n sys.stdout.write(line[previous:end])\r\n sys.stdout.write(\" \" * tail + \" |\\n\")\r\n print(hbar(width))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nwrap text to maximum width characters per line.", "response": "def wrap_text(text, width=80):\r\n \"\"\"Wrap text lines to maximum *width* characters.\r\n\r\n Wrapped text is aligned against the left text border.\r\n\r\n Args:\r\n text (str): Text to wrap.\r\n width (int): Maximum number of characters per line.\r\n\r\n Returns:\r\n str: Wrapped text.\r\n \"\"\"\r\n text = re.sub(r\"\\s+\", \" \", text).strip()\r\n wrapper = TextWrapper(\r\n width=width, break_long_words=False, replace_whitespace=True\r\n )\r\n return wrapper.fill(text)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef rjust_text(text, width=80, indent=0, subsequent=None):\r\n text = re.sub(r\"\\s+\", \" \", text).strip()\r\n if subsequent is None:\r\n subsequent = indent\r\n wrapper = TextWrapper(\r\n width=width,\r\n break_long_words=False,\r\n replace_whitespace=True,\r\n initial_indent=\" 
\" * (indent + subsequent),\r\n subsequent_indent=\" \" * subsequent,\r\n )\r\n return wrapper.fill(text)[subsequent:]", "response": "Wrap text and adjust it to right border."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef center_text(text, width=80):\r\n centered = []\r\n for line in text.splitlines():\r\n centered.append(line.center(width))\r\n return \"\\n\".join(centered)", "response": "Center all lines of the text."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck various properties of a QPImage or a path to an hdf5 file.", "response": "def check(qpi_or_h5file, checks=[\"attributes\", \"background\"]):\n \"\"\"Checks various properties of a :class:`qpimage.core.QPImage` instance\n\n Parameters\n ----------\n qpi_or_h5file: qpimage.core.QPImage or str\n A QPImage object or a path to an hdf5 file\n checks: list of str\n Which checks to perform (\"attributes\" and/or \"background\")\n\n Raises\n ------\n IntegrityCheckError\n if the checks fail\n \"\"\"\n if isinstance(checks, str):\n checks = [checks]\n for ch in checks:\n if ch not in [\"attributes\", \"background\"]:\n raise ValueError(\"Unknown check: {}\".format(ch))\n\n if isinstance(qpi_or_h5file, QPImage):\n qpi = qpi_or_h5file\n else:\n qpi = QPImage(h5file=qpi_or_h5file, h5mode=\"r\")\n\n # check attributes\n if \"attributes\" in checks:\n check_attributes(qpi)\n\n # check background estimation\n if \"background\" in checks:\n check_background(qpi)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nchecking that the attributes of the sequence of items in the sequence are present in the meta data.", "response": "def check_attributes(qpi):\n \"\"\"Check QPimage attributes\n\n Parameters\n ----------\n qpi: qpimage.core.QPImage\n\n Raises\n ------\n IntegrityCheckError\n if the check fails\n \"\"\"\n missing_attrs = []\n for key in DATA_KEYS:\n if key not in qpi.meta:\n 
missing_attrs.append(key)\n if missing_attrs:\n msg = \"Attributes are missing: {} \".format(missing_attrs) \\\n + \"in {}!\".format(qpi)\n raise IntegrityCheckError(msg)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef check_background(qpi):\n for imdat in [qpi._amp, qpi._pha]:\n try:\n fit, attrs = imdat.get_bg(key=\"fit\", ret_attrs=True)\n except KeyError:\n # No bg correction performed\n pass\n else:\n kwargs = dict(attrs)\n # check if we have a user-defined mask image\n binkey = \"estimate_bg_from_mask\"\n if binkey in imdat.h5:\n kwargs[\"from_mask\"] = imdat.h5[binkey][:]\n else:\n kwargs[\"from_mask\"] = None\n # compute background correction\n with h5py.File(\"check.h5\",\n driver=\"core\",\n backing_store=False) as h5:\n # imdat.__class__ is \"Amplitude\" or \"Phase\"\n testimdat = imdat.__class__(h5)\n testimdat[\"raw\"] = imdat.raw\n # Set experimental bg data if given\n try:\n bg = imdat.get_bg(\"data\")\n except KeyError:\n pass\n else:\n testimdat.set_bg(bg, key=\"data\")\n # fit bg\n testimdat.estimate_bg(**kwargs)\n # compare\n if not np.allclose(testimdat.get_bg(key=\"fit\"), fit):\n msg = \"Wrong estimated (fitted) background!\"\n raise IntegrityCheckError(msg)", "response": "Check if background correction is correct for all images in the sequence."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nwrites an image to an HDF5 group as a dataset", "response": "def write_image_dataset(group, key, data, h5dtype=None):\n \"\"\"Write an image to an hdf5 group as a dataset\n\n This convenience function sets all attributes such that the image\n can be visualized with HDFView, sets the compression and fletcher32\n filters, and sets the chunk size to the image shape.\n\n Parameters\n ----------\n group: h5py.Group\n HDF5 group to store data to\n key: str\n Dataset identifier\n data: np.ndarray of shape (M,N)\n Image data to store\n h5dtype: str\n The 
datatype in which to store the image data. The default\n is the datatype of `data`.\n\n Returns\n -------\n dataset: h5py.Dataset\n The created HDF5 dataset object\n \"\"\"\n if h5dtype is None:\n h5dtype = data.dtype\n if key in group:\n del group[key]\n if group.file.driver == \"core\":\n kwargs = {}\n else:\n kwargs = {\"fletcher32\": True,\n \"chunks\": data.shape}\n kwargs.update(COMPRESSION)\n\n dset = group.create_dataset(key,\n data=data.astype(h5dtype),\n **kwargs)\n # Create and Set image attributes\n # HDFView recognizes this as a series of images\n dset.attrs.create('CLASS', b'IMAGE')\n dset.attrs.create('IMAGE_VERSION', b'1.2')\n dset.attrs.create('IMAGE_SUBCLASS', b'IMAGE_GRAYSCALE')\n return dset"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlist of background correction parameters", "response": "def info(self):\n \"\"\"list of background correction parameters\"\"\"\n info = []\n name = self.__class__.__name__.lower()\n # get bg information\n for key in VALID_BG_KEYS:\n if key in self.h5[\"bg_data\"]:\n attrs = self.h5[\"bg_data\"][key].attrs\n for akey in attrs:\n atr = attrs[akey]\n var = \"{} background {}\".format(name, akey)\n info.append((var, atr))\n if \"fit\" in self.h5[\"bg_data\"]:\n # mask background\n var_mask = \"{} background from mask\".format(name)\n if (\"estimate_bg_from_mask\" in self.h5\n and self.h5[\"estimate_bg_from_mask\"] is not None):\n # bg was computed from mask image\n info.append((var_mask, True))\n elif (\"estimate_bg_from_binary\" in self.h5\n and self.h5[\"estimate_bg_from_binary\"] is not None):\n # bg was computed from mask image (old notation)\n warnings.warn(\"Old file format detected!\", DeprecationWarning)\n info.append((var_mask, True))\n else:\n info.append((var_mask, False))\n return info"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef del_bg(self, key):\n if key not in VALID_BG_KEYS:\n raise ValueError(\"Invalid bg key: 
{}\".format(key))\n if key in self.h5[\"bg_data\"]:\n del self.h5[\"bg_data\"][key]\n else:\n msg = \"No bg data to clear for '{}' in {}.\".format(key, self)\n warnings.warn(msg)", "response": "Removes the background image data for the specified key from the H5 file."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef estimate_bg(self, fit_offset=\"mean\", fit_profile=\"tilt\",\n border_px=0, from_mask=None, ret_mask=False):\n \"\"\"Estimate image background\n\n Parameters\n ----------\n fit_profile: str\n The type of background profile to fit:\n\n - \"offset\": offset only\n - \"poly2o\": 2D 2nd order polynomial with mixed terms\n - \"tilt\": 2D linear tilt with offset (default)\n fit_offset: str\n The method for computing the profile offset\n\n - \"fit\": offset as fitting parameter\n - \"gauss\": center of a gaussian fit\n - \"mean\": simple average\n - \"mode\": mode (see `qpimage.bg_estimate.mode`)\n border_px: float\n Assume that a frame of `border_px` pixels around\n the image is background.\n from_mask: boolean np.ndarray or None\n Use a boolean array to define the background area.\n The mask image must have the same shape as the\n input data.`True` elements are used for background\n estimation.\n ret_mask: bool\n Return the mask image used to compute the background.\n\n Notes\n -----\n If both `border_px` and `from_mask` are given, the\n intersection of the two resulting mask images is used.\n\n The arguments passed to this method are stored in the\n hdf5 file `self.h5` and are used for optional integrity\n checking using `qpimage.integrity_check.check`.\n\n See Also\n --------\n qpimage.bg_estimate.estimate\n \"\"\"\n # remove existing bg before accessing imdat.image\n self.set_bg(bg=None, key=\"fit\")\n # compute bg\n bgimage, mask = bg_estimate.estimate(data=self.image,\n fit_offset=fit_offset,\n fit_profile=fit_profile,\n border_px=border_px,\n from_mask=from_mask,\n ret_mask=True)\n attrs 
= {\"fit_offset\": fit_offset,\n \"fit_profile\": fit_profile,\n \"border_px\": border_px}\n self.set_bg(bg=bgimage, key=\"fit\", attrs=attrs)\n # save `from_mask` separately (arrays vs. h5 attributes)\n # (if `from_mask` is `None`, this will remove the array)\n self[\"estimate_bg_from_mask\"] = from_mask\n # return mask image\n if ret_mask:\n return mask", "response": "Estimate image background of a single object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_bg(self, key=None, ret_attrs=False):\n if key is None:\n if ret_attrs:\n raise ValueError(\"No attributes for combined background!\")\n return self.bg\n else:\n if key not in VALID_BG_KEYS:\n raise ValueError(\"Invalid bg key: {}\".format(key))\n if key in self.h5[\"bg_data\"]:\n data = self.h5[\"bg_data\"][key][:]\n if ret_attrs:\n attrs = dict(self.h5[\"bg_data\"][key].attrs)\n # remove keys for image visualization in hdf5 files\n for h5k in [\"CLASS\", \"IMAGE_VERSION\", \"IMAGE_SUBCLASS\"]:\n if h5k in attrs:\n attrs.pop(h5k)\n ret = (data, attrs)\n else:\n ret = data\n else:\n raise KeyError(\"No background data for {}!\".format(key))\n return ret", "response": "Get the background image data for the specified key."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset the background data for the current object.", "response": "def set_bg(self, bg, key=\"data\", attrs={}):\n \"\"\"Set the background data\n\n Parameters\n ----------\n bg: numbers.Real, 2d ndarray, ImageData, or h5py.Dataset\n The background data. 
If `bg` is an `h5py.Dataset` object,\n it must exist in the same hdf5 file (a hard link is created).\n If set to `None`, the data will be removed.\n key: str\n One of :const:`VALID_BG_KEYS`)\n attrs: dict\n List of background attributes\n\n See Also\n --------\n del_bg: removing background data\n \"\"\"\n if key not in VALID_BG_KEYS:\n raise ValueError(\"Invalid bg key: {}\".format(key))\n # remove previous background key\n if key in self.h5[\"bg_data\"]:\n del self.h5[\"bg_data\"][key]\n # set background\n if isinstance(bg, (numbers.Real, np.ndarray)):\n dset = write_image_dataset(group=self.h5[\"bg_data\"],\n key=key,\n data=bg,\n h5dtype=self.h5dtype)\n for kw in attrs:\n dset.attrs[kw] = attrs[kw]\n elif isinstance(bg, h5py.Dataset):\n # Create a hard link\n # (This functionality was intended for saving memory when storing\n # large QPSeries with universal background data, i.e. when using\n # `QPSeries.add_qpimage` with the `bg_from_idx` keyword.)\n self.h5[\"bg_data\"][key] = bg\n elif bg is not None:\n msg = \"Unknown background data type: {}\".format(bg)\n raise ValueError(msg)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncombining several background amplitude images", "response": "def _bg_combine(self, bgs):\n \"\"\"Combine several background amplitude images\"\"\"\n out = np.ones(self.h5[\"raw\"].shape, dtype=float)\n # bg is an h5py.DataSet\n for bg in bgs:\n out *= bg[:]\n return out"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef git_tags() -> str:\n try:\n subprocess.check_call(['git', 'fetch', '--tags'])\n except CalledProcessError:\n pass\n\n cmd = ['git', 'tag', '--list', '--sort=-v:refname']\n rv = subprocess.check_output(cmd).decode('utf-8')\n\n if rv == '':\n raise NoGitTagsException('No Git tags are present in current repo.')\n\n return rv", "response": "Returns the Git tags of the current node."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief 
explanation for the following Python 3 code\ndef git_tag_to_semver(git_tag: str) -> SemVer:\n pattern = re.compile(r'[0-9]+\\.[0-9]+\\.[0-9]+$')\n match = pattern.search(git_tag)\n if match:\n version = match.group(0)\n else:\n raise InvalidTagFormatException('Tag passed contains no SemVer.')\n\n return SemVer.from_str(version)", "response": "Converts a Git tag to a SemVer object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef last_git_release_tag(git_tags: str) -> str:\n semver_re = re.compile(r'[0-9]+\\.[0-9]+\\.[0-9]+$')\n str_ver = []\n for i in git_tags.split():\n if semver_re.search(i):\n str_ver.append(i)\n\n try:\n return str_ver[0]\n except IndexError:\n raise NoGitTagsException", "response": "Returns the last Git release tag in a list."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a dict of hash - message pairs for all commits since the given git tag.", "response": "def git_commits_since_last_tag(last_tag: str) -> dict:\n \"\"\"\n :last_tag: The Git tag that should serve as the starting point for the\n commit log lookup.\n\n Calls ``git log .. 
--format='%H %s'`` and returns the output as a\n dict of hash-message pairs.\n \"\"\"\n try:\n cmd = ['git', 'log', last_tag + '..', \"--format='%H %s'\"]\n commit_log = subprocess.check_output(cmd).decode('utf-8')\n except CalledProcessError:\n raise GitTagDoesNotExistError('No such tag:', last_tag)\n\n if not commit_log:\n raise NoGitCommitSinceLastTagException('No commits since last tag.')\n\n pattern = re.compile(r'([a-f0-9]{40})\\ (.*)')\n\n rv = {}\n for line in commit_log.split('\\n'):\n match = pattern.search(line)\n if match:\n commit_hash = match.group(1)\n commit_msg = match.group(2)\n rv[commit_hash] = commit_msg\n\n return rv"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_commit_log(commit_log: dict) -> str:\n rv = 'pat'\n\n cc_patterns = patterns()\n\n for value in commit_log.values():\n if re.search(cc_patterns['feat'], value):\n rv = 'min'\n if re.search(cc_patterns['BREAKING CHANGE'], value):\n rv = 'maj'\n\n return rv", "response": "Parse Git log and return either maj min or pat."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef from_str(cls, version_str: str):\n o = cls()\n o.version = version_str\n return o", "response": "Alternative constructor that accepts a string SemVer.\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef major(self, major: int) -> None:\n self.filter_negatives(major)\n self._major = major", "response": "Sets the major version number property."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the minor version number property.", "response": "def minor(self, minor: int) -> None:\n \"\"\"\n param minor\n\n Minor version number property. 
Must be a non-negative integer.\n \"\"\"\n self.filter_negatives(minor)\n self._minor = minor"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nset the patch version number property.", "response": "def patch(self, patch: int) -> None:\n \"\"\"\n param patch\n\n Patch version number property. Must be a non-negative integer.\n \"\"\"\n self.filter_negatives(patch)\n self._patch = patch"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the version number property.", "response": "def version(self) -> str:\n \"\"\"\n Version version number property. Must be a string consisting of three\n non-negative integers delimited by periods (eg. '1.0.1').\n \"\"\"\n version: str = (\n str(self._major) + '.' +\n str(self._minor) + '.' +\n str(self._patch)\n )\n return version"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef version(self, version_str: str) -> None:\n ver = []\n for i in version_str.split('.'):\n ver.append(int(i))\n self.filter_negatives(int(i))\n self._major, self._minor, self._patch = ver[0], ver[1], ver[2]", "response": "Sets the version number property of the object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nestimate the background value of an image.", "response": "def estimate(data, fit_offset=\"mean\", fit_profile=\"tilt\",\n border_px=0, from_mask=None, ret_mask=False):\n \"\"\"Estimate the background value of an image\n\n Parameters\n ----------\n data: np.ndarray\n Data from which to compute the background value\n fit_profile: str\n The type of background profile to fit:\n\n - \"offset\": offset only\n - \"poly2o\": 2D 2nd order polynomial with mixed terms\n - \"tilt\": 2D linear tilt with offset (default)\n fit_offset: str\n The method for computing the profile offset\n\n - \"fit\": offset as fitting parameter\n - \"gauss\": center of a gaussian fit\n - \"mean\": 
simple average\n - \"mode\": mode (see `qpimage.bg_estimate.mode`)\n border_px: float\n Assume that a frame of `border_px` pixels around\n the image is background.\n from_mask: boolean np.ndarray or None\n Use a boolean array to define the background area.\n The boolean mask must have the same shape as the\n input data. `True` elements are used for background\n estimation.\n ret_mask: bool\n Return the boolean mask used to compute the background.\n\n Notes\n -----\n If both `border_px` and `from_mask` are given, the\n intersection of the two is used, i.e. the positions\n where both, the frame mask and `from_mask`, are\n `True`.\n \"\"\"\n if fit_profile not in VALID_FIT_PROFILES:\n msg = \"`fit_profile` must be one of {}, got '{}'\".format(\n VALID_FIT_PROFILES,\n fit_profile)\n raise ValueError(msg)\n if fit_offset not in VALID_FIT_OFFSETS:\n msg = \"`fit_offset` must be one of {}, got '{}'\".format(\n VALID_FIT_OFFSETS,\n fit_offset)\n raise ValueError(msg)\n # initial mask image\n if from_mask is not None:\n assert isinstance(from_mask, np.ndarray)\n mask = from_mask.copy()\n else:\n mask = np.ones_like(data, dtype=bool)\n # multiply with border mask image (intersection)\n if border_px > 0:\n border_px = int(np.round(border_px))\n mask_px = np.zeros_like(mask)\n mask_px[:border_px, :] = True\n mask_px[-border_px:, :] = True\n mask_px[:, :border_px] = True\n mask_px[:, -border_px:] = True\n # intersection\n np.logical_and(mask, mask_px, out=mask)\n # compute background image\n if fit_profile == \"tilt\":\n bgimg = profile_tilt(data, mask)\n elif fit_profile == \"poly2o\":\n bgimg = profile_poly2o(data, mask)\n else:\n bgimg = np.zeros_like(data, dtype=float)\n # add offsets\n if fit_offset == \"fit\":\n if fit_profile == \"offset\":\n msg = \"`fit_offset=='fit'` only valid when `fit_profile!='offset`\"\n raise ValueError(msg)\n # nothing else to do here, using offset from fit\n elif fit_offset == \"gauss\":\n bgimg += offset_gaussian((data - bgimg)[mask])\n elif 
fit_offset == \"mean\":\n bgimg += np.mean((data - bgimg)[mask])\n elif fit_offset == \"mode\":\n bgimg += offset_mode((data - bgimg)[mask])\n\n if ret_mask:\n ret = (bgimg, mask)\n else:\n ret = bgimg\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfits a gaussian model to data and return its center", "response": "def offset_gaussian(data):\n \"\"\"Fit a gaussian model to `data` and return its center\"\"\"\n nbins = 2 * int(np.ceil(np.sqrt(data.size)))\n mind, maxd = data.min(), data.max()\n drange = (mind - (maxd - mind) / 2, maxd + (maxd - mind) / 2)\n histo = np.histogram(data, nbins, density=True, range=drange)\n dx = abs(histo[1][1] - histo[1][2]) / 2\n hx = histo[1][1:] - dx\n hy = histo[0]\n # fit gaussian\n gauss = lmfit.models.GaussianModel()\n pars = gauss.guess(hy, x=hx)\n out = gauss.fit(hy, pars, x=hx)\n return out.params[\"center\"]"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef offset_mode(data):\n nbins = int(np.ceil(np.sqrt(data.size)))\n mind, maxd = data.min(), data.max()\n histo = np.histogram(data, nbins, density=True, range=(mind, maxd))\n dx = abs(histo[1][1] - histo[1][2]) / 2\n hx = histo[1][1:] - dx\n hy = histo[0]\n idmax = np.argmax(hy)\n return hx[idmax]", "response": "Compute Mode using a histogram with sqrt ( data. 
size ) bins"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef profile_tilt(data, mask):\n params = lmfit.Parameters()\n params.add(name=\"mx\", value=0)\n params.add(name=\"my\", value=0)\n params.add(name=\"off\", value=np.average(data[mask]))\n fr = lmfit.minimize(tilt_residual, params, args=(data, mask))\n bg = tilt_model(fr.params, data.shape)\n return bg", "response": "Fit a 2D tilt to data [ mask ]"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfits a 2D 2nd order polynomial to data [ mask ]", "response": "def profile_poly2o(data, mask):\n \"\"\"Fit a 2D 2nd order polynomial to `data[mask]`\"\"\"\n # lmfit\n params = lmfit.Parameters()\n params.add(name=\"mx\", value=0)\n params.add(name=\"my\", value=0)\n params.add(name=\"mxy\", value=0)\n params.add(name=\"ax\", value=0)\n params.add(name=\"ay\", value=0)\n params.add(name=\"off\", value=np.average(data[mask]))\n fr = lmfit.minimize(poly2o_residual, params, args=(data, mask))\n bg = poly2o_model(fr.params, data.shape)\n return bg"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef poly2o_residual(params, data, mask):\n bg = poly2o_model(params, shape=data.shape)\n res = (data - bg)[mask]\n return res.flatten()", "response": "lmfit 2nd order polynomial residuals"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef infer(args: argparse.Namespace) -> None:\n try:\n last_tag = last_git_release_tag(git_tags())\n except NoGitTagsException:\n print(SemVer(0, 1, 0))\n exit(0)\n\n commit_log = git_commits_since_last_tag(last_tag)\n action = parse_commit_log(commit_log)\n\n last_ver = git_tag_to_semver(last_tag)\n\n if action == 'min':\n new_ver = last_ver.bump_minor()\n elif action == 'maj':\n new_ver = last_ver.bump_major()\n else:\n new_ver = last_ver.bump_patch()\n\n print(new_ver)", "response": "Infer the current 
version of the current node."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef bump(args: argparse.Namespace) -> None:\n try:\n last_tag = last_git_release_tag(git_tags())\n except NoGitTagsException:\n print(SemVer(0, 1, 0))\n exit(0)\n\n last_ver = git_tag_to_semver(last_tag)\n\n if args.type == 'patch':\n print(last_ver.bump_patch())\n elif args.type == 'minor':\n print(last_ver.bump_minor())\n elif args.type == 'major':\n print(last_ver.bump_major())", "response": "Bump the current version of the current node."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfinds the side band position of a hologram image.", "response": "def find_sideband(ft_data, which=+1, copy=True):\n \"\"\"Find the side band position of a hologram\n\n The hologram is Fourier-transformed and the side band\n is determined by finding the maximum amplitude in\n Fourier space.\n\n Parameters\n ----------\n ft_data: 2d ndarray\n Fourier transform of the hologram image\n which: +1 or -1\n which sideband to search for:\n\n - +1: upper half\n - -1: lower half\n copy: bool\n copy `ft_data` before modification\n\n Returns\n -------\n fsx, fsy : tuple of floats\n coordinates of the side band in Fourier space frequencies\n \"\"\"\n if copy:\n ft_data = ft_data.copy()\n\n if which not in [+1, -1]:\n raise ValueError(\"`which` must be +1 or -1!\")\n\n ox, oy = ft_data.shape\n cx = ox // 2\n cy = oy // 2\n\n minlo = max(int(np.ceil(ox / 42)), 5)\n if which == +1:\n # remove lower part\n ft_data[cx - minlo:] = 0\n else:\n ft_data[:cx + minlo] = 0\n\n # remove values around axes\n ft_data[cx - 3:cx + 3, :] = 0\n ft_data[:, cy - 3:cy + 3] = 0\n\n # find maximum\n am = np.argmax(np.abs(ft_data))\n iy = am % oy\n ix = int((am - iy) / oy)\n\n fx = np.fft.fftshift(np.fft.fftfreq(ft_data.shape[0]))[ix]\n fy = np.fft.fftshift(np.fft.fftfreq(ft_data.shape[1]))[iy]\n\n return (fx, fy)"} {"SOURCE": "codesearchnet", 
"instruction": "Make a summary of the following Python 3 code\ndef fourier2dpad(data, zero_pad=True):\n if zero_pad:\n # zero padding size is next order of 2\n (N, M) = data.shape\n order = np.int(max(64., 2**np.ceil(np.log(2 * max(N, M)) / np.log(2))))\n\n # this is faster than np.pad\n datapad = np.zeros((order, order), dtype=float)\n datapad[:data.shape[0], :data.shape[1]] = data\n else:\n datapad = data\n\n # Fourier transform\n fft = np.fft.fftshift(np.fft.fft2(datapad))\n return fft", "response": "Compute the 2D Fourier transform with zero padding"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_field(hologram, sideband=+1, filter_name=\"disk\", filter_size=1 / 3,\n subtract_mean=True, zero_pad=True, copy=True):\n \"\"\"Compute the complex field from a hologram using Fourier analysis\n\n Parameters\n ----------\n hologram: real-valued 2d ndarray\n hologram data\n sideband: +1, -1, or tuple of (float, float)\n specifies the location of the sideband:\n\n - +1: sideband in the upper half in Fourier space,\n exact location is found automatically\n - -1: sideband in the lower half in Fourier space,\n exact location is found automatically\n - (float, float): sideband coordinates in\n frequencies in interval [1/\"axes size\", .5]\n filter_name: str\n specifies the filter to use, one of\n\n - \"disk\": binary disk with radius `filter_size`\n - \"smooth disk\": disk with radius `filter_size` convolved\n with a radial gaussian (`sigma=filter_size/5`)\n - \"gauss\": radial gaussian (`sigma=0.6*filter_size`)\n - \"square\": binary square with side length `filter_size`\n - \"smooth square\": square with side length `filter_size`\n convolved with square gaussian (`sigma=filter_size/5`)\n - \"tukey\": a square tukey window of width `2*filter_size` and\n `alpha=0.1`\n filter_size: float\n Size of the filter in Fourier space in fractions of the\n distance between central band and sideband.\n See 
`filter_shape` for interpretation of `filter_size`.\n subtract_mean: bool\n If True, remove the mean of the hologram before performing\n the Fourier transform. This setting is recommended as it\n can reduce artifacts from frequencies around the central\n band.\n zero_pad: bool\n Perform zero-padding before applying the FFT. Setting\n `zero_pad` to `False` increases speed but might\n introduce image distortions such as tilts in the phase\n and amplitude data or dark borders in the amplitude data.\n copy: bool\n If set to True, input `data` is not edited.\n\n Notes\n -----\n Even though the size of the \"gauss\" filter approximately matches\n the frequencies of the \"disk\" filter, it takes into account\n higher frequencies as well and thus suppresses ringing artifacts\n for data that contain jumps in the phase image.\n \"\"\"\n if copy:\n hologram = hologram.astype(dtype=np.float, copy=True)\n\n if subtract_mean:\n # remove contributions of the central band\n # (this affects more than one pixel in the FFT\n # because of zero-padding)\n if issubclass(hologram.dtype.type, np.integer):\n hologram = hologram.astype(np.float)\n hologram -= hologram.mean()\n\n # Fourier transform\n fft = fourier2dpad(hologram, zero_pad=zero_pad)\n\n if sideband in [+1, -1]:\n fsx, fsy = find_sideband(fft, which=sideband)\n else:\n fsx, fsy = sideband\n\n # shift fft to sideband location\n shifted = np.roll(np.roll(fft, -int(fsx * fft.shape[0]), axis=0),\n -int(fsy * fft.shape[1]), axis=1)\n\n # coordinates in Fourier space\n fx = np.fft.fftshift(np.fft.fftfreq(fft.shape[0])).reshape(-1, 1)\n fy = np.fft.fftshift(np.fft.fftfreq(fft.shape[1])).reshape(1, -1)\n\n # filter size based on central band - sideband - distance\n if filter_size >= 1:\n raise ValueError(\"`filter_size` must be < 1!\")\n fsize = np.sqrt(fsx**2 + fsy**2) * 
filter_size\n\n if filter_name == \"disk\":\n afilter = (fx**2 + fy**2) < fsize**2\n elif filter_name == \"smooth disk\":\n sigma = fsize / 5\n tau = 2 * sigma**2\n radsq = fx**2 + fy**2\n disk = radsq < fsize**2\n gauss = np.exp(-radsq / tau)\n afilter = signal.convolve(gauss, disk, mode=\"same\")\n afilter /= afilter.max()\n elif filter_name == \"gauss\":\n sigma = fsize * .6\n tau = 2 * sigma**2\n afilter = np.exp(-(fx**2 + fy**2) / tau)\n afilter /= afilter.max()\n elif filter_name == \"square\":\n afilter = (np.abs(fx) < fsize) * (np.abs(fy) < fsize)\n elif filter_name == \"smooth square\":\n blur = fsize / 5\n tau = 2 * blur**2\n square = (np.abs(fx) < fsize) * (np.abs(fy) < fsize)\n gauss = np.exp(-(fy**2) / tau) * np.exp(-(fy**2) / tau)\n afilter = signal.convolve(square, gauss, mode=\"same\")\n afilter /= afilter.max()\n elif filter_name == \"tukey\":\n alpha = 0.1\n rsize = np.int(min(fx.size, fy.size)*fsize) * 2\n tukey_window_x = signal.tukey(rsize, alpha=alpha).reshape(-1, 1)\n tukey_window_y = signal.tukey(rsize, alpha=alpha).reshape(1, -1)\n tukey = tukey_window_x * tukey_window_y\n afilter = np.zeros(shifted.shape)\n s1 = (np.array(shifted.shape) - rsize)//2\n s2 = (np.array(shifted.shape) + rsize)//2\n afilter[s1[0]:s2[0], s1[1]:s2[1]] = tukey\n else:\n raise ValueError(\"Unknown filter: {}\".format(filter_name))\n\n # apply filter\n fft_filt = afilter * shifted\n\n # inverse Fourier transform\n field = np.fft.ifft2(np.fft.ifftshift(fft_filt))\n return field[:hologram.shape[0], :hologram.shape[1]]", "response": "Compute the complex field from a real - valued 2d array containing the data of the central band and the sideband."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef copyh5(inh5, outh5):\n if not isinstance(inh5, h5py.Group):\n inh5 = h5py.File(inh5, mode=\"r\")\n if outh5 is None:\n # create file in memory\n h5kwargs = {\"name\": \"qpimage{}.h5\".format(QPImage._instances),\n 
\"driver\": \"core\",\n \"backing_store\": False,\n \"mode\": \"a\"}\n outh5 = h5py.File(**h5kwargs)\n return_h5obj = True\n QPImage._instances += 1\n elif not isinstance(outh5, h5py.Group):\n # create new file\n outh5 = h5py.File(outh5, mode=\"w\")\n return_h5obj = False\n else:\n return_h5obj = True\n # begin iteration\n for key in inh5:\n if key in outh5:\n del outh5[key]\n if isinstance(inh5[key], h5py.Group):\n outh5.create_group(key)\n copyh5(inh5[key], outh5[key])\n else:\n dset = write_image_dataset(group=outh5,\n key=key,\n data=inh5[key][:],\n h5dtype=inh5[key].dtype)\n dset.attrs.update(inh5[key].attrs)\n outh5.attrs.update(inh5.attrs)\n if return_h5obj:\n # in-memory or previously created instance of h5py.File\n return outh5\n else:\n # properly close the file and return its name\n fn = outh5.filename\n outh5.flush()\n outh5.close()\n return fn", "response": "Recursively copy all hdf5 data from one group to another."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert which data to string or tuple", "response": "def _conv_which_data(which_data):\n \"\"\"Convert which data to string or tuple\n\n This function improves user convenience,\n as `which_data` may be of several types\n (str, ,str with spaces and commas, list, tuple) which\n is internally handled by this method.\n \"\"\"\n if isinstance(which_data, str):\n which_data = which_data.lower().strip()\n if which_data.count(\",\"):\n # convert comma string to list\n which_data = [w.strip() for w in which_data.split(\",\")]\n # remove empty strings\n which_data = [w for w in which_data if w]\n if len(which_data) == 1:\n return which_data[0]\n else:\n # convert to tuple\n return tuple(which_data)\n else:\n return which_data\n elif isinstance(which_data, (list, tuple)):\n which_data = [w.lower().strip() for w in which_data]\n return tuple(which_data)\n elif which_data is None:\n return None\n else:\n msg = \"unknown type for `which_data`: {}\".format(which_data)\n raise 
ValueError(msg)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_amp_pha(self, data, which_data):\n which_data = QPImage._conv_which_data(which_data)\n if which_data not in VALID_INPUT_DATA:\n msg = \"`which_data` must be one of {}!\".format(VALID_INPUT_DATA)\n raise ValueError(msg)\n\n if which_data == \"field\":\n amp = np.abs(data)\n pha = np.angle(data)\n elif which_data == \"phase\":\n pha = data\n amp = np.ones_like(data)\n elif which_data == (\"phase\", \"amplitude\"):\n amp = data[1]\n pha = data[0]\n elif which_data == (\"phase\", \"intensity\"):\n amp = np.sqrt(data[1])\n pha = data[0]\n elif which_data == \"hologram\":\n amp, pha = self._get_amp_pha(holo.get_field(data, **self.holo_kw),\n which_data=\"field\")\n if amp.size == 0 or pha.size == 0:\n msg = \"`data` with shape {} has zero size!\".format(amp.shape)\n raise ValueError(msg)\n # phase unwrapping (take into account nans)\n nanmask = np.isnan(pha)\n if np.sum(nanmask):\n # create masked array\n # skimage.restoration.unwrap_phase cannot handle nan data\n # (even if masked)\n pham = pha.copy()\n pham[nanmask] = 0\n pham = np.ma.masked_array(pham, mask=nanmask)\n pha = unwrap_phase(pham, seed=47)\n pha[nanmask] = np.nan\n else:\n pha = unwrap_phase(pha, seed=47)\n\n return amp, pha", "response": "Convert input data to phase and amplitude."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlists of tuples with QPImage meta data", "response": "def info(self):\n \"\"\"list of tuples with QPImage meta data\"\"\"\n info = []\n # meta data\n meta = self.meta\n for key in meta:\n info.append((key, self.meta[key]))\n # background correction\n for imdat in [self._amp, self._pha]:\n info += imdat.info\n return info"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nclear the background correction of the current object.", "response": "def clear_bg(self, 
which_data=(\"amplitude\", \"phase\"), keys=\"fit\"):\n \"\"\"Clear background correction\n\n Parameters\n ----------\n which_data: str or list of str\n From which type of data to remove the background\n information. The list contains either \"amplitude\",\n \"phase\", or both.\n keys: str or list of str\n Which type of background data to remove. One of:\n\n - \"fit\": the background data computed with\n :func:`qpimage.QPImage.compute_bg`\n - \"data\": the experimentally obtained background image\n \"\"\"\n which_data = QPImage._conv_which_data(which_data)\n if isinstance(keys, str):\n # make sure keys is a list of strings\n keys = [keys]\n\n # Get image data for clearing\n imdats = []\n if \"amplitude\" in which_data:\n imdats.append(self._amp)\n if \"phase\" in which_data:\n imdats.append(self._pha)\n if not imdats:\n msg = \"`which_data` must contain 'phase' or 'amplitude'!\"\n raise ValueError(msg)\n # Perform clearing of backgrounds\n for imdat in imdats:\n for key in keys:\n imdat.del_bg(key)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncomputes the background correction for the image.", "response": "def compute_bg(self, which_data=\"phase\",\n fit_offset=\"mean\", fit_profile=\"tilt\",\n border_m=0, border_perc=0, border_px=0,\n from_mask=None, ret_mask=False):\n \"\"\"Compute background correction\n\n Parameters\n ----------\n which_data: str or list of str\n From which type of data to remove the background\n information. 
The list contains either \"amplitude\",\n \"phase\", or both.\n fit_profile: str\n The type of background profile to fit:\n\n - \"offset\": offset only\n - \"poly2o\": 2D 2nd order polynomial with mixed terms\n - \"tilt\": 2D linear tilt with offset (default)\n fit_offset: str\n The method for computing the profile offset\n\n - \"fit\": offset as fitting parameter\n - \"gauss\": center of a gaussian fit\n - \"mean\": simple average\n - \"mode\": mode (see `qpimage.bg_estimate.mode`)\n border_m: float\n Assume that a frame of `border_m` meters around the\n image is background. The value is converted to\n pixels and rounded.\n border_perc: float\n Assume that a frame of `border_perc` percent around\n the image is background. The value is converted to\n pixels and rounded. If the aspect ratio of the image\n is not one, then the average of the data's shape is\n used to compute the percentage in pixels.\n border_px: float\n Assume that a frame of `border_px` pixels around\n the image is background.\n from_mask: boolean np.ndarray or None\n Use a boolean array to define the background area.\n The boolean mask must have the same shape as the\n input data. `True` elements are used for background\n estimation.\n ret_mask: bool\n Return the boolean mask used to compute the background.\n\n Notes\n -----\n The `border_*` values are translated to pixel values and\n the largest pixel border is used to generate a mask\n image for background computation.\n\n If any of the `border_*` arguments are non-zero and\n `from_mask` is given, the intersection of the two\n is used, i.e. 
the positions where both, the frame\n mask and `from_mask`, are `True`.\n\n See Also\n --------\n qpimage.bg_estimate.estimate\n \"\"\"\n which_data = QPImage._conv_which_data(which_data)\n # check validity\n if not (\"amplitude\" in which_data or\n \"phase\" in which_data):\n msg = \"`which_data` must contain 'phase' or 'amplitude'!\"\n raise ValueError(msg)\n # get border in px\n border_list = []\n if border_m:\n if border_m < 0:\n raise ValueError(\"`border_m` must be greater than zero!\")\n border_list.append(border_m / self.meta[\"pixel size\"])\n if border_perc:\n if border_perc < 0 or border_perc > 50:\n raise ValueError(\"`border_perc` must be in interval [0, 50]!\")\n size = np.average(self.shape)\n border_list.append(size * border_perc / 100)\n if border_px:\n border_list.append(border_px)\n # get maximum border size\n if border_list:\n border_px = np.int(np.round(np.max(border_list)))\n elif from_mask is None:\n raise ValueError(\"Neither `from_mask` nor `border_*` given!\")\n elif np.all(from_mask == 0):\n raise ValueError(\"`from_mask` must not be all-zero!\")\n # Get affected image data\n imdat_list = []\n if \"amplitude\" in which_data:\n imdat_list.append(self._amp)\n if \"phase\" in which_data:\n imdat_list.append(self._pha)\n # Perform correction\n for imdat in imdat_list:\n mask = imdat.estimate_bg(fit_offset=fit_offset,\n fit_profile=fit_profile,\n border_px=border_px,\n from_mask=from_mask,\n ret_mask=ret_mask)\n return mask"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef copy(self, h5file=None):\n h5 = copyh5(self.h5, h5file)\n return QPImage(h5file=h5, h5dtype=self.h5dtype)", "response": "Create a copy of the current instance of the QPImage."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef refocus(self, distance, method=\"helmholtz\", h5file=None, h5mode=\"a\"):\n field2 = nrefocus.refocus(field=self.field,\n 
d=distance/self[\"pixel size\"],\n nm=self[\"medium index\"],\n res=self[\"wavelength\"]/self[\"pixel size\"],\n method=method\n )\n if \"identifier\" in self:\n ident = self[\"identifier\"]\n else:\n ident = \"\"\n meta_data = self.meta\n meta_data[\"identifier\"] = \"{}@{}{:.5e}m\".format(ident,\n method[0],\n distance)\n qpi2 = QPImage(data=field2,\n which_data=\"field\",\n meta_data=meta_data,\n h5file=h5file,\n h5mode=h5mode)\n return qpi2", "response": "Compute a numerically refocused QPImage."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsets the background amplitude and phase data for the current object.", "response": "def set_bg_data(self, bg_data, which_data=None):\n \"\"\"Set background amplitude and phase data\n\n Parameters\n ----------\n bg_data: 2d ndarray (float or complex), list, QPImage, or `None`\n The background data (must be same type as `data`).\n If set to `None`, the background data is reset.\n which_data: str\n String or comma-separated list of strings indicating\n the order and type of input data. 
Valid values are\n \"field\", \"phase\", \"phase,amplitude\", or \"phase,intensity\",\n where the latter two require an indexable object for\n `bg_data` with the phase data as first element.\n \"\"\"\n if isinstance(bg_data, QPImage):\n if which_data is not None:\n msg = \"`which_data` must not be set if `bg_data` is QPImage!\"\n raise ValueError(msg)\n pha, amp = bg_data.pha, bg_data.amp\n elif bg_data is None:\n # Reset phase and amplitude\n amp, pha = None, None\n else:\n # Compute phase and amplitude from data and which_data\n amp, pha = self._get_amp_pha(bg_data, which_data)\n # Set background data\n self._amp.set_bg(amp, key=\"data\")\n self._pha.set_bg(pha, key=\"data\")"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds a QPImage instance to the series.", "response": "def add_qpimage(self, qpi, identifier=None, bg_from_idx=None):\n \"\"\"Add a QPImage instance to the QPSeries\n\n Parameters\n ----------\n qpi: qpimage.QPImage\n The QPImage that is added to the series\n identifier: str\n Identifier key for `qpi`\n bg_from_idx: int or None\n Use the background data from the data stored in this index,\n creating hard links within the hdf5 file.\n (Saves memory if e.g. all qpimages are corrected with the same data)\n \"\"\"\n if not isinstance(qpi, QPImage):\n raise ValueError(\"`qpi` must be an instance of QPImage!\")\n if \"identifier\" in qpi and identifier is None:\n identifier = qpi[\"identifier\"]\n if identifier and identifier in self:\n msg = \"The identifier '{}' already \".format(identifier) \\\n + \"exists! 
You can either change the identifier of \" \\\n + \" '{}' or remove it.\".format(qpi)\n raise ValueError(msg)\n # determine number of qpimages\n num = len(self)\n # indices start at zero; do not add 1\n name = \"qpi_{}\".format(num)\n group = self.h5.create_group(name)\n thisqpi = qpi.copy(h5file=group)\n\n if bg_from_idx is not None:\n # Create hard links\n refqpi = self[bg_from_idx]\n thisqpi._amp.set_bg(bg=refqpi._amp.h5[\"bg_data\"][\"data\"])\n thisqpi._pha.set_bg(bg=refqpi._pha.h5[\"bg_data\"][\"data\"])\n\n if identifier:\n # set identifier\n group.attrs[\"identifier\"] = identifier"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a single QPImage of the series with the given index.", "response": "def get_qpimage(self, index):\n \"\"\"Return a single QPImage of the series\n\n Parameters\n ----------\n index: int or str\n Index or identifier of the QPImage\n\n Notes\n -----\n Instead of ``qps.get_qpimage(index)``, it is possible\n to use the short-hand ``qps[index]``.\n \"\"\"\n if isinstance(index, str):\n # search for the identifier\n for ii in range(len(self)):\n qpi = self[ii]\n if \"identifier\" in qpi and qpi[\"identifier\"] == index:\n group = self.h5[\"qpi_{}\".format(ii)]\n break\n else:\n msg = \"QPImage identifier '{}' not found!\".format(index)\n raise KeyError(msg)\n else:\n # integer index\n if index < -len(self):\n msg = \"Index {} out of bounds for QPSeries of size {}!\".format(\n index, len(self))\n raise ValueError(msg)\n elif index < 0:\n index += len(self)\n name = \"qpi_{}\".format(index)\n if name in self.h5:\n group = self.h5[name]\n else:\n msg = \"Index {} not found for QPSeries of length {}\".format(\n index, len(self))\n raise KeyError(msg)\n return QPImage(h5file=group)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _collapse_invariants(bases: List[type], namespace: MutableMapping[str, Any]) -> None:\n invariants = [] # type: 
List[Contract]\n\n # Add invariants of the bases\n for base in bases:\n if hasattr(base, \"__invariants__\"):\n invariants.extend(getattr(base, \"__invariants__\"))\n\n # Add invariants in the current namespace\n if '__invariants__' in namespace:\n invariants.extend(namespace['__invariants__'])\n\n # Change the final invariants in the namespace\n if invariants:\n namespace[\"__invariants__\"] = invariants", "response": "Collect invariants from the bases and merge them with the invariants in the namespace."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncollapses function preconditions with the preconditions collected from the base classes.", "response": "def _collapse_preconditions(base_preconditions: List[List[Contract]], bases_have_func: bool,\n preconditions: List[List[Contract]], func: Callable[..., Any]) -> List[List[Contract]]:\n \"\"\"\n Collapse function preconditions with the preconditions collected from the base classes.\n\n :param base_preconditions: preconditions collected from the base classes (grouped by base class)\n :param bases_have_func: True if one of the base classes has the function\n :param preconditions: preconditions of the function (before the collapse)\n :param func: function whose preconditions we are collapsing\n :return: collapsed sequence of precondition groups\n \"\"\"\n if not base_preconditions and bases_have_func and preconditions:\n raise TypeError((\"The function {} can not weaken the preconditions because the bases specify \"\n \"no preconditions at all. 
Hence this function must accept all possible input since \"\n \"the preconditions are OR'ed and no precondition implies a dummy precondition which is always \"\n \"fulfilled.\").format(func.__qualname__))\n\n return base_preconditions + preconditions"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncollapsing snapshots of pre - invocation values with the snapshots collected from the base classes.", "response": "def _collapse_snapshots(base_snapshots: List[Snapshot], snapshots: List[Snapshot]) -> List[Snapshot]:\n \"\"\"\n Collapse snapshots of pre-invocation values with the snapshots collected from the base classes.\n\n :param base_snapshots: snapshots collected from the base classes\n :param snapshots: snapshots of the function (before the collapse)\n :return: collapsed sequence of snapshots\n \"\"\"\n seen_names = set() # type: Set[str]\n collapsed = base_snapshots + snapshots\n\n for snap in collapsed:\n if snap.name in seen_names:\n raise ValueError(\"There are conflicting snapshots with the name: {!r}.\\n\\n\"\n \"Please mind that the snapshots are inherited from the base classes. 
\"\n \"Does one of the base classes defines a snapshot with the same name?\".format(snap.name))\n\n seen_names.add(snap.name)\n\n return collapsed"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _collapse_postconditions(base_postconditions: List[Contract], postconditions: List[Contract]) -> List[Contract]:\n return base_postconditions + postconditions", "response": "Collapse function postconditions with the postconditions collected from the base classes."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _decorate_namespace_function(bases: List[type], namespace: MutableMapping[str, Any], key: str) -> None:\n # pylint: disable=too-many-branches\n # pylint: disable=too-many-locals\n\n value = namespace[key]\n assert inspect.isfunction(value) or isinstance(value, (staticmethod, classmethod))\n\n # Determine the function to be decorated\n if inspect.isfunction(value):\n func = value\n elif isinstance(value, (staticmethod, classmethod)):\n func = value.__func__\n else:\n raise NotImplementedError(\"Unexpected value for a function: {}\".format(value))\n\n # Collect preconditions and postconditions of the function\n preconditions = [] # type: List[List[Contract]]\n snapshots = [] # type: List[Snapshot]\n postconditions = [] # type: List[Contract]\n\n contract_checker = icontract._checkers.find_checker(func=func)\n if contract_checker is not None:\n preconditions = contract_checker.__preconditions__ # type: ignore\n snapshots = contract_checker.__postcondition_snapshots__ # type: ignore\n postconditions = contract_checker.__postconditions__ # type: ignore\n\n # Collect the preconditions and postconditions from bases.\n #\n # Preconditions and postconditions of __init__ of base classes are deliberately ignored (and not collapsed) since\n # initialization is an operation specific to the concrete class and does not relate to the class hierarchy.\n 
if key not in ['__init__']:\n base_preconditions = [] # type: List[List[Contract]]\n base_snapshots = [] # type: List[Snapshot]\n base_postconditions = [] # type: List[Contract]\n\n bases_have_func = False\n for base in bases:\n if hasattr(base, key):\n bases_have_func = True\n\n # Check if there is a checker function in the base class\n base_func = getattr(base, key)\n base_contract_checker = icontract._checkers.find_checker(func=base_func)\n\n # Ignore functions which don't have preconditions or postconditions\n if base_contract_checker is not None:\n base_preconditions.extend(base_contract_checker.__preconditions__) # type: ignore\n base_snapshots.extend(base_contract_checker.__postcondition_snapshots__) # type: ignore\n base_postconditions.extend(base_contract_checker.__postconditions__) # type: ignore\n\n # Collapse preconditions and postconditions from the bases with the the function's own ones\n preconditions = _collapse_preconditions(\n base_preconditions=base_preconditions,\n bases_have_func=bases_have_func,\n preconditions=preconditions,\n func=func)\n\n snapshots = _collapse_snapshots(base_snapshots=base_snapshots, snapshots=snapshots)\n\n postconditions = _collapse_postconditions(\n base_postconditions=base_postconditions, postconditions=postconditions)\n\n if preconditions or postconditions:\n if contract_checker is None:\n contract_checker = icontract._checkers.decorate_with_checker(func=func)\n\n # Replace the function with the function decorated with contract checks\n if inspect.isfunction(value):\n namespace[key] = contract_checker\n elif isinstance(value, staticmethod):\n namespace[key] = staticmethod(contract_checker)\n\n elif isinstance(value, classmethod):\n namespace[key] = classmethod(contract_checker)\n\n else:\n raise NotImplementedError(\"Unexpected value for a function: {}\".format(value))\n\n # Override the preconditions and postconditions\n contract_checker.__preconditions__ = preconditions # type: ignore\n 
contract_checker.__postcondition_snapshots__ = snapshots # type: ignore\n contract_checker.__postconditions__ = postconditions", "response": "Decorate the function at the key in the base class and the base class."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndecorates a property in a namespace by iterating over all base classes and checking if they are present.", "response": "def _decorate_namespace_property(bases: List[type], namespace: MutableMapping[str, Any], key: str) -> None:\n \"\"\"Collect contracts for all getters/setters/deleters corresponding to ``key`` and decorate them.\"\"\"\n # pylint: disable=too-many-locals\n # pylint: disable=too-many-branches\n # pylint: disable=too-many-statements\n\n value = namespace[key]\n assert isinstance(value, property)\n\n fget = value.fget # type: Optional[Callable[..., Any]]\n fset = value.fset # type: Optional[Callable[..., Any]]\n fdel = value.fdel # type: Optional[Callable[..., Any]]\n\n for func in [value.fget, value.fset, value.fdel]:\n func = cast(Callable[..., Any], func)\n\n if func is None:\n continue\n\n # Collect the preconditions and postconditions from bases\n base_preconditions = [] # type: List[List[Contract]]\n base_snapshots = [] # type: List[Snapshot]\n base_postconditions = [] # type: List[Contract]\n\n bases_have_func = False\n for base in bases:\n if hasattr(base, key):\n base_property = getattr(base, key)\n assert isinstance(base_property, property), \\\n \"Expected base {} to have {} as property, but got: {}\".format(base, key, base_property)\n\n if func == value.fget:\n base_func = getattr(base, key).fget\n elif func == value.fset:\n base_func = getattr(base, key).fset\n elif func == value.fdel:\n base_func = getattr(base, key).fdel\n else:\n raise NotImplementedError(\"Unhandled case: func neither value.fget, value.fset nor value.fdel\")\n\n if base_func is None:\n continue\n\n bases_have_func = True\n\n # Check if there is a checker function in 
the base class\n base_contract_checker = icontract._checkers.find_checker(func=base_func)\n\n # Ignore functions which don't have preconditions or postconditions\n if base_contract_checker is not None:\n base_preconditions.extend(base_contract_checker.__preconditions__) # type: ignore\n base_snapshots.extend(base_contract_checker.__postcondition_snapshots__) # type: ignore\n base_postconditions.extend(base_contract_checker.__postconditions__) # type: ignore\n\n # Add preconditions and postconditions of the function\n preconditions = [] # type: List[List[Contract]]\n snapshots = [] # type: List[Snapshot]\n postconditions = [] # type: List[Contract]\n\n contract_checker = icontract._checkers.find_checker(func=func)\n if contract_checker is not None:\n preconditions = contract_checker.__preconditions__ # type: ignore\n snapshots = contract_checker.__postcondition_snapshots__\n postconditions = contract_checker.__postconditions__ # type: ignore\n\n preconditions = _collapse_preconditions(\n base_preconditions=base_preconditions,\n bases_have_func=bases_have_func,\n preconditions=preconditions,\n func=func)\n\n snapshots = _collapse_snapshots(base_snapshots=base_snapshots, snapshots=snapshots)\n\n postconditions = _collapse_postconditions(\n base_postconditions=base_postconditions, postconditions=postconditions)\n\n if preconditions or postconditions:\n if contract_checker is None:\n contract_checker = icontract._checkers.decorate_with_checker(func=func)\n\n # Replace the function with the function decorated with contract checks\n if func == value.fget:\n fget = contract_checker\n elif func == value.fset:\n fset = contract_checker\n elif func == value.fdel:\n fdel = contract_checker\n else:\n raise NotImplementedError(\"Unhandled case: func neither fget, fset nor fdel\")\n\n # Override the preconditions and postconditions\n contract_checker.__preconditions__ = preconditions # type: ignore\n contract_checker.__postcondition_snapshots__ = snapshots # type: ignore\n 
contract_checker.__postconditions__ = postconditions # type: ignore\n\n if fget != value.fget or fset != value.fset or fdel != value.fdel:\n namespace[key] = property(fget=fget, fset=fset, fdel=fdel)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _dbc_decorate_namespace(bases: List[type], namespace: MutableMapping[str, Any]) -> None:\n _collapse_invariants(bases=bases, namespace=namespace)\n\n for key, value in namespace.items():\n if inspect.isfunction(value) or isinstance(value, (staticmethod, classmethod)):\n _decorate_namespace_function(bases=bases, namespace=namespace, key=key)\n\n elif isinstance(value, property):\n _decorate_namespace_property(bases=bases, namespace=namespace, key=key)\n\n else:\n # Ignore the value which is neither a function nor a property\n pass", "response": "Decorate all methods in the namespace."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncheck whether the value is representable in the error message on contract breach.", "response": "def _representable(value: Any) -> bool:\n \"\"\"\n Check whether we want to represent the value in the error message on contract breach.\n\n We do not want to represent classes, methods, modules and functions.\n\n :param value: value related to an AST node\n :return: True if we want to represent it in the violation error\n \"\"\"\n return not inspect.isclass(value) and not inspect.isfunction(value) and not inspect.ismethod(value) and not \\\n inspect.ismodule(value) and not inspect.isbuiltin(value)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef inspect_decorator(lines: List[str], lineno: int, filename: str) -> DecoratorInspection:\n if lineno < 0 or lineno >= len(lines):\n raise ValueError((\"Given line number {} of one of the decorator lines \"\n \"is not within the range [{}, {}) of lines in {}\").format(lineno, 0, len(lines), 
filename))\n\n # Go up till a line starts with a decorator\n decorator_lineno = None # type: Optional[int]\n for i in range(lineno, -1, -1):\n if _DECORATOR_RE.match(lines[i]):\n decorator_lineno = i\n break\n\n if decorator_lineno is None:\n raise SyntaxError(\"Decorator corresponding to the line {} could not be found in file {}: {!r}\".format(\n lineno + 1, filename, lines[lineno]))\n\n # Find the decorator end -- it's either a function definition, a class definition or another decorator\n decorator_end_lineno = None # type: Optional[int]\n for i in range(lineno + 1, len(lines)):\n line = lines[i]\n\n if _DECORATOR_RE.match(line) or _DEF_CLASS_RE.match(line):\n decorator_end_lineno = i\n break\n\n if decorator_end_lineno is None:\n raise SyntaxError((\"The next statement following the decorator corresponding to the line {} \"\n \"could not be found in file {}: {!r}\").format(lineno + 1, filename, lines[lineno]))\n\n decorator_lines = lines[decorator_lineno:decorator_end_lineno]\n\n # We need to dedent the decorator and add a dummy decoratee so that we can parse its text as valid source code.\n decorator_text = textwrap.dedent(\"\".join(decorator_lines)) + \"def dummy_{}(): pass\".format(uuid.uuid4().hex)\n\n atok = asttokens.ASTTokens(decorator_text, parse=True)\n\n assert isinstance(atok.tree, ast.Module), \"Expected the parsed decorator text to live in an AST module.\"\n\n module_node = atok.tree\n assert len(module_node.body) == 1, \"Expected the module AST of the decorator text to have a single statement.\"\n assert isinstance(module_node.body[0], ast.FunctionDef), \\\n \"Expected the only statement in the AST module corresponding to the decorator text to be a function definition.\"\n\n func_def_node = module_node.body[0]\n\n assert len(func_def_node.decorator_list) == 1, \\\n \"Expected the function AST node corresponding to the decorator text to have a single decorator.\"\n\n assert isinstance(func_def_node.decorator_list[0], ast.Call), \\\n \"Expected the 
only decorator in the function definition AST node corresponding to the decorator text \" \\\n \"to be a call node.\"\n\n call_node = func_def_node.decorator_list[0]\n\n return DecoratorInspection(atok=atok, node=call_node)", "response": "Inspects the source code of a decorator call."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ninspects the decorator and extract the condition as lambda function.", "response": "def find_lambda_condition(decorator_inspection: DecoratorInspection) -> Optional[ConditionLambdaInspection]:\n \"\"\"\n Inspect the decorator and extract the condition as lambda.\n\n If the condition is not given as a lambda function, return None.\n \"\"\"\n call_node = decorator_inspection.node\n\n lambda_node = None # type: Optional[ast.Lambda]\n\n if len(call_node.args) > 0:\n assert isinstance(call_node.args[0], ast.Lambda), \\\n (\"Expected the first argument to the decorator to be a condition as lambda AST node, \"\n \"but got: {}\").format(type(call_node.args[0]))\n\n lambda_node = call_node.args[0]\n\n elif len(call_node.keywords) > 0:\n for keyword in call_node.keywords:\n if keyword.arg == \"condition\":\n assert isinstance(keyword.value, ast.Lambda), \\\n \"Expected lambda node as value of the 'condition' argument to the decorator.\"\n\n lambda_node = keyword.value\n break\n\n assert lambda_node is not None, \"Expected to find a keyword AST node with 'condition' arg, but found none\"\n else:\n raise AssertionError(\n \"Expected a call AST node of a decorator to have either args or keywords, but got: {}\".format(\n ast.dump(call_node)))\n\n return ConditionLambdaInspection(atok=decorator_inspection.atok, node=lambda_node)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef repr_values(condition: Callable[..., bool], lambda_inspection: Optional[ConditionLambdaInspection],\n condition_kwargs: Mapping[str, Any], a_repr: reprlib.Repr) -> 
List[str]:\n # pylint: disable=too-many-locals\n \"\"\"\n Represent function arguments and frame values in the error message on contract breach.\n\n :param condition: condition function of the contract\n :param lambda_inspection:\n inspected lambda AST node corresponding to the condition function (None if the condition was not given as a\n lambda function)\n :param condition_kwargs: condition arguments\n :param a_repr: representation instance that defines how the values are represented.\n :return: list of value representations\n \"\"\"\n if _is_lambda(a_function=condition):\n assert lambda_inspection is not None, \"Expected a lambda inspection when given a condition as a lambda function\"\n else:\n assert lambda_inspection is None, \"Expected no lambda inspection in a condition given as a non-lambda function\"\n\n reprs = dict() # type: MutableMapping[str, Any]\n\n if lambda_inspection is not None:\n # Collect the variable lookup of the condition function:\n variable_lookup = [] # type: List[Mapping[str, Any]]\n\n # Add condition arguments to the lookup\n variable_lookup.append(condition_kwargs)\n\n # Add closure to the lookup\n closure_dict = dict() # type: Dict[str, Any]\n\n if condition.__closure__ is not None: # type: ignore\n closure_cells = condition.__closure__ # type: ignore\n freevars = condition.__code__.co_freevars\n\n assert len(closure_cells) == len(freevars), \\\n \"Number of closure cells of a condition function ({}) == number of free vars ({})\".format(\n len(closure_cells), len(freevars))\n\n for cell, freevar in zip(closure_cells, freevars):\n closure_dict[freevar] = cell.cell_contents\n\n variable_lookup.append(closure_dict)\n\n # Add globals to the lookup\n if condition.__globals__ is not None: # type: ignore\n variable_lookup.append(condition.__globals__) # type: ignore\n\n # pylint: disable=protected-access\n recompute_visitor = icontract._recompute.Visitor(variable_lookup=variable_lookup)\n\n 
recompute_visitor.visit(node=lambda_inspection.node.body)\n recomputed_values = recompute_visitor.recomputed_values\n\n repr_visitor = Visitor(\n recomputed_values=recomputed_values, variable_lookup=variable_lookup, atok=lambda_inspection.atok)\n repr_visitor.visit(node=lambda_inspection.node.body)\n\n reprs = repr_visitor.reprs\n else:\n for key, val in condition_kwargs.items():\n if _representable(value=val):\n reprs[key] = val\n\n parts = [] # type: List[str]\n for key in sorted(reprs.keys()):\n parts.append('{} was {}'.format(key, a_repr.repr(reprs[key])))\n\n return parts", "response": "Return a list of value representations of the given condition function."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef generate_message(contract: Contract, condition_kwargs: Mapping[str, Any]) -> str:\n # pylint: disable=protected-access\n parts = [] # type: List[str]\n\n if contract.location is not None:\n parts.append(\"{}:\\n\".format(contract.location))\n\n if contract.description is not None:\n parts.append(\"{}: \".format(contract.description))\n\n lambda_inspection = None # type: Optional[ConditionLambdaInspection]\n if not _is_lambda(a_function=contract.condition):\n condition_text = contract.condition.__name__\n else:\n # We need to extract the source code corresponding to the decorator since inspect.getsource() is broken with\n # lambdas.\n\n # Find the line corresponding to the condition lambda\n lines, condition_lineno = inspect.findsource(contract.condition)\n filename = inspect.getsourcefile(contract.condition)\n\n decorator_inspection = inspect_decorator(lines=lines, lineno=condition_lineno, filename=filename)\n lambda_inspection = find_lambda_condition(decorator_inspection=decorator_inspection)\n\n assert lambda_inspection is not None, \\\n \"Expected lambda_inspection to be non-None if _is_lambda is True on: {}\".format(contract.condition)\n\n condition_text = lambda_inspection.text\n\n 
parts.append(condition_text)\n\n repr_vals = repr_values(\n condition=contract.condition,\n lambda_inspection=lambda_inspection,\n condition_kwargs=condition_kwargs,\n a_repr=contract._a_repr)\n\n if len(repr_vals) == 0:\n # Do not append anything since no value could be represented as a string.\n # This could appear in case we have, for example, a generator expression as the return value of a lambda.\n pass\n\n elif len(repr_vals) == 1:\n parts.append(': ')\n parts.append(repr_vals[0])\n else:\n parts.append(':\\n')\n parts.append('\\n'.join(repr_vals))\n\n msg = \"\".join(parts)\n return msg", "response": "Generate the message upon contract violation."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef visit_Name(self, node: ast.Name) -> None:\n if node in self._recomputed_values:\n value = self._recomputed_values[node]\n\n # Check if it is a non-built-in\n is_builtin = True\n for lookup in self._variable_lookup:\n if node.id in lookup:\n is_builtin = False\n break\n\n if not is_builtin and _representable(value=value):\n text = self._atok.get_text(node)\n self.reprs[text] = value\n\n self.generic_visit(node=node)", "response": "Resolve the name from the variable look - up and the built - in lookup."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef visit_Attribute(self, node: ast.Attribute) -> None:\n if node in self._recomputed_values:\n value = self._recomputed_values[node]\n\n if _representable(value=value):\n text = self._atok.get_text(node)\n self.reprs[text] = value\n\n self.generic_visit(node=node)", "response": "Represent the attribute by dumping its source code."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef visit_Call(self, node: ast.Call) -> None:\n if node in self._recomputed_values:\n value = self._recomputed_values[node]\n text = self._atok.get_text(node)\n\n self.reprs[text] = value\n\n 
self.generic_visit(node=node)", "response": "Represent the call by dumping its source code."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrepresent the list comprehension by dumping its source code.", "response": "def visit_ListComp(self, node: ast.ListComp) -> None:\n \"\"\"Represent the list comprehension by dumping its source code.\"\"\"\n if node in self._recomputed_values:\n value = self._recomputed_values[node]\n text = self._atok.get_text(node)\n\n self.reprs[text] = value\n\n self.generic_visit(node=node)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef visit_DictComp(self, node: ast.DictComp) -> None:\n if node in self._recomputed_values:\n value = self._recomputed_values[node]\n text = self._atok.get_text(node)\n\n self.reprs[text] = value\n\n self.generic_visit(node=node)", "response": "Represent the dictionary comprehension by dumping its source code."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _walk_decorator_stack(func: CallableT) -> Iterable['CallableT']:\n while hasattr(func, \"__wrapped__\"):\n yield func\n\n func = getattr(func, \"__wrapped__\")\n\n yield func", "response": "Iterate through the stack of decorated functions until the original function is decorated."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_checker(func: CallableT) -> Optional[CallableT]:\n contract_checker = None # type: Optional[CallableT]\n for a_wrapper in _walk_decorator_stack(func):\n if hasattr(a_wrapper, \"__preconditions__\") or hasattr(a_wrapper, \"__postconditions__\"):\n contract_checker = a_wrapper\n\n return contract_checker", "response": "Iterate through the decorator stack till we find the contract checker."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ninspecting the input values received at the wrapper 
for the actual function call.", "response": "def _kwargs_from_call(param_names: List[str], kwdefaults: Dict[str, Any], args: Tuple[Any, ...],\n                      kwargs: Dict[str, Any]) -> MutableMapping[str, Any]:\n    \"\"\"\n    Inspect the input values received at the wrapper for the actual function call.\n\n    :param param_names: parameter (*i.e.* argument) names of the original (decorated) function\n    :param kwdefaults: default argument values of the original function\n    :param args: arguments supplied to the call\n    :param kwargs: keyword arguments supplied to the call\n    :return: resolved arguments as they would be passed to the function\n    \"\"\"\n    # pylint: disable=too-many-arguments\n    mapping = dict() # type: MutableMapping[str, Any]\n\n    # Set the default argument values as condition parameters.\n    for param_name, param_value in kwdefaults.items():\n        mapping[param_name] = param_value\n\n    # Override the defaults with the values actually supplied to the function.\n    for i, func_arg in enumerate(args):\n        mapping[param_names[i]] = func_arg\n\n    for key, val in kwargs.items():\n        mapping[key] = val\n\n    return mapping"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _assert_precondition(contract: Contract, resolved_kwargs: Mapping[str, Any]) -> None:\n    # Check that all arguments to the condition function have been set.\n    missing_args = [arg_name for arg_name in contract.condition_args if arg_name not in resolved_kwargs]\n    if missing_args:\n        raise TypeError(\n            (\"The argument(s) of the precondition have not been set: {}. \"\n             \"Does the original function define them? 
Did you supply them in the call?\").format(missing_args))\n\n condition_kwargs = {\n arg_name: value\n for arg_name, value in resolved_kwargs.items() if arg_name in contract.condition_arg_set\n }\n\n check = contract.condition(**condition_kwargs)\n\n if not check:\n if contract.error is not None and (inspect.ismethod(contract.error) or inspect.isfunction(contract.error)):\n assert contract.error_arg_set is not None, \"Expected error_arg_set non-None if contract.error a function.\"\n assert contract.error_args is not None, \"Expected error_args non-None if contract.error a function.\"\n\n error_kwargs = {\n arg_name: value\n for arg_name, value in resolved_kwargs.items() if arg_name in contract.error_arg_set\n }\n\n missing_args = [arg_name for arg_name in contract.error_args if arg_name not in resolved_kwargs]\n if missing_args:\n msg_parts = [] # type: List[str]\n if contract.location is not None:\n msg_parts.append(\"{}:\\n\".format(contract.location))\n\n msg_parts.append(\n (\"The argument(s) of the precondition error have not been set: {}. \"\n \"Does the original function define them? 
Did you supply them in the call?\").format(missing_args))\n\n raise TypeError(''.join(msg_parts))\n\n raise contract.error(**error_kwargs)\n\n else:\n msg = icontract._represent.generate_message(contract=contract, condition_kwargs=condition_kwargs)\n if contract.error is None:\n raise ViolationError(msg)\n elif isinstance(contract.error, type):\n raise contract.error(msg)", "response": "Assert that the contract holds as a precondition."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _assert_invariant(contract: Contract, instance: Any) -> None:\n if 'self' in contract.condition_arg_set:\n check = contract.condition(self=instance)\n else:\n check = contract.condition()\n\n if not check:\n if contract.error is not None and (inspect.ismethod(contract.error) or inspect.isfunction(contract.error)):\n assert contract.error_arg_set is not None, \"Expected error_arg_set non-None if contract.error a function.\"\n assert contract.error_args is not None, \"Expected error_args non-None if contract.error a function.\"\n\n if 'self' in contract.error_arg_set:\n raise contract.error(self=instance)\n else:\n raise contract.error()\n else:\n if 'self' in contract.condition_arg_set:\n msg = icontract._represent.generate_message(contract=contract, condition_kwargs={\"self\": instance})\n else:\n msg = icontract._represent.generate_message(contract=contract, condition_kwargs=dict())\n\n if contract.error is None:\n raise ViolationError(msg)\n elif isinstance(contract.error, type):\n raise contract.error(msg)\n else:\n raise NotImplementedError(\"Unhandled contract.error: {}\".format(contract.error))", "response": "Assert that the contract holds as a class invariant given the instance of the class."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncaptures the snapshot from the keyword arguments resolved before the function call.", "response": "def 
_capture_snapshot(a_snapshot: Snapshot, resolved_kwargs: Mapping[str, Any]) -> Any:\n \"\"\"\n Capture the snapshot from the keyword arguments resolved before the function call (including the default values).\n\n :param a_snapshot: snapshot to be captured\n :param resolved_kwargs: resolved keyword arguments (including the default values)\n :return: captured value\n \"\"\"\n if a_snapshot.arg is not None:\n if a_snapshot.arg not in resolved_kwargs:\n raise TypeError((\"The argument of the snapshot has not been set: {}. \"\n \"Does the original function define it? Did you supply it in the call?\").format(\n a_snapshot.arg))\n\n value = a_snapshot.capture(**{a_snapshot.arg: resolved_kwargs[a_snapshot.arg]})\n else:\n value = a_snapshot.capture()\n\n return value"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndecorates the function with a checker that verifies the preconditions and postconditions.", "response": "def decorate_with_checker(func: CallableT) -> CallableT:\n \"\"\"Decorate the function with a checker that verifies the preconditions and postconditions.\"\"\"\n assert not hasattr(func, \"__preconditions__\"), \\\n \"Expected func to have no list of preconditions (there should be only a single contract checker per function).\"\n\n assert not hasattr(func, \"__postconditions__\"), \\\n \"Expected func to have no list of postconditions (there should be only a single contract checker per function).\"\n\n assert not hasattr(func, \"__postcondition_snapshots__\"), \\\n \"Expected func to have no list of postcondition snapshots (there should be only a single contract checker \" \\\n \"per function).\"\n\n sign = inspect.signature(func)\n param_names = list(sign.parameters.keys())\n\n # Determine the default argument values.\n kwdefaults = dict() # type: Dict[str, Any]\n\n # Add to the defaults all the values that are needed by the contracts.\n for param in sign.parameters.values():\n if param.default != 
inspect.Parameter.empty:\n            kwdefaults[param.name] = param.default\n\n    def wrapper(*args, **kwargs):\n        \"\"\"Wrap func by checking the preconditions and postconditions.\"\"\"\n        preconditions = getattr(wrapper, \"__preconditions__\") # type: List[List[Contract]]\n        snapshots = getattr(wrapper, \"__postcondition_snapshots__\") # type: List[Snapshot]\n        postconditions = getattr(wrapper, \"__postconditions__\") # type: List[Contract]\n\n        resolved_kwargs = _kwargs_from_call(param_names=param_names, kwdefaults=kwdefaults, args=args, kwargs=kwargs)\n\n        if postconditions:\n            if 'result' in resolved_kwargs:\n                raise TypeError(\"Unexpected argument 'result' in a function decorated with postconditions.\")\n\n            if 'OLD' in resolved_kwargs:\n                raise TypeError(\"Unexpected argument 'OLD' in a function decorated with postconditions.\")\n\n        # Assert the preconditions in groups. This is necessary to implement \"require else\" logic when a class\n        # weakens the preconditions of its base class.\n        violation_err = None # type: Optional[ViolationError]\n        for group in preconditions:\n            violation_err = None\n            try:\n                for contract in group:\n                    _assert_precondition(contract=contract, resolved_kwargs=resolved_kwargs)\n                break\n            except ViolationError as err:\n                violation_err = err\n\n        if violation_err is not None:\n            raise violation_err # pylint: disable=raising-bad-type\n\n        # Capture the snapshots\n        if postconditions:\n            old_as_mapping = dict() # type: MutableMapping[str, Any]\n            for snap in snapshots:\n                # This assert is just a last defense.\n                # Conflicting snapshot names should have been caught before, either during the decoration or\n                # in the meta-class.\n                assert snap.name not in old_as_mapping, \"Snapshots with the conflicting name: {}\".format(snap.name)\n                old_as_mapping[snap.name] = _capture_snapshot(a_snapshot=snap, resolved_kwargs=resolved_kwargs)\n\n            resolved_kwargs['OLD'] = _Old(mapping=old_as_mapping)\n\n        # Execute the wrapped function\n        result = func(*args, **kwargs)\n\n        if postconditions:\n            
resolved_kwargs['result'] = result\n\n # Assert the postconditions as a conjunction\n for contract in postconditions:\n _assert_postcondition(contract=contract, resolved_kwargs=resolved_kwargs)\n\n return result # type: ignore\n\n # Copy __doc__ and other properties so that doctests can run\n functools.update_wrapper(wrapper=wrapper, wrapped=func)\n\n assert not hasattr(wrapper, \"__preconditions__\"), \"Expected no preconditions set on a pristine contract checker.\"\n assert not hasattr(wrapper, \"__postcondition_snapshots__\"), \\\n \"Expected no postcondition snapshots set on a pristine contract checker.\"\n assert not hasattr(wrapper, \"__postconditions__\"), \"Expected no postconditions set on a pristine contract checker.\"\n\n # Precondition is a list of condition groups (i.e. disjunctive normal form):\n # each group consists of AND'ed preconditions, while the groups are OR'ed.\n #\n # This is necessary in order to implement \"require else\" logic when a class weakens the preconditions of\n # its base class.\n setattr(wrapper, \"__preconditions__\", [])\n setattr(wrapper, \"__postcondition_snapshots__\", [])\n setattr(wrapper, \"__postconditions__\", [])\n\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _find_self(param_names: List[str], args: Tuple[Any, ...], kwargs: Dict[str, Any]) -> Any:\n instance_i = param_names.index(\"self\")\n if instance_i < len(args):\n instance = args[instance_i]\n else:\n instance = kwargs[\"self\"]\n\n return instance", "response": "Find the instance of self in the arguments."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndecorate the function func with invariant checks.", "response": "def _decorate_with_invariants(func: CallableT, is_init: bool) -> CallableT:\n \"\"\"\n Decorate the function ``func`` of the class ``cls`` with invariant checks.\n\n If the function has been already decorated with invariant checks, the function 
returns immediately.\n\n :param func: function to be wrapped\n :param is_init: True if the ``func`` is __init__\n :return: function wrapped with invariant checks\n \"\"\"\n if _already_decorated_with_invariants(func=func):\n return func\n\n sign = inspect.signature(func)\n param_names = list(sign.parameters.keys())\n\n if is_init:\n\n def wrapper(*args, **kwargs):\n \"\"\"Wrap __init__ method of a class by checking the invariants *after* the invocation.\"\"\"\n result = func(*args, **kwargs)\n instance = _find_self(param_names=param_names, args=args, kwargs=kwargs)\n\n for contract in instance.__class__.__invariants__:\n _assert_invariant(contract=contract, instance=instance)\n\n return result\n else:\n\n def wrapper(*args, **kwargs):\n \"\"\"Wrap a function of a class by checking the invariants *before* and *after* the invocation.\"\"\"\n instance = _find_self(param_names=param_names, args=args, kwargs=kwargs)\n\n for contract in instance.__class__.__invariants__:\n _assert_invariant(contract=contract, instance=instance)\n\n result = func(*args, **kwargs)\n\n for contract in instance.__class__.__invariants__:\n _assert_invariant(contract=contract, instance=instance)\n\n return result\n\n functools.update_wrapper(wrapper=wrapper, wrapped=func)\n\n setattr(wrapper, \"__is_invariant_check__\", True)\n\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _already_decorated_with_invariants(func: CallableT) -> bool:\n already_decorated = False\n for a_decorator in _walk_decorator_stack(func=func):\n if getattr(a_decorator, \"__is_invariant_check__\", False):\n already_decorated = True\n break\n\n return already_decorated", "response": "Check if the function has already been decorated with an invariant check by going through its decorator stack."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndecorate each of the class methods with invariant checks if not already 
decorated.", "response": "def add_invariant_checks(cls: type) -> None:\n \"\"\"Decorate each of the class functions with invariant checks if not already decorated.\"\"\"\n # Candidates for the decoration as list of (name, dir() value)\n init_name_func = None # type: Optional[Tuple[str, Callable[..., None]]]\n names_funcs = [] # type: List[Tuple[str, Callable[..., None]]]\n names_properties = [] # type: List[Tuple[str, property]]\n\n # Filter out entries in the directory which are certainly not candidates for decoration.\n for name, value in [(name, getattr(cls, name)) for name in dir(cls)]:\n # We need to ignore __repr__ to prevent endless loops when generating error messages.\n # __getattribute__, __setattr__ and __delattr__ are too invasive and alter the state of the instance.\n # Hence we don't consider them \"public\".\n if name in [\"__repr__\", \"__getattribute__\", \"__setattr__\", \"__delattr__\"]:\n continue\n\n if name == \"__init__\":\n assert inspect.isfunction(value) or isinstance(value, _SLOT_WRAPPER_TYPE), \\\n \"Expected __init__ to be either a function or a slot wrapper, but got: {}\".format(type(value))\n\n init_name_func = (name, value)\n continue\n\n if not inspect.isfunction(value) and not isinstance(value, _SLOT_WRAPPER_TYPE) and \\\n not isinstance(value, property):\n continue\n\n # Ignore class methods\n if getattr(value, \"__self__\", None) is cls:\n continue\n\n # Ignore \"protected\"/\"private\" methods\n if name.startswith(\"_\") and not (name.startswith(\"__\") and name.endswith(\"__\")):\n continue\n\n if inspect.isfunction(value) or isinstance(value, _SLOT_WRAPPER_TYPE):\n names_funcs.append((name, value))\n\n elif isinstance(value, property):\n names_properties.append((name, value))\n\n else:\n raise NotImplementedError(\"Unhandled directory entry of class {} for {}: {}\".format(cls, name, value))\n\n if init_name_func:\n name, func = init_name_func\n wrapper = _decorate_with_invariants(func=func, is_init=True)\n setattr(cls, name, 
wrapper)\n\n    for name, func in names_funcs:\n        wrapper = _decorate_with_invariants(func=func, is_init=False)\n        setattr(cls, name, wrapper)\n\n    for name, prop in names_properties:\n        new_prop = property( # type: ignore\n            fget=_decorate_with_invariants(func=prop.fget, is_init=False) if prop.fget else None,\n            fset=_decorate_with_invariants(func=prop.fset, is_init=False) if prop.fset else None,\n            fdel=_decorate_with_invariants(func=prop.fdel, is_init=False) if prop.fdel else None,\n            doc=prop.__doc__)\n        setattr(cls, name, new_prop)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nexecute the main routine.", "response": "def main() -> None:\n    \"\"\"Execute the main routine.\"\"\"\n    parser = argparse.ArgumentParser(description=__doc__)\n    parser.add_argument(\"--module\", help=\"name of the module to import\",\n                        choices=[\n                            \"functions_100_with_no_contract\",\n\n                            \"functions_100_with_1_contract\",\n                            \"functions_100_with_5_contracts\",\n                            \"functions_100_with_10_contracts\",\n\n                            \"functions_100_with_1_disabled_contract\",\n                            \"functions_100_with_5_disabled_contracts\",\n                            \"functions_100_with_10_disabled_contracts\",\n\n                            \"classes_100_with_no_invariant\",\n\n                            \"classes_100_with_1_invariant\",\n                            \"classes_100_with_5_invariants\",\n                            \"classes_100_with_10_invariants\",\n\n                            \"classes_100_with_1_disabled_invariant\",\n                            \"classes_100_with_5_disabled_invariants\",\n                            \"classes_100_with_10_disabled_invariants\",\n                        ],\n                        required=True)\n    args = parser.parse_args()\n\n\n    a_module = str(args.module)\n    if a_module == \"functions_100_with_no_contract\":\n        start = time.time()\n        import functions_100_with_no_contract\n        print(time.time() - start)\n\n    elif a_module == \"functions_100_with_1_contract\":\n        start = time.time()\n        import functions_100_with_1_contract\n        print(time.time() - start)\n\n    elif a_module == \"functions_100_with_5_contracts\":\n        start = time.time()\n        import functions_100_with_5_contracts\n        print(time.time() - start)\n\n    elif a_module 
== \"functions_100_with_10_contracts\":\n start = time.time()\n import functions_100_with_10_contracts\n print(time.time() - start)\n\n elif a_module == \"functions_100_with_1_disabled_contract\":\n start = time.time()\n import functions_100_with_1_disabled_contract\n print(time.time() - start)\n\n elif a_module == \"functions_100_with_5_disabled_contracts\":\n start = time.time()\n import functions_100_with_5_disabled_contracts\n print(time.time() - start)\n\n elif a_module == \"functions_100_with_10_disabled_contracts\":\n start = time.time()\n import functions_100_with_10_disabled_contracts\n print(time.time() - start)\n\n elif a_module == \"classes_100_with_no_invariant\":\n start = time.time()\n import classes_100_with_no_invariant\n print(time.time() - start)\n\n elif a_module == \"classes_100_with_1_invariant\":\n start = time.time()\n import classes_100_with_1_invariant\n print(time.time() - start)\n\n elif a_module == \"classes_100_with_5_invariants\":\n start = time.time()\n import classes_100_with_5_invariants\n print(time.time() - start)\n\n elif a_module == \"classes_100_with_10_invariants\":\n start = time.time()\n import classes_100_with_10_invariants\n print(time.time() - start)\n\n elif a_module == \"classes_100_with_1_disabled_invariant\":\n start = time.time()\n import classes_100_with_1_disabled_invariant\n print(time.time() - start)\n\n elif a_module == \"classes_100_with_5_disabled_invariants\":\n start = time.time()\n import classes_100_with_5_disabled_invariants\n print(time.time() - start)\n\n elif a_module == \"classes_100_with_10_disabled_invariants\":\n start = time.time()\n import classes_100_with_10_disabled_invariants\n print(time.time() - start)\n\n else:\n raise NotImplementedError(\"Unhandled module: {}\".format(a_module))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef main() -> None:\n modules = [\n \"functions_100_with_no_contract\",\n\n 
\"functions_100_with_1_contract\",\n        \"functions_100_with_5_contracts\",\n        \"functions_100_with_10_contracts\",\n\n        \"functions_100_with_1_disabled_contract\",\n        \"functions_100_with_5_disabled_contracts\",\n        \"functions_100_with_10_disabled_contracts\",\n\n        \"classes_100_with_no_invariant\",\n\n        \"classes_100_with_1_invariant\",\n        \"classes_100_with_5_invariants\",\n        \"classes_100_with_10_invariants\",\n\n        \"classes_100_with_1_disabled_invariant\",\n        \"classes_100_with_5_disabled_invariants\",\n        \"classes_100_with_10_disabled_invariants\",\n    ]\n\n    for a_module in modules:\n        durations = [] # type: List[float]\n        for i in range(0, 10):\n            duration = float(\n                subprocess.check_output([\"./measure.py\", \"--module\", a_module], cwd=os.path.dirname(__file__)).strip())\n            durations.append(duration)\n\n        print(\"Duration to import the module {} (in milliseconds): {:.2f} \u00b1 {:.2f}\".format(\n            a_module, statistics.mean(durations) * 1e3, statistics.stdev(durations) * 1e3))", "response": "Execute the main routine."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nexecute the main routine.", "response": "def main() -> None:\n    \"\"\"Execute the main routine.\"\"\"\n    parser = argparse.ArgumentParser(description=__doc__)\n    parser.add_argument(\"--outdir\", help=\"output directory\", default=os.path.dirname(__file__))\n    args = parser.parse_args()\n\n    outdir = pathlib.Path(args.outdir)\n    if not outdir.exists():\n        raise FileNotFoundError(\"Output directory is missing: {}\".format(outdir))\n\n    for contracts in [0, 1, 5, 10]:\n        if contracts == 0:\n            pth = outdir / \"functions_100_with_no_contract.py\"\n        elif contracts == 1:\n            pth = outdir / \"functions_100_with_1_contract.py\"\n        else:\n            pth = outdir / \"functions_100_with_{}_contracts.py\".format(contracts)\n\n        text = generate_functions(functions=100, contracts=contracts, disabled=False)\n        pth.write_text(text)\n\n    for contracts in [1, 5, 10]:\n        if contracts == 1:\n            pth = outdir / 
\"functions_100_with_1_disabled_contract.py\"\n else:\n pth = outdir / \"functions_100_with_{}_disabled_contracts.py\".format(contracts)\n\n text = generate_functions(functions=100, contracts=contracts, disabled=True)\n pth.write_text(text)\n\n for invariants in [0, 1, 5, 10]:\n if invariants == 0:\n pth = outdir / \"classes_100_with_no_invariant.py\"\n elif invariants == 1:\n pth = outdir / \"classes_100_with_1_invariant.py\"\n else:\n pth = outdir / \"classes_100_with_{}_invariants.py\".format(invariants)\n\n text = generate_classes(classes=100, invariants=invariants, disabled=False)\n pth.write_text(text)\n\n for invariants in [1, 5, 10]:\n if invariants == 1:\n pth = outdir / \"classes_100_with_1_disabled_invariant.py\"\n else:\n pth = outdir / \"classes_100_with_{}_disabled_invariants.py\".format(invariants)\n\n text = generate_classes(classes=100, invariants=invariants, disabled=True)\n pth.write_text(text)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef visit_Bytes(self, node: ast.Bytes) -> bytes:\n result = node.s\n\n self.recomputed_values[node] = result\n return node.s", "response": "Recompute the value as the bytes at the node."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nvisit the elements and assemble the results into a list.", "response": "def visit_List(self, node: ast.List) -> List[Any]:\n \"\"\"Visit the elements and assemble the results into a list.\"\"\"\n if isinstance(node.ctx, ast.Store):\n raise NotImplementedError(\"Can not compute the value of a Store on a list\")\n\n result = [self.visit(node=elt) for elt in node.elts]\n\n self.recomputed_values[node] = result\n return result"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef visit_Tuple(self, node: ast.Tuple) -> Tuple[Any, ...]:\n if isinstance(node.ctx, ast.Store):\n raise NotImplementedError(\"Can not compute the value of 
a Store on a tuple\")\n\n result = tuple(self.visit(node=elt) for elt in node.elts)\n\n self.recomputed_values[node] = result\n return result", "response": "Visit the elements and assemble the results into a tuple."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef visit_Set(self, node: ast.Set) -> Set[Any]:\n result = set(self.visit(node=elt) for elt in node.elts)\n\n self.recomputed_values[node] = result\n return result", "response": "Visit the elements and assemble the results into a set."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nvisit keys and values and assemble a dictionary with the results.", "response": "def visit_Dict(self, node: ast.Dict) -> Dict[Any, Any]:\n \"\"\"Visit keys and values and assemble a dictionary with the results.\"\"\"\n recomputed_dict = dict() # type: Dict[Any, Any]\n for key, val in zip(node.keys, node.values):\n recomputed_dict[self.visit(node=key)] = self.visit(node=val)\n\n self.recomputed_values[node] = recomputed_dict\n return recomputed_dict"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef visit_NameConstant(self, node: ast.NameConstant) -> Any:\n self.recomputed_values[node] = node.value\n return node.value", "response": "Forward the node value as a result."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef visit_Name(self, node: ast.Name) -> Any:\n if not isinstance(node.ctx, ast.Load):\n raise NotImplementedError(\"Can only compute a value of Load on a name {}, but got context: {}\".format(\n node.id, node.ctx))\n\n result = None # type: Optional[Any]\n\n if node.id in self._name_to_value:\n result = self._name_to_value[node.id]\n\n if result is None and hasattr(builtins, node.id):\n result = getattr(builtins, node.id)\n\n if result is None and node.id != \"None\":\n # The variable refers to a name local 
of the lambda (e.g., a target in the generator expression).\n # Since we evaluate generator expressions with runtime compilation, None is returned here as a placeholder.\n return PLACEHOLDER\n\n self.recomputed_values[node] = result\n return result", "response": "Load the variable by looking it up in the variable look - up and in the built - ins."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef visit_Expr(self, node: ast.Expr) -> Any:\n result = self.visit(node=node.value)\n\n self.recomputed_values[node] = result\n return result", "response": "Visit the node s value and return the result."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef visit_UnaryOp(self, node: ast.UnaryOp) -> Any:\n if isinstance(node.op, ast.UAdd):\n result = +self.visit(node=node.operand)\n elif isinstance(node.op, ast.USub):\n result = -self.visit(node=node.operand)\n elif isinstance(node.op, ast.Not):\n result = not self.visit(node=node.operand)\n elif isinstance(node.op, ast.Invert):\n result = ~self.visit(node=node.operand)\n else:\n raise NotImplementedError(\"Unhandled op of {}: {}\".format(node, node.op))\n\n self.recomputed_values[node] = result\n return result", "response": "Visit the node operand and apply the operation on the result."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef visit_Compare(self, node: ast.Compare) -> Any:\n # pylint: disable=too-many-branches\n left = self.visit(node=node.left)\n\n comparators = [self.visit(node=comparator) for comparator in node.comparators]\n\n result = None # type: Optional[Any]\n for comparator, op in zip(comparators, node.ops):\n if isinstance(op, ast.Eq):\n comparison = left == comparator\n elif isinstance(op, ast.NotEq):\n comparison = left != comparator\n elif isinstance(op, ast.Lt):\n comparison = left < comparator\n elif isinstance(op, ast.LtE):\n comparison = left <= 
comparator\n elif isinstance(op, ast.Gt):\n comparison = left > comparator\n elif isinstance(op, ast.GtE):\n comparison = left >= comparator\n elif isinstance(op, ast.Is):\n comparison = left is comparator\n elif isinstance(op, ast.IsNot):\n comparison = left is not comparator\n elif isinstance(op, ast.In):\n comparison = left in comparator\n elif isinstance(op, ast.NotIn):\n comparison = left not in comparator\n else:\n raise NotImplementedError(\"Unhandled op of {}: {}\".format(node, op))\n\n if result is None:\n result = comparison\n else:\n result = result and comparison\n\n left = comparator\n\n self.recomputed_values[node] = result\n return result", "response": "Recursively visit the comparators and apply the operations on them."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nvisit the function and the arguments and then make the function call with them.", "response": "def visit_Call(self, node: ast.Call) -> Any:\n \"\"\"Visit the function and the arguments and finally make the function call with them.\"\"\"\n func = self.visit(node=node.func)\n\n args = [] # type: List[Any]\n for arg_node in node.args:\n if isinstance(arg_node, ast.Starred):\n args.extend(self.visit(node=arg_node))\n else:\n args.append(self.visit(node=arg_node))\n\n kwargs = dict() # type: Dict[str, Any]\n for keyword in node.keywords:\n if keyword.arg is None:\n kw = self.visit(node=keyword.value)\n for key, val in kw.items():\n kwargs[key] = val\n\n else:\n kwargs[keyword.arg] = self.visit(node=keyword.value)\n\n result = func(*args, **kwargs)\n\n self.recomputed_values[node] = result\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nvisits the test and depending on its outcome the body or orelse.", "response": "def visit_IfExp(self, node: ast.IfExp) -> Any:\n \"\"\"Visit the ``test``, and depending on its outcome, the ``body`` or ``orelse``.\"\"\"\n test = self.visit(node=node.test)\n\n if test:\n 
result = self.visit(node=node.body)\n else:\n result = self.visit(node=node.orelse)\n\n self.recomputed_values[node] = result\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef visit_Attribute(self, node: ast.Attribute) -> Any:\n value = self.visit(node=node.value)\n if not isinstance(node.ctx, ast.Load):\n raise NotImplementedError(\n \"Can only compute a value of Load on the attribute {}, but got context: {}\".format(node.attr, node.ctx))\n\n result = getattr(value, node.attr)\n\n self.recomputed_values[node] = result\n return result", "response": "Visit the node's value and get the attribute from the result."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef visit_Index(self, node: ast.Index) -> Any:\n result = self.visit(node=node.value)\n\n self.recomputed_values[node] = result\n return result", "response": "Visit the node's value and store the result in the recomputed_values dict."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nvisiting lower, upper and step and recomputing the node as a slice.", "response": "def visit_Slice(self, node: ast.Slice) -> slice:\n \"\"\"Visit ``lower``, ``upper`` and ``step`` and recompute the node as a ``slice``.\"\"\"\n lower = None # type: Optional[int]\n if node.lower is not None:\n lower = self.visit(node=node.lower)\n\n upper = None # type: Optional[int]\n if node.upper is not None:\n upper = self.visit(node=node.upper)\n\n step = None # type: Optional[int]\n if node.step is not None:\n step = self.visit(node=node.step)\n\n result = slice(lower, upper, step)\n\n self.recomputed_values[node] = result\n return result"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nvisiting each dimension of the advanced slicing and assembling the dimensions in a tuple.", "response": "def visit_ExtSlice(self, node: ast.ExtSlice) -> Tuple[Any, ...]:\n 
\"\"\"Visit each dimension of the advanced slicing and assemble the dimensions in a tuple.\"\"\"\n result = tuple(self.visit(node=dim) for dim in node.dims)\n\n self.recomputed_values[node] = result\n return result"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef visit_Subscript(self, node: ast.Subscript) -> Any:\n value = self.visit(node=node.value)\n a_slice = self.visit(node=node.slice)\n\n result = value[a_slice]\n\n self.recomputed_values[node] = result\n return result", "response": "Visit the slice and a value and get the element."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncompile the generator or comprehension from the node and execute the compiled code.", "response": "def _execute_comprehension(self, node: Union[ast.ListComp, ast.SetComp, ast.GeneratorExp, ast.DictComp]) -> Any:\n \"\"\"Compile the generator or comprehension from the node and execute the compiled code.\"\"\"\n args = [ast.arg(arg=name) for name in sorted(self._name_to_value.keys())]\n\n func_def_node = ast.FunctionDef(\n name=\"generator_expr\",\n args=ast.arguments(args=args, kwonlyargs=[], kw_defaults=[], defaults=[]),\n decorator_list=[],\n body=[ast.Return(node)])\n\n module_node = ast.Module(body=[func_def_node])\n\n ast.fix_missing_locations(module_node)\n\n code = compile(source=module_node, filename='', mode='exec')\n\n module_locals = {} # type: Dict[str, Any]\n module_globals = {} # type: Dict[str, Any]\n exec(code, module_globals, module_locals) # pylint: disable=exec-used\n\n generator_expr_func = module_locals[\"generator_expr\"]\n\n return generator_expr_func(**self._name_to_value)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncompiles the generator expression as a function and call it.", "response": "def visit_GeneratorExp(self, node: ast.GeneratorExp) -> Any:\n \"\"\"Compile the generator expression as a function and call it.\"\"\"\n 
result = self._execute_comprehension(node=node)\n\n for generator in node.generators:\n self.visit(generator.iter)\n\n # Do not set the computed value of the node since its representation would be non-informative.\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef visit_ListComp(self, node: ast.ListComp) -> Any:\n result = self._execute_comprehension(node=node)\n\n for generator in node.generators:\n self.visit(generator.iter)\n\n self.recomputed_values[node] = result\n return result", "response": "Compile the list comprehension as a function and call it."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef visit_SetComp(self, node: ast.SetComp) -> Any:\n result = self._execute_comprehension(node=node)\n\n for generator in node.generators:\n self.visit(generator.iter)\n\n self.recomputed_values[node] = result\n return result", "response": "Compile the set comprehension as a function and call it."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef visit_DictComp(self, node: ast.DictComp) -> Any:\n result = self._execute_comprehension(node=node)\n\n for generator in node.generators:\n self.visit(generator.iter)\n\n self.recomputed_values[node] = result\n return result", "response": "Compile the dictionary comprehension as a function and call it."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef visit_Return(self, node: ast.Return) -> Any: # pylint: disable=no-self-use\n raise AssertionError(\"Unexpected return node during the re-computation: {}\".format(ast.dump(node)))", "response": "Raise an exception that this node is unexpected."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nraising an exception that this node has not been handled.", "response": "def generic_visit(self, node: ast.AST) -> 
None:\n \"\"\"Raise an exception that this node has not been handled.\"\"\"\n raise NotImplementedError(\"Unhandled recomputation of the node: {} {}\".format(type(node), node))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef tokenize_words(string):\n string = six.text_type(string)\n return re.findall(WORD_TOKENIZATION_RULES, string)", "response": "Tokenize input text to words."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef align(self):\n '''\n Every step we have 3 choices:\n 1) Move pointer witness a --> omission\n 2) Move pointer witness b --> addition\n 3) Move pointer of both witness a/b --> match\n Note: a replacement is omission followed by an addition or the other way around\n\n Choice 1 and 2 are only possible if token a and b are not a match OR when tokens are repeated.\n For now I ignore token repetition..\n '''\n # extract tokens from witness (note that this can be done in a streaming manner if desired)\n tokens_a = self.witness_a.tokens()\n tokens_b = self.witness_b.tokens()\n # create virtual decision tree (nodes are created on demand)\n # see above\n # create start node\n start = DecisionTreeNode(self)\n\n # search the decision tree\n result = self.tree.search(start)\n print(result)\n\n pass", "response": "Aligns the decision tree with the witnesses"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconnect two sets of nodes near to the target set.", "response": "def connect_near(self, source, target, weight):\n # Near edges are added to self.near_graph, not self.graph, to avoid cycles\n \"\"\"\n :type source: integer\n :type target: integer\n \"\"\"\n self.near_graph.add_edge(source, target, weight = weight, type='near')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nmerging the given graph with the given witness_sigil and returns a dictionary of all the vertices that are connected 
to the witness_sigil.", "response": "def merge(self, graph, witness_sigil, witness_tokens, alignments={}):\n \"\"\"\n :type graph: VariantGraph\n \"\"\"\n # NOTE: token_to_vertex only contains newly generated vertices\n token_to_vertex = {}\n last = graph.start\n for token in witness_tokens:\n vertex = alignments.get(token, None)\n if not vertex:\n vertex = graph.add_vertex(token, witness_sigil)\n token_to_vertex[token] = vertex\n else:\n vertex.add_token(witness_sigil, token)\n # graph.add_token_to_vertex(vertex, token, witness_sigil)\n graph.connect(last, vertex, witness_sigil)\n last = vertex\n graph.connect(last, graph.end, witness_sigil)\n return token_to_vertex"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncollate the given VariantGraph into a single variant graph for each witness in the sequence", "response": "def collate(self, graph):\n \"\"\"\n :type graph: VariantGraph\n \"\"\"\n # prepare the token index\n self.token_index.prepare()\n self.vertex_array = [None] * len(self.token_index.token_array)\n\n # Build the variant graph for the first witness\n # this is easy: generate a vertex for every token\n first_witness = self.collation.witnesses[0]\n tokens = first_witness.tokens()\n token_to_vertex = self.merge(graph, first_witness.sigil, tokens)\n # print(\"> token_to_vertex=\", token_to_vertex)\n self.update_token_position_to_vertex(token_to_vertex)\n self.update_token_to_vertex_array(tokens, first_witness, self.token_position_to_vertex)\n\n # align witness 2 - n\n for x in range(1, len(self.collation.witnesses)):\n witness = self.collation.witnesses[x]\n tokens = witness.tokens()\n # print(\"\\nwitness\", witness.sigil)\n\n variant_graph_ranking = VariantGraphRanking.of(graph)\n # print(\"> x =\", x, \", variant_graph_ranking =\", variant_graph_ranking.byRank)\n variant_graph_ranks = list(set(map(lambda v: variant_graph_ranking.byVertex.get(v), graph.vertices())))\n # we leave in the rank of the start vertex, 
but remove the rank of the end vertex\n variant_graph_ranks.pop()\n\n # now the vertical stuff\n tokens_as_index_list = self.as_index_list(tokens)\n\n match_cube = MatchCube(self.token_index, witness, self.vertex_array, variant_graph_ranking,\n self.properties_filter)\n # print(\"> match_cube.matches=\", match_cube.matches)\n self.fill_needleman_wunsch_table(variant_graph_ranks, tokens_as_index_list, match_cube)\n\n aligned = self.align_matching_tokens(match_cube)\n # print(\"> aligned=\", aligned)\n # print(\"self.token_index.token_array=\", self.token_index.token_array)\n # alignment = self.align_function(superbase, next_witness, token_to_vertex, match_cube)\n\n # merge\n witness_token_to_generated_vertex = self.merge(graph, witness.sigil, witness.tokens(), aligned)\n # print(\"> witness_token_to_generated_vertex =\", witness_token_to_generated_vertex)\n token_to_vertex.update(witness_token_to_generated_vertex)\n # print(\"> token_to_vertex =\", token_to_vertex)\n self.update_token_position_to_vertex(token_to_vertex, aligned)\n witness_token_position_to_vertex = {}\n for p in self.token_index.get_range_for_witness(witness.sigil):\n # print(\"> p= \", p)\n witness_token_position_to_vertex[p] = self.token_position_to_vertex[p]\n self.update_token_to_vertex_array(tokens, witness, witness_token_position_to_vertex)\n # print(\"> vertex_array =\", self.vertex_array)\n\n # print(\"actual\")\n # self._debug_edit_graph_table(self.table)\n # print(\"expected\")\n # self._debug_edit_graph_table(self.table2)\n\n # change superbase\n # superbase = self.new_superbase\n\n if self.detect_transpositions:\n detector = TranspositionDetection(self)\n detector.detect()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nparse a string of comma - separated x - y - step - like ranges and return a tuple of the node range info.", "response": "def _parse(self, pattern):\n \"\"\"Parse string of comma-separated x-y/step -like ranges\"\"\"\n # Comma separated 
ranges\n if pattern.find(',') < 0:\n subranges = [pattern]\n else:\n subranges = pattern.split(',')\n\n for subrange in subranges:\n if subrange.find('/') < 0:\n step = 1\n baserange = subrange\n else:\n baserange, step = subrange.split('/', 1)\n\n try:\n step = int(step)\n except ValueError:\n raise RangeSetParseError(subrange,\n \"cannot convert string to integer\")\n\n if baserange.find('-') < 0:\n if step != 1:\n raise RangeSetParseError(subrange,\n \"invalid step usage\")\n begin = end = baserange\n else:\n begin, end = baserange.split('-', 1)\n\n # compute padding and return node range info tuple\n try:\n pad = 0\n if int(begin) != 0:\n begins = begin.lstrip(\"0\")\n if len(begin) - len(begins) > 0:\n pad = len(begin)\n start = int(begins)\n else:\n if len(begin) > 1:\n pad = len(begin)\n start = 0\n if int(end) != 0:\n ends = end.lstrip(\"0\")\n else:\n ends = end\n stop = int(ends)\n except ValueError:\n raise RangeSetParseError(subrange,\n \"cannot convert string to integer\")\n\n # check preconditions\n if stop > 1e100 or start > stop or step < 1:\n raise RangeSetParseError(subrange,\n \"invalid values in range\")\n\n self.add_range(start, stop + 1, step, pad)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nclass method that returns a new RangeSet with ranges from provided list.", "response": "def fromlist(cls, rnglist, autostep=None):\n \"\"\"Class method that returns a new RangeSet with ranges from provided\n list.\"\"\"\n inst = RangeSet(autostep=autostep)\n inst.updaten(rnglist)\n return inst"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef fromone(cls, index, pad=0, autostep=None):\n inst = RangeSet(autostep=autostep)\n # support slice object with duck-typing\n try:\n inst.add(index, pad)\n except TypeError:\n if not index.stop:\n raise ValueError(\"Invalid range upper limit (%s)\" % index.stop)\n inst.add_range(index.start or 0, index.stop, 
index.step or 1, pad)\n return inst", "response": "Class method that returns a new RangeSet of one single item or\n a single range ( from integer or slice object."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset autostep value ( property", "response": "def set_autostep(self, val):\n \"\"\"Set autostep value (property)\"\"\"\n if val is None:\n # disabled by default for pdsh compat (+inf is 1E400, but a bug in\n # python 2.4 makes it impossible to be pickled, so we use less)\n # NOTE: Later, we could consider sys.maxint here\n self._autostep = 1E100\n else:\n # - 1 because user means node count, but we means real steps\n self._autostep = int(val) - 1"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\niterating over each string element in RangeSet.", "response": "def striter(self):\n \"\"\"Iterate over each (optionally padded) string element in RangeSet.\"\"\"\n pad = self.padding or 0\n for i in self._sorted():\n yield \"%0*d\" % (pad, i)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nobject - based iterator over contiguous range sets.", "response": "def contiguous(self):\n \"\"\"Object-based iterator over contiguous range sets.\"\"\"\n pad = self.padding or 0\n for sli in self._contiguous_slices():\n yield RangeSet.fromone(slice(sli.start, sli.stop, sli.step), pad)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _strslices(self):\n pad = self.padding or 0\n for sli in self.slices():\n if sli.start + 1 == sli.stop:\n yield \"%0*d\" % (pad, sli.start)\n else:\n assert sli.step >= 0, \"Internal error: sli.step < 0\"\n if sli.step == 1:\n yield \"%0*d-%0*d\" % (pad, sli.start, pad, sli.stop - 1)\n else:\n yield \"%0*d-%0*d/%d\" % (pad, sli.start, pad, sli.stop - 1, \\\n sli.step)", "response": "Stringify the slices list."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
_contiguous_slices(self):\n k = j = None\n for i in self._sorted():\n if k is None:\n k = j = i\n if i - j > 1:\n yield slice(k, j + 1, 1)\n k = i\n j = i\n if k is not None:\n yield slice(k, j + 1, 1)", "response": "Internal iterator over contiguous slices in RangeSet."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _folded_slices(self):\n if len(self) == 0:\n return\n\n prng = None # pending range\n istart = None # processing starting indice\n m = 0 # processing step\n for sli in self._contiguous_slices():\n start = sli.start\n stop = sli.stop\n unitary = (start + 1 == stop) # one indice?\n if istart is None: # first loop\n if unitary:\n istart = start\n else:\n prng = [start, stop, 1]\n istart = stop - 1\n i = k = istart\n elif m == 0: # istart is set but step is unknown\n if not unitary:\n if prng is not None:\n # yield and replace pending range\n yield slice(*prng)\n else:\n yield slice(istart, istart + 1, 1)\n prng = [start, stop, 1]\n istart = k = stop - 1\n continue\n i = start\n else: # step m > 0\n assert m > 0\n i = start\n # does current range lead to broken step?\n if m != i - k or not unitary:\n #j = i if m == i - k else k\n if m == i - k: j = i\n else: j = k\n # stepped is True when autostep setting does apply\n stepped = (j - istart >= self._autostep * m)\n if prng: # yield pending range?\n if stepped:\n prng[1] -= 1\n else:\n istart += m\n yield slice(*prng)\n prng = None\n if m != i - k:\n # case: step value has changed\n if stepped:\n yield slice(istart, k + 1, m)\n else:\n for j in range(istart, k - m + 1, m):\n yield slice(j, j + 1, 1)\n if not unitary:\n yield slice(k, k + 1, 1)\n if unitary:\n if stepped:\n istart = i = k = start\n else:\n istart = k\n else:\n prng = [start, stop, 1]\n istart = i = k = stop - 1\n elif not unitary:\n # case: broken step by contiguous range\n if stepped:\n # yield 'range/m' by taking first indice of new range\n yield slice(istart, i + 1, m)\n i 
+= 1\n else:\n # autostep setting does not apply in that case\n for j in range(istart, i - m + 1, m):\n yield slice(j, j + 1, 1)\n if stop > i + 1: # current->pending only if not unitary\n prng = [i, stop, 1]\n istart = i = k = stop - 1\n m = i - k # compute step\n k = i\n # exited loop, process pending range or indice...\n if m == 0:\n if prng:\n yield slice(*prng)\n else:\n yield slice(istart, istart + 1, 1)\n else:\n assert m > 0\n stepped = (k - istart >= self._autostep * m)\n if prng:\n if stepped:\n prng[1] -= 1\n else:\n istart += m\n yield slice(*prng)\n prng = None\n if stepped:\n yield slice(istart, i + 1, m)\n else:\n for j in range(istart, i + 1, m):\n yield slice(j, j + 1, 1)", "response": "Internal generator that is able to retrieve ranges organized by step."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsplitting the rangeset into nbr sub - nodesets.", "response": "def split(self, nbr):\n \"\"\"\n Split the rangeset into nbr sub-rangesets (at most). Each\n sub-rangeset will have the same number of elements more or\n less 1. Current rangeset remains unmodified. 
Returns an\n iterator.\n\n >>> RangeSet(\"1-5\").split(3) \n RangeSet(\"1-2\")\n RangeSet(\"3-4\")\n RangeSet(\"5\")\n \"\"\"\n assert(nbr > 0)\n\n # We put the same number of elements in each sub-rangeset.\n slice_size = len(self) / nbr\n left = len(self) % nbr\n\n begin = 0\n for i in range(0, min(nbr, len(self))):\n length = slice_size + int(i < left)\n yield self[begin:begin + length]\n begin += length"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd a range to the RangeSet.", "response": "def add_range(self, start, stop, step=1, pad=0):\n \"\"\"\n Add a range (start, stop, step and padding length) to RangeSet.\n Like the Python built-in function range(), the last element is\n the largest start + i * step less than stop.\n \"\"\"\n assert start < stop, \"please provide ordered node index ranges\"\n assert step > 0\n assert pad >= 0\n assert stop - start < 1e9, \"range too large\"\n\n if pad > 0 and self.padding is None:\n self.padding = pad\n set.update(self, range(start, stop, step))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef copy(self):\n cpy = self.__class__()\n cpy._autostep = self._autostep\n cpy.padding = self.padding\n cpy.update(self)\n return cpy", "response": "Return a shallow copy of a RangeSet."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwraps built-in set operations for RangeSet to work around built-in set base class issues.", "response": "def _wrap_set_op(self, fun, arg):\n \"\"\"Wrap built-in set operations for RangeSet to work around built-in set\n base class issues (RangeSet.__new/init__ not called)\"\"\"\n result = fun(self, arg)\n result._autostep = self._autostep\n result.padding = self.padding\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the intersection of two sets as a new set instance.", "response": "def 
intersection(self, other):\n \"\"\"Return the intersection of two RangeSets as a new RangeSet.\n\n (I.e. all elements that are in both sets.)\n \"\"\"\n #NOTE: This is a workaround.\n # Python 3 returns a new set instance as the result of set.intersection.\n # Python 2, however, returns a ClusterShell.RangeSet.RangeSet instance.\n # ORIGINAL CODE: return self._wrap_set_op(set.intersection, other)\n copy = self.copy()\n copy.intersection_update(other)\n return copy"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef difference(self, other):\n #NOTE: This is a workaround.\n # Python 3 returns a new set instance as the result of set.difference.\n # Python 2, however, returns a ClusterShell.RangeSet.RangeSet instance.\n # ORIGINAL CODE: return self._wrap_set_op(set.difference, other)\n copy = self.copy()\n copy.difference_update(other)\n return copy", "response": "Return the difference of two sets as a new set."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef issubset(self, other):\n self._binary_sanity_check(other)\n return set.issubset(self, other)", "response": "Report whether another set contains this RangeSet."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreport whether this RangeSet contains another set.", "response": "def issuperset(self, other):\n \"\"\"Report whether this RangeSet contains another set.\"\"\"\n self._binary_sanity_check(other)\n return set.issuperset(self, other)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nupdate this RangeSet with the difference of the elements of another set.", "response": "def difference_update(self, other, strict=False):\n \"\"\"Remove all elements of another set from this RangeSet.\n \n If strict is True, raise KeyError if an element cannot be removed.\n (strict is a RangeSet addition)\"\"\"\n if 
strict and other not in self:\n raise KeyError(other.difference(self)[0])\n set.difference_update(self, other)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef update(self, iterable):\n if isinstance(iterable, RangeSet):\n # keep padding unless is has not been defined yet\n if self.padding is None and iterable.padding is not None:\n self.padding = iterable.padding\n assert type(iterable) is not str\n set.update(self, iterable)", "response": "Add all integers from an iterable."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating a set of all the items in the set with the union of itself and several others.", "response": "def updaten(self, rangesets):\n \"\"\"\n Update a rangeset with the union of itself and several others.\n \"\"\"\n for rng in rangesets:\n if isinstance(rng, set):\n self.update(rng)\n else:\n self.update(RangeSet(rng))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nadd an element to a RangeSet.", "response": "def add(self, element, pad=0):\n \"\"\"Add an element to a RangeSet.\n This has no effect if the element is already present.\n \"\"\"\n set.add(self, int(element))\n if pad > 0 and self.padding is None:\n self.padding = pad"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nremove element from the RangeSet.", "response": "def discard(self, element):\n \"\"\"Remove element from the RangeSet if it is a member.\n\n If the element is not a member, do nothing.\n \"\"\"\n try:\n i = int(element)\n set.discard(self, i)\n except ValueError:\n pass"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _open(filename, mode=\"r\"):\n if filename.endswith(\".gz\"):\n return GzipFile(filename, mode, COMPRESSION_LEVEL)\n elif filename == \"-\":\n if mode == \"r\":\n return _stdin\n elif mode == \"w\":\n return _stdout\n else:\n # TODO: set encoding 
to UTF-8?\n return open(filename, mode=mode)", "response": "Open a file with the specified mode."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nstable sort of the sequence a according to the keys given in r. >>> a=range(5) >>> b=[0]*5 >>> r=[2,1,3,0,4] >>> _radixPass(a, b, r, 5, 5) >>> b [3, 1, 0, 2, 4] When n is less than the length of a, the end of b must be left unaltered. >>> b=[5]*5 >>> _radixPass(a, b, r, 2, 2) >>> b [1, 0, 5, 5, 5] >>> _a=a=[1, 0] >>> b= [0]*2 >>> r=[0, 1] >>> _radixPass(a, b, r, 2, 2) >>> a=_a >>> b [0, 1] >>> a=[1, 1] >>> _radixPass(a, b, r, 2, 2) >>> b [1, 1] >>> a=[0, 1, 1, 0] >>> b= [0]*4 >>> r=[0, 1] >>> _radixPass(a, b, r, 4, 2) >>> a=_a >>> b [0, 0, 1, 1]", "response": "def _radixPass(a, b, r, n, K):\n \"\"\"\n Stable sort of the sequence a according to the keys given in r.\n\n >>> a=range(5)\n >>> b=[0]*5\n >>> r=[2,1,3,0,4]\n\n >>> _radixPass(a, b, r, 5, 5)\n >>> b\n [3, 1, 0, 2, 4]\n\n When n is less than the length of a, the end of b must be left unaltered.\n >>> b=[5]*5\n >>> _radixPass(a, b, r, 2, 2)\n >>> b\n [1, 0, 5, 5, 5]\n\n >>> _a=a=[1, 0]\n >>> b= [0]*2\n >>> r=[0, 1]\n >>> _radixPass(a, b, r, 2, 2)\n >>> a=_a\n >>> b\n [0, 1]\n\n >>> a=[1, 1]\n >>> _radixPass(a, b, r, 2, 2)\n >>> b\n [1, 1]\n\n >>> a=[0, 1, 1, 0]\n >>> b= [0]*4\n >>> r=[0, 1]\n >>> _radixPass(a, b, r, 4, 2)\n >>> a=_a\n >>> b\n [0, 0, 1, 1]\n \"\"\"\n c = _array(\"i\", [0] * (K + 1)) # counter array\n\n for i in range(n): # count occurrences\n c[r[a[i]]] += 1\n\n sum = 0\n\n for i in range(K + 1): # exclusive prefix sums\n t = c[i]\n c[i] = sum\n sum += t\n\n for a_i in a[:n]: # sort\n b[c[r[a_i]]] = a_i\n c[r[a_i]] += 1"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _nbOperations(n):\n if n < 2:\n return 0\n else:\n n0 = (n + 2) // 3\n n02 = n0 + n // 3\n return 3 * (n02) + n0 + _nbOperations(n02)", "response": "Returns the number of atomic operations in 
_radixPass."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the length of the longest common prefix of seq1 and seq2 starting at offset start1 and start2.", "response": "def _longestCommonPrefix(seq1, seq2, start1=0, start2=0):\n \"\"\"\n Returns the length of the longest common prefix of seq1\n starting at offset start1 and seq2 starting at offset start2.\n\n >>> _longestCommonPrefix(\"abcdef\", \"abcghj\")\n 3\n\n >>> _longestCommonPrefix(\"abcghj\", \"abcdef\")\n 3\n\n >>> _longestCommonPrefix(\"miss\", \"\")\n 0\n\n >>> _longestCommonPrefix(\"\", \"mr\")\n 0\n\n >>> _longestCommonPrefix(range(128), range(128))\n 128\n\n >>> _longestCommonPrefix(\"abcabcabc\", \"abcdefabcdef\", 0, 6)\n 3\n\n >>> _longestCommonPrefix(\"abcdefabcdef\", \"abcabcabc\", 6, 0)\n 3\n\n >>> _longestCommonPrefix(\"abc\", \"abcabc\", 1, 4)\n 2\n\n >>> _longestCommonPrefix(\"abcabc\", \"abc\", 4, 1)\n 2\n \"\"\"\n\n len1 = len(seq1) - start1\n len2 = len(seq2) - start2\n\n # We set seq2 as the shortest sequence\n if len1 < len2:\n seq1, seq2 = seq2, seq1\n start1, start2 = start2, start1\n len1, len2 = len2, len1\n\n # if seq2 is empty returns 0\n if len2 == 0:\n return 0\n\n i = 0\n pos2 = start2\n for i in range(min(len1, len2)):\n # print seq1, seq2, start1, start2\n if seq1[start1 + i] != seq2[start2 + i]:\n return i\n\n # we have reached the end of seq2 (need to increment i)\n return i + 1"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncompute the longest common prefix for every adjacent suffixes.", "response": "def LCP(SA):\n \"\"\"\n Compute the longest common prefix for every adjacent suffixes.\n The result is a list of same size as SA.\n Given two suffixes at positions i and i+1,\n their LCP is stored at position i+1.\n A zero is stored at position 0 of the output.\n\n >>> SA=SuffixArray(\"abba\", unit=UNIT_BYTE)\n >>> SA._LCP_values\n array('i', [0, 1, 0, 1])\n\n >>> SA=SuffixArray(\"\", 
unit=UNIT_BYTE)\n >>> SA._LCP_values\n array('i')\n\n >>> SA=SuffixArray(\"\", unit=UNIT_CHARACTER)\n >>> SA._LCP_values\n array('i')\n\n >>> SA=SuffixArray(\"\", unit=UNIT_WORD)\n >>> SA._LCP_values\n array('i')\n\n >>> SA=SuffixArray(\"abab\", unit=UNIT_BYTE)\n >>> SA._LCP_values\n array('i', [0, 2, 0, 1])\n \"\"\"\n string = SA.string\n length = SA.length\n lcps = _array(\"i\", [0] * length)\n SA = SA.SA\n\n if _trace:\n delta = max(length // 100, 1)\n for i, pos in enumerate(SA):\n if i % delta == 0:\n percent = float((i + 1) * 100) / length\n print >> _stderr, \"Compute_LCP %.2f%% (%i/%i)\\r\" % (percent, i + 1, length),\n lcps[i] = _longestCommonPrefix(string, string, SA[i - 1], pos)\n else:\n for i, pos in enumerate(SA):\n lcps[i] = _longestCommonPrefix(string, string, SA[i - 1], pos)\n\n if _trace:\n print >> _stderr, \"Compute_LCP %.2f%% (%i/%i)\\r\" % (100.0, length, length)\n\n if lcps: # Correct the case where string[0] == string[-1]\n lcps[0] = 0\n return lcps"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nparsing the command line arguments.", "response": "def parseArgv():\n \"\"\"\n Command line option parser.\n \"\"\"\n parser = OptionParser()\n parser.usage = r\"\"\" cat <TEXT> | %prog [--unit <UNIT>] [--output <SA_FILE>]\n\nCreate the suffix array of TEXT with the processing UNIT and optionally store it in SA_FILE for subsequent use.\nUNIT may be set to 'byte', 'character' (given an encoding with the --encoding option) or 'word', which is the default.\n\"\"\"\n\n parser.add_option(\"-i\", \"--input\",\n action=\"store\", type=\"string\", dest=\"input\",\n default=False,\n help=\"Path of the file containing the input text. When '-' is given, read the standard input (default). If the path ends with '.gz', reads the decompressed file.\")\n\n parser.add_option(\"-o\", \"--output\",\n action=\"store\", type=\"string\", dest=\"output\",\n default=False,\n help=\"Store the suffix array of the input to the file OUTPUT. 
When '-' is given, writes to the standard output. If the filename ends with '.gz', the suffix array will be stored compressed.\")\n\n parser.add_option(\"\", \"--load\",\n action=\"store\", type=\"string\", dest=\"SAFile\",\n default=False,\n help=\"Load a suffix array from SAFILE, this option and --input are mutually exclusive.\")\n\n parser.add_option(\"-u\", \"--unit\",\n action=\"store\", type=\"string\", dest=\"unit\",\n default=DEFAULT_UNIT_STR,\n help=\"Processing unit used for the creation of the suffix array.\" + \\\n 'Possible values are \"byte\", \"character\" and \"word\". Default is \"%s\".' % DEFAULT_UNIT_STR + \\\n \"This option is ignored when the suffix array is loaded from SAFILE.\" + \\\n 'For characters, the input is decoded according to the encoding set via the option --encoding.')\n\n parser.add_option(\"-e\", \"--encoding\",\n action=\"store\", type=\"string\", dest=\"encoding\",\n default=DEFAULT_ENCODING,\n help=\"Encoding of the input. This information is required only when processing characters. Default is '%s'.\" % DEFAULT_ENCODING)\n\n parser.add_option(\"-p\", \"--print\",\n action=\"store_true\", dest=\"printSA\",\n default=False,\n help=\"Prints the suffix array in a human readable format to the standard error output.\")\n\n parser.add_option(\"\", \"--verbose\",\n action=\"store_true\", dest=\"verbose\",\n default=False,\n help=\"Prints more information.\")\n\n parser.add_option(\"\", \"--no-lcps\",\n action=\"store_true\", dest=\"noLCPs\",\n default=False,\n help=\"Switch off the computation of LCPs. 
By doing so, the find functions are unusable.\")\n\n (options, args) = parser.parse_args(_argv)\n strings = args[1:]\n return (options, strings)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nadds a feature to the suffix array.", "response": "def addFeatureSA(self, callback, default=None, name=None):\n \"\"\"\n Add a feature to the suffix array.\n The callback must return a sequence such that\n the feature at position i is attached to the suffix referenced by\n self.SA[i].\n\n It is called with one argument: the instance of SuffixArray self.\n The callback may traverse self.SA in any fashion.\n\n The default behavior is to name the new feature after the callback name.\n To give another name, set the argument name accordingly.\n\n When the feature of an unknown substring of the text is requested,\n the value of the default argument is used.\n\n If the feature attached to a suffix is independent of the other suffix\n features, then the method addFeature gives a better alternative.\n\n You may use addFeatureSA as a decorator as in the following example.\n\n Example: feature named bigram which attach the frequencies of the\n leading bigram to each suffix.\n\n >>> SA=SuffixArray(\"mississippi\", unit=UNIT_BYTE)\n\n >>> def bigram(SA):\n ... res=[0]*SA.length\n ... end=0\n ... while end <= SA.length:\n ...\n ... begin=end-1\n ... while end < SA.length and SA._LCP_values[end]>=2:\n ... if SA.SA[end]+2<=SA.length: #end of string\n ... end+=1\n ...\n ... nbBigram=end-begin\n ... for i in xrange(begin, end):\n ... if SA.SA[i]+2<=SA.length:\n ... res[i]=nbBigram\n ...\n ... end+=1\n ... return res\n\n >>> SA.addFeatureSA(bigram, 0)\n\n >>> SA._bigram_values\n [0, 1, 2, 2, 1, 1, 1, 2, 2, 2, 2]\n\n >>> print str(SA).expandtabs(14) #doctest: +SKIP\n ... 10 'i' LCP=0 , bigram=0\n ... 7 'ippi' LCP=1 , bigram=1\n ... 4 'issippi' LCP=1 , bigram=2\n ... 1 'ississippi' LCP=4 , bigram=2\n ... 0 'mississipp' LCP=0 , bigram=1\n ... 
9 'pi' LCP=0 , bigram=1\n ... 8 'ppi' LCP=1 , bigram=1\n ... 6 'sippi' LCP=0 , bigram=2\n ... 3 'sissippi' LCP=2 , bigram=2\n ... 5 'ssippi' LCP=1 , bigram=2\n ... 2 'ssissippi' LCP=3 , bigram=2\n\n >>> SA.bigram('ip')\n 1\n\n >>> SA.bigram('si')\n 2\n\n >>> SA.bigram('zw')\n 0\n\n \"\"\"\n if name is None:\n featureName = callback.__name__\n else:\n featureName = name\n\n featureValues = callback(self)\n setattr(self, \"_%s_values\" % featureName, featureValues)\n setattr(self, \"%s_default\" % featureName, default)\n self.features.append(featureName)\n\n def findFeature(substring):\n res = self._findOne(substring, )\n if res is not False:\n return featureValues[res]\n else:\n return default\n\n setattr(self, featureName, findFeature)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a feature to the suffix array.", "response": "def addFeature(self, callback, default=None, name=None, arguments=None):\n \"\"\"\n Add a feature to the suffix array.\n The callback must return the feature corresponding to the suffix at\n position self.SA[i].\n\n The callback must be callable (a function or lambda).\n The argument names of the callback are used to determine the data\n needed. 
If an argument is the name of feature already defined, then\n this argument will be the value of that feature for the current suffix.\n In addition the argument pos is the position of the current suffix\n and iSA is the index of pos in SA.\n Other attributes of the SuffixArray instance may be use as argument\n names.\n\n If the feature attached to a suffix depends on other suffix features,\n then the method addFeatureSA is the only choice.\n\n \"\"\"\n if name is None:\n featureName = callback.__name__\n else:\n featureName = name\n\n if arguments is None:\n signature = getargspec(callback)[0]\n else:\n signature = arguments\n\n featureValues = [default] * (self.length)\n args = [getattr(self, \"_%s_values\" % featName) for featName in signature]\n # print args\n for i, pos in enumerate(self.SA):\n arg = [j[i] for j in args]\n # print arg\n featureValues[i] = callback(*arg)\n # end alternative\n\n setattr(self, \"_%s_values\" % featureName, featureValues)\n setattr(self, \"%s_default\" % featureName, default)\n self.features.append(featureName)\n\n def findFeature(substring):\n res = self._findOne(substring)\n if res:\n return featureValues[res]\n else:\n return default\n\n setattr(self, featureName, findFeature)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef reprString(self, string, length):\n if isinstance(string, int):\n length = min(length, self.length - string)\n string = self.string[string:string + length]\n\n voc = self.voc\n res = self.tokSep.join((voc[id] for id in string[:length]))\n if self.unit == UNIT_WORD:\n res = res.replace(\" \\n\", \"\\n\")\n res = res.replace(\"\\n \", \"\\n\")\n\n if self.unit == UNIT_CHARACTER:\n res = res.encode(self.encoding)\n\n return res", "response": "Return a string of length tokens in the original form."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nsave the suffix array instance to a file.", "response": "def toFile(self, filename):\n \"\"\"\n 
Save the suffix array instance including all features attached in\n filename. Accept any filename following the _open conventions,\n for example if it ends with .gz the file created will be a compressed\n GZip file.\n \"\"\"\n start = _time()\n fd = _open(filename, \"w\")\n\n savedData = [self.string, self.unit, self.voc, self.vocSize, self.SA, self.features]\n\n for featureName in self.features:\n featureValues = getattr(self, \"_%s_values\" % featureName)\n featureDefault = getattr(self, \"%s_default\" % featureName)\n\n savedData.append((featureValues, featureDefault))\n\n fd.write(_dumps(savedData, _HIGHEST_PROTOCOL))\n fd.flush()\n try:\n self.sizeOfSavedFile = getsize(fd.name)\n except OSError: # if stdout is used\n self.sizeOfSavedFile = \"-1\"\n self.toFileTime = _time() - start\n if _trace: print >> _stderr, \"toFileTime %.2fs\" % self.toFileTime\n\n if _trace: print >> _stderr, \"sizeOfSavedFile %sb\" % self.sizeOfSavedFile\n fd.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fromFile(cls, filename):\n self = cls.__new__(cls) # new instance which does not call __init__\n\n start = _time()\n\n savedData = _loads(_open(filename, \"r\").read())\n\n # load common attributes\n self.string, self.unit, self.voc, self.vocSize, self.SA, features = savedData[:6]\n self.length = len(self.SA)\n\n # determine token delimiter\n if self.unit == UNIT_WORD:\n self.tokSep = \" \"\n elif self.unit in (UNIT_CHARACTER, UNIT_BYTE):\n self.tokSep = \"\"\n else:\n raise Exception(\"Unknown unit type identifier:\", self.unit)\n\n # recompute tokId based on voc\n self.tokId = dict((char, iChar) for iChar, char in enumerate(self.voc))\n self.nbSentences = self.string.count(self.tokId.get(\"\\n\", 0))\n\n # Load features\n self.features = []\n for featureName, (featureValues, featureDefault) in zip(features, savedData[6:]):\n self.addFeatureSA((lambda _: featureValues), name=featureName, 
default=featureDefault)\n\n self.fromFileTime = _time() - start\n if _trace: print >> _stderr, \"fromFileTime %.2fs\" % self.fromFileTime\n return self", "response": "Load a suffix array instance from a file created by toFile."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _findOne(self, subString):\n SA = self.SA\n LCPs = self._LCP_values\n string = self.string\n\n try:\n subString = _array(\"i\", [self.tokId[c] for c in self.tokenize(subString)])\n except KeyError:\n # if a token of the subString is not in the vocabulary\n # the substring can't be in the string\n return False\n lenSubString = len(subString)\n\n #################################\n # Dichotomy search of subString #\n #################################\n lower = 0\n upper = self.length\n success = False\n\n while upper - lower > 0:\n middle = (lower + upper) // 2\n\n middleSubString = string[SA[middle]:min(SA[middle] + lenSubString, self.length)]\n\n # NOTE: the cmp function is removed in Python 3\n # Strictly speaking we are doing one comparison more now\n if subString < middleSubString:\n upper = middle\n elif subString > middleSubString:\n lower = middle + 1\n else:\n success = True\n break\n\n if not success:\n return False\n else:\n return middle", "response": "This method searches for a single entry in the string in the dictionary."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef escape(s, quote=False):\n if s is None:\n return ''\n elif hasattr(s, '__html__'):\n return s.__html__()\n elif not isinstance(s, basestring):\n s = unicode(s)\n s = s.replace('&', '&amp;').replace('<', '&lt;').replace('>', '&gt;')\n if quote:\n s = s.replace('\"', \"&quot;\")\n return s", "response": "Escape special characters &, < and > to HTML-safe sequences."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a new entry to the feed.", "response": "def 
add(self, *args, **kwargs):\n \"\"\"Add a new entry to the feed. This function can either be called\n with a :class:`FeedEntry` or some keyword and positional arguments\n that are forwarded to the :class:`FeedEntry` constructor.\n \"\"\"\n if len(args) == 1 and not kwargs and isinstance(args[0], FeedEntry):\n self.entries.append(args[0])\n else:\n kwargs['feed_url'] = self.feed_url\n kwargs['timezone'] = self.timezone\n self.entries.append(FeedEntry(*args, **kwargs))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a generator that yields pieces of XML.", "response": "def generate(self):\n \"\"\"Return a generator that yields pieces of XML.\"\"\"\n # atom demands either an author element in every entry or a global one\n if not self.author:\n if False in map(lambda e: bool(e.author), self.entries):\n self.author = ({'name': u'unbekannter Autor'},)\n\n if not self.updated:\n dates = sorted([entry.updated for entry in self.entries])\n self.updated = dates and dates[-1] or datetime.utcnow()\n\n yield u'<?xml version=\"1.0\" encoding=\"utf-8\"?>\\n'\n yield u'<feed xmlns=\"http://www.w3.org/2005/Atom\">\\n'\n yield ' ' + _make_text_block('title', self.title, self.title_type)\n yield u' <id>%s</id>\\n' % escape(self.id)\n yield u' <updated>%s</updated>\\n' % format_iso8601(self.updated, self.timezone)\n if self.url:\n yield u' <link href=\"%s\" />\\n' % escape(self.url, True)\n if self.feed_url:\n yield u' <link href=\"%s\" rel=\"self\" />\\n' % \\\n escape(self.feed_url, True)\n for link in self.links:\n yield u' <link %s/>\\n' % ''.join('%s=\"%s\" ' % \\\n (k, escape(link[k], True)) for k in link)\n for author in self.author:\n yield u' <author>\\n'\n yield u' <name>%s</name>\\n' % escape(author['name'])\n if 'uri' in author:\n yield u' <uri>%s</uri>\\n' % escape(author['uri'])\n if 'email' in author:\n yield ' <email>%s</email>\\n' % escape(author['email'])\n yield ' </author>\\n'\n if self.subtitle:\n yield ' ' + _make_text_block('subtitle', self.subtitle,\n self.subtitle_type)\n if self.icon:\n yield u' <icon>%s</icon>\\n' % escape(self.icon)\n if self.logo:\n yield u' <logo>%s</logo>\\n' % escape(self.logo)\n if self.rights:\n yield ' ' + _make_text_block('rights', 
self.rights,\n self.rights_type)\n generator_name, generator_url, generator_version = self.generator\n if generator_name or generator_url or generator_version:\n tmp = [u' <generator']\n if generator_url:\n tmp.append(u' uri=\"%s\"' % escape(generator_url, True))\n if generator_version:\n tmp.append(u' version=\"%s\"' % escape(generator_version, True))\n tmp.append(u'>%s</generator>\\n' % escape(generator_name))\n yield u''.join(tmp)\n for entry in self.entries:\n for line in entry.generate():\n yield u' ' + line\n yield u'</feed>\\n'"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nYielding pieces of ATOM XML.", "response": "def generate(self):\n \"\"\"Yields pieces of ATOM XML.\"\"\"\n base = ''\n if self.xml_base:\n base = ' xml:base=\"%s\"' % escape(self.xml_base, True)\n yield u'<entry%s>\\n' % base\n yield u' ' + _make_text_block('title', self.title, self.title_type)\n yield u' <id>%s</id>\\n' % escape(self.id)\n yield u' <updated>%s</updated>\\n' % format_iso8601(self.updated, self.timezone)\n if self.published:\n yield u' <published>%s</published>\\n' % \\\n format_iso8601(self.published, self.timezone)\n if self.url:\n yield u' <link href=\"%s\" />\\n' % escape(self.url)\n for author in self.author:\n yield u' <author>\\n'\n yield u' <name>%s</name>\\n' % escape(author['name'])\n if 'uri' in author:\n yield u' <uri>%s</uri>\\n' % escape(author['uri'])\n if 'email' in author:\n yield u' <email>%s</email>\\n' % escape(author['email'])\n yield u' </author>\\n'\n for link in self.links:\n yield u' <link %s/>\\n' % ''.join('%s=\"%s\" ' % \\\n (k, escape(link[k], True)) for k in link)\n if self.summary:\n yield u' ' + _make_text_block('summary', self.summary,\n self.summary_type)\n if self.content:\n if issubclass(self.content.__class__, dict):\n if \"content\" in self.content:\n yield u' <content %s>%s</content>\\n' % (' '.join('%s=\"%s\"' % \\\n (k, escape(self.content[k], True)) for k in self.content if k != \"content\"), escape(self.content[\"content\"]))\n else:\n yield u' <content %s/>\\n' % ' '.join('%s=\"%s\" ' % \\\n (k, escape(self.content[k], True)) for k in self.content)\n else:\n yield u' ' + _make_text_block('content', self.content,\n self.content_type)\n yield u'</entry>\\n'"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef serialize(value, field):\n assert 
isinstance(field, forms.Field)\n if isinstance(field, forms.ModelMultipleChoiceField):\n return json.dumps([v.pk for v in value])\n # todo: remove\n if isinstance(value, models.Model):\n return value.pk\n return value", "response": "Serialize a value into a form value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_version(package):\n with io.open(os.path.join(BASE_DIR, package, '__init__.py'), encoding='utf-8') as fh:\n return [\n l.split('=', 1)[1].strip().strip(\"'\").strip('\"')\n for l in fh.readlines()\n if '__version__' in l][0]", "response": "Get version without importing the lib"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _check_backend():\n # Django 1.10 does not allow\n # both settings to be set\n middleware = set(\n getattr(settings, 'MIDDLEWARE', None) or\n getattr(settings, 'MIDDLEWARE_CLASSES', None) or\n [])\n\n # Deprecated alias\n if \"djconfig.middleware.DjConfigLocMemMiddleware\" in middleware:\n return\n\n if \"djconfig.middleware.DjConfigMiddleware\" in middleware:\n return\n\n raise ValueError(\n \"djconfig.middleware.DjConfigMiddleware \"\n \"is required but it was not found in \"\n \"MIDDLEWARE_CLASSES nor in MIDDLEWARE\")", "response": "Check that the backend is valid."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nregister a config form into the registry.", "response": "def _register(self, form_class, check_middleware=True):\n \"\"\"\n Register a config form into the registry\n\n :param object form_class: The form class to register.\\\n Must be an instance of :py:class:`djconfig.forms.ConfigForm`\n :param bool check_middleware: Check\\\n :py:class:`djconfig.middleware.DjConfigMiddleware`\\\n is registered into ``settings.MIDDLEWARE_CLASSES``. 
Default True\n \"\"\"\n if not issubclass(form_class, _ConfigFormBase):\n raise ValueError(\n \"The form does not inherit from `forms.ConfigForm`\")\n\n self._registry.add(form_class)\n\n if check_middleware:\n _check_backend()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _reload(self):\n ConfigModel = apps.get_model('djconfig.Config')\n cache = {}\n data = dict(\n ConfigModel.objects\n .all()\n .values_list('key', 'value'))\n\n # populate cache with initial form values,\n # then with cleaned database values,\n # then with raw database file/image paths\n for form_class in self._registry:\n empty_form = form_class()\n cache.update({\n name: field.initial\n for name, field in empty_form.fields.items()})\n form = form_class(data={\n name: _deserialize(data[name], field)\n for name, field in empty_form.fields.items()\n if name in data and not isinstance(field, forms.FileField)})\n form.is_valid()\n cache.update({\n name: _unlazify(value)\n for name, value in form.cleaned_data.items()\n if name in data})\n # files are special because they don't have an initial value\n # and the POSTED data must contain the file. 
So, we keep\n # the stored path as is\n # TODO: see if serialize/deserialize/unlazify can be used for this instead\n cache.update({\n name: data[name]\n for name, field in empty_form.fields.items()\n if name in data and isinstance(field, forms.FileField)})\n\n cache['_updated_at'] = data.get('_updated_at')\n self._cache = cache", "response": "Reloads the cache with the current values from the database."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _reload_maybe(self):\n ConfigModel = apps.get_model('djconfig.Config')\n\n data = dict(\n ConfigModel.objects\n .filter(key='_updated_at')\n .values_list('key', 'value'))\n\n if (not hasattr(self, '_updated_at') or\n self._updated_at != data.get('_updated_at')):\n self._reload()", "response": "Reload the config if the config \\\nmodel has been updated."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsave the config with the cleaned data.", "response": "def save(self):\n \"\"\"\n Save the config with the cleaned data,\\\n update the last modified date so\\\n the config is reloaded on other process/nodes.\\\n Reload the config so it can be called right away.\n \"\"\"\n assert self.__class__ in conf.config._registry,\\\n '%(class_name)s is not registered' % {\n 'class_name': self.__class__.__name__\n }\n\n ConfigModel = apps.get_model('djconfig.Config')\n\n for field_name, value in self.cleaned_data.items():\n value = utils.serialize(\n value=value,\n field=self.fields.get(field_name, None))\n # TODO: use update_or_create\n count = (ConfigModel.objects\n .filter(key=field_name)\n .update(value=value))\n if not count:\n ConfigModel.objects.create(\n key=field_name,\n value=value)\n\n count = (ConfigModel.objects\n .filter(key='_updated_at')\n .update(value=str(timezone.now())))\n if not count:\n ConfigModel.objects.create(\n key='_updated_at',\n value=str(timezone.now()))\n\n conf.config._reload()"} {"SOURCE": "codesearchnet", 
"instruction": "Create a Python 3 function for\nregistering a new admin section.", "response": "def register(conf, conf_admin, **options):\n \"\"\"\n Register a new admin section.\n\n :param conf: A subclass of ``djconfig.admin.Config``\n :param conf_admin: A subclass of ``djconfig.admin.ConfigAdmin``\n :param options: Extra options passed to ``django.contrib.admin.site.register``\n \"\"\"\n assert issubclass(conf_admin, ConfigAdmin), (\n 'conf_admin is not a ConfigAdmin subclass')\n assert issubclass(\n getattr(conf_admin, 'change_list_form', None),\n ConfigForm), 'No change_list_form set'\n assert issubclass(conf, Config), (\n 'conf is not a Config subclass')\n assert conf.app_label, 'No app_label set'\n assert conf.verbose_name_plural, 'No verbose_name_plural set'\n assert not conf.name or re.match(r\"^[a-zA-Z_]+$\", conf.name), (\n 'Not a valid name. Valid chars are [a-zA-Z_]')\n config_class = type(\"Config\", (), {})\n config_class._meta = type(\"Meta\", (_ConfigMeta,), {\n 'app_label': conf.app_label,\n 'verbose_name_plural': conf.verbose_name_plural,\n 'object_name': 'Config',\n 'model_name': conf.name,\n 'module_name': conf.name})\n admin.site.register([config_class], conf_admin, **options)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmix type is used to indicate a field in a schema.", "response": "def Mixed(*types):\n \"\"\"Mixed type, used to indicate a field in a schema can be\n one of many types. 
Use as a last resort only.\n The Mixed type can be used directly as a class to indicate\n any type is permitted for a given field:\n `\"my_field\": {\"type\": Mixed}`\n It can also be instantiated with a list of specific types the\n field is allowed to be for more control:\n `\"my_field\": {\"type\": Mixed(ObjectId, int)}`\n \"\"\"\n if len(types) < 2:\n raise ValueError(\"Mixed type requires at least 2 specific types\")\n\n types = set(types) # dedupe\n\n class MixedType(type):\n def __instancecheck__(cls, instance):\n \"\"\"Returns true if the given value is an instance of\n one of the types enclosed by this mixed type.\"\"\"\n for mtype in types:\n if isinstance(instance, mtype):\n return True\n return False\n\n class Mixed(object):\n __metaclass__ = MixedType\n\n return Mixed"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nvalidating that a field value matches one of the values given to this validator.", "response": "def one_of(*args):\n \"\"\"\n Validates that a field value matches one of the values\n given to this validator.\n \"\"\"\n if len(args) == 1 and isinstance(args[0], list):\n items = args[0]\n else:\n items = list(args)\n\n def validate(value):\n if not value in items:\n return e(\"{} is not in the list {}\", value, items)\n return validate"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a validator that checks that a field value is greater than or equal to the given value.", "response": "def gte(min_value):\n \"\"\"\n Validates that a field value is greater than or equal to the\n value given to this validator.\n \"\"\"\n def validate(value):\n if value < min_value:\n return e(\"{} is not greater than or equal to {}\", value, min_value)\n return validate"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a validator that checks that a field value is less than or equal to the max_value given to this validator.", "response": "def 
lte(max_value):\n \"\"\"\n Validates that a field value is less than or equal to the\n value given to this validator.\n \"\"\"\n def validate(value):\n if value > max_value:\n return e(\"{} is not less than or equal to {}\", value, max_value)\n return validate"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef gt(gt_value):\n def validate(value):\n if value <= gt_value:\n return e(\"{} is not greater than {}\", value, gt_value)\n return validate", "response": "Returns a validator that checks that a field value is greater than the given value."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a validator that checks that a field value is less than the given value.", "response": "def lt(lt_value):\n \"\"\"\n Validates that a field value is less than the\n value given to this validator.\n \"\"\"\n def validate(value):\n if value >= lt_value:\n return e(\"{} is not less than {}\", value, lt_value)\n return validate"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef between(min_value, max_value):\n def validate(value):\n if value < min_value:\n return e(\"{} is not greater than or equal to {}\",\n value, min_value)\n if value > max_value:\n return e(\"{} is not less than or equal to {}\",\n value, max_value)\n return validate", "response": "Validates that a field value is between the two values given to this validator."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef length(min=None, max=None):\n\n def validate(value):\n if min and len(value) < min:\n return e(\"{} does not have a length of at least {}\", value, min)\n if max and len(value) > max:\n return e(\"{} does not have a length of at most {}\", value, max)\n return validate", "response": "Validates that a field value's length is between the bounds given to this\n validator."} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 function for\nvalidating that a field value matches the regex pattern", "response": "def match(pattern):\n \"\"\"\n Validates that a field value matches the regex given to this validator.\n \"\"\"\n regex = re.compile(pattern)\n\n def validate(value):\n if not regex.match(value):\n return e(\"{} does not match the pattern {}\", value, pattern)\n return validate"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nvalidating that a value is a valid email address.", "response": "def is_email():\n \"\"\"\n Validates that a fields value is a valid email address.\n \"\"\"\n\n email = (\n ur'(?!^\\.)' # No dot at start\n ur'(?!.*\\.@)' # No dot before at sign\n ur'(?!.*@\\.)' # No dot after at sign\n ur'(?!.*\\.$)' # No dot at the end\n ur'(?!.*\\.\\.)' # No double dots anywhere\n ur'^\\S+' # Starts with one or more non-whitespace characters\n ur'@' # Contains an at sign\n ur'\\S+$' # Ends with one or more non-whitespace characters\n )\n\n regex = re.compile(email, re.IGNORECASE | re.UNICODE)\n\n def validate(value):\n if not regex.match(value):\n return e(\"{} is not a valid email address\", value)\n return validate"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_url():\n # Stolen from Django\n regex = re.compile(\n r'^(?:http|ftp)s?://' # http:// or https://\n r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\\.)+(?:[A-Z]{2,6}\\.?|[A-Z0-9-]{2,}\\.?)|' #domain...\n r'localhost|' #localhost...\n r'\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3})' # ...or ip\n r'(?::\\d+)?' 
# optional port\n r'(?:/?|[/?]\\S+)$', re.IGNORECASE)\n\n def validate(value):\n if not regex.match(value):\n return e(\"{} is not a valid URL\", value)\n return validate", "response": "Validates that a field's value is a valid URL."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef distinct():\n def validate(value):\n for i, item in enumerate(value):\n if item in value[i+1:]:\n return e(\"{} is not a distinct set of values\", value)\n return validate", "response": "Validates that all items in the given field list value are distinct"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\napply the defaults described by this schema to the given document instance as appropriate.", "response": "def apply_defaults(self, instance):\n \"\"\"Applies the defaults described by this schema to the given\n document instance as appropriate. Defaults are only applied to\n fields which are currently unset.\"\"\"\n for field, spec in self.doc_spec.iteritems():\n field_type = spec['type']\n if field not in instance:\n if 'default' in spec:\n default = spec['default']\n if callable(default):\n instance[field] = default()\n else:\n instance[field] = copy.deepcopy(default)\n # Determine if a value already exists for the field\n if field in instance:\n value = instance[field]\n\n # recurse into nested docs\n if isinstance(field_type, Schema) and isinstance(value, dict):\n field_type.apply_defaults(value)\n\n elif isinstance(field_type, Array) and isinstance(field_type.contained_type, Schema) and isinstance(value, list):\n for item in value:\n field_type.contained_type.apply_defaults(item)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate(self, instance):\n errors = {}\n self._validate_instance(instance, errors)\n\n if len(errors) > 0:\n raise ValidationException(errors)", "response": "Validates the given document against this 
schema. Raises ValidationException if there are any failures."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _verify(self, path_prefix=None):\n for field, spec in self.doc_spec.iteritems():\n path = self._append_path(path_prefix, field)\n\n # Standard dict-based spec\n if isinstance(spec, dict):\n self._verify_field_spec(spec, path)\n else:\n raise SchemaFormatException(\"Invalid field definition for {}\", path)", "response": "Verifies that this schema s doc spec is valid and makes sense."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nverifies a given field specification is valid recursing into nested schemas if required.", "response": "def _verify_field_spec(self, spec, path):\n \"\"\"Verifies a given field specification is valid, recursing into nested schemas if required.\"\"\"\n\n # Required should be a boolean\n if 'required' in spec and not isinstance(spec['required'], bool):\n raise SchemaFormatException(\"{} required declaration should be True or False\", path)\n\n # Required should be a boolean\n if 'nullable' in spec and not isinstance(spec['nullable'], bool):\n raise SchemaFormatException(\"{} nullable declaration should be True or False\", path)\n\n # Must have a type specified\n if 'type' not in spec:\n raise SchemaFormatException(\"{} has no type declared.\", path)\n\n self._verify_type(spec, path)\n\n # Validations should be either a single function or array of functions\n if 'validates' in spec:\n self._verify_validates(spec, path)\n\n # Defaults must be of the correct type or a function\n if 'default' in spec:\n self._verify_default(spec, path)\n\n # Only expected spec keys are supported\n if not set(spec.keys()).issubset(set(['type', 'required', 'validates', 'default', 'nullable'])):\n raise SchemaFormatException(\"Unsupported field spec item at {}. 
Items: \"+repr(spec.keys()), path)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _verify_type(self, spec, path):\n field_type = spec['type']\n\n if isinstance(field_type, Schema):\n # Nested documents cannot have validation\n if not set(spec.keys()).issubset(set(['type', 'required', 'nullable', 'default'])):\n raise SchemaFormatException(\"Unsupported field spec item at {}. Items: \"+repr(spec.keys()), path)\n return\n\n elif isinstance(field_type, Array):\n if not isinstance(field_type.contained_type, (type, Schema, Array, types.FunctionType)):\n raise SchemaFormatException(\"Unsupported field type contained by Array at {}.\", path)\n\n elif not isinstance(field_type, type) and not isinstance(field_type, types.FunctionType):\n raise SchemaFormatException(\"Unsupported field type at {}. Type must be a type, a function, an Array or another Schema\", path)", "response": "Verify that the type in the spec is valid"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _verify_default(self, spec, path):\n field_type = spec['type']\n default = spec['default']\n\n # If it's a function there's nothing we can really do except assume its valid\n if callable(default):\n return\n\n if isinstance(field_type, Array):\n # Verify we'd got a list as our default\n if not isinstance(default, list):\n raise SchemaFormatException(\"Default value for Array at {} is not a list of values.\", path)\n\n # Ensure the contents are of the correct type\n for i, item in enumerate(default):\n if isinstance(field_type.contained_type, Schema):\n if not self._valid_schema_default(item):\n raise SchemaFormatException(\"Default value for Schema is not valid.\", path)\n elif not isinstance(item, field_type.contained_type):\n raise SchemaFormatException(\"Not all items in the default list for the Array field at {} are of the correct type.\", path)\n\n elif 
isinstance(field_type, Schema):\n if not self._valid_schema_default(default):\n raise SchemaFormatException(\"Default value for Schema is not valid.\", path)\n\n else:\n if not isinstance(default, field_type):\n raise SchemaFormatException(\"Default value for {} is not of the nominated type.\", path)", "response": "Verifies that the default value specified in the given spec is valid."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _verify_validates(self, spec, path):\n validates = spec['validates']\n\n if isinstance(validates, list):\n for validator in validates:\n self._verify_validator(validator, path)\n else:\n self._verify_validator(validates, path)", "response": "Verify that the validates argument is valid."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _verify_validator(self, validator, path):\n\n # Validator should be a function\n if not callable(validator):\n raise SchemaFormatException(\"Invalid validations for {}\", path)\n\n # Validator should accept a single argument\n (args, varargs, keywords, defaults) = getargspec(validator)\n if len(args) != 1:\n raise SchemaFormatException(\"Invalid validations for {}\", path)", "response": "Verifies that a given validator associated with the field at the given path is legitimate."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _validate_instance(self, instance, errors, path_prefix=''):\n\n if not isinstance(instance, dict):\n errors[path_prefix] = \"Expected instance of dict to validate against schema.\"\n return\n\n # validate against the schema level validators\n self._apply_validations(errors, path_prefix, self._validates, instance)\n\n # Loop over each field in the schema and check the instance value conforms\n # to its spec\n for field, spec in self.doc_spec.items():\n path = self._append_path(path_prefix, field)\n\n # If the field is present, validate its value.\n if field in instance:\n self._validate_value(instance[field], spec, path, errors)\n else:\n # If not, add an error if it was a required key.\n if spec.get('required', False):\n errors[path] = \"{} is required.\".format(path)\n\n # Now loop over each field in the given instance and make sure we don't\n # have any fields not declared in the schema, unless strict mode has been\n # explicitly disabled.\n if self._strict:\n for field in instance:\n if field not in self.doc_spec:\n errors[self._append_path(path_prefix, field)] = \"Unexpected document field not present in schema\"", "response": "Validates that the given instance of a document conforms to the given schema's base schema and validations."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nvalidating that the given value is valid given the associated field spec and path. Any validation failures are added to the given errors collection.", "response": "def _validate_value(self, value, field_spec, path, errors):\n \"\"\"Validates that the given field value is valid given the associated\n field spec and path. 
Any validation failures are added to the given errors\n collection.\"\"\"\n\n # Check if the value is None and add an error if the field is not nullable.\n # Note that for backward compatibility reasons, the default value of 'nullable'\n # is the inverse of 'required' (which use to mean both that the key be present\n # and not set to None).\n if value is None:\n if not field_spec.get('nullable', not field_spec.get('required', False)):\n errors[path] = \"{} is not nullable.\".format(path)\n return\n\n # All fields should have a type\n field_type = field_spec['type']\n if isinstance(field_type, types.FunctionType):\n try:\n field_type = field_type(value)\n except Exception as e:\n raise SchemaFormatException(\"Dynamic schema function raised exception: {}\".format(str(e)), path)\n if not isinstance(field_type, (type, Schema, Array)):\n raise SchemaFormatException(\"Dynamic schema function did not return a type at path {}\", path)\n\n\n # If our field is an embedded document, recurse into it\n if isinstance(field_type, Schema):\n if isinstance(value, dict):\n field_type._validate_instance(value, errors, path)\n else:\n errors[path] = \"{} should be an embedded document\".format(path)\n return\n\n elif isinstance(field_type, Array):\n if isinstance(value, list):\n is_dynamic = isinstance(field_type.contained_type, types.FunctionType)\n for i, item in enumerate(value):\n contained_type = field_type.contained_type\n if is_dynamic:\n contained_type = contained_type(item)\n instance_path = self._append_path(path, i)\n if isinstance(contained_type, Schema):\n contained_type._validate_instance(item, errors, instance_path)\n elif not isinstance(item, contained_type):\n errors[instance_path] = \"Array item at {} is of incorrect type\".format(instance_path)\n continue\n else:\n errors[path] = \"{} should be an embedded array\".format(path)\n return\n\n elif not isinstance(value, field_type):\n errors[path] = \"Field should be of type {}\".format(field_type)\n return\n\n 
validations = field_spec.get('validates', None)\n if validations is None:\n return\n self._apply_validations(errors, path, validations, value)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_config(path=None, defaults=None):\n\n if defaults is None:\n defaults = DEFAULT_FILES\n\n config = ConfigParser(allow_no_value=True)\n\n if defaults:\n config.read(defaults)\n\n if path:\n with open(path) as fh:\n config.read_file(fh)\n\n return config", "response": "Loads and parses an INI style configuration file using Python's built-in\n configparser module."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert a ConfigParser object into a dictionary.", "response": "def as_dict(config):\n \"\"\"\n Converts a ConfigParser object into a dictionary.\n\n The resulting dictionary has sections as keys which point to a dict of the\n section's options as key => value pairs.\n \"\"\"\n settings = defaultdict(lambda: {})\n for section in config.sections():\n for key, val in config.items(section):\n settings[section][key] = val\n return settings"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef initialize(self, timeouts):\n\n # Bind or connect to address\n if self.bind is True:\n self.socket.bind(self.address)\n else:\n self.socket.connect(self.address)\n\n # Set send and recv timeouts\n self._set_timeouts(timeouts)", "response": "Bind or connect the nanomsg socket to some address"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _set_timeouts(self, timeouts):\n\n (send_timeout, recv_timeout) = (None, None)\n\n try:\n (send_timeout, recv_timeout) = timeouts\n except TypeError:\n raise EndpointError(\n '`timeouts` must be a pair of numbers (2, 3) which represent '\n 'the timeout values for send and receive respectively')\n\n if send_timeout is not None:\n 
self.socket.set_int_option(\n nanomsg.SOL_SOCKET, nanomsg.SNDTIMEO, send_timeout)\n\n if recv_timeout is not None:\n self.socket.set_int_option(\n nanomsg.SOL_SOCKET, nanomsg.RCVTIMEO, recv_timeout)", "response": "Sets socket timeouts for send and receive respectively"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nencoding and sign the payload through the socket", "response": "def send(self, payload):\n \"\"\" Encode and sign (optional) the send through socket \"\"\"\n payload = self.encode(payload)\n payload = self.sign(payload)\n self.socket.send(payload)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreceive from socket authenticate and decode payload", "response": "def receive(self, decode=True):\n \"\"\" Receive from socket, authenticate and decode payload \"\"\"\n payload = self.socket.recv()\n payload = self.verify(payload)\n if decode:\n payload = self.decode(payload)\n return payload"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsign the payload using the authenticator", "response": "def sign(self, payload):\n \"\"\" Sign payload using the supplied authenticator \"\"\"\n if self.authenticator:\n return self.authenticator.signed(payload)\n return payload"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef verify(self, payload):\n if not self.authenticator:\n return payload\n try:\n self.authenticator.auth(payload)\n return self.authenticator.unsigned(payload)\n except AuthenticatorInvalidSignature:\n raise\n except Exception as exception:\n raise AuthenticateError(str(exception))", "response": "Verify the authenticity of the payload."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef encode(self, payload):\n try:\n return self.encoder.encode(payload)\n except Exception as exception:\n raise EncodeError(str(exception))", 
"response": "Encode a single object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef start(self):\n\n if threading.current_thread().name == 'MainThread':\n signal.signal(signal.SIGINT, self.stop)\n\n logging.info('Started on {}'.format(self.address))\n\n while True:\n self.process()", "response": "Start and listen for calls"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef stop(self, dummy_signum=None, dummy_frame=None):\n logging.info('Shutting down ...')\n self.socket.close()\n sys.exit(0)", "response": "Shutdown process (this method is also a signal handler)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsummarizing the report data @param list_all: a list which save the report data @param kwargs: such as show_all: True/False report show all status cases proj_name: project name home_page: home page url", "response": "def get_summary(list_all=[], **kwargs):\r\n ''' summarize the report data\r\n @param list_all: a list which save the report data\r\n @param kwargs: such as\r\n show_all: True/False report show all status cases\r\n proj_name: project name \r\n home_page: home page url\r\n \r\n '''\r\n all_summary = []\r\n \r\n for module in list_all:\r\n summary = {\r\n \"module_name\" : module['Name'],\r\n \"show_all\" : kwargs.get(\"show_all\",True),\r\n \"project_name\" : kwargs.get(\"proj_name\",\"TestProject\"),\r\n \"home_page\" : kwargs.get(\"home_page\",__about__.HOME_PAGE),\r\n \"start_time\" : \"\",\r\n \"end_time\" : \"\",\r\n \"duration_seconds\" : \"\",\r\n \"total_case_num\" : len(module[\"TestCases\"]),\r\n \"pass_cases_num\" : 0,\r\n \"fail_cases_num\" : 0,\r\n \"details\" : []\r\n }\r\n \r\n for case in module[\"TestCases\"]:\r\n case_detail = {}\r\n case_detail[\"linkurl\"] = \"./caselogs/%s_%s.log\" %(case[\"case_name\"],case[\"exec_date\"])\r\n \r\n if case[\"status\"].lower() == \"pass\":\r\n 
summary[\"pass_cases_num\"] += 1\r\n case_detail[\"c_style\"] = \"tr_pass\"\r\n else:\r\n summary[\"fail_cases_num\"] += 1\r\n case_detail[\"c_style\"] = \"tr_fail\"\r\n \r\n case_detail.update(case)\r\n \r\n summary[\"details\"].append(case_detail) \r\n \r\n try:\r\n st = module[\"TestCases\"][0].get(\"start_at\")\r\n et = module[\"TestCases\"][-1].get(\"end_at\")\r\n \r\n summary[\"start_time\"] = time.strftime(\"%Y-%m-%d %H:%M:%S\",time.localtime(st)) \r\n summary[\"end_time\"] = time.strftime(\"%Y-%m-%d %H:%M:%S\",time.localtime(et)) \r\n summary[\"duration_seconds\"] = float(\"%.2f\" %(et - st))\r\n except Exception as _:\r\n logger.log_warning(\"Will set 'start_at' and 'end_at' to 'None'\")\r\n (summary[\"start_time\"], summary[\"end_time\"], summary[\"duration_seconds\"]) = (None,None,None)\r\n \r\n if summary[\"fail_cases_num\"] > 0:\r\n summary[\"dict_report\"] = {\"result\":0,\"message\":\"failure\",\"pass\":summary[\"pass_cases_num\"],\"fail\":summary[\"fail_cases_num\"]}\r\n else:\r\n summary[\"dict_report\"] = {\"result\":1,\"message\":\"success\",\"pass\":summary[\"pass_cases_num\"],\"fail\":summary[\"fail_cases_num\"]}\r\n \r\n all_summary.append(summary)\r\n \r\n return all_summary"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nadding report data to a list of all test set items", "response": "def add_report_data(list_all=[], module_name=\"TestModule\", **kwargs):\r\n ''' add report data to a list\r\n @param list_all: a list which save the report data\r\n @param module_name: test set name or test module name\r\n @param kwargs: such as\r\n case_name: testcase name\r\n status: test result, Pass or Fail\r\n resp_tester: responsible tester who write this case\r\n tester: tester who execute the test\r\n start_at: tester run this case at time \r\n end_at: tester stop this case at time\r\n '''\r\n start_at = kwargs.get(\"start_at\") \r\n case_name = kwargs.get(\"case_name\",\"TestCase\")\r\n raw_case_name = 
kwargs.get(\"raw_case_name\",\"TestCase\")\r\n \r\n exec_date_time = time.localtime(start_at)\r\n execdate = time.strftime(\"%Y-%m-%d\",exec_date_time) \r\n exectime = time.strftime(\"%H:%M:%S\",exec_date_time)\r\n \r\n _case_report = {\r\n 'resp_tester': kwargs.get(\"resp_tester\",\"administrator\"),\r\n 'tester': kwargs.get(\"tester\",\"administrator\"),\r\n 'case_name': case_name,\r\n 'raw_case_name': raw_case_name,\r\n 'status': kwargs.get(\"status\",\"Pass\"),\r\n 'exec_date': execdate,\r\n 'exec_time': exectime,\r\n 'start_at': start_at,\r\n 'end_at': kwargs.get(\"end_at\"),\r\n }\r\n \r\n for module in list_all:\r\n if module_name != module[\"Name\"]:\r\n continue\r\n \r\n for case in module[\"TestCases\"]:\r\n if raw_case_name == case[\"raw_case_name\"]:\r\n case.update(_case_report)\r\n return list_all\r\n \r\n module[\"TestCases\"].append(_case_report)\r\n return list_all\r\n \r\n list_all.append({\"Name\": module_name, \"TestCases\": [_case_report]})\r\n return list_all"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse(self, subscription):\n\n for name in self.methods:\n tag = bytes(name.encode('utf-8'))\n if subscription.startswith(tag):\n fun = self.methods.get(name)\n message = subscription[len(tag):]\n return tag, message, fun\n return None, None, None", "response": "Parse a given subscription into a tuple of tag message and function ID"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsubscribing to something and register a function", "response": "def subscribe(self, tag, fun, description=None):\n \"\"\" Subscribe to something and register a function \"\"\"\n self.methods[tag] = fun\n self.descriptions[tag] = description\n self.socket.set_string_option(nanomsg.SUB, nanomsg.SUB_SUBSCRIBE, tag)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef process(self):\n\n subscription = None\n result = None\n\n try:\n 
subscription = self.socket.recv()\n\n except AuthenticateError as exception:\n logging.error(\n 'Subscriber error while authenticating request: {}'\n .format(exception), exc_info=1)\n\n except AuthenticatorInvalidSignature as exception:\n logging.error(\n 'Subscriber error while authenticating request: {}'\n .format(exception), exc_info=1)\n\n except DecodeError as exception:\n logging.error(\n 'Subscriber error while decoding request: {}'\n .format(exception), exc_info=1)\n\n except RequestParseError as exception:\n logging.error(\n 'Subscriber error while parsing request: {}'\n .format(exception), exc_info=1)\n\n else:\n logging.debug(\n 'Subscriber received payload: {}'\n .format(subscription))\n\n _tag, message, fun = self.parse(subscription)\n message = self.verify(message)\n message = self.decode(message)\n\n try:\n result = fun(message)\n except Exception as exception:\n logging.error(exception, exc_info=1)\n\n # Return result to check successful execution of `fun` when testing\n return result", "response": "Receive a subscription from the socket and process it"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef build_payload(self, tag, message):\n message = self.encode(message)\n message = self.sign(message)\n payload = bytes(tag.encode('utf-8')) + message\n return payload", "response": "Builds the payload for the subscription"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef publish(self, tag, message):\n payload = self.build_payload(tag, message)\n self.socket.send(payload)", "response": "Publish a message down the socket"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstarts a new subscriber service", "response": "def start_service(addr, n, authenticator):\n \"\"\" Start a service \"\"\"\n\n s = Subscriber(addr, authenticator=authenticator)\n\n def do_something(line):\n pass\n\n s.subscribe('test', 
do_something)\n\n started = time.time()\n for _ in range(n):\n s.process()\n s.socket.close()\n duration = time.time() - started\n\n print('Subscriber service stats:')\n util.print_stats(n, duration)\n return"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget a Webpack object for a given webpack config.", "response": "def get_webpack(request, name='DEFAULT'):\n \"\"\"\n Get the Webpack object for a given webpack config.\n\n Called at most once per request per config name.\n \"\"\"\n if not hasattr(request, '_webpack_map'):\n request._webpack_map = {}\n wp = request._webpack_map.get(name)\n if wp is None:\n wp = request._webpack_map[name] = Webpack(request, name)\n return wp"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef includeme(config):\n settings = config.registry.settings\n root_package_name = config.root_package.__name__\n config.registry.webpack = {\n 'DEFAULT': WebpackState(settings, root_package_name)\n }\n for extra_config in aslist(settings.get('webpack.configs', [])):\n state = WebpackState(settings, root_package_name, name=extra_config)\n config.registry.webpack[extra_config] = state\n\n # Set up any static views\n for state in six.itervalues(config.registry.webpack):\n if state.static_view:\n config.add_static_view(name=state.static_view_name,\n path=state.static_view_path,\n cache_max_age=state.cache_max_age)\n\n config.add_request_method(get_webpack, 'webpack')", "response": "Add pyramid_webpack methods and config to the app"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nloads and cache the webpack - stats file", "response": "def load_stats(self, cache=None, wait=None):\n \"\"\" Load and cache the webpack-stats file \"\"\"\n if cache is None:\n cache = not self.debug\n if wait is None:\n wait = self.debug\n if not cache or self._stats is None:\n self._stats = self._load_stats()\n start = time.time()\n while wait and 
self._stats.get('status') == 'compiling':\n if self.timeout and (time.time() - start > self.timeout):\n raise RuntimeError(\"Webpack {0!r} timed out while compiling\"\n .format(self.stats_file.path))\n time.sleep(0.1)\n self._stats = self._load_stats()\n return self._stats"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _load_stats(self):\n for attempt in range(0, 3):\n try:\n with self.stats_file.open() as f:\n return json.load(f)\n except ValueError:\n # If we failed to parse the JSON, it's possible that the\n # webpack process is writing to it concurrently and it's in a\n # bad state. Sleep and retry.\n if attempt < 2:\n time.sleep(attempt * 0.2)\n else:\n raise\n except IOError:\n raise IOError(\n \"Could not read stats file {0}. Make sure you are using the \"\n \"webpack-bundle-tracker plugin\" .format(self.stats_file))", "response": "Load the webpack - stats file"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _chunk_filter(self, extensions):\n if isinstance(extensions, six.string_types):\n extensions = extensions.split()\n\n def _filter(chunk):\n \"\"\" Exclusion filter \"\"\"\n name = chunk['name']\n if extensions is not None:\n if not any(name.endswith(e) for e in extensions):\n return False\n for pattern in self.state.ignore_re:\n if pattern.match(name):\n return False\n for pattern in self.state.ignore:\n if fnmatch.fnmatchcase(name, pattern):\n return False\n return True\n return _filter", "response": "Create a filter from the extensions and ignore files"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _add_url(self, chunk):\n if 'url' in chunk:\n return chunk\n public_path = chunk.get('publicPath')\n if public_path:\n chunk['url'] = public_path\n else:\n fullpath = posixpath.join(self.state.static_view_path,\n chunk['name'])\n chunk['url'] = 
self._request.static_url(fullpath)\n return chunk", "response": "Add a url property to a chunk and return it"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_bundle(self, bundle_name, extensions=None):\n if self.stats.get('status') == 'done':\n bundle = self.stats.get('chunks', {}).get(bundle_name, None)\n if bundle is None:\n raise KeyError('No such bundle {0!r}.'.format(bundle_name))\n test = self._chunk_filter(extensions)\n return [self._add_url(c) for c in bundle if test(c)]\n elif self.stats.get('status') == 'error':\n raise RuntimeError(\"{error}: {message}\".format(**self.stats))\n else:\n raise RuntimeError(\n \"Bad webpack stats file {0} status: {1!r}\"\n .format(self.state.stats_file, self.stats.get('status')))", "response": "Get all the chunks contained in a bundle."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngenerates unique names of bytes.", "response": "def _unique_names():\n \"\"\"Generates unique sequences of bytes.\n \"\"\"\n characters = (\"abcdefghijklmnopqrstuvwxyz\"\n \"0123456789\")\n characters = [characters[i:i + 1] for i in irange(len(characters))]\n rng = random.Random()\n while True:\n letters = [rng.choice(characters) for i in irange(10)]\n yield ''.join(letters)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nescapes the path to a queue e. g. preserves ~ at the begining.", "response": "def escape_queue(s):\n \"\"\"Escapes the path to a queue, e.g. 
preserves ~ at the begining.\n \"\"\"\n if isinstance(s, PosixPath):\n s = unicode_(s)\n elif isinstance(s, bytes):\n s = s.decode('utf-8')\n if s.startswith('~/'):\n return '~/' + shell_escape(s[2:])\n else:\n return shell_escape(s)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parse_ssh_destination(destination):\n match = _re_ssh.match(destination)\n if not match:\n raise InvalidDestination(\"Invalid destination: %s\" % destination)\n user, password, host, port = match.groups()\n info = {}\n if user:\n info['username'] = user\n else:\n info['username'] = getpass.getuser()\n if password:\n info['password'] = password\n if port:\n info['port'] = int(port)\n info['hostname'] = host\n\n return info", "response": "Parses the SSH destination argument."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting an SSH client to connect with.", "response": "def _ssh_client(self):\n \"\"\"Gets an SSH client to connect with.\n \"\"\"\n ssh = paramiko.SSHClient()\n ssh.load_system_host_keys()\n ssh.set_missing_host_key_policy(paramiko.RejectPolicy())\n return ssh"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the SSH client.", "response": "def get_client(self):\n \"\"\"Gets the SSH client.\n\n This will check that the connection is still alive first, and reconnect\n if necessary.\n \"\"\"\n if self._ssh is None:\n self._connect()\n return self._ssh\n else:\n try:\n chan = self._ssh.get_transport().open_session()\n except (socket.error, paramiko.SSHException):\n logger.warning(\"Lost connection, reconnecting...\")\n self._ssh.close()\n self._connect()\n else:\n chan.close()\n return self._ssh"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncalling a command through the SSH connection and returns the exit status and output.", "response": "def _call(self, cmd, get_output):\n \"\"\"Calls a command 
through the SSH connection.\n\n Remote stderr gets printed to this program's stderr. Output is captured\n and may be returned.\n \"\"\"\n server_err = self.server_logger()\n\n chan = self.get_client().get_transport().open_session()\n try:\n logger.debug(\"Invoking %r%s\",\n cmd, \" (stdout)\" if get_output else \"\")\n chan.exec_command('/bin/sh -c %s' % shell_escape(cmd))\n output = b''\n while True:\n r, w, e = select.select([chan], [], [])\n if chan not in r:\n continue # pragma: no cover\n recvd = False\n while chan.recv_stderr_ready():\n data = chan.recv_stderr(1024)\n server_err.append(data)\n recvd = True\n while chan.recv_ready():\n data = chan.recv(1024)\n if get_output:\n output += data\n recvd = True\n if not recvd and chan.exit_status_ready():\n break\n output = output.rstrip(b'\\r\\n')\n return chan.recv_exit_status(), output\n finally:\n server_err.done()\n chan.close()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check_call(self, cmd):\n ret, _ = self._call(cmd, False)\n if ret != 0: # pragma: no cover\n raise RemoteCommandFailure(command=cmd, ret=ret)", "response": "Calls a command through SSH.\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncalling a command through SSH and returns its output.", "response": "def check_output(self, cmd):\n \"\"\"Calls a command through SSH and returns its output.\n \"\"\"\n ret, output = self._call(cmd, True)\n if ret != 0: # pragma: no cover\n raise RemoteCommandFailure(command=cmd, ret=ret)\n logger.debug(\"Output: %r\", output)\n return output"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _resolve_queue(self, queue, depth=0, links=None):\n if depth == 0:\n logger.debug(\"resolve_queue(%s)\", queue)\n answer = self.check_output(\n 'if [ -d %(queue)s ]; then '\n ' cd %(queue)s; echo \"dir\"; cat version; pwd; '\n 'elif [ -f %(queue)s ]; then 
'\n ' cat %(queue)s; '\n 'else '\n ' echo no; '\n 'fi' % {\n 'queue': escape_queue(queue)})\n if answer == b'no':\n if depth > 0:\n logger.debug(\"Broken link at depth=%d\", depth)\n else:\n logger.debug(\"Path doesn't exist\")\n return None, depth\n elif answer.startswith(b'dir\\n'):\n version, runtime, path = answer[4:].split(b'\\n', 2)\n try:\n version = tuple(int(e)\n for e in version.decode('ascii', 'ignore')\n .split('.'))\n except ValueError:\n version = 0, 0\n if version[:2] != self.PROTOCOL_VERSION:\n raise QueueExists(\n msg=\"Queue exists and is using incompatible protocol \"\n \"version %s\" % '.'.join('%s' % e for e in version))\n path = PosixPath(path)\n runtime = runtime.decode('ascii', 'replace')\n if self.need_runtime is not None:\n if (self.need_runtime is not None and\n runtime not in self.need_runtime):\n raise QueueExists(\n msg=\"Queue exists and is using explicitely disallowed \"\n \"runtime %s\" % runtime)\n logger.debug(\"Found directory at %s, depth=%d, runtime=%s\",\n path, depth, runtime)\n return path, depth\n elif answer.startswith(b'tejdir: '):\n new = queue.parent / answer[8:]\n logger.debug(\"Found link to %s, recursing\", new)\n if links is not None:\n links.append(queue)\n return self._resolve_queue(new, depth + 1)\n else: # pragma: no cover\n logger.debug(\"Server returned %r\", answer)\n raise RemoteCommandFailure(msg=\"Queue resolution command failed \"\n \"in unexpected way\")", "response": "Resolves the queue location on the server."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_queue(self):\n if self._queue is None:\n self._links = []\n queue, depth = self._resolve_queue(self.queue, links=self._links)\n if queue is None and depth > 0:\n raise QueueLinkBroken\n self._queue = queue\n return self._queue", "response": "Gets the actual location of the queue or None."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 
code\ndef setup(self, links=None, force=False, only_links=False):\n if not links:\n links = []\n\n if only_links:\n logger.info(\"Only creating links\")\n for link in links:\n self.check_call('echo \"tejdir:\" %(queue)s > %(link)s' % {\n 'queue': escape_queue(self.queue),\n 'link': escape_queue(link)})\n return\n\n queue, depth = self._resolve_queue(self.queue)\n if queue is not None or depth > 0:\n if force:\n if queue is None:\n logger.info(\"Replacing broken link\")\n elif depth > 0:\n logger.info(\"Replacing link to %s...\", queue)\n else:\n logger.info(\"Replacing existing queue...\")\n self.check_call('rm -Rf %s' % escape_queue(self.queue))\n else:\n if queue is not None and depth > 0:\n raise QueueExists(\"Queue already exists (links to %s)\\n\"\n \"Use --force to replace\" % queue)\n elif depth > 0:\n raise QueueExists(\"Broken link exists\\n\"\n \"Use --force to replace\")\n else:\n raise QueueExists(\"Queue already exists\\n\"\n \"Use --force to replace\")\n\n queue = self._setup()\n\n for link in links:\n self.check_call('echo \"tejdir:\" %(queue)s > %(link)s' % {\n 'queue': escape_queue(queue),\n 'link': escape_queue(link)})", "response": "Installs the runtime at the target location."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _setup(self):\n # Expands ~user in queue\n if self.queue.path[0:1] == b'/':\n queue = self.queue\n else:\n if self.queue.path[0:1] == b'~':\n output = self.check_output('echo %s' %\n escape_queue(self.queue))\n queue = PosixPath(output.rstrip(b'\\r\\n'))\n else:\n output = self.check_output('pwd')\n queue = PosixPath(output.rstrip(b'\\r\\n')) / self.queue\n logger.debug(\"Resolved to %s\", queue)\n\n # Select runtime\n if not self.setup_runtime:\n # Autoselect\n if self._call('which qsub', False)[0] == 0:\n logger.debug(\"qsub is available, using runtime 'pbs'\")\n runtime = 'pbs'\n else:\n logger.debug(\"qsub not found, using runtime 'default'\")\n runtime = 
'default'\n else:\n runtime = self.setup_runtime\n\n if self.need_runtime is not None and runtime not in self.need_runtime:\n raise ValueError(\"About to setup runtime %s but that wouldn't \"\n \"match explicitly allowed runtimes\" % runtime)\n\n logger.info(\"Installing runtime %s%s at %s\",\n runtime,\n \"\" if self.setup_runtime else \" (auto)\",\n self.queue)\n\n # Uploads runtime\n scp_client = self.get_scp_client()\n filename = pkg_resources.resource_filename('tej',\n 'remotes/%s' % runtime)\n scp_client.put(filename, str(queue), recursive=True)\n logger.debug(\"Files uploaded\")\n\n # Runs post-setup script\n self.check_call('/bin/sh %s' % shell_escape(queue / 'commands/setup'))\n logger.debug(\"Post-setup script done\")\n\n self._queue = queue\n return queue", "response": "Actually installs the runtime.\n "} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsubmitting a job to the queue.", "response": "def submit(self, job_id, directory, script=None):\n \"\"\"Submits a job to the queue.\n\n If the runtime is not there, it will be installed. 
If it is a broken\n chain of links, error.\n \"\"\"\n if job_id is None:\n job_id = '%s_%s_%s' % (Path(directory).unicodename,\n self.destination['username'],\n make_unique_name())\n else:\n check_jobid(job_id)\n\n queue = self._get_queue()\n if queue is None:\n queue = self._setup()\n\n if script is None:\n script = 'start.sh'\n\n # Create directory\n ret, target = self._call('%s %s' % (\n shell_escape(queue / 'commands/new_job'),\n job_id),\n True)\n if ret == 4:\n raise JobAlreadyExists\n elif ret != 0:\n raise JobNotFound(\"Couldn't create job\")\n target = PosixPath(target)\n logger.debug(\"Server created directory %s\", target)\n\n # Upload to directory\n try:\n scp_client = self.get_scp_client()\n scp_client.put(str(Path(directory)),\n str(target),\n recursive=True)\n except BaseException as e:\n try:\n self.delete(job_id)\n except BaseException:\n raise e\n raise\n logger.debug(\"Files uploaded\")\n\n # Submit job\n self.check_call('%s %s %s %s' % (\n shell_escape(queue / 'commands/submit'),\n job_id, shell_escape(target),\n shell_escape(script)))\n logger.info(\"Submitted job %s\", job_id)\n return job_id"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the status of a previously - submitted job.", "response": "def status(self, job_id):\n \"\"\"Gets the status of a previously-submitted job.\n \"\"\"\n check_jobid(job_id)\n\n queue = self._get_queue()\n if queue is None:\n raise QueueDoesntExist\n\n ret, output = self._call('%s %s' % (\n shell_escape(queue / 'commands/status'),\n job_id),\n True)\n if ret == 0:\n directory, result = output.splitlines()\n result = result.decode('utf-8')\n return RemoteQueue.JOB_DONE, PosixPath(directory), result\n elif ret == 2:\n directory = output.splitlines()[0]\n return RemoteQueue.JOB_RUNNING, PosixPath(directory), None\n elif ret == 3:\n raise JobNotFound\n else:\n raise RemoteCommandFailure(command=\"commands/status\",\n ret=ret)"} {"SOURCE": "codesearchnet", "instruction": "Can 
you generate the documentation for the following Python 3 function\ndef download(self, job_id, files, **kwargs):\n check_jobid(job_id)\n\n if not files:\n return\n if isinstance(files, string_types):\n files = [files]\n directory = False\n recursive = kwargs.pop('recursive', True)\n\n if 'destination' in kwargs and 'directory' in kwargs:\n raise TypeError(\"Only use one of 'destination' or 'directory'\")\n elif 'destination' in kwargs:\n destination = Path(kwargs.pop('destination'))\n if len(files) != 1:\n raise ValueError(\"'destination' specified but multiple files \"\n \"given; did you mean to use 'directory'?\")\n elif 'directory' in kwargs:\n destination = Path(kwargs.pop('directory'))\n directory = True\n if kwargs:\n raise TypeError(\"Got unexpected keyword arguments\")\n\n # Might raise JobNotFound\n status, target, result = self.status(job_id)\n\n scp_client = self.get_scp_client()\n for filename in files:\n logger.info(\"Downloading %s\", target / filename)\n if directory:\n scp_client.get(str(target / filename),\n str(destination / filename),\n recursive=recursive)\n else:\n scp_client.get(str(target / filename),\n str(destination),\n recursive=recursive)", "response": "Downloads files from server."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef kill(self, job_id):\n check_jobid(job_id)\n\n queue = self._get_queue()\n if queue is None:\n raise QueueDoesntExist\n\n ret, output = self._call('%s %s' % (\n shell_escape(queue / 'commands/kill'),\n job_id),\n False)\n if ret == 3:\n raise JobNotFound\n elif ret != 0:\n raise RemoteCommandFailure(command='commands/kill',\n ret=ret)", "response": "Kills a job on the server."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list(self):\n queue = self._get_queue()\n if queue is None:\n raise QueueDoesntExist\n\n output = self.check_output('%s' %\n shell_escape(queue / 'commands/list'))\n\n job_id, info = None, None\n 
for line in output.splitlines():\n line = line.decode('utf-8')\n if line.startswith(' '):\n key, value = line[4:].split(': ', 1)\n info[key] = value\n else:\n if job_id is not None:\n yield job_id, info\n job_id = line\n info = {}\n if job_id is not None:\n yield job_id, info", "response": "Lists the jobs on the server."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a function that applies multiple substitutions to a single object.", "response": "def multi_substitution(*substitutions):\n\t\"\"\"\n\tTake a sequence of pairs specifying substitutions, and create\n\ta function that performs those substitutions.\n\n\t>>> multi_substitution(('foo', 'bar'), ('bar', 'baz'))('foo')\n\t'baz'\n\t\"\"\"\n\tsubstitutions = itertools.starmap(substitution, substitutions)\n\t# compose function applies last function first, so reverse the\n\t# substitutions to get the expected order.\n\tsubstitutions = reversed(tuple(substitutions))\n\treturn compose(*substitutions)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef simple_html_strip(s):\n\thtml_stripper = re.compile('()|(<[^>]*>)|([^<]+)', re.DOTALL)\n\ttexts = (\n\t\tmatch.group(3) or ''\n\t\tfor match\n\t\tin html_stripper.finditer(s)\n\t)\n\treturn ''.join(texts)", "response": "Remove HTML from the string s."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nremove the prefix from the text.", "response": "def remove_prefix(text, prefix):\n\t\"\"\"\n\tRemove the prefix from the text if it exists.\n\n\t>>> remove_prefix('underwhelming performance', 'underwhelming ')\n\t'performance'\n\n\t>>> remove_prefix('something special', 'sample')\n\t'something special'\n\t\"\"\"\n\tnull, prefix, rest = text.rpartition(prefix)\n\treturn rest"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef remove_suffix(text, 
suffix):\n\trest, suffix, null = text.partition(suffix)\n\treturn rest", "response": "Remove the suffix from the text."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the common prefix of two lines.", "response": "def common_prefix(s1, s2):\n\t\t\"\"\"\n\t\tReturn the common prefix of two lines.\n\t\t\"\"\"\n\t\tindex = min(len(s1), len(s2))\n\t\twhile s1[:index] != s2[:index]:\n\t\t\tindex -= 1\n\t\treturn s1[:index]"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_graph(self, ctx, bundle, extensions, caller=None):\n request = ctx.get('request')\n if request is None:\n request = get_current_request()\n if ':' in bundle:\n config_name, bundle = bundle.split(':')\n else:\n config_name = 'DEFAULT'\n webpack = request.webpack(config_name)\n assets = (caller(a) for a in webpack.get_bundle(bundle, extensions))\n return ''.join(assets)", "response": "Run a graph and render the tag contents for each output"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nswitching to this project.", "response": "def activate(lancet, method, project):\n \"\"\"Switch to this project.\"\"\"\n with taskstatus(\"Looking up project\") as ts:\n if method == \"key\":\n func = get_project_keys\n elif method == \"dir\":\n func = get_project_keys\n\n for key, project_path in func(lancet):\n if key.lower() == project.lower():\n break\n else:\n ts.abort(\n 'Project \"{}\" not found (using {}-based lookup)',\n project,\n method,\n )\n\n # Load the configuration\n config = load_config(os.path.join(project_path, LOCAL_CONFIG))\n\n # cd to the project directory\n lancet.defer_to_shell(\"cd\", project_path)\n\n # Activate virtualenv\n venv = config.get(\"lancet\", \"virtualenv\", fallback=None)\n if venv:\n venv_path = os.path.join(project_path, os.path.expanduser(venv))\n activate_script = os.path.join(venv_path, \"bin\", \"activate\")\n 
lancet.defer_to_shell(\"source\", activate_script)\n else:\n if \"VIRTUAL_ENV\" in os.environ:\n lancet.defer_to_shell(\"deactivate\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef workon(ctx, issue_id, new, base_branch):\n lancet = ctx.obj\n\n if not issue_id and not new:\n raise click.UsageError(\"Provide either an issue ID or the --new flag.\")\n elif issue_id and new:\n raise click.UsageError(\n \"Provide either an issue ID or the --new flag, but not both.\"\n )\n\n if new:\n # Create a new issue\n summary = click.prompt(\"Issue summary\")\n issue = create_issue(\n lancet, summary=summary, add_to_active_sprint=True\n )\n else:\n issue = get_issue(lancet, issue_id)\n\n username = lancet.tracker.whoami()\n active_status = lancet.config.get(\"tracker\", \"active_status\")\n if not base_branch:\n base_branch = lancet.config.get(\"repository\", \"base_branch\")\n\n # Get the working branch\n branch = get_branch(lancet, issue, base_branch)\n\n # Make sure the issue is in a correct status\n transition = get_transition(ctx, lancet, issue, active_status)\n\n # Make sure the issue is assigned to us\n assign_issue(lancet, issue, username, active_status)\n\n # Activate environment\n set_issue_status(lancet, issue, active_status, transition)\n\n with taskstatus(\"Checking out working branch\") as ts:\n lancet.repo.checkout(branch.name)\n ts.ok('Checked out working branch based on \"{}\"'.format(base_branch))\n\n with taskstatus(\"Starting harvest timer\") as ts:\n lancet.timer.start(issue)\n ts.ok(\"Started harvest timer\")", "response": "This command creates a new branch and checks out the branch and environment."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef time(lancet, issue):\n issue = get_issue(lancet, issue)\n\n with taskstatus(\"Starting harvest timer\") as ts:\n lancet.timer.start(issue)\n ts.ok(\"Started harvest timer\")", 
"response": "Start an harvest timer for the given issue."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef pause(ctx):\n lancet = ctx.obj\n paused_status = lancet.config.get(\"tracker\", \"paused_status\")\n\n # Get the issue\n issue = get_issue(lancet)\n\n # Make sure the issue is in a correct status\n transition = get_transition(ctx, lancet, issue, paused_status)\n\n # Activate environment\n set_issue_status(lancet, issue, paused_status, transition)\n\n with taskstatus(\"Pausing harvest timer\") as ts:\n lancet.timer.pause()\n ts.ok(\"Harvest timer paused\")", "response": "Pause work on the current issue."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef resume(ctx):\n lancet = ctx.obj\n\n username = lancet.tracker.whoami()\n active_status = lancet.config.get(\"tracker\", \"active_status\")\n\n # Get the issue\n issue = get_issue(lancet)\n\n # Make sure the issue is in a correct status\n transition = get_transition(ctx, lancet, issue, active_status)\n\n # Make sure the issue is assigned to us\n assign_issue(lancet, issue, username, active_status)\n\n # Activate environment\n set_issue_status(lancet, issue, active_status, transition)\n\n with taskstatus(\"Resuming harvest timer\") as ts:\n lancet.timer.start(issue)\n ts.ok(\"Resumed harvest timer\")", "response": "Resume work on the currently active issue."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef ssh(lancet, print_cmd, environment):\n namespace = {}\n\n with open(lancet.config.get('dploi', 'deployment_spec')) as fh:\n code = compile(fh.read(), 'deployment.py', 'exec')\n exec(code, {}, namespace)\n\n config = namespace['settings'][environment]\n host = '{}@{}'.format(config['user'], config['hosts'][0])\n cmd = ['ssh', '-p', str(config.get('port', 22)), host]\n\n if print_cmd:\n click.echo(' '.join(quote(s) for s in cmd))\n else:\n 
lancet.defer_to_shell(*cmd)", "response": "SSH into the given environment"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprint the shell integration code.", "response": "def _setup_helper():\n \"\"\"Print the shell integration code.\"\"\"\n base = os.path.abspath(os.path.dirname(__file__))\n helper = os.path.join(base, \"helper.sh\")\n with open(helper) as fh:\n click.echo(fh.read())"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _commands(ctx):\n ctx = ctx.parent\n ctx.show_hidden_subcommands = False\n main = ctx.command\n\n for subcommand in main.list_commands(ctx):\n cmd = main.get_command(ctx, subcommand)\n if cmd is None:\n continue\n help = cmd.short_help or \"\"\n click.echo(\"{}:{}\".format(subcommand, help))", "response": "Prints a list of commands for shell completion hooks."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _arguments(ctx, command_name=None):\n ctx = ctx.parent\n main = ctx.command\n if command_name:\n command = main.get_command(ctx, command_name)\n if not command:\n return\n else:\n command = main\n\n types = [\"option\", \"argument\"]\n all_params = sorted(\n command.get_params(ctx), key=lambda p: types.index(p.param_type_name)\n )\n\n def get_name(param):\n return max(param.opts, key=len)\n\n for param in all_params:\n if param.param_type_name == \"option\":\n option = get_name(param)\n same_dest = [\n get_name(p) for p in all_params if p.name == param.name\n ]\n if same_dest:\n option = \"({})\".format(\" \".join(same_dest)) + option\n if param.help:\n option += \"[{}]\".format(param.help or \"\")\n if not param.is_flag:\n option += \"=:( )\"\n click.echo(option)\n elif param.param_type_name == \"argument\":\n option = get_name(param)\n click.echo(\":{}\".format(option))", "response": "Prints a list of arguments for shell completion hooks."} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 script for\nprinting the shell autocompletion code.", "response": "def _autocomplete(ctx, shell):\n \"\"\"Print the shell autocompletion code.\"\"\"\n if not shell:\n shell = os.environ.get(\"SHELL\", \"\")\n shell = os.path.basename(shell).lower()\n if not shell:\n click.secho(\n \"Your shell could not be detected, please pass its name \"\n \"as the argument.\",\n fg=\"red\",\n )\n ctx.exit(-1)\n\n base = os.path.abspath(os.path.dirname(__file__))\n autocomplete = os.path.join(base, \"autocomplete\", \"{}.sh\".format(shell))\n\n if not os.path.exists(autocomplete):\n click.secho(\n \"Autocompletion for your shell ({}) is currently not \"\n \"supported.\",\n fg=\"red\",\n )\n ctx.exit(-1)\n\n with open(autocomplete) as fh:\n click.echo(fh.read())"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncall Python 3 raise from or emulate it for Python 2", "response": "def raisefrom(exc_type, message, exc):\n # type: (Any, str, BaseException) -> None\n \"\"\"Call Python 3 raise from or emulate it for Python 2\n\n Args:\n exc_type (Any): Type of Exception\n message (str): Error message to display\n exc (BaseException): original exception\n\n Returns:\n None\n\n \"\"\"\n if sys.version_info[:2] >= (3, 2):\n six.raise_from(exc_type(message), exc)\n else:\n six.reraise(exc_type, '%s - %s' % (message, exc), sys.exc_info()[2])"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef init_runner(self, parser, tracers, projinfo):\r\n ''' initial some instances for preparing to run test case\r\n @note: should not override\r\n @param parser: instance of TestCaseParser\r\n @param tracers: dict type for the instance of Tracer. Such as {\"\":tracer_obj} or {\"192.168.0.1:5555\":tracer_obj1, \"192.168.0.2:5555\":tracer_obj2} \r\n @param proj_info: dict type of test case. 
use like: self.proj_info[\"module\"], self.proj_info[\"name\"]\r\n yaml case like: \r\n - project:\r\n name: xxx\r\n module: xxxx\r\n dict case like:\r\n {\"project\": {\"name\": xxx, \"module\": xxxx}} \r\n \r\n '''\r\n self.parser = parser\r\n self.tracers = tracers\r\n self.proj_info = projinfo", "response": "initialise the runner object"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nruns case with multiple processes to support selenium grid - mode multiple web and appium grid - mode multiple devices.", "response": "def _run_grid_multiprocess(self, func, iterables):\r\n ''' running case with multiple processes to support selenium grid-mode(multiple web) and appium grid-mode(multiple devices). \r\n @param func: function object\r\n @param iterables: iterable objects\r\n '''\r\n multiprocessing.freeze_support()\r\n pool = multiprocessing.Pool() \r\n pool_tracers = pool.map(func, iterables)\r\n pool.close()\r\n pool.join()\r\n \r\n # The instance objects passed to pool.map are copied into the workers (their memory addresses change), so self.tracers is redefined here from the returned objects after the run finishes \r\n self.tracers = dict(zip(self._default_devices, pool_tracers))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrun case with multiple threads to support selenium grid - mode multiple web and appium grid - mode multiple devices.", "response": "def _run_grid_multithread(self, func, iterables):\r\n ''' running case with multiple threads to support selenium grid-mode(multiple web) and appium grid-mode(multiple devices). 
\r\n @param func: function object\r\n @param iterables: iterable objects\r\n '''\r\n \r\n f = lambda x: threading.Thread(target = func,args = (x,))\r\n # materialize the threads, start them all, then join them all;\r\n # joining inside the start loop would run the cases sequentially\r\n threads = [f(x) for x in iterables]\r\n for thread in threads:\r\n thread.setDaemon(True)\r\n thread.start()\r\n for thread in threads:\r\n thread.join()"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef init_project_env(subject='Automation', proj_path = None, sysencoding = \"utf-8\", debug = False):\r\n ''' Set the environment for pyrunner ''' \r\n \r\n# if sysencoding:\r\n# set_sys_encode(sysencoding)\r\n \r\n if not proj_path:\r\n try:\r\n executable_file_path = os.path.dirname(os.path.abspath(inspect.stack()[-1][1]))\r\n except:\r\n executable_file_path = os.path.dirname(sys.path[0])\r\n finally:\r\n proj_path = executable_file_path\r\n \r\n p = os.path.join(proj_path,subject)\r\n \r\n proj_conf = {\r\n \"sys_coding\" : sysencoding,\r\n \"debug\" : debug,\r\n \"module_name\" : os.path.splitext(os.path.basename(subject))[0],\r\n \"cfg_file\" : os.path.join(p,\"config.ini\"),\r\n \"path\" : {\"root\" : p,\r\n \"case\" : os.path.join(p,\"testcase\"),\r\n \"data\" : os.path.join(p,\"data\"),\r\n \"buffer\" : os.path.join(p,\"buffer\"),\r\n \"resource\" : os.path.join(p,\"resource\"),\r\n \"tools\" : os.path.join(p,\"tools\"),\r\n \"rst\" : os.path.join(p,\"result\"),\r\n \"rst_log\" : os.path.join(p,\"result\",\"testcase\"),\r\n \"rst_shot\" : os.path.join(p,\"result\",\"screenshots\"),\r\n },\r\n }\r\n \r\n [FileSystemUtils.mkdirs(v) for v in proj_conf[\"path\"].values()] \r\n sys.path.append(p) if os.path.isdir(p) else \"\"\r\n return proj_conf", "response": "Initialize the environment for pyrunner"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds line numbers to the lines of a string and returns the result.", "response": "def seqfy(strs):\r\n ''' \u5e8f\u5217\u5316 
\u5b57\u7b26\u4e32--->\u5b9e\u9645\u6548\u679c\u662f\uff0c\u4e3a\u5b57\u7b26\u4e32\uff0c\u6dfb\u52a0\u884c\u53f7\uff0c\u8fd4\u56de\u5b57\u7b26\u4e32\r\n Sampe usage:\r\n strs = [\"\", None, u\"First-line\\nSecond-line\\nThird-line\", u\"\u6ca1\u6709\u6362\u884c\u7b26\"]\r\n for s in strs:\r\n print \"---\"\r\n result = seqfy(s)\r\n print result\r\n print unseqfy(result)\r\n '''\r\n \r\n if not strs:\r\n return\r\n \r\n result = \"\"\r\n seq = 1\r\n ss = strs.split(\"\\n\")\r\n for i in ss:\r\n if i:\r\n result = \"\".join([result, str(seq), \".\", i, \"\\n\"])\r\n seq = seq + 1 \r\n return result"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfinding all the stepfy strings in a string.", "response": "def stepfy(strs):\r\n ''' \u6b65\u9aa4\u5316 \u5b57\u7b26\u4e32 --->\u5b9e\u9645\u6548\u679c\u662f, \u4f9d\u636e \u5e8f\u5217\u5316\u7684\u5b57\u7b26\u4e32\uff0c\u8f6c\u6362\u4e3a Step_%s_info \u7684\u5b57\u5178\uff0c \u8fd4\u56de\u5b57\u5178\r\n Sample usage:\r\n test_strs = [\r\n \"\",\r\n None,\r\n u\"First-line\\nSecond-line\\nThird-line\",\r\n u'1.First-line\\n2.Second-line\\n3.Third-line\\n',\r\n u'3.\u6ca1\u6709\u6362\u884c\u7b26',\r\n u'3.\u6709\u6362\u884c\u7b26\\n',\r\n \"asdfasdfsdf\", \r\n \"1.asdfasdfsdf\\n2.sodfi\",\r\n \"1.1.dfasdfahttp://192.168.1.1sdfsdf2.1.1.1.1\\n\",\r\n \"dfasdfahttp://192.168.1.1sdfsdf2.1.1.1.1\\n\",\r\n ]\r\n for i in test_strs:\r\n steps = stepfy(i)\r\n un = unstepfy(steps)\r\n print \"string: %r\" %i\r\n print \"stepfy: %s\" %steps\r\n print \"unstepfy: %r\\n\" %un\r\n '''\r\n \r\n result = {}\r\n prog_step = re.compile(\"^\\d+\\.\")\r\n \r\n if not strs:\r\n return result\r\n \r\n raws = strs.split(\"\\n\")\r\n for raw in raws:\r\n step_num = raws.index(raw) + 1\r\n raw = prog_step.sub(\"\",raw) \r\n if raw:\r\n result[\"Step_%s_info\" %step_num] = raw\r\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef 
map_function(func_str, fw_action_addtion=None,bw_action_addtion=None, alias_func=None):\r\n ''' Sample usage:\r\n print map_function('set',alias_func = \"ini_items\");# -> ini_items\r\n print map_function('set',fw_action_addtion=\"action_steps_\",bw_action_addtion=\"_for_upd\",alias_func = \"ini_items\"); # -> action_steps_ini_items_for_upd\r\n print map_function('set(a=1,b=2,c=Test())',\"action_steps_\",\"_for_upd\",\"ini_items\");# -> action_steps_ini_items_for_upd(a=1,b=2,c=Test())\r\n print map_function('set(\"login\",a=\"good\",b=Test())',\"action_steps_\",\"_for_upd\");# -> action_steps_set_for_upd(\"login\",a=\"good\",b=Test())\r\n '''\r\n \r\n split_action_value = re.compile(\"^(\\w+)(\\((.*)\\)$)?\")\r\n matched = split_action_value.match(func_str) \r\n \r\n if matched:\r\n action = matched.group(1).lower()\r\n value = matched.group(2)\r\n #params = matched.group(3)\r\n \r\n if alias_func:\r\n action = alias_func\r\n if fw_action_addtion:\r\n action = fw_action_addtion + action \r\n if bw_action_addtion:\r\n action = action + bw_action_addtion\r\n \r\n if value:\r\n return action+value\r\n else:\r\n return action", "response": "Builds a function-call string from a 'name(args)' expression, applying an optional alias, prefix and suffix to the function name."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef until_cmd(listcmd, end_expects=None, save2logfile=None, coding = encoding):\r\n ''' Execute a system command and wait for it to finish\r\n @param listcmd: the command to execute, in list format\r\n @param end_expects: when the command ends, regex-search this expected value in the last output line and set the result flag\r\n @param save2logfile: log file in which to save the execution output\r\n @param coding: encoding used to decode the output \r\n '''\r\n \r\n \r\n if end_expects and not isinstance(end_expects, 
p_compat.str):\r\n raise Exception(\"invalid unicode string: '%s'\" %end_expects)\r\n \r\n lines = [] \r\n subp = subprocess.Popen(listcmd,stdout=subprocess.PIPE,stderr=subprocess.STDOUT)\r\n while subp.poll()==None:\r\n next_line = subp.stdout.readline().decode(coding)\r\n if next_line:\r\n# print(next_line)\r\n lines.append(next_line)\r\n if end_expects and re.search(end_expects, next_line):\r\n result = True\r\n else:\r\n result = False \r\n subp.stdout.close()\r\n \r\n if subp.returncode:\r\n result = False\r\n lines.append(\"sub command error code: %s\" %subp.returncode)\r\n \r\n if save2logfile:\r\n with open(save2logfile, 'a') as f:\r\n f.writelines(lines)\r\n \r\n return result", "response": "Runs a system command with subprocess.Popen and waits for it to finish, checking the last output line against end_expects and optionally appending the output to a log file; returns a boolean success flag."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncalling the method until the return value is not False.", "response": "def until(method, timeout = 30, message=''):\r\n \"\"\"Calls the method until the return value is not False.\"\"\"\r\n end_time = time.time() + timeout\r\n while True:\r\n try:\r\n value = method()\r\n if value:\r\n return value\r\n except:\r\n pass \r\n time.sleep(1)\r\n if time.time() > end_time:\r\n break\r\n raise Exception(message)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nchecks testcase file format", "response": "def _check_format(file_path, content):\r\n \"\"\" check testcase format if valid\r\n \"\"\"\r\n if not content:\r\n # testcase file content is empty\r\n err_msg = u\"Testcase file content is empty: {}\".format(file_path)\r\n raise p_exception.FileFormatError(err_msg)\r\n\r\n elif not isinstance(content, (list, dict)):\r\n # testcase file content does not match testcase format\r\n err_msg = u\"Testcase file content format invalid: {}\".format(file_path)\r\n raise p_exception.FileFormatError(err_msg)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 
function that\nloads yaml file and check file content format", "response": "def _load_yaml_file(yaml_file):\r\n \"\"\" load yaml file and check file content format\r\n \"\"\"\r\n with io.open(yaml_file, 'r', encoding='utf-8') as stream:\r\n yaml_content = yaml.load(stream)\r\n FileUtils._check_format(yaml_file, yaml_content)\r\n return yaml_content"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _load_json_file(json_file):\r\n with io.open(json_file, encoding='utf-8') as data_file:\r\n try:\r\n json_content = json.load(data_file)\r\n except p_exception.JSONDecodeError:\r\n err_msg = u\"JSONDecodeError: JSON file format error: {}\".format(json_file)\r\n raise p_exception.FileFormatError(err_msg)\r\n\r\n FileUtils._check_format(json_file, json_content)\r\n return json_content", "response": "load json file and check file content format\r\n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nload csv file and check file content format", "response": "def _load_csv_file(csv_file):\r\n \"\"\" load csv file and check file content format\r\n @param\r\n csv_file: csv file path\r\n e.g. 
csv file content:\r\n username,password\r\n test1,111111\r\n test2,222222\r\n test3,333333\r\n @return\r\n list of parameter, each parameter is in dict format\r\n e.g.\r\n [\r\n {'username': 'test1', 'password': '111111'},\r\n {'username': 'test2', 'password': '222222'},\r\n {'username': 'test3', 'password': '333333'}\r\n ]\r\n \"\"\"\r\n csv_content_list = []\r\n\r\n with io.open(csv_file, encoding='utf-8') as csvfile:\r\n reader = csv.DictReader(csvfile)\r\n for row in reader:\r\n csv_content_list.append(row)\r\n\r\n return csv_content_list"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nforce delete a file", "response": "def force_delete_file(file_path):\r\n ''' force delete a file '''\r\n if os.path.isfile(file_path):\r\n try:\r\n os.remove(file_path)\r\n return file_path\r\n except: \r\n return FileSystemUtils.add_unique_postfix(file_path)\r\n else:\r\n return file_path"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a zip file with all the files in source_dir.", "response": "def mkzip(source_dir, output_filename):\r\n '''Usage:\r\n p = r'D:\\auto\\env\\ttest\\ins\\build\\lib\\rock4\\softtest\\support'\r\n mkzip(os.path.join(p, \"appiumroot\"),os.path.join(p, \"appiumroot.zip\"))\r\n unzip(os.path.join(p, \"appiumroot.zip\"),os.path.join(p, \"appiumroot2\")) \r\n '''\r\n zipf = zipfile.ZipFile(output_filename, 'w', zipfile.zlib.DEFLATED)\r\n pre_len = len(os.path.dirname(source_dir))\r\n for parent, dirnames, filenames in os.walk(source_dir):\r\n for filename in filenames:\r\n pathfile = os.path.join(parent, filename)\r\n arcname = pathfile[pre_len:].strip(os.path.sep);#\u76f8\u5bf9\u8def\u5f84\r\n zipf.write(pathfile, arcname)\r\n zipf.close()"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nimport module from python file path and return imported module", "response": "def get_imported_module_from_file(file_path):\r\n \"\"\" import module from 
python file path and return imported module\r\n \"\"\"\r\n if p_compat.is_py3:\r\n imported_module = importlib.machinery.SourceFileLoader('module_name', file_path).load_module()\r\n elif p_compat.is_py2:\r\n imported_module = imp.load_source('module_name', file_path)\r\n else:\r\n raise RuntimeError(\"Neither Python 3 nor Python 2.\")\r\n \r\n return imported_module"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfilters functions or variables from import module", "response": "def filter_module(module, filter_type):\r\n \"\"\" filter functions or variables from import module\r\n @params\r\n module: imported module\r\n filter_type: \"function\" or \"variable\"\r\n \"\"\"\r\n filter_type = ModuleUtils.is_function if filter_type == \"function\" else ModuleUtils.is_variable\r\n module_functions_dict = dict(filter(filter_type, vars(module).items()))\r\n return module_functions_dict"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsearch expected function or variable recursive upward", "response": "def search_conf_item(start_path, item_type, item_name):\r\n \"\"\" search expected function or variable recursive upward\r\n @param\r\n start_path: search start path\r\n item_type: \"function\" or \"variable\"\r\n item_name: function name or variable name\r\n e.g.\r\n search_conf_item('C:/Users/RockFeng/Desktop/s/preference.py','function','test_func')\r\n \"\"\"\r\n dir_path = os.path.dirname(os.path.abspath(start_path))\r\n target_file = os.path.join(dir_path, \"preference.py\")\r\n \r\n if os.path.isfile(target_file):\r\n imported_module = ModuleUtils.get_imported_module_from_file(target_file)\r\n items_dict = ModuleUtils.filter_module(imported_module, item_type)\r\n if item_name in items_dict:\r\n return items_dict[item_name]\r\n else:\r\n return ModuleUtils.search_conf_item(dir_path, item_type, item_name)\r\n \r\n if dir_path == start_path:\r\n # system root path\r\n err_msg = \"'{}' not found in 
recursive upward path!\".format(item_name)\r\n if item_type == \"function\":\r\n raise p_exception.FunctionNotFound(err_msg)\r\n else:\r\n raise p_exception.VariableNotFound(err_msg)\r\n \r\n return ModuleUtils.search_conf_item(dir_path, item_type, item_name)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef find_data_files(source,target,patterns,isiter=False):\r\n if glob.has_magic(source) or glob.has_magic(target):\r\n raise ValueError(\"Magic not allowed in src, target\")\r\n ret = {}\r\n for pattern in patterns:\r\n pattern = os.path.join(source,pattern)\r\n for filename in glob.glob(pattern):\r\n if os.path.isfile(filename):\r\n targetpath = os.path.join(target,os.path.relpath(filename,source))\r\n path = os.path.dirname(targetpath)\r\n ret.setdefault(path,[]).append(filename)\r\n elif isiter and os.path.isdir(filename):\r\n source2 = os.path.join(source,filename)\r\n targetpath2 = \"%s/%s\" %(target,os.path.basename(filename))\r\n # iter_target = os.path.dirname(targetpath2)\r\n ret.update(SetupUtils.find_data_files(source2,targetpath2,patterns,isiter))\r\n \r\n return sorted(ret.items())", "response": "Locates the specified data - files and returns the matches ;"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert mapping in list to ordered dict", "response": "def convert_to_order_dict(map_list):\r\n \"\"\" convert mapping in list to ordered dict\r\n @param (list) map_list\r\n [\r\n {\"a\": 1},\r\n {\"b\": 2}\r\n ]\r\n @return (OrderDict)\r\n OrderDict({\r\n \"a\": 1,\r\n \"b\": 2\r\n })\r\n \"\"\"\r\n ordered_dict = OrderedDict()\r\n for map_dict in map_list:\r\n ordered_dict.update(map_dict)\r\n \r\n return ordered_dict"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_value_from_cfg(cfg_file):\r\n ''' initial the configuration with file that you specify \r\n Sample usage: \r\n config = get_value_from_cfg() \r\n 
return:\r\n return a dict -->config[section][option] such as config[\"twsm\"][\"dut_ip\"] \r\n ''' \r\n \r\n if not os.path.isfile(cfg_file):\r\n return\r\n \r\n cfg = {} \r\n config = ConfigParser.RawConfigParser()\r\n \r\n try:\r\n config.read(cfg_file)\r\n except Exception as e:\r\n # raise Exception(\"\\n\\tcommon exception 1.2: Not a well format configuration file. error: '%s'\" %(e))\r\n return \r\n for section in config.sections():\r\n cfg[section] = {}\r\n for option in config.options(section):\r\n cfg[section][option]=config.get(section,option)\r\n return cfg", "response": "initial the configuration with file that you specify\r\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_exception_error():\r\n ''' Get the exception info\r\n Sample usage:\r\n try:\r\n raise Exception(\"asdfsdfsdf\")\r\n except:\r\n print common.get_exception_error()\r\n Return:\r\n return the exception infomation.\r\n '''\r\n error_message = \"\"\r\n for i in range(len(inspect.trace())):\r\n error_line = u\"\"\"\r\n File: %s - [%s]\r\n Function: %s\r\n Statement: %s\r\n -\"\"\" % (inspect.trace()[i][1], inspect.trace()[i][2], inspect.trace()[i][3], inspect.trace()[i][4])\r\n \r\n error_message = \"%s%s\" % (error_message, error_line) \r\n \r\n error_message = u\"\"\"Error!\\n%s\\n\\t%s\\n\\t%s\\n-------------------------------------------------------------------------------------------\\n\\n\"\"\" % (error_message,sys.exc_info()[0], sys.exc_info()[1])\r\n \r\n return error_message", "response": "Get the exception info"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nechoes the amount of transferred and toBeTransferred", "response": "def echo(transferred, toBeTransferred, suffix=''):\r\n ''' usage:\r\n for i in range(101):\r\n ProgressBarUtils.echo(i,100)\r\n '''\r\n bar_len = 60 \r\n rate = transferred/float(toBeTransferred)\r\n \r\n filled_len = int(round(bar_len * rate))\r\n _percents = 
\"%s%s\" %(round(100.0 * rate, 1), \"%\")\r\n \r\n end_str = \"\\r\"\r\n _bar = '=' * filled_len + '-' * (bar_len - filled_len)\r\n print(\"[%s] %s ...%s%s\" %(_bar, _percents, suffix, end_str))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef echo_size(self, transferred=1, status=None):\r\n '''Sample usage:\r\n \r\n f=lambda x,y:x+y\r\n ldata = range(10)\r\n toBeTransferred = reduce(f,range(10))\r\n \r\n progress = ProgressBarUtils(\"refresh\", toBeTransferred=toBeTransferred, unit=\"KB\", chunk_size=1.0, run_status=\"\u6b63\u5728\u4e0b\u8f7d\", fin_status=\"\u4e0b\u8f7d\u5b8c\u6210\")\r\n import time\r\n for i in ldata:\r\n time.sleep(0.2)\r\n progress.echo_size(transferred=i)\r\n '''\r\n self.transferred += transferred\r\n # if status is not None:\r\n self.status = status or self.status\r\n end_str = \"\\r\"\r\n if self.transferred == self.toBeTransferred:\r\n end_str = '\\n'\r\n self.status = status or self.fin_status\r\n \r\n print(self.__get_info() + end_str)", "response": "Print the size of the current object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef echo_percent(self,transferred=1, status=None):\r\n '''Sample usage:\r\n f=lambda x,y:x+y\r\n ldata = range(10)\r\n toBeTransferred = reduce(f,range(10))\r\n \r\n import time\r\n progress = ProgressBarUtils(\"viewbar\", toBeTransferred=toBeTransferred, run_status=\"\u6b63\u5728\u4e0b\u8f7d\", fin_status=\"\u4e0b\u8f7d\u5b8c\u6210\") \r\n for i in ldata: \r\n time.sleep(0.1) \r\n progress.echo_percent(i)\r\n '''\r\n self.transferred += transferred\r\n self.status = status or self.status\r\n end_str = \"\\r\"\r\n if self.transferred == self.toBeTransferred:\r\n end_str = '\\n'\r\n self.status = status or self.fin_status\r\n print(self.__get_bar() + end_str)", "response": "Print a progress bar for each of the available items."} {"SOURCE": "codesearchnet", "instruction": "Can you implement 
a function in Python 3 that\nstarts a service n times", "response": "def start_service(addr, n):\n \"\"\" Start a service \"\"\"\n s = Service(addr)\n s.register('add', lambda x, y: x + y)\n\n started = time.time()\n for _ in range(n):\n s.process()\n duration = time.time() - started\n\n time.sleep(0.1)\n print('Service stats:')\n util.print_stats(n, duration)\n return"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nstarting a service with the given address n times.", "response": "def start_service(addr, n):\n \"\"\" Start a service \"\"\"\n\n s = Subscriber(addr)\n s.socket.set_string_option(nanomsg.SUB, nanomsg.SUB_SUBSCRIBE, 'test')\n\n started = time.time()\n for _ in range(n):\n msg = s.socket.recv()\n s.socket.close()\n duration = time.time() - started\n\n print('Raw SUB service stats:')\n util.print_stats(n, duration)\n return"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nlist Harvest projects optionally filtered with a regexp.", "response": "def projects(lancet, query):\n \"\"\"List Harvest projects, optionally filtered with a regexp.\"\"\"\n projects = lancet.timer.projects()\n\n if query:\n regexp = re.compile(query, flags=re.IGNORECASE)\n\n def match(project):\n match = regexp.search(project['name'])\n if match is None:\n return False\n project['match'] = match\n return True\n projects = (p for p in projects if match(p))\n\n for project in sorted(projects, key=lambda p: p['name'].lower()):\n name = project['name']\n\n if 'match' in project:\n m = project['match']\n s, e = m.start(), m.end()\n match = click.style(name[s:e], fg='green')\n name = name[:s] + match + name[e:]\n\n click.echo('{:>9d} {} {}'.format(\n project['id'], click.style('\u2023', fg='yellow'), name))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nlisting Harvest tasks for the given project ID.", "response": "def tasks(lancet, project_id):\n \"\"\"List Harvest tasks for the given 
project ID.\"\"\"\n for task in lancet.timer.tasks(project_id):\n click.echo('{:>9d} {} {}'.format(\n task['id'], click.style('\u2023', fg='yellow'), task['name']))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_long_description():\n here = os.path.abspath(os.path.dirname(__file__))\n\n with copen(os.path.join(here, 'DESCRIPTION.rst'), encoding='utf-8') as description:\n return description.read()", "response": "Retrieve the long description from DESCRIPTION. rst"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef flatten(nested_list: list) -> list:\n return list(sorted(filter(lambda y: y is not None,\n list(map(lambda x: (nested_list.extend(x) # noqa: T484\n if isinstance(x, list) else x),\n nested_list)))))", "response": "Flattens a list ignore all the lambdas."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_py_files(dir_name: str) -> list:\n return flatten([\n x for x in\n [[\"{0}/{1}\".format(path, f) for f in files if f.endswith(\".py\")]\n for path, _, files in os.walk(dir_name)\n if not path.startswith(\"./build\")] if x\n ])", "response": "Get all. 
py files."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nraises SystemExit with correct status code and output logs.", "response": "def exit(self) -> None:\n \"\"\"Raise SystemExit with correct status code and output logs.\"\"\"\n total = sum(len(logs) for logs in self.logs.values())\n if self.json:\n self.logs['total'] = total\n print(json.dumps(self.logs, indent=self.indent))\n\n else:\n for name, log in self.logs.items():\n if not log or self.parser[name].as_bool(\"quiet\"):\n continue\n\n print(\"[[{0}]]\".format(name))\n getattr(snekchek.format, name + \"_format\")(log)\n print(\"\\n\")\n\n print(\"-\" * 30)\n print(\"Total:\", total)\n\n sys.exit(self.status_code)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nrun a checker class.", "response": "def run_linter(self, linter) -> None:\n \"\"\"Run a checker class\"\"\"\n self.current = linter.name\n\n if (linter.name not in self.parser[\"all\"].as_list(\"linters\")\n or linter.base_pyversion > sys.version_info): # noqa: W503\n return\n\n if any(x not in self.installed for x in linter.requires_install):\n raise ModuleNotInstalled(linter.requires_install)\n\n linter.add_output_hook(self.out_func)\n linter.set_config(self.fn, self.parser[linter.name])\n linter.run(self.files)\n self.status_code = self.status_code or linter.status_code"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntries to read a rcfile from a list of paths", "response": "def read_rcfile():\n \"\"\"\n Try to read a rcfile from a list of paths\n \"\"\"\n files = [\n '{}/.millipederc'.format(os.environ.get('HOME')),\n '/usr/local/etc/millipederc',\n '/etc/millipederc',\n ]\n for filepath in files:\n if os.path.isfile(filepath):\n with open(filepath) as rcfile:\n return parse_rcfile(rcfile)\n return {}"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nparsing the rcfile into a dict of key - value pairs.", 
"response": "def parse_rcfile(rcfile):\n \"\"\"\n Parses rcfile\n\n Invalid lines are ignored with a warning\n \"\"\"\n def parse_bool(value):\n \"\"\"Parse boolean string\"\"\"\n value = value.lower()\n if value in ['yes', 'true']:\n return True\n elif value in ['no', 'false']:\n return False\n else:\n raise ValueError('''Can't parse {}'''.format(value))\n\n valid_keys = {\n 'size': int,\n 'comment': str,\n 'template': str,\n 'reverse': parse_bool,\n 'opposite': parse_bool,\n 'position': int,\n }\n params = {}\n\n for linenum, line in enumerate(rcfile):\n line = line.strip()\n if not line or line[0] == '#':\n continue\n pos = line.find(' ')\n key = line[:pos]\n value = line[pos:].strip()\n if key in valid_keys.keys():\n try:\n params[key] = valid_keys[key](value)\n except ValueError:\n print('Ignoring line {} from rcfile'.format(linenum + 1),\n file=sys.stderr)\n return params"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef compute_settings(args, rc_settings):\n settings = {}\n for key, value in args.items():\n if key in ['reverse', 'opposite']:\n settings[key] = value ^ rc_settings.get(key, False)\n else:\n settings[key] = value or rc_settings.get(key)\n\n if not settings['size']:\n settings['size'] = DEFAULT_SIZE\n return settings", "response": "Compute the settings from the arguments and rc_settings."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\noutput the millipede of the given size.", "response": "def millipede(size, comment=None, reverse=False, template='default', position=0, opposite=False):\n \"\"\"\n Output the millipede\n \"\"\"\n padding_offsets = [2, 1, 0, 1, 2, 3, 4, 4, 3]\n padding_suite_length = len(padding_offsets)\n head_padding_extra_offset = 2\n\n if opposite:\n padding_offsets.reverse()\n\n position = position or 0\n\n templates = {\n 'frozen': {'bodyr': '\u2554\u2550(\u2744\u2744\u2744)\u2550\u2557', 'body': 
'\u255a\u2550(\u2744\u2744\u2744)\u2550\u255d',\n 'headr': '\u2554\u2299 \u2299\u2557', 'head': '\u255a\u2299 \u2299\u255d'},\n 'love': {'bodyr': '\u2554\u2550(\u2665\u2665\u2665)\u2550\u2557', 'body': '\u255a\u2550(\u2665\u2665\u2665)\u2550\u255d',\n 'headr': '\u2554\u2299 \u2299\u2557', 'head': '\u255a\u2299 \u2299\u255d'},\n 'corporate': {'bodyr': '\u2554\u2550(\u00a9\u00a9\u00a9)\u2550\u2557', 'body': '\u255a\u2550(\u00a9\u00a9\u00a9)\u2550\u255d',\n 'headr': '\u2554\u2299 \u2299\u2557', 'head': '\u255a\u2299 \u2299\u255d'},\n 'musician': {'bodyr': '\u2554\u2550(\u266b\u2669\u266c)\u2550\u2557', 'body': '\u255a\u2550(\u266b\u2669\u266c)\u2550\u255d',\n 'headr': '\u2554\u2299 \u2299\u2557', 'head': '\u255a\u2299 \u2299\u255d'},\n 'bocal': {'bodyr': '\u2554\u2550(\ud83d\udc1f\ud83d\udc1f\ud83d\udc1f)\u2550\u2557', 'body': '\u255a\u2550(\ud83d\udc1f\ud83d\udc1f\ud83d\udc1f)\u2550\u255d',\n 'headr': '\u2554\u2299 \u2299\u2557', 'head': '\u255a\u2299 \u2299\u255d'},\n 'ascii': {'bodyr': '|=(###)=|', 'body': '|=(###)=|',\n 'headr': '/\u2299 \u2299\\\\', 'head': '\\\\\u2299 \u2299/'},\n 'default': {'bodyr': '\u2554\u2550(\u2588\u2588\u2588)\u2550\u2557', 'body': '\u255a\u2550(\u2588\u2588\u2588)\u2550\u255d',\n 'headr': '\u2554\u2299 \u2299\u2557', 'head': '\u255a\u2299 \u2299\u255d'},\n 'inception': {'bodyr': '\u2554\u2550(\ud83d\udc1b\ud83d\udc1b\ud83d\udc1b)\u2550\u2557', 'body': '\u255a\u2550(\ud83d\udc1b\ud83d\udc1b\ud83d\udc1b)\u2550\u255d',\n 'headr': '\u2554\u2299 \u2299\u2557', 'head': '\u255a\u2299 \u2299\u255d'},\n 'humancentipede': {'bodyr': '\u2554\u2550(\ud83d\ude37\ud83d\ude37\ud83d\ude37)\u2550\u2557', 'body': '\u255a\u2550(\ud83d\ude37\ud83d\ude37\ud83d\ude37)\u2550\u255d',\n 'headr': '\u2554\u2299 \u2299\u2557', 'head': '\u255a\u2299 \u2299\u255d'},\n 'heart': {'bodyr': '\u2554\u2550(\u2764\ufe0f\u2764\ufe0f\u2764\ufe0f)\u2550\u2557', 'body': '\u255a\u2550(\u2764\ufe0f\u2764\ufe0f\u2764\ufe0f)\u2550\u255d',\n 'headr': '\u2554\u2299 \u2299\u2557', 
'head': '\u255a\u2299 \u2299\u255d'},\n }\n\n template = templates.get(template, templates['default'])\n\n head = \"{}{}\\n\".format(\n \" \" * (padding_offsets[position % padding_suite_length] + head_padding_extra_offset),\n template['headr'] if reverse else template['head']\n )\n\n body_lines = [\n \"{}{}\\n\".format(\n \" \" * padding_offsets[(x + position) % padding_suite_length],\n template['bodyr'] if reverse else template['body']\n )\n for x in range(size)\n ]\n\n if reverse:\n body_lines.reverse()\n\n body = \"\".join(body_lines)\n\n output = \"\"\n if reverse:\n output += body + head\n if comment:\n output += \"\\n\" + comment + \"\\n\"\n else:\n if comment:\n output += comment + \"\\n\\n\"\n output += head + body\n\n return output"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef api_post(message, url, name, http_data=None, auth=None):\n try:\n import requests\n except ImportError:\n print('requests is required to do api post.', file=sys.stderr)\n sys.exit(1)\n\n data = {name : message}\n if http_data:\n for var in http_data:\n key, value = var.split('=')\n data[key] = value\n\n response = requests.post(\n url,\n data=data,\n auth=auth\n )\n\n if response.status_code != 200:\n raise RuntimeError('Unable to post data')", "response": "Send message as name to url."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread package long description from README file", "response": "def read_long_description(readme_file):\n \"\"\" Read package long description from README file \"\"\"\n try:\n import pypandoc\n except (ImportError, OSError) as exception:\n print('No pypandoc or pandoc: %s' % (exception,))\n if sys.version_info.major == 3:\n handle = open(readme_file, encoding='utf-8')\n else:\n handle = open(readme_file)\n long_description = handle.read()\n handle.close()\n return long_description\n else:\n return pypandoc.convert(readme_file, 'rst')"} 
{"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the content of the specified file as a string.", "response": "def content_from_path(path, encoding='utf-8'):\n \"\"\"Return the content of the specified file as a string.\n\n This function also supports loading resources from packages.\n \"\"\"\n if not os.path.isabs(path) and ':' in path:\n package, path = path.split(':', 1)\n content = resource_string(package, path)\n else:\n path = os.path.expanduser(path)\n with open(path, 'rb') as fh:\n content = fh.read()\n\n return content.decode(encoding)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nexecutes the method with args", "response": "def execute(self, method, args, ref):\n \"\"\" Execute the method with args \"\"\"\n\n response = {'result': None, 'error': None, 'ref': ref}\n fun = self.methods.get(method)\n if not fun:\n response['error'] = 'Method `{}` not found'.format(method)\n else:\n try:\n response['result'] = fun(*args)\n except Exception as exception:\n logging.error(exception, exc_info=1)\n response['error'] = str(exception)\n return response"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nregistering a function on this service", "response": "def register(self, name, fun, description=None):\n \"\"\" Register function on this service \"\"\"\n self.methods[name] = fun\n self.descriptions[name] = description"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing a client request.", "response": "def parse(cls, payload):\n \"\"\" Parse client request \"\"\"\n try:\n method, args, ref = payload\n except Exception as exception:\n raise RequestParseError(exception)\n else:\n return method, args, ref"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef process(self):\n\n response = None\n\n try:\n payload = self.receive()\n method, args, 
ref = self.parse(payload)\n response = self.execute(method, args, ref)\n\n except AuthenticateError as exception:\n logging.error(\n 'Service error while authenticating request: {}'\n .format(exception), exc_info=1)\n\n except AuthenticatorInvalidSignature as exception:\n logging.error(\n 'Service error while authenticating request: {}'\n .format(exception), exc_info=1)\n\n except DecodeError as exception:\n logging.error(\n 'Service error while decoding request: {}'\n .format(exception), exc_info=1)\n\n except RequestParseError as exception:\n logging.error(\n 'Service error while parsing request: {}'\n .format(exception), exc_info=1)\n\n else:\n logging.debug('Service received payload: {}'.format(payload))\n\n if response:\n self.send(response)\n else:\n self.send('')", "response": "Receive data from socket and process request"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nbuilding the payload to be sent to a Responder", "response": "def build_payload(cls, method, args):\n \"\"\" Build the payload to be sent to a `Responder` \"\"\"\n ref = str(uuid.uuid4())\n return (method, args, ref)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmakes a call to a Responder and return the result", "response": "def call(self, method, *args):\n \"\"\" Make a call to a `Responder` and return the result \"\"\"\n\n payload = self.build_payload(method, args)\n logging.debug('* Client will send payload: {}'.format(payload))\n self.send(payload)\n\n res = self.receive()\n assert payload[2] == res['ref']\n return res['result'], res['error']"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nread the json file located at filepath and return a DotDict containing the configuration items.", "response": "def load(filepath=None, filecontent=None):\n \"\"\" Read the json file located at `filepath`\n\n If `filecontent` is specified, its content will be json decoded\n and loaded 
instead.\n\n Usage:\n config.load(filepath=None, filecontent=None):\n Provide either a filepath or a json string\n \"\"\"\n conf = DotDict()\n\n assert filepath or filecontent\n if not filecontent:\n with io.FileIO(filepath) as handle:\n filecontent = handle.read().decode('utf-8')\n configs = json.loads(filecontent)\n conf.update(configs.items())\n return conf"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrun the checks and exits.", "response": "def run_main(args: argparse.Namespace, do_exit=True) -> None:\n \"\"\"Runs the checks and exits.\n\n To extend this tool, use this function and set do_exit to False\n to get returned the status code.\n \"\"\"\n if args.init:\n generate()\n return None # exit after generate instead of starting to lint\n\n handler = CheckHandler(\n file=args.config_file, out_json=args.json, files=args.files)\n\n for style in get_stylers():\n handler.run_linter(style())\n\n for linter in get_linters():\n handler.run_linter(linter())\n\n for security in get_security():\n handler.run_linter(security())\n\n for tool in get_tools():\n tool = tool()\n\n # Only run pypi if everything else passed\n if tool.name == \"pypi\" and handler.status_code != 0:\n continue\n\n handler.run_linter(tool)\n\n if do_exit:\n handler.exit()\n return handler.status_code"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef connect(self):\n # type: () -> None\n \"\"\"\n Connect to server\n\n Returns:\n None\n\n \"\"\"\n if self.connection_type.lower() == 'ssl':\n self.server = smtplib.SMTP_SSL(host=self.host, port=self.port, local_hostname=self.local_hostname,\n timeout=self.timeout, source_address=self.source_address)\n elif self.connection_type.lower() == 'lmtp':\n self.server = smtplib.LMTP(host=self.host, port=self.port, local_hostname=self.local_hostname,\n source_address=self.source_address)\n else:\n self.server = smtplib.SMTP(host=self.host, port=self.port, 
local_hostname=self.local_hostname,\n timeout=self.timeout, source_address=self.source_address)\n self.server.login(self.username, self.password)", "response": "Connects to the server and logs in to the server."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef send(self, recipients, subject, text_body, html_body=None, sender=None, **kwargs):\n # type: (List[str], str, str, Optional[str], Optional[str], Any) -> None\n \"\"\"\n Send email\n\n Args:\n recipients (List[str]): Email recipient\n subject (str): Email subject\n text_body (str): Plain text email body\n html_body (Optional[str]): HTML email body\n sender (Optional[str]): Email sender. Defaults to global sender.\n **kwargs: See below\n mail_options (list): Mail options (see smtplib documentation)\n rcpt_options (list): Recipient options (see smtplib documentation)\n\n Returns:\n None\n\n \"\"\"\n if sender is None:\n sender = self.sender\n v = validate_email(sender, check_deliverability=False) # validate and get info\n sender = v['email'] # replace with normalized form\n normalised_recipients = list()\n for recipient in recipients:\n v = validate_email(recipient, check_deliverability=True) # validate and get info\n normalised_recipients.append(v['email']) # replace with normalized form\n\n if html_body is not None:\n msg = MIMEMultipart('alternative')\n part1 = MIMEText(text_body, 'plain')\n part2 = MIMEText(html_body, 'html')\n msg.attach(part1)\n msg.attach(part2)\n else:\n msg = MIMEText(text_body)\n msg['Subject'] = subject\n msg['From'] = sender\n msg['To'] = ', '.join(normalised_recipients)\n\n # Perform operations via server\n self.connect()\n self.server.sendmail(sender, normalised_recipients, msg.as_string(), **kwargs)\n self.close()", "response": "Sends an email to the specified recipients."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_session(db_url):\n # type: (str) -> Session\n \"\"\"Gets 
SQLAlchemy session given url. Your tables must inherit\n from Base in hdx.utilities.database.\n\n Args:\n db_url (str): SQLAlchemy url\n\n Returns:\n sqlalchemy.orm.session.Session: SQLAlchemy session\n \"\"\"\n engine = create_engine(db_url, poolclass=NullPool, echo=False)\n Session = sessionmaker(bind=engine)\n Base.metadata.create_all(engine)\n return Session()", "response": "Gets SQLAlchemy session given url."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting PostgreSQL database connection parameters from SQLAlchemy url", "response": "def get_params_from_sqlalchemy_url(db_url):\n # type: (str) -> Dict[str,Any]\n \"\"\"Gets PostgreSQL database connection parameters from SQLAlchemy url\n\n Args:\n db_url (str): SQLAlchemy url\n\n Returns:\n Dict[str,Any]: Dictionary of database connection parameters\n \"\"\"\n result = urlsplit(db_url)\n return {'database': result.path[1:], 'host': result.hostname, 'port': result.port,\n 'username': result.username, 'password': result.password, 'driver': result.scheme}"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting SQLAlchemy url from database connection parameters.", "response": "def get_sqlalchemy_url(database=None, host=None, port=None, username=None, password=None, driver='postgres'):\n # type: (Optional[str], Optional[str], Union[int, str, None], Optional[str], Optional[str], str) -> str\n \"\"\"Gets SQLAlchemy url from database connection parameters\n\n Args:\n database (Optional[str]): Database name\n host (Optional[str]): Host where database is located\n port (Union[int, str, None]): Database port\n username (Optional[str]): Username to log into database\n password (Optional[str]): Password to log into database\n driver (str): Database driver. 
Defaults to 'postgres'.\n\n Returns:\n db_url (str): SQLAlchemy url\n \"\"\"\n strings = ['%s://' % driver]\n if username:\n strings.append(username)\n if password:\n strings.append(':%s@' % password)\n else:\n strings.append('@')\n if host:\n strings.append(host)\n if port is not None:\n strings.append(':%d' % int(port))\n if database:\n strings.append('/%s' % database)\n return ''.join(strings)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nwaiting for PostgreSQL database to be up", "response": "def wait_for_postgres(database, host, port, username, password):\n # type: (Optional[str], Optional[str], Union[int, str, None], Optional[str], Optional[str]) -> None\n \"\"\"Waits for PostgreSQL database to be up\n\n Args:\n database (Optional[str]): Database name\n host (Optional[str]): Host where database is located\n port (Union[int, str, None]): Database port\n username (Optional[str]): Username to log into database\n password (Optional[str]): Password to log into database\n\n Returns:\n None\n \"\"\"\n connecting_string = 'Checking for PostgreSQL...'\n if port is not None:\n port = int(port)\n while True:\n try:\n logger.info(connecting_string)\n connection = psycopg2.connect(\n database=database,\n host=host,\n port=port,\n user=username,\n password=password,\n connect_timeout=3\n )\n connection.close()\n logger.info('PostgreSQL is running!')\n break\n except psycopg2.OperationalError:\n time.sleep(1)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn : returns a tuple num_of_not_None_caches list of unset caches endpoint", "response": "def get_unset_cache(self):\n \"\"\"return : returns a tuple (num_of_not_None_caches, [list of unset caches endpoint])\n \"\"\"\n caches = []\n if self._cached_api_global_response is None:\n caches.append('global')\n if self._cached_api_ticker_response is None:\n caches.append('ticker')\n return (len(caches), caches)"} {"SOURCE": "codesearchnet", 
"instruction": "Create a Python 3 function for\ngetting filename from url and join to provided folder or temporary folder if no folder supplied.", "response": "def get_path_for_url(url, folder=None, filename=None, overwrite=False):\n # type: (str, Optional[str], Optional[str], bool) -> str\n \"\"\"Get filename from url and join to provided folder or temporary folder if no folder supplied, ensuring uniqueness\n\n Args:\n url (str): URL to download\n folder (Optional[str]): Folder to download it to. Defaults to None (temporary folder).\n filename (Optional[str]): Filename to use for downloaded file. Defaults to None (derive from the url).\n overwrite (bool): Whether to overwrite existing file. Defaults to False.\n\n Returns:\n str: Path of downloaded file\n\n \"\"\"\n if not filename:\n urlpath = urlsplit(url).path\n filename = basename(urlpath)\n filename, extension = splitext(filename)\n if not folder:\n folder = get_temp_dir()\n path = join(folder, '%s%s' % (filename, extension))\n if overwrite:\n try:\n remove(path)\n except OSError:\n pass\n else:\n count = 0\n while exists(path):\n count += 1\n path = join(folder, '%s%d%s' % (filename, count, extension))\n return path"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_full_url(self, url):\n # type: (str) -> str\n \"\"\"Get full url including any additional parameters\n\n Args:\n url (str): URL for which to get full url\n\n Returns:\n str: Full url including any additional parameters\n \"\"\"\n request = Request('GET', url)\n preparedrequest = self.session.prepare_request(request)\n return preparedrequest.url", "response": "Get full url including any additional parameters"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting full url for GET request including parameters", "response": "def get_url_for_get(url, parameters=None):\n # type: (str, Optional[Dict]) -> str\n \"\"\"Get full url for GET request including 
parameters\n\n Args:\n url (str): URL to download\n parameters (Optional[Dict]): Parameters to pass. Defaults to None.\n\n Returns:\n str: Full url\n\n \"\"\"\n spliturl = urlsplit(url)\n getparams = OrderedDict(parse_qsl(spliturl.query))\n if parameters is not None:\n getparams.update(parameters)\n spliturl = spliturl._replace(query=urlencode(getparams))\n return urlunsplit(spliturl)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_url_params_for_post(url, parameters=None):\n # type: (str, Optional[Dict]) -> Tuple[str, Dict]\n \"\"\"Get full url for POST request and all parameters including any in the url\n\n Args:\n url (str): URL to download\n parameters (Optional[Dict]): Parameters to pass. Defaults to None.\n\n Returns:\n Tuple[str, Dict]: (Full url, parameters)\n\n \"\"\"\n spliturl = urlsplit(url)\n getparams = OrderedDict(parse_qsl(spliturl.query))\n if parameters is not None:\n getparams.update(parameters)\n spliturl = spliturl._replace(query='')\n full_url = urlunsplit(spliturl)\n return full_url, getparams", "response": "Get full url and all parameters including any in the url\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef hash_stream(self, url):\n # type: (str) -> str\n \"\"\"Stream file from url and hash it using MD5. Must call setup method first.\n\n Args:\n url (str): URL to download\n\n Returns:\n str: MD5 hash of file\n\n \"\"\"\n md5hash = hashlib.md5()\n try:\n for chunk in self.response.iter_content(chunk_size=10240):\n if chunk: # filter out keep-alive new chunks\n md5hash.update(chunk)\n return md5hash.hexdigest()\n except Exception as e:\n raisefrom(DownloadError, 'Download of %s failed in retrieval of stream!' 
% url, e)", "response": "Stream file from url and hash it using MD5."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef stream_file(self, url, folder=None, filename=None, overwrite=False):\n # type: (str, Optional[str], Optional[str], bool) -> str\n \"\"\"Stream file from url and store in provided folder or temporary folder if no folder supplied.\n Must call setup method first.\n\n Args:\n url (str): URL to download\n filename (Optional[str]): Filename to use for downloaded file. Defaults to None (derive from the url).\n folder (Optional[str]): Folder to download it to. Defaults to None (temporary folder).\n overwrite (bool): Whether to overwrite existing file. Defaults to False.\n\n Returns:\n str: Path of downloaded file\n\n \"\"\"\n path = self.get_path_for_url(url, folder, filename, overwrite)\n f = None\n try:\n f = open(path, 'wb')\n for chunk in self.response.iter_content(chunk_size=10240):\n if chunk: # filter out keep-alive new chunks\n f.write(chunk)\n f.flush()\n return f.name\n except Exception as e:\n raisefrom(DownloadError, 'Download of %s failed in retrieval of stream!' % url, e)\n finally:\n if f:\n f.close()", "response": "Stream file from url and store in provided folder or temporary folder if no folder supplied."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef download_file(self, url, folder=None, filename=None, overwrite=False,\n post=False, parameters=None, timeout=None):\n # type: (str, Optional[str], Optional[str], bool, bool, Optional[Dict], Optional[float]) -> str\n \"\"\"Download file from url and store in provided folder or temporary folder if no folder supplied\n\n Args:\n url (str): URL to download\n folder (Optional[str]): Folder to download it to. Defaults to None.\n filename (Optional[str]): Filename to use for downloaded file. 
Defaults to None (derive from the url).\n overwrite (bool): Whether to overwrite existing file. Defaults to False.\n post (bool): Whether to use POST instead of GET. Defaults to False.\n parameters (Optional[Dict]): Parameters to pass. Defaults to None.\n timeout (Optional[float]): Timeout for connecting to URL. Defaults to None (no timeout).\n\n Returns:\n str: Path of downloaded file\n\n \"\"\"\n self.setup(url, stream=True, post=post, parameters=parameters, timeout=timeout)\n return self.stream_file(url, folder, filename, overwrite)", "response": "Download file from url and store in provided folder or temporary folder if no folder supplied"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef download(self, url, post=False, parameters=None, timeout=None):\n # type: (str, bool, Optional[Dict], Optional[float]) -> requests.Response\n \"\"\"Download url\n\n Args:\n url (str): URL to download\n post (bool): Whether to use POST instead of GET. Defaults to False.\n parameters (Optional[Dict]): Parameters to pass. Defaults to None.\n timeout (Optional[float]): Timeout for connecting to URL. Defaults to None (no timeout).\n\n Returns:\n requests.Response: Response\n\n \"\"\"\n return self.setup(url, stream=False, post=post, parameters=parameters, timeout=timeout)", "response": "Download the object from the specified URL."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a Tabulator stream for the given url.", "response": "def get_tabular_stream(self, url, **kwargs):\n # type: (str, Any) -> tabulator.Stream\n \"\"\"Get Tabulator stream.\n\n Args:\n url (str): URL to download\n **kwargs:\n headers (Union[int, List[int], List[str]]): Number of row(s) containing headers or list of headers\n file_type (Optional[str]): Type of file. Defaults to inferring.\n delimiter (Optional[str]): Delimiter used for values in each row. 
Defaults to inferring.\n\n Returns:\n tabulator.Stream: Tabulator Stream object\n\n \"\"\"\n self.close_response()\n file_type = kwargs.get('file_type')\n if file_type is not None:\n kwargs['format'] = file_type\n del kwargs['file_type']\n try:\n self.response = tabulator.Stream(url, **kwargs)\n self.response.open()\n return self.response\n except TabulatorException as e:\n raisefrom(DownloadError, 'Getting tabular stream for %s failed!' % url, e)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_tabular_rows(self, url, dict_rows=False, **kwargs):\n # type: (str, bool, Any) -> Iterator[Dict]\n \"\"\"Get iterator for reading rows from tabular data. Each row is returned as a dictionary.\n\n Args:\n url (str): URL to download\n dict_rows (bool): Return dict (requires headers parameter) or list for each row. Defaults to False (list).\n **kwargs:\n headers (Union[int, List[int], List[str]]): Number of row(s) containing headers or list of headers\n file_type (Optional[str]): Type of file. Defaults to inferring.\n delimiter (Optional[str]): Delimiter used for values in each row. Defaults to inferring.\n\n Returns:\n Iterator[Union[List,Dict]]: Iterator where each row is returned as a list or dictionary.\n\n \"\"\"\n return self.get_tabular_stream(url, **kwargs).iter(keyed=dict_rows)", "response": "Get iterator for reading rows from tabular data."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndownloading 2 column csv from url and return a dictionary of keys and values", "response": "def download_tabular_key_value(self, url, **kwargs):\n # type: (str, Any) -> Dict\n \"\"\"Download 2 column csv from url and return a dictionary of keys (first column) and values (second column)\n\n Args:\n url (str): URL to download\n **kwargs:\n headers (Union[int, List[int], List[str]]): Number of row(s) containing headers or list of headers\n file_type (Optional[str]): Type of file. 
Defaults to inferring.\n delimiter (Optional[str]): Delimiter used for values in each row. Defaults to inferring.\n\n Returns:\n Dict: Dictionary keys (first column) and values (second column)\n\n \"\"\"\n output_dict = dict()\n for row in self.get_tabular_rows(url, **kwargs):\n if len(row) < 2:\n continue\n output_dict[row[0]] = row[1]\n return output_dict"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndownloading multicolumn csv from url and return dictionary where keys are first column and values are dictionaries with keys from column headers and values from columns beneath.", "response": "def download_tabular_rows_as_dicts(self, url, headers=1, keycolumn=1, **kwargs):\n # type: (str, Union[int, List[int], List[str]], int, Any) -> Dict[Dict]\n \"\"\"Download multicolumn csv from url and return dictionary where keys are first column and values are\n dictionaries with keys from column headers and values from columns beneath\n\n Args:\n url (str): URL to download\n headers (Union[int, List[int], List[str]]): Number of row(s) containing headers or list of headers. Defaults to 1.\n keycolumn (int): Number of column to be used for key. Defaults to 1.\n **kwargs:\n file_type (Optional[str]): Type of file. Defaults to inferring.\n delimiter (Optional[str]): Delimiter used for values in each row. 
Defaults to inferring.\n\n Returns:\n Dict[Dict]: Dictionary where keys are first column and values are dictionaries with keys from column\n headers and values from columns beneath\n\n \"\"\"\n kwargs['headers'] = headers\n stream = self.get_tabular_stream(url, **kwargs)\n output_dict = dict()\n headers = stream.headers\n key_header = headers[keycolumn - 1]\n for row in stream.iter(keyed=True):\n first_val = row[key_header]\n output_dict[first_val] = dict()\n for header in row:\n if header == key_header:\n continue\n else:\n output_dict[first_val][header] = row[header]\n return output_dict"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef init(ctx, force):\n if os.path.exists(PROJECT_CONFIG) and not force:\n click.secho(\n 'An existing configuration file was found at \"{}\".\\n'\n .format(PROJECT_CONFIG),\n fg='red', bold=True\n )\n click.secho(\n 'Please remove it before in order to run the setup wizard or use\\n'\n 'the --force flag to overwrite it.'\n )\n ctx.exit(1)\n\n project_key = click.prompt('Project key on the issue tracker')\n base_branch = click.prompt('Integration branch', default='master')\n\n virtualenvs = ('.venv', '.env', 'venv', 'env')\n for p in virtualenvs:\n if os.path.exists(os.path.join(p, 'bin', 'activate')):\n venv = p\n break\n else:\n venv = ''\n venv_path = click.prompt('Path to virtual environment', default=venv)\n\n project_id = click.prompt('Project ID on Harvest', type=int)\n task_id = click.prompt('Task id on Harvest', type=int)\n\n config = configparser.ConfigParser()\n\n config.add_section('lancet')\n config.set('lancet', 'virtualenv', venv_path)\n\n config.add_section('tracker')\n config.set('tracker', 'default_project', project_key)\n\n config.add_section('harvest')\n config.set('harvest', 'project_id', str(project_id))\n config.set('harvest', 'task_id', str(task_id))\n\n config.add_section('repository')\n config.set('repository', 'base_branch', base_branch)\n\n with 
open(PROJECT_CONFIG, 'w') as fh:\n config.write(fh)\n\n click.secho('\\nConfiguration correctly written to \"{}\".'\n .format(PROJECT_CONFIG), fg='green')", "response": "Wizard to create a project-level configuration file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef logout(lancet, service):\n if service:\n services = [service]\n else:\n services = ['tracker', 'harvest']\n\n for service in services:\n url = lancet.config.get(service, 'url')\n key = 'lancet+{}'.format(url)\n username = lancet.config.get(service, 'username')\n with taskstatus('Logging out from {}', url) as ts:\n if keyring.get_password(key, username):\n keyring.delete_password(key, username)\n ts.ok('Logged out from {}', url)\n else:\n ts.ok('Already logged out from {}', url)", "response": "Forget saved passwords for the web services."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nlisting all currently configured services.", "response": "def _services(lancet):\n \"\"\"List all currently configured services.\"\"\"\n def get_services(config):\n for s in config.sections():\n if config.has_option(s, 'url'):\n if config.has_option(s, 'username'):\n yield s\n\n for s in get_services(lancet.config):\n click.echo('{}[Logout from {}]'.format(s, lancet.config.get(s, 'url')))"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsending a request to the specified endpoint.", "response": "def send_request(self, endpoint='ticker', coin_name=None, **kwargs):\n \"\"\": param string 'ticker', it's 'ticker' if we want info about coins,\n 'global' for global market's info.\n : param string 'coin_name', specify the name of the coin, if None,\n we'll retrieve info about all available coins.\n \"\"\"\n\n built_url = self._make_url(endpoint, coin_name)\n payload = dict(**kwargs)\n\n self._process_request(endpoint, built_url, payload)"} {"SOURCE": "codesearchnet", "instruction": "Make 
a summary of the following Python 3 code\ndef get_response(self, data_type=None):\n if not data_type:\n return self.cache.get_response(r_type='ticker') or self.cache.get_response(r_type='global')\n elif data_type == 'ticker':\n return self.cache.get_response(r_type='ticker')\n return self.cache.get_response(r_type='global')", "response": "return json response from API converted into python list\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef iso_639_alpha3(code):\n code = normalize_code(code)\n code = ISO3_MAP.get(code, code)\n if code in ISO3_ALL:\n return code", "response": "Convert a given language identifier into an ISO 639 Part 2 (alpha-3) code such as eng or deu."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef list_to_alpha3(languages, synonyms=True):\n codes = set([])\n for language in ensure_list(languages):\n code = iso_639_alpha3(language)\n if code is None:\n continue\n codes.add(code)\n if synonyms:\n codes.update(expand_synonyms(code))\n return codes", "response": "Parse all the language codes in a given list into ISO 639 Part 2 codes\n and optionally expand synonyms."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a new pull request for this issue.", "response": "def pull_request(ctx, base_branch, open_pr, stop_timer):\n \"\"\"Create a new pull request for this issue.\"\"\"\n lancet = ctx.obj\n\n review_status = lancet.config.get(\"tracker\", \"review_status\")\n remote_name = lancet.config.get(\"repository\", \"remote_name\")\n\n if not base_branch:\n base_branch = lancet.config.get(\"repository\", \"base_branch\")\n\n # Get the issue\n issue = get_issue(lancet)\n\n transition = get_transition(ctx, lancet, issue, review_status)\n\n # Get the working branch\n branch = get_branch(lancet, issue, create=False)\n\n with taskstatus(\"Checking pre-requisites\") 
as ts:\n if not branch:\n ts.abort(\"No working branch found\")\n\n if lancet.tracker.whoami() not in issue.assignees:\n ts.abort(\"Issue currently not assigned to you\")\n\n # TODO: Check mergeability\n\n # TODO: Check remote status (PR does not already exist)\n\n # Push to remote\n with taskstatus('Pushing to \"{}\"', remote_name) as ts:\n remote = lancet.repo.lookup_remote(remote_name)\n if not remote:\n ts.abort('Remote \"{}\" not found', remote_name)\n\n from ..git import CredentialsCallbacks\n\n remote.push([branch.name], callbacks=CredentialsCallbacks())\n\n ts.ok('Pushed latest changes to \"{}\"', remote_name)\n\n # Create pull request\n with taskstatus(\"Creating pull request\") as ts:\n template_path = lancet.config.get(\"repository\", \"pr_template\")\n message = edit_template(template_path, issue=issue)\n\n if not message:\n ts.abort(\"You didn't provide a title for the pull request\")\n\n title, body = message.split(\"\\n\", 1)\n title = title.strip()\n\n if not title:\n ts.abort(\"You didn't provide a title for the pull request\")\n\n try:\n pr = lancet.scm_manager.create_pull_request(\n branch.branch_name, base_branch, title, body.strip(\"\\n\")\n )\n except PullRequestAlreadyExists as e:\n pr = e.pull_request\n ts.ok(\"Pull request does already exist at {}\", pr.link)\n else:\n ts.ok(\"Pull request created at {}\", pr.link)\n\n # Update issue\n set_issue_status(lancet, issue, review_status, transition)\n\n # TODO: Post to activity stream on JIRA?\n # TODO: Post to Slack?\n\n # Stop harvest timer\n if stop_timer:\n with taskstatus(\"Pausing harvest timer\") as ts:\n lancet.timer.pause()\n ts.ok(\"Harvest timer paused\")\n\n # Open the pull request page in the browser if requested\n if open_pr:\n click.launch(pr.link)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a BeautifulSoup object for a given url.", "response": "def get_soup(url, downloader=None, user_agent=None, user_agent_config_yaml=None, 
user_agent_lookup=None, **kwargs):\n # type: (str, Download, Optional[str], Optional[str], Optional[str], Any) -> BeautifulSoup\n \"\"\"\n Get BeautifulSoup object for a url. Requires either global user agent to be set or appropriate user agent\n parameter(s) to be completed.\n\n Args:\n url (str): url to read\n downloader (Download): Download object. Defaults to creating a Download object with given user agent values.\n user_agent (Optional[str]): User agent string. HDXPythonUtilities/X.X.X- is prefixed.\n user_agent_config_yaml (Optional[str]): Path to YAML user agent configuration. Ignored if user_agent supplied. Defaults to ~/.useragent.yml.\n user_agent_lookup (Optional[str]): Lookup key for YAML. Ignored if user_agent supplied.\n\n Returns:\n BeautifulSoup: The BeautifulSoup object for a url\n\n \"\"\"\n if not downloader:\n downloader = Download(user_agent, user_agent_config_yaml, user_agent_lookup, **kwargs)\n response = downloader.download(url)\n return BeautifulSoup(response.text, 'html.parser')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nextract HTML table as list of dictionaries", "response": "def extract_table(tabletag):\n # type: (Tag) -> List[Dict]\n \"\"\"\n Extract HTML table as list of dictionaries\n\n Args:\n tabletag (Tag): BeautifulSoup tag\n\n Returns:\n List[Dict]: Table data as a list of dictionaries, one per row, keyed by column header\n\n \"\"\"\n theadtag = tabletag.find_next('thead')\n\n headertags = theadtag.find_all('th')\n if len(headertags) == 0:\n headertags = theadtag.find_all('td')\n headers = []\n for tag in headertags:\n headers.append(get_text(tag))\n\n tbodytag = tabletag.find_next('tbody')\n trtags = tbodytag.find_all('tr')\n\n table = list()\n for trtag in trtags:\n row = dict()\n tdtags = trtag.find_all('td')\n for i, tag in enumerate(tdtags):\n row[headers[i]] = get_text(tag)\n table.append(row)\n return table"} {"SOURCE": "codesearchnet", "instruction": "Create a 
Python 3 function to\nwrap function-based callable_obj into a Route instance.", "response": "def wrap_callable(cls, uri, methods, callable_obj):\n \"\"\"Wraps function-based callable_obj into a `Route` instance, else\n proxies a `bottle_neck.handlers.BaseHandler` subclass instance.\n\n Args:\n uri (str): The uri relative path.\n methods (tuple): A tuple of valid method strings.\n callable_obj (instance): The callable object.\n\n Returns:\n A route instance.\n\n Raises:\n RouteError for invalid callable object type.\n \"\"\"\n if isinstance(callable_obj, HandlerMeta):\n callable_obj.base_endpoint = uri\n callable_obj.is_valid = True\n return callable_obj\n\n if isinstance(callable_obj, types.FunctionType):\n return cls(uri=uri, methods=methods, callable_obj=callable_obj)\n\n raise RouteError(\"Invalid handler type.\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef register_app(self, app):\n app.route(self.uri, methods=self.methods)(self.callable_obj)\n\n return self", "response": "Register the route object to a bottle.Bottle app instance."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nregisters a handler callable to a specific route.", "response": "def register_handler(self, callable_obj, entrypoint, methods=('GET',)):\n \"\"\"Register a handler callable to a specific route.\n\n Args:\n entrypoint (str): The uri relative path.\n methods (tuple): A tuple of valid method strings.\n callable_obj (callable): The callable object.\n\n Returns:\n The Router instance (for chaining purposes).\n\n Raises:\n RouteError, for missing routing params or invalid callable\n object type.\n \"\"\"\n\n router_obj = Route.wrap_callable(\n uri=entrypoint,\n methods=methods,\n callable_obj=callable_obj\n )\n\n if router_obj.is_valid:\n self._routes.add(router_obj)\n return self\n \n raise RouteError( # pragma: no cover\n \"Missing params: methods: {} - entrypoint: {}\".format(\n methods, entrypoint\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nmounts all registered routes to a bottle.py application instance.", "response": "def mount(self, app=None):\n \"\"\"Mounts all registered routes to a bottle.py application instance.\n\n Args:\n app (instance): A `bottle.Bottle()` application instance.\n\n Returns:\n The Router instance (for chaining purposes).\n \"\"\"\n for endpoint in self._routes:\n endpoint.register_app(app)\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _tolog(self,level):\r\n def wrapper(msg):\r\n if self.log_colors:\r\n color = self.log_colors[level.upper()]\r\n getattr(self.logger, level.lower())(coloring(\"- {}\".format(msg), color))\r\n else:\r\n getattr(self.logger, level.lower())(msg)\r\n \r\n return wrapper", "response": "log with different level"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef from_status(cls, status_line, msg=None):\n method = getattr(cls, status_line.lower()[4:].replace(' ', '_'))\n return method(msg)", "response": "Returns a class method from bottle.HTTPResponse.status_line attribute."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef bad_request(cls, errors=None):\n if cls.expose_status: # pragma: no cover\n cls.response.content_type = 'application/json'\n cls.response._status_line = '400 Bad Request'\n\n return cls(400, errors=errors).to_json", "response": "Shortcut API for HTTP 400 Bad Request response."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef unauthorized(cls, errors=None):\n if cls.expose_status: # pragma: no cover\n cls.response.content_type = 'application/json'\n cls.response._status_line = '401 Unauthorized'\n\n return cls(401, errors=errors).to_json", "response": "Shortcut API for HTTP 401 Unauthorized response."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nshort cut for JSON response service data.", "response": "def to_json(self):\n \"\"\"Short cut for JSON response service data.\n\n Returns:\n Dict that implements JSON interface.\n \"\"\"\n\n web_resp = collections.OrderedDict()\n\n web_resp['status_code'] = self.status_code\n web_resp['status_text'] = dict(HTTP_CODES).get(self.status_code)\n web_resp['data'] = self.data if self.data is not None else {}\n web_resp['errors'] = self.errors or []\n\n return web_resp"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setup_logging(**kwargs):\n # type: (Any) -> None\n \"\"\"Setup logging configuration\n\n Args:\n **kwargs: See below\n logging_config_dict (dict): Logging configuration dictionary OR\n logging_config_json (str): Path to JSON Logging configuration OR\n logging_config_yaml (str): Path to YAML Logging configuration. 
Defaults to internal logging_configuration.yml.\n smtp_config_dict (dict): Email Logging configuration dictionary if using default logging configuration OR\n smtp_config_json (str): Path to JSON Email Logging configuration if using default logging configuration OR\n smtp_config_yaml (str): Path to YAML Email Logging configuration if using default logging configuration\n\n Returns:\n None\n \"\"\"\n smtp_config_found = False\n smtp_config_dict = kwargs.get('smtp_config_dict', None)\n if smtp_config_dict:\n smtp_config_found = True\n print('Loading smtp configuration customisations from dictionary')\n\n smtp_config_json = kwargs.get('smtp_config_json', '')\n if smtp_config_json:\n if smtp_config_found:\n raise LoggingError('More than one smtp configuration file given!')\n smtp_config_found = True\n print('Loading smtp configuration customisations from: %s' % smtp_config_json)\n smtp_config_dict = load_json(smtp_config_json)\n\n smtp_config_yaml = kwargs.get('smtp_config_yaml', '')\n if smtp_config_yaml:\n if smtp_config_found:\n raise LoggingError('More than one smtp configuration file given!')\n smtp_config_found = True\n print('Loading smtp configuration customisations from: %s' % smtp_config_yaml)\n smtp_config_dict = load_yaml(smtp_config_yaml)\n\n logging_smtp_config_dict = None\n logging_config_found = False\n logging_config_dict = kwargs.get('logging_config_dict', None)\n if logging_config_dict:\n logging_config_found = True\n print('Loading logging configuration from dictionary')\n\n logging_config_json = kwargs.get('logging_config_json', '')\n if logging_config_json:\n if logging_config_found:\n raise LoggingError('More than one logging configuration file given!')\n logging_config_found = True\n print('Loading logging configuration from: %s' % logging_config_json)\n logging_config_dict = load_json(logging_config_json)\n\n logging_config_yaml = kwargs.get('logging_config_yaml', '')\n if logging_config_found:\n if logging_config_yaml:\n raise 
LoggingError('More than one logging configuration file given!')\n else:\n if not logging_config_yaml:\n print('No logging configuration parameter. Using default.')\n logging_config_yaml = script_dir_plus_file('logging_configuration.yml', setup_logging)\n if smtp_config_found:\n logging_smtp_config_yaml = script_dir_plus_file('logging_smtp_configuration.yml', setup_logging)\n print('Loading base SMTP logging configuration from: %s' % logging_smtp_config_yaml)\n logging_smtp_config_dict = load_yaml(logging_smtp_config_yaml)\n print('Loading logging configuration from: %s' % logging_config_yaml)\n logging_config_dict = load_yaml(logging_config_yaml)\n\n if smtp_config_found:\n if logging_smtp_config_dict:\n logging_config_dict = merge_dictionaries([logging_config_dict, logging_smtp_config_dict, smtp_config_dict])\n else:\n raise LoggingError('SMTP logging configuration file given but not using default logging configuration!')\n file_only = os.getenv('LOG_FILE_ONLY')\n if file_only is not None and file_only.lower() not in ['false', 'f', 'n', 'no', '0']:\n root = logging_config_dict.get('root')\n if root is not None:\n handlers = root.get('handlers', list())\n for i, handler in enumerate(handlers):\n if handler.lower() == 'console':\n del handlers[i]\n break\n\n logging.config.dictConfig(logging_config_dict)", "response": "Setup logging configuration\n\n Args:\n **kwargs: See below\n logging_config_dict (dict): Logging configuration dictionary OR\n logging_config_json (str): Path to JSON Logging configuration OR\n logging_config_yaml (str): Path to YAML Logging configuration. 
Defaults to internal logging_configuration.yml.\n smtp_config_dict (dict): Email Logging configuration dictionary if using default logging configuration OR\n smtp_config_json (str): Path to JSON Email Logging configuration if using default logging configuration OR\n smtp_config_yaml (str): Path to YAML Email Logging configuration if using default logging configuration\n\n Returns:\n None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _search_generator(self, item: Any) -> Generator[Tuple[Any, Any], None, None]:\n results = 0\n for key, value in self.enumerate(item):\n yield key, value\n results += 1\n if results == 0:\n raise SearchError(str(item))", "response": "A helper method for self. search that returns a generator rather than a list."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef ask_bool(question: str, default: bool = True) -> bool:\n default_q = \"Y/n\" if default else \"y/N\"\n answer = input(\"{0} [{1}]: \".format(question, default_q))\n lower = answer.lower()\n if not lower:\n return default\n return lower == \"y\"", "response": "Asks a question yes no style"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef ask_int(question: str, default: int = None) -> int:\n default_q = \" [default: {0}]: \".format(\n default) if default is not None else \"\"\n answer = input(\"{0} [{1}]: \".format(question, default_q))\n\n if not answer:\n if default is None:\n print(\"No default set, try again.\")\n return ask_int(question, default)\n return default\n\n if any(x not in \"1234567890\" for x in answer):\n print(\"Please enter only numbers (0-9).\")\n return ask_int(question, default)\n\n return int(answer)", "response": "Asks for a number in a question"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nask for a path", "response": "def 
ask_path(question: str, default: str = None) -> str:\n \"\"\"Asks for a path\"\"\"\n default_q = \" [default: {0}]: \".format(\n default) if default is not None else \"\"\n answer = input(\"{0} [{1}]: \".format(question, default_q))\n\n if answer == \"\":\n return default\n\n if os.path.isdir(answer):\n return answer\n\n print(\n \"No such directory: {answer}, please try again\".format(answer=answer))\n return ask_path(question, default)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef ask_list(question: str, default: list = None) -> list:\n default_q = \" [default: {0}]: \".format(\n \",\".join(default)) if default is not None else \"\"\n answer = input(\"{0} [{1}]: \".format(question, default_q))\n\n if answer == \"\":\n return default\n return [ans.strip() for ans in answer.split(\",\")]", "response": "Asks for a comma separated list of strings"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nask for a simple string", "response": "def ask_str(question: str, default: str = None):\n \"\"\"Asks for a simple string\"\"\"\n default_q = \" [default: {0}]: \".format(\n default) if default is not None else \"\"\n answer = input(\"{0} [{1}]: \".format(question, default_q))\n\n if answer == \"\":\n return default\n return answer"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlets the user enter the tools he wants to use", "response": "def get_tools(self) -> list:\n \"\"\"Lets the user enter the tools he wants to use\"\"\"\n tools = \"flake8,pylint,vulture,pyroma,isort,yapf,safety,dodgy,pytest,pypi\".split(\n \",\")\n print(\"Available tools: {0}\".format(\",\".join(tools)))\n answer = ask_list(\"What tools would you like to use?\",\n [\"flake8\", \"pytest\"])\n\n if any(tool not in tools for tool in answer):\n print(\"Invalid answer, retry.\")\n return self.get_tools()\n return answer"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 
to\ncompute pagination info for the current record count and base uri.", "response": "def paginator(limit, offset, record_count, base_uri, page_nav_tpl='&limit={}&offset={}'):\n \"\"\"Compute pagination info for collection filtering.\n\n Args:\n limit (int): Collection filter limit.\n offset (int): Collection filter offset.\n record_count (int): Collection filter total record count.\n base_uri (str): Collection filter base uri (without limit, offset)\n page_nav_tpl (str): Pagination template.\n\n Returns:\n A mapping of pagination info.\n \"\"\"\n\n total_pages = int(math.ceil(record_count / limit))\n\n next_cond = limit + offset <= record_count\n prev_cond = offset >= limit\n\n next_page = base_uri + page_nav_tpl.format(limit, offset + limit) if next_cond else None\n\n prev_page = base_uri + page_nav_tpl.format(limit, offset - limit) if prev_cond else None\n\n return OrderedDict([\n ('total_count', record_count),\n ('total_pages', total_pages),\n ('next_page', next_page),\n ('prev_page', prev_page)\n ])"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef serialize(obj):\n from datetime import datetime, date, time\n if isinstance(obj, date) and not isinstance(obj, datetime):\n obj = datetime.combine(obj, time.min)\n if isinstance(obj, datetime):\n return obj.isoformat()", "response": "JSON serializer that accepts datetime & date"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck whether the status code of the response equals expected_status and raise an APIError otherwise.", "response": "def check(response, expected_status=200, url=None):\n \"\"\"\n Check whether the status code of the response equals expected_status and\n raise an APIError otherwise.\n @param url: The url of the response (for error messages).\n Defaults to response.url\n @param json: if True, return r.json(), otherwise return r.text\n \"\"\"\n if response.status_code != expected_status:\n if url is None:\n 
url = response.url\n\n try:\n err = response.json()\n except:\n err = {} # force generic error\n\n if all(x in err for x in (\"status\", \"message\", \"description\", \"details\")):\n raise _APIError(err[\"status\"], err['message'], url,\n err, err[\"description\"], err[\"details\"])\n else: # generic error\n suffix = \".html\" if \"<html\" in response.text else \".txt\"\n msg = response.text\n if len(msg) > 200:\n with tempfile.NamedTemporaryFile(suffix=suffix, delete=False) as f:\n f.write(response.text.encode(\"utf-8\"))\n msg = \"{}...\\n\\n[snipped; full response written to {f.name}\".format(msg[:100], **locals())\n \n msg = (\"Request {url!r} returned code {response.status_code},\"\n \" expected {expected_status}. \\n{msg}\".format(**locals()))\n raise _APIError(response.status_code, msg, url, response.text)\n if response.headers.get('Content-Type') == 'application/json':\n try:\n return response.json()\n except:\n raise Exception(\"Cannot decode json; text={response.text!r}\"\n .format(**locals()))\n else:\n return response.text"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _get_auth(self, user=None, password=None):\n fn = os.path.expanduser(AUTH_FILE)\n if os.path.exists(fn):\n for i, line in enumerate(csv.reader(open(fn))):\n if len(line) != 3:\n log.warning(\"Cannot parse line {i} in {fn}\".format(**locals()))\n continue\n hostname, username, pwd = line\n if (hostname in (\"\", \"*\", self.host)\n and (user is None or username == user)):\n return (username, pwd)\n if user is None:\n user = os.environ.get(\"AMCAT_USER\", os.environ.get(\"USER\"))\n if password is None:\n password = os.environ.get(\"AMCAT_PASSWORD\")\n if user is None or password is None:\n raise Exception(\"No authentication info for {user}@{self.host} \"\n \"from {fn} or AMCAT_USER / AMCAT_PASSWORD \"\n \"variables\".format(**locals()))\n return user, password", "response": "Get the authentication info for the current user from the AUTH_FILE file or the AMCAT_USER / AMCAT_PASSWORD environment variables"} {"SOURCE": "codesearchnet", 
"instruction": "Here you have a function in Python 3, explain what it does\ndef request(self, url, method=\"get\", format=\"json\", data=None,\n expected_status=None, headers=None, use_xpost=True, **options):\n \"\"\"\n Make an HTTP request to the given relative URL with the host,\n user, and password information. Returns the deserialized json\n if successful, and raises an exception otherwise\n \"\"\"\n if expected_status is None:\n if method == \"get\":\n expected_status = 200\n elif method == \"post\":\n expected_status = 201\n else:\n raise ValueError(\"No expected status supplied and method unknown.\")\n\n if not url.startswith(\"http\"):\n url = \"{self.host}/api/v4/{url}\".format(**locals())\n if format is not None:\n options = dict({'format': format}, **options)\n options = {field: value for field, value in options.items() if value is not None}\n headers = dict(headers or {}, Authorization=\"Token {}\".format(self.token))\n #headers['Accept-encoding'] = 'gzip'\n\n if method == \"get\" and use_xpost:\n # If method is purely GET, we can use X-HTTP-METHOD-OVERRIDE to send our\n # query via POST. This allows for a large number of parameters to be supplied\n assert(data is None)\n\n headers.update({\"X-HTTP-METHOD-OVERRIDE\": method})\n data = options\n options = None\n method = \"post\"\n\n r = requests.request(method, url, data=data, params=options, headers=headers)\n\n log.debug(\n \"HTTP {method} {url} (options={options!r}, data={data!r},\"\n \"headers={headers}) -> {r.status_code}\".format(**locals())\n )\n return check(r, expected_status=expected_status)", "response": "Make an HTTP request to the given URL with the host user and password information. 
Returns the deserialized json if successful and raises an exception if not successful."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_pages(self, url, page=1, page_size=100, yield_pages=False, **filters):\n n = 0\n for page in itertools.count(page):\n r = self.request(url, page=page, page_size=page_size, **filters)\n n += len(r['results'])\n log.debug(\"Got {url} page {page} / {pages}\".format(url=url, **r))\n if yield_pages:\n yield r\n else:\n for row in r['results']:\n yield row\n if r['next'] is None:\n break", "response": "Get all pages at url yielding individual results"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_scroll(self, url, page_size=100, yield_pages=False, **filters):\n n = 0\n options = dict(page_size=page_size, **filters)\n format = filters.get('format')\n while True:\n r = self.request(url, use_xpost=False, **options)\n n += len(r['results'])\n log.debug(\"Got {} {n}/{total}\".format(url.split(\"?\")[0], total=r['total'], **locals()))\n if yield_pages:\n yield r\n else:\n for row in r['results']:\n yield row\n if r['next'] is None:\n break\n url = r['next']\n options = {'format': None}", "response": "Get the scroll through the resource at url and yield the individual results"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the AmCAT status page", "response": "def get_status(self):\n \"\"\"Get the AmCAT status page\"\"\"\n url = URL.status.format(**locals())\n return self.get_request(url)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconducts an aggregate query", "response": "def aggregate(self, **filters):\n \"\"\"Conduct an aggregate query\"\"\"\n url = URL.aggregate.format(**locals())\n return self.get_pages(url, **filters)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nlists the 
articlesets in a project", "response": "def list_sets(self, project, **filters):\n \"\"\"List the articlesets in a project\"\"\"\n url = URL.articlesets.format(**locals())\n return self.get_pages(url, **filters)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nlists the articlesets in a project", "response": "def get_set(self, project, articleset, **filters):\n \"\"\"List the articlesets in a project\"\"\"\n url = URL.articleset.format(**locals())\n return self.request(url, **filters)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef list_articles(self, project, articleset, page=1, **filters):\n url = URL.article.format(**locals())\n return self.get_pages(url, page=page, **filters)", "response": "List the articles in a set"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new article set.", "response": "def create_set(self, project, json_data=None, **options):\n \"\"\"\n Create a new article set. 
Provide the needed arguments using\n post_data or with key-value pairs\n \"\"\"\n url = URL.articlesets.format(**locals())\n if json_data is None:\n # form encoded request\n return self.request(url, method=\"post\", data=options)\n else:\n if not isinstance(json_data, (string_types)):\n json_data = json.dumps(json_data,default = serialize)\n headers = {'content-type': 'application/json'}\n return self.request(\n url, method='post', data=json_data, headers=headers)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef create_articles(self, project, articleset, json_data=None, **options):\n url = URL.article.format(**locals())\n # TODO duplicated from create_set, move into requests\n # (or separate post method?)\n if json_data is None:\n # form encoded request\n return self.request(url, method=\"post\", data=options)\n else:\n if not isinstance(json_data, string_types):\n json_data = json.dumps(json_data, default=serialize)\n headers = {'content-type': 'application/json'}\n return self.request(url, method='post', data=json_data, headers=headers)", "response": "Create one or more articles in the set."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn authentication signature of encoded bytes", "response": "def sign(self, encoded):\n \"\"\" Return authentication signature of encoded bytes \"\"\"\n signature = self._hmac.copy()\n signature.update(encoded)\n return signature.hexdigest().encode('utf-8')"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef split(self, encoded):\n maxlen = len(encoded) - self.sig_size\n message = encoded[:maxlen]\n signature = encoded[-self.sig_size:]\n return message, signature", "response": "Split into message and signature"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef auth(self, encoded):\n message, 
signature = self.split(encoded)\n computed = self.sign(message)\n if not hmac.compare_digest(signature, computed):\n raise AuthenticatorInvalidSignature", "response": "Validate integrity of encoded bytes"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef cached_classproperty(fun):\n @functools.wraps(fun)\n def get(cls):\n try:\n return cls.__cache[fun]\n except AttributeError:\n cls.__cache = {}\n except KeyError: # pragma: no cover\n pass\n ret = cls.__cache[fun] = fun(cls)\n return ret\n return classproperty(get)", "response": "A memoization decorator for class properties."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef route_method(method_name, extra_part=False):\n def wrapper(callable_obj):\n if method_name.lower() not in DEFAULT_ROUTES:\n raise HandlerHTTPMethodError(\n 'Invalid http method in method: {}'.format(method_name)\n )\n\n callable_obj.http_method = method_name.upper()\n\n callable_obj.url_extra_part = callable_obj.__name__ if extra_part\\\n else None\n\n return classmethod(callable_obj)\n return wrapper", "response": "Custom handler routing decorator."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef issue_add(lancet, assign, add_to_sprint, summary):\n summary = \" \".join(summary)\n issue = create_issue(\n lancet,\n summary,\n # project_id=project_id,\n add_to_active_sprint=add_to_sprint,\n )\n if assign:\n if assign == \"me\":\n username = lancet.tracker.whoami()\n else:\n username = assign\n assign_issue(lancet, issue, username)\n\n click.echo(\"Created issue\")", "response": "Create a new issue on the issue tracker."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nacts as a DRY helper. 
If the specified attribute of the user differs from the specified value, it will be updated.", "response": "def _maybe_update(self, user, attribute, new_value):\n \"\"\"\n DRY helper. If the specified attribute of the user differs from the\n specified value, it will be updated.\n \"\"\"\n old_value = getattr(user, attribute)\n if new_value != old_value:\n self.stderr.write(\n _('Setting {attribute} for user \"{username}\" to \"{new_value}\"').format(\n attribute=attribute, username=user.username, new_value=new_value\n )\n )\n setattr(user, attribute, new_value)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck that the passed email address matches the username and email.", "response": "def _check_email_match(self, user, email):\n \"\"\"\n DRY helper.\n\n Requiring the user to specify both username and email will help catch\n certain issues, for example if the expected username has already been\n taken by someone else.\n \"\"\"\n if user.email != email:\n # The passed email address doesn't match this username's email address.\n # Assume a problem and fail.\n raise CommandError(\n _(\n 'Skipping user \"{}\" because the specified and existing email '\n 'addresses do not match.'\n ).format(user.username)\n )"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncheck the provided credentials using the Harvest API.", "response": "def credentials_checker(url, username, password):\n \"\"\"Check the provided credentials using the Harvest API.\"\"\"\n api = HarvestAPI(url, (username, password))\n try:\n api.whoami()\n except HarvestError:\n return False\n else:\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef harvest(lancet, config_section):\n url, username, password = lancet.get_credentials(\n config_section, credentials_checker\n )\n\n project_id_getter = lancet.get_instance_from_config(\n \"timer\", \"project_id_getter\", 
lancet\n )\n task_id_getter = lancet.get_instance_from_config(\n \"timer\", \"task_id_getter\", lancet\n )\n\n client = HarvestPlatform(\n server=url,\n basic_auth=(username, password),\n project_id_getter=project_id_getter,\n task_id_getter=task_id_getter,\n )\n lancet.call_on_close(client.close)\n return client", "response": "Construct a new Harvest client."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning a temporary directory optionally with folder appended and created if it doesn't exist.", "response": "def temp_dir(folder=None, delete=True):\n # type: (Optional[str], bool) -> str\n \"\"\"Get a temporary directory optionally with folder appended (and created if it doesn't exist)\n\n Args:\n folder (Optional[str]): Folder to create in temporary folder. Defaults to None.\n delete (bool): Whether to delete folder on exiting with statement\n\n Returns:\n str: A temporary directory\n \"\"\"\n tempdir = get_temp_dir()\n if folder:\n tempdir = join(tempdir, folder)\n if not exists(tempdir):\n makedirs(tempdir)\n try:\n yield tempdir\n finally:\n if delete:\n rmtree(tempdir)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef send(self, obj):\n\t\tif not isinstance(obj, NotificationMessage):\n\t\t\traise ValueError(u\"You can only send NotificationMessage objects.\")\n\t\tself._send_queue.put(obj)", "response": "Send a push notification."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the next error message from the error queue.", "response": "def get_error(self, block = True, timeout = None):\n\t\t\"\"\"\n\t\tGets the next error message.\n\t\t\n\t\tEach error message is a 2-tuple of (status, identifier).\"\"\"\n\t\treturn self._error_queue.get(block = block, timeout = timeout)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_feedback(self, block = True, timeout = None):\n\t\tif 
self._feedback_greenlet is None:\n\t\t\tself._feedback_greenlet = gevent.spawn(self._feedback_loop)\n\t\treturn self._feedback_queue.get(block = block, timeout = timeout)", "response": "Get the next feedback message."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwaits until all queued messages are sent.", "response": "def wait_send(self, timeout = None):\n\t\t\"\"\"Wait until all queued messages are sent.\"\"\"\n\t\tself._send_queue_cleared.clear()\n\t\tself._send_queue_cleared.wait(timeout = timeout)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nstart the message sending loop.", "response": "def start(self):\n\t\t\"\"\"Start the message sending loop.\"\"\"\n\t\tif self._send_greenlet is None:\n\t\t\tself._send_greenlet = gevent.spawn(self._send_loop)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef stop(self, timeout = 10.0):\n\t\tif (self._send_greenlet is not None) and \\\n\t\t\t\t(self._send_queue.qsize() > 0):\n\t\t\tself.wait_send(timeout = timeout)\n\n\t\tif self._send_greenlet is not None:\n\t\t\tgevent.kill(self._send_greenlet)\n\t\t\tself._send_greenlet = None\n\t\tif self._error_greenlet is not None:\n\t\t\tgevent.kill(self._error_greenlet)\n\t\t\tself._error_greenlet = None\n\t\tif self._feedback_greenlet is not None:\n\t\t\tgevent.kill(self._feedback_greenlet)\n\t\t\tself._feedback_greenlet = None\n\n\t\treturn self._send_queue.qsize() < 1", "response": "Stop sending all pending messages and close connection."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef convert_to_ssml(text, text_format):\n if text_format is None:\n return text\n elif text_format == 'plain':\n return plain_to_ssml(text)\n elif text_format == 'html':\n return html_to_ssml(text)\n else:\n raise ValueError(text_format + ': text format not 
found.')", "response": "Convert text to SSML based on the text s current format."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef html_to_ssml(text):\n ssml_text = reduce(lambda x, y: x.replace(y, html_to_ssml_maps[y]), html_to_ssml_maps, text)\n return ssml_text", "response": "Replaces specific html tags with probable SSML counterparts."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_matching_text_in_strs(a, b, match_min_size=30, ignore='', end_characters=''):\n # type: (str, str, int, str, str) -> List[str]\n \"\"\"Returns a list of matching blocks of text in a and b\n\n Args:\n a (str): First string to match\n b (str): Second string to match\n match_min_size (int): Minimum block size to match on. Defaults to 30.\n ignore (str): Any characters to ignore in matching. Defaults to ''.\n end_characters (str): End characters to look for. Defaults to ''.\n\n Returns:\n List[str]: List of matching blocks of text\n\n \"\"\"\n compare = difflib.SequenceMatcher(lambda x: x in ignore)\n compare.set_seqs(a=a, b=b)\n matching_text = list()\n\n for match in compare.get_matching_blocks():\n start = match.a\n text = a[start: start+match.size]\n if end_characters:\n prev_text = text\n while len(text) != 0 and text[0] in end_characters:\n text = text[1:]\n while len(text) != 0 and text[-1] not in end_characters:\n text = text[:-1]\n if len(text) == 0:\n text = prev_text\n if len(text) >= match_min_size:\n matching_text.append(text)\n return matching_text", "response": "Returns a list of matching blocks of text in a and b."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_matching_text(string_list, match_min_size=30, ignore='', end_characters='.!\\r\\n'):\n # type: (List[str], int, str, str) -> str\n \"\"\"Returns a string containing matching blocks of text in a list of strings followed by 
non-matching.\n\n Args:\n string_list (List[str]): List of strings to match\n match_min_size (int): Minimum block size to match on. Defaults to 30.\n ignore (str): Any characters to ignore in matching. Defaults to ''.\n end_characters (str): End characters to look for. Defaults to '.\\r\\n'.\n\n Returns:\n str: String containing matching blocks of text followed by non-matching\n\n \"\"\"\n a = string_list[0]\n for i in range(1, len(string_list)):\n b = string_list[i]\n result = get_matching_text_in_strs(a, b, match_min_size=match_min_size, ignore=ignore,\n end_characters=end_characters)\n a = ''.join(result)\n return a", "response": "Returns a string containing the matching blocks of text in a list of strings followed by non - matching."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_matching_then_nonmatching_text(string_list, separator='', match_min_size=30, ignore='',\n end_characters='.!\\r\\n'):\n # type: (List[str], str, int, str, str) -> str\n \"\"\"Returns a string containing matching blocks of text in a list of strings followed by non-matching.\n\n Args:\n string_list (List[str]): List of strings to match\n separator (str): Separator to add between blocks of text. Defaults to ''.\n match_min_size (int): Minimum block size to match on. Defaults to 30.\n ignore (str): Any characters to ignore in matching. Defaults to ''.\n end_characters (str): End characters to look for. 
Defaults to '.\\r\\n'.\n\n Returns:\n str: String containing matching blocks of text followed by non-matching\n\n \"\"\"\n def add_separator_if_needed(text_list):\n if separator and len(text_list) > 0 and text_list[-1][-len(separator):] != separator:\n text_list.append(separator)\n\n a = string_list[0]\n for i in range(1, len(string_list)):\n b = string_list[i]\n combined_len = len(a) + len(b)\n result = get_matching_text_in_strs(a, b, match_min_size=match_min_size, ignore=ignore,\n end_characters=end_characters)\n new_a = a\n new_b = b\n for text in result:\n new_a = new_a.replace(text, '')\n new_b = new_b.replace(text, '')\n if new_a and new_a in a:\n pos_a = a.index(new_a)\n else:\n pos_a = combined_len\n if new_b and new_b in b:\n pos_b = b.index(new_b)\n else:\n pos_b = combined_len\n if pos_b > pos_a:\n text_1 = new_b\n pos_1 = pos_b\n text_2 = new_a\n pos_2 = pos_a\n else:\n text_1 = new_a\n pos_1 = pos_a\n text_2 = new_b\n pos_2 = pos_b\n output = list()\n pos = 0\n for text in result:\n output.append(text)\n pos += len(text)\n if text_1 and pos >= pos_1:\n add_separator_if_needed(output)\n output.append(text_1)\n pos += len(text_1)\n text_1 = None\n if text_2 and pos >= pos_2:\n add_separator_if_needed(output)\n output.append(text_2)\n pos += len(text_2)\n text_2 = None\n if text_1 and pos_1 == combined_len:\n add_separator_if_needed(output)\n output.append(text_1)\n if text_2 and pos_2 == combined_len:\n add_separator_if_needed(output)\n output.append(text_2)\n a = ''.join(output)\n return a", "response": "Returns a string containing the matching blocks of text followed by non - matching blocks of text."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef iexpand(string, keep_escapes=False):\n\n if isinstance(string, bytes):\n is_bytes = True\n string = string.decode('latin-1')\n\n else:\n is_bytes = False\n\n if is_bytes:\n return (entry.encode('latin-1') for entry in 
ExpandBrace(keep_escapes).expand(string))\n\n else:\n return (entry for entry in ExpandBrace(keep_escapes).expand(string))", "response": "Expand braces and return an iterator."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsets that we are expanding a sequence and return whether a release is required by the caller.", "response": "def set_expanding(self):\n \"\"\"Set that we are expanding a sequence, and return whether a release is required by the caller.\"\"\"\n\n status = not self.expanding\n if status:\n self.expanding = True\n return status"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef squash(self, a, b):\n\n return ((''.join(x) if isinstance(x, tuple) else x) for x in itertools.product(a, b))", "response": "Returns a generator that squashes two iterables into one."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget a string literal.", "response": "def get_literals(self, c, i, depth):\n \"\"\"\n Get a string literal.\n\n Gather all the literal chars up to opening curly or closing brace.\n Also gather chars between braces and commas within a group (is_expanding).\n \"\"\"\n\n result = ['']\n is_dollar = False\n\n try:\n while c:\n ignore_brace = is_dollar\n is_dollar = False\n\n if c == '$':\n is_dollar = True\n\n elif c == '\\\\':\n c = [self.get_escape(c, i)]\n\n elif not ignore_brace and c == '{':\n # Try and get the group\n index = i.index\n try:\n seq = self.get_sequence(next(i), i, depth + 1)\n if seq:\n c = seq\n except StopIteration:\n # Searched to end of string\n # and still didn't find it.\n i.rewind(i.index - index)\n\n elif self.is_expanding() and c in (',', '}'):\n # We are Expanding within a group and found a group delimiter\n # Return what we gathered before the group delimiters.\n i.rewind(1)\n return (x for x in result)\n\n # Squash the current set of literals.\n result = self.squash(result, [c] if isinstance(c, str) else 
c)\n\n c = next(i)\n except StopIteration:\n if self.is_expanding():\n return None\n\n return (x for x in result)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef combine(self, a, b):\n\n for l in (a, b):\n for x in l:\n yield x", "response": "A generator that combines two iterables."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_sequence(self, c, i, depth):\n\n result = []\n release = self.set_expanding()\n has_comma = False # Used to indicate validity of group (`{1..2}` are an exception).\n is_empty = True # Tracks whether the current slot is empty `{slot,slot,slot}`.\n\n # Detect numerical and alphabetic series: `{1..2}` etc.\n i.rewind(1)\n item = self.get_range(i)\n i.advance(1)\n if item is not None:\n self.release_expanding(release)\n return (x for x in item)\n\n try:\n while c:\n # Bash has some special top level logic. if `}` follows `{` but hasn't matched\n # a group yet, keep going except when the first 2 bytes are `{}` which gets\n # completely ignored.\n keep_looking = depth == 1 and not has_comma # and i.index not in self.skip_index\n if (c == '}' and (not keep_looking or i.index == 2)):\n # If there is no comma, we know the sequence is bogus.\n if is_empty:\n result = (x for x in self.combine(result, ['']))\n if not has_comma:\n result = ('{' + literal + '}' for literal in result)\n self.release_expanding(release)\n return (x for x in result)\n\n elif c == ',':\n # Must be the first element in the list.\n has_comma = True\n if is_empty:\n result = (x for x in self.combine(result, ['']))\n else:\n is_empty = True\n\n else:\n if c == '}':\n # Top level: If we didn't find a comma, we haven't\n # completed the top level group. 
Request more and\n # append to what we already have for the first slot.\n if not result:\n result = (x for x in self.combine(result, [c]))\n else:\n result = self.squash(result, [c])\n value = self.get_literals(next(i), i, depth)\n if value is not None:\n result = self.squash(result, value)\n is_empty = False\n else:\n # Lower level: Try to find group, but give up if cannot acquire.\n value = self.get_literals(c, i, depth)\n if value is not None:\n result = (x for x in self.combine(result, value))\n is_empty = False\n\n c = next(i)\n except StopIteration:\n self.release_expanding(release)\n raise", "response": "Get the sequence of the current item."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks and retrieve range if value is a valid range.", "response": "def get_range(self, i):\n \"\"\"\n Check and retrieve range if value is a valid range.\n\n Here we are looking to see if the value is series or range.\n We look for `{1..2[..inc]}` or `{a..z[..inc]}` (negative numbers are fine).\n \"\"\"\n\n try:\n m = i.match(RE_INT_ITER)\n if m:\n return self.get_int_range(*m.groups())\n\n m = i.match(RE_CHR_ITER)\n if m:\n return self.get_char_range(*m.groups())\n except Exception: # pragma: no cover\n # TODO: We really should never fail here,\n # but if we do, assume the sequence range\n # was invalid. 
This catch can probably\n # be removed in the future with more testing.\n pass\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nformatting the value for the log entry.", "response": "def format_value(self, value, padding):\n \"\"\"Get padding adjusting for negative values.\"\"\"\n\n # padding = padding - 1 if value < 0 and padding > 0 else padding\n # prefix = '-' if value < 0 else ''\n\n if padding:\n return \"{:0{pad}d}\".format(value, pad=padding)\n\n else:\n return str(value)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_int_range(self, start, end, increment=None):\n\n first, last = int(start), int(end)\n increment = int(increment) if increment is not None else 1\n max_length = max(len(start), len(end))\n\n # Zero doesn't make sense as an incrementer\n # but like bash, just assume one\n if increment == 0:\n increment = 1\n\n if start[0] == '-':\n start = start[1:]\n\n if end[0] == '-':\n end = end[1:]\n\n if (len(start) > 1 and start[0] == '0') or (len(end) > 1 and end[0] == '0'):\n padding = max_length\n\n else:\n padding = 0\n\n if first < last:\n r = range(first, last + 1, -increment if increment < 0 else increment)\n\n else:\n r = range(first, last - 1, increment if increment < 0 else -increment)\n\n return (self.format_value(value, padding) for value in r)", "response": "Get an integer range between start and end and increments of increment."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_char_range(self, start, end, increment=None):\n\n increment = int(increment) if increment else 1\n if increment < 0:\n increment = -increment\n\n # Zero doesn't make sense as an incrementer\n # but like bash, just assume one\n if increment == 0:\n increment = 1\n\n inverse = start > end\n alpha = _nalpha if inverse else _alpha\n\n start = alpha.index(start)\n end = alpha.index(end)\n\n if start < end:\n return (c for c in 
alpha[start:end + 1:increment])\n\n else:\n return (c for c in alpha[end:start + 1:increment])", "response": "Get a range of alphabetic characters."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef expand(self, string):\n\n self.expanding = False\n empties = []\n found_literal = False\n if string:\n i = iter(StringIter(string))\n for x in self.get_literals(next(i), i, 0):\n # We don't want to return trailing empty strings.\n # Store empty strings and output only when followed by a literal.\n if not x:\n empties.append(x)\n continue\n found_literal = True\n while empties:\n yield empties.pop(0)\n yield x\n empties = []\n\n # We found no literals so return an empty string\n if not found_literal:\n yield \"\"", "response": "Yields the entries from the given string."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef merge_two_dictionaries(a, b, merge_lists=False):\n # type: (DictUpperBound, DictUpperBound, bool) -> DictUpperBound\n \"\"\"Merges b into a and returns merged result\n\n NOTE: tuples and arbitrary objects are not handled as it is totally ambiguous what should happen\n\n Args:\n a (DictUpperBound): dictionary to merge into\n b (DictUpperBound): dictionary to merge from\n merge_lists (bool): Whether to merge lists (True) or replace lists (False). 
Default is False.\n\n Returns:\n DictUpperBound: Merged dictionary\n \"\"\"\n key = None\n # ## debug output\n # sys.stderr.write('DEBUG: %s to %s\\n' %(b,a))\n try:\n if a is None or isinstance(a, (six.string_types, six.text_type, six.integer_types, float)):\n # border case for first run or if a is a primitive\n a = b\n elif isinstance(a, list):\n # lists can be appended or replaced\n if isinstance(b, list):\n if merge_lists:\n # merge lists\n a.extend(b)\n else:\n # replace list\n a = b\n else:\n # append to list\n a.append(b)\n elif isinstance(a, (dict, UserDict)):\n # dicts must be merged\n if isinstance(b, (dict, UserDict)):\n for key in b:\n if key in a:\n a[key] = merge_two_dictionaries(a[key], b[key], merge_lists=merge_lists)\n else:\n a[key] = b[key]\n else:\n raise ValueError('Cannot merge non-dict \"%s\" into dict \"%s\"' % (b, a))\n else:\n raise ValueError('NOT IMPLEMENTED \"%s\" into \"%s\"' % (b, a))\n except TypeError as e:\n raise ValueError('TypeError \"%s\" in key \"%s\" when merging \"%s\" into \"%s\"' % (e, key, b, a))\n return a", "response": "Merges two dicts into a and returns a new dict that is merged with b."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef merge_dictionaries(dicts, merge_lists=False):\n # type: (List[DictUpperBound], bool) -> DictUpperBound\n \"\"\"Merges all dictionaries in dicts into a single dictionary and returns result\n\n Args:\n dicts (List[DictUpperBound]): Dictionaries to merge into the first one in the list\n merge_lists (bool): Whether to merge lists (True) or replace lists (False). 
Default is False.\n\n Returns:\n DictUpperBound: Merged dictionary\n\n \"\"\"\n dict1 = dicts[0]\n for other_dict in dicts[1:]:\n merge_two_dictionaries(dict1, other_dict, merge_lists=merge_lists)\n return dict1", "response": "Merges all dictionaries in dicts into a single dictionary and returns the result\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncompare two dictionaries returning a new dictionary with the keys that differ.", "response": "def dict_diff(d1, d2, no_key=''):\n # type: (DictUpperBound, DictUpperBound, str) -> Dict\n \"\"\"Compares two dictionaries\n\n Args:\n d1 (DictUpperBound): First dictionary to compare\n d2 (DictUpperBound): Second dictionary to compare\n no_key (str): What value to use if key is not found Defaults to ''.\n\n Returns:\n Dict: Comparison dictionary\n\n \"\"\"\n d1keys = set(d1.keys())\n d2keys = set(d2.keys())\n both = d1keys & d2keys\n diff = {k: (d1[k], d2[k]) for k in both if d1[k] != d2[k]}\n diff.update({k: (d1[k], no_key) for k in d1keys - both})\n diff.update({k: (no_key, d2[k]) for k in d2keys - both})\n return diff"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef dict_of_lists_add(dictionary, key, value):\n # type: (DictUpperBound, Any, Any) -> None\n \"\"\"Add value to a list in a dictionary by key\n\n Args:\n dictionary (DictUpperBound): Dictionary to which to add values\n key (Any): Key within dictionary\n value (Any): Value to add to list in dictionary\n\n Returns:\n None\n\n \"\"\"\n list_objs = dictionary.get(key, list())\n list_objs.append(value)\n dictionary[key] = list_objs", "response": "Adds value to a list in a dictionary by key"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef dict_of_sets_add(dictionary, key, value):\n # type: (DictUpperBound, Any, Any) -> None\n \"\"\"Add value to a set in a dictionary by key\n\n Args:\n dictionary (DictUpperBound): 
Dictionary to which to add values\n key (Any): Key within dictionary\n value (Any): Value to add to set in dictionary\n\n Returns:\n None\n\n \"\"\"\n set_objs = dictionary.get(key, set())\n set_objs.add(value)\n dictionary[key] = set_objs", "response": "Adds value to a set in a dictionary by key"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_distribute_contents_simple(input_list, function=lambda x: x):\n # type: (List, Callable[[Any], Any]) -> List\n \"\"\"Distribute the contents of a list eg. [1, 1, 1, 2, 2, 3] -> [1, 2, 3, 1, 2, 1]. List can contain complex types\n like dictionaries in which case the function can return the appropriate value eg. lambda x: x[KEY]\n\n Args:\n input_list (List): List to distribute values\n function (Callable[[Any], Any]): Return value to use for distributing. Defaults to lambda x: x.\n\n Returns:\n List: Distributed list\n\n \"\"\"\n dictionary = dict()\n for obj in input_list:\n dict_of_lists_add(dictionary, function(obj), obj)\n output_list = list()\n i = 0\n done = False\n while not done:\n found = False\n for key in sorted(dictionary):\n if i < len(dictionary[key]):\n output_list.append(dictionary[key][i])\n found = True\n if found:\n i += 1\n else:\n done = True\n return output_list", "response": "Distribute the contents of a list of items in a list."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef list_distribute_contents(input_list, function=lambda x: x):\n # type: (List, Callable[[Any], Any]) -> List\n \"\"\"Distribute the contents of a list eg. [1, 1, 1, 2, 2, 3] -> [1, 2, 1, 2, 1, 3]. List can contain complex types\n like dictionaries in which case the function can return the appropriate value eg. lambda x: x[KEY]\n\n Args:\n input_list (List): List to distribute values\n function (Callable[[Any], Any]): Return value to use for distributing. 
Defaults to lambda x: x.\n\n Returns:\n List: Distributed list\n\n \"\"\"\n\n def riffle_shuffle(piles_list):\n def grouper(n, iterable, fillvalue=None):\n args = [iter(iterable)] * n\n return zip_longest(fillvalue=fillvalue, *args)\n\n if not piles_list:\n return []\n piles_list.sort(key=len, reverse=True)\n width = len(piles_list[0])\n pile_iters_list = [iter(pile) for pile in piles_list]\n pile_sizes_list = [[pile_position] * len(pile) for pile_position, pile in enumerate(piles_list)]\n grouped_rows = grouper(width, itertools.chain.from_iterable(pile_sizes_list))\n grouped_columns = zip_longest(*grouped_rows)\n shuffled_pile = [next(pile_iters_list[position]) for position in itertools.chain.from_iterable(grouped_columns)\n if position is not None]\n return shuffled_pile\n\n dictionary = dict()\n for obj in input_list:\n dict_of_lists_add(dictionary, function(obj), obj)\n intermediate_list = list()\n for key in sorted(dictionary):\n intermediate_list.append(dictionary[key])\n return riffle_shuffle(intermediate_list)", "response": "Distribute the contents of a list eg. 
1 2 3 4 5 5"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef extract_list_from_list_of_dict(list_of_dict, key):\n # type: (List[DictUpperBound], Any) -> List\n \"\"\"Extract a list by looking up key in each member of a list of dictionaries\n\n Args:\n list_of_dict (List[DictUpperBound]): List of dictionaries\n key (Any): Key to find in each dictionary\n\n Returns:\n List: List containing values returned from each dictionary\n\n \"\"\"\n result = list()\n for dictionary in list_of_dict:\n result.append(dictionary[key])\n return result", "response": "Extract a list by looking up key in each member of a list of dictionaries\n "} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef key_value_convert(dictin, keyfn=lambda x: x, valuefn=lambda x: x, dropfailedkeys=False, dropfailedvalues=False,\n exception=ValueError):\n # type: (DictUpperBound, Callable[[Any], Any], Callable[[Any], Any], bool, bool, ExceptionUpperBound) -> Dict\n \"\"\"Convert keys and/or values of dictionary using functions passed in as parameters\n\n Args:\n dictin (DictUpperBound): Input dictionary\n keyfn (Callable[[Any], Any]): Function to convert keys. Defaults to lambda x: x\n valuefn (Callable[[Any], Any]): Function to convert values. Defaults to lambda x: x\n dropfailedkeys (bool): Whether to drop dictionary entries where key conversion fails. Defaults to False.\n dropfailedvalues (bool): Whether to drop dictionary entries where value conversion fails. Defaults to False.\n exception (ExceptionUpperBound): The exception to expect if keyfn or valuefn fail. 
Defaults to ValueError.\n\n Returns:\n Dict: Dictionary with converted keys and/or values\n\n \"\"\"\n dictout = dict()\n for key in dictin:\n try:\n new_key = keyfn(key)\n except exception:\n if dropfailedkeys:\n continue\n new_key = key\n value = dictin[key]\n try:\n new_value = valuefn(value)\n except exception:\n if dropfailedvalues:\n continue\n new_value = value\n dictout[new_key] = new_value\n return dictout", "response": "Convert keys and values of a dictionary using functions passed in as parameters\n "} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef integer_key_convert(dictin, dropfailedkeys=False):\n # type: (DictUpperBound, bool) -> Dict\n \"\"\"Convert keys of dictionary to integers\n\n Args:\n dictin (DictUpperBound): Input dictionary\n dropfailedkeys (bool): Whether to drop dictionary entries where key conversion fails. Defaults to False.\n\n Returns:\n Dict: Dictionary with keys converted to integers\n\n \"\"\"\n return key_value_convert(dictin, keyfn=int, dropfailedkeys=dropfailedkeys)", "response": "Convert keys of dictionary to integers."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef integer_value_convert(dictin, dropfailedvalues=False):\n # type: (DictUpperBound, bool) -> Dict\n \"\"\"Convert values of dictionary to integers\n\n Args:\n dictin (DictUpperBound): Input dictionary\n dropfailedvalues (bool): Whether to drop dictionary entries where key conversion fails. 
Defaults to False.\n\n Returns:\n Dict: Dictionary with values converted to integers\n\n \"\"\"\n return key_value_convert(dictin, valuefn=int, dropfailedvalues=dropfailedvalues)", "response": "Convert values of dictionary to integers"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef float_value_convert(dictin, dropfailedvalues=False):\n # type: (DictUpperBound, bool) -> Dict\n \"\"\"Convert values of dictionary to floats\n\n Args:\n dictin (DictUpperBound): Input dictionary\n dropfailedvalues (bool): Whether to drop dictionary entries where value conversion fails. Defaults to False.\n\n Returns:\n Dict: Dictionary with values converted to floats\n\n \"\"\"\n return key_value_convert(dictin, valuefn=float, dropfailedvalues=dropfailedvalues)", "response": "Convert values of dictionary to floats"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a new dictionary from two dictionaries by averaging values of keys present in both dictionaries.", "response": "def avg_dicts(dictin1, dictin2, dropmissing=True):\n # type: (DictUpperBound, DictUpperBound, bool) -> Dict\n \"\"\"Create a new dictionary from two dictionaries by averaging values\n\n Args:\n dictin1 (DictUpperBound): First input dictionary\n dictin2 (DictUpperBound): Second input dictionary\n dropmissing (bool): Whether to drop keys missing in one dictionary. 
Defaults to True.\n\n Returns:\n Dict: Dictionary with values being average of 2 input dictionaries\n\n \"\"\"\n dictout = dict()\n for key in dictin1:\n if key in dictin2:\n dictout[key] = (dictin1[key] + dictin2[key]) / 2\n elif not dropmissing:\n dictout[key] = dictin1[key]\n if not dropmissing:\n for key in dictin2:\n if key not in dictin1:\n dictout[key] = dictin2[key]\n return dictout"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads a list of rows from a CSV file.", "response": "def read_list_from_csv(filepath, dict_form=False, headers=None, **kwargs):\n # type: (str, bool, Union[int, List[int], List[str], None], Any) -> List[Union[Dict, List]]\n \"\"\"Read a list of rows in dict or list form from a csv. (The headers argument is either a row\n number or list of row numbers (in case of multi-line headers) to be considered as headers\n (rows start counting at 1), or the actual headers defined a list of strings. If not set,\n all rows will be treated as containing values.)\n\n Args:\n filepath (str): Path to read from\n dict_form (bool): Return in dict form. Defaults to False.\n headers (Union[int, List[int], List[str], None]): Row number of headers. Defaults to None.\n **kwargs: Other arguments to pass to Tabulator Stream\n\n Returns:\n List[Union[Dict, List]]: List of rows in dict or list form\n\n \"\"\"\n stream = Stream(filepath, headers=headers, **kwargs)\n stream.open()\n result = stream.read(keyed=dict_form)\n stream.close()\n return result"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nwrites a list of rows in dict or list form to a csv file.", "response": "def write_list_to_csv(list_of_rows, filepath, headers=None):\n # type: (List[Union[DictUpperBound, List]], str, Union[int, List[int], List[str], None]) -> None\n \"\"\"Write a list of rows in dict or list form to a csv. 
(The headers argument is either a row\n number or list of row numbers (in case of multi-line headers) to be considered as headers\n (rows start counting at 1), or the actual headers defined a list of strings. If not set,\n all rows will be treated as containing values.)\n\n Args:\n list_of_rows (List[Union[DictUpperBound, List]]): List of rows in dict or list form\n filepath (str): Path to write to\n headers (Union[int, List[int], List[str], None]): Headers to write. Defaults to None.\n\n Returns:\n None\n\n \"\"\"\n stream = Stream(list_of_rows, headers=headers)\n stream.open()\n stream.save(filepath, format='csv')\n stream.close()"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef args_to_dict(args):\n # type: (str) -> DictUpperBound[str,str]\n \"\"\"Convert command line arguments in a comma separated string to a dictionary\n\n Args:\n args (str): Command line arguments\n\n Returns:\n DictUpperBound[str,str]: Dictionary of arguments\n\n \"\"\"\n arguments = dict()\n for arg in args.split(','):\n key, value = arg.split('=')\n arguments[key] = value\n return arguments", "response": "Convert command line arguments in a comma separated string to a dictionary"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the delta between two files using -?,? 
format excluding lines that are the same", "response": "def compare_files(path1, path2):\n # type: (str, str) -> List[str]\n \"\"\"Returns the delta between two files using -, ?, + format excluding\n lines that are the same\n\n Args:\n path1 (str): Path to first file\n path2 (str): Path to second file\n\n Returns:\n List[str]: Delta between the two files\n\n \"\"\"\n diff = difflib.ndiff(open(path1).readlines(), open(path2).readlines())\n return [x for x in diff if x[0] in ['-', '+', '?']]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nassert that two files are the same and returns delta using -?,?, + format if not", "response": "def assert_files_same(path1, path2):\n # type: (str, str) -> None\n \"\"\"Asserts that two files are the same and returns delta using\n -, ?, + format if not\n\n Args:\n path1 (str): Path to first file\n path2 (str): Path to second file\n\n Returns:\n None\n\n \"\"\"\n difflines = compare_files(path1, path2)\n assert len(difflines) == 0, ''.join(['\\n'] + difflines)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef main():\n # Locale\n locale.setlocale(locale.LC_ALL, '')\n\n # Encoding for output streams\n if str == bytes: # PY2\n writer = codecs.getwriter(locale.getpreferredencoding())\n o_stdout, o_stderr = sys.stdout, sys.stderr\n sys.stdout = writer(sys.stdout)\n sys.stdout.buffer = o_stdout\n sys.stderr = writer(sys.stderr)\n sys.stderr.buffer = o_stderr\n else: # PY3\n sys.stdin = sys.stdin.buffer\n\n # Parses command-line\n\n # Runtime to setup\n def add_runtime_option(opt):\n opt.add_argument(\n '-r', '--runtime', action='store',\n help=\"runtime to deploy on the server if the queue doesn't exist. 
\"\n \"If unspecified, will auto-detect what is appropriate, and \"\n \"fallback on 'default'.\")\n\n # Destination selection\n def add_destination_option(opt):\n opt.add_argument('destination', action='store',\n help=\"Machine to SSH into; [user@]host[:port]\")\n opt.add_argument('--queue', action='store', default=DEFAULT_TEJ_DIR,\n help=\"Directory for tej's files\")\n\n # Root parser\n parser = argparse.ArgumentParser(\n description=\"Trivial Extensible Job-submission\")\n parser.add_argument('--version', action='version',\n version=\"tej version %s\" % tej_version)\n parser.add_argument('-v', '--verbose', action='count', default=1,\n dest='verbosity',\n help=\"augments verbosity level\")\n subparsers = parser.add_subparsers(title=\"commands\", metavar='')\n\n # Setup action\n parser_setup = subparsers.add_parser(\n 'setup',\n help=\"Sets up tej on a remote machine\")\n add_destination_option(parser_setup)\n add_runtime_option(parser_setup)\n parser_setup.add_argument('--make-link', action='append',\n dest='make_link')\n parser_setup.add_argument('--make-default-link', action='append_const',\n dest='make_link', const=DEFAULT_TEJ_DIR)\n parser_setup.add_argument('--force', action='store_true')\n parser_setup.add_argument('--only-links', action='store_true')\n parser_setup.set_defaults(func=_setup)\n\n # Submit action\n parser_submit = subparsers.add_parser(\n 'submit',\n help=\"Submits a job to a remote machine\")\n add_destination_option(parser_submit)\n add_runtime_option(parser_submit)\n parser_submit.add_argument('--id', action='store',\n help=\"Identifier for the new job\")\n parser_submit.add_argument('--script', action='store',\n help=\"Relative name of the script in the \"\n \"directory\")\n parser_submit.add_argument('directory', action='store',\n help=\"Job directory to upload\")\n parser_submit.set_defaults(func=_submit)\n\n # Status action\n parser_status = subparsers.add_parser(\n 'status',\n help=\"Gets the status of a job\")\n 
add_destination_option(parser_status)\n parser_status.add_argument('--id', action='store',\n help=\"Identifier of the running job\")\n parser_status.set_defaults(func=_status)\n\n # Download action\n parser_download = subparsers.add_parser(\n 'download',\n help=\"Downloads files from finished job\")\n add_destination_option(parser_download)\n parser_download.add_argument('--id', action='store',\n help=\"Identifier of the job\")\n parser_download.add_argument('files', action='store',\n nargs=argparse.ONE_OR_MORE,\n help=\"Files to download\")\n parser_download.set_defaults(func=_download)\n\n # Kill action\n parser_kill = subparsers.add_parser(\n 'kill',\n help=\"Kills a running job\")\n add_destination_option(parser_kill)\n parser_kill.add_argument('--id', action='store',\n help=\"Identifier of the running job\")\n parser_kill.set_defaults(func=_kill)\n\n # Delete action\n parser_delete = subparsers.add_parser(\n 'delete',\n help=\"Deletes a finished job\")\n add_destination_option(parser_delete)\n parser_delete.add_argument('--id', action='store',\n help=\"Identifier of the finished job\")\n parser_delete.set_defaults(func=_delete)\n\n # List action\n parser_list = subparsers.add_parser(\n 'list',\n help=\"Lists remote jobs\")\n add_destination_option(parser_list)\n parser_list.set_defaults(func=_list)\n\n args = parser.parse_args()\n setup_logging(args.verbosity)\n\n try:\n args.func(args)\n except Error as e:\n # No need to show a traceback here, this is not an internal error\n logger.critical(e)\n sys.exit(1)\n sys.exit(0)", "response": "Entry point when called on the command - line."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\napplying the HTTPError wrapper to the callback.", "response": "def apply(self, callback, context): # pragma: no cover\n \"\"\"Apply the HTTPError wrapper to the callback.\n \"\"\"\n\n def wrapper(*args, **kwargs):\n try:\n return callback(*args, **kwargs)\n except bottle.HTTPError as error:\n return 
self.error_wrapper.from_status(\n status_line=error.status_line,\n msg=error.body\n )\n\n return wrapper"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding a new episode to the podcast.", "response": "def add_episode(self, text, text_format, title, author, summary=None,\n publish_date=None, synthesizer='watson', synth_args=None, sentence_break='. '):\n \"\"\"\n Add a new episode to the podcast.\n\n :param text:\n See :meth:`Episode`.\n :param text_format:\n See :meth:`Episode`.\n :param title:\n See :meth:`Episode`.\n :param author:\n See :meth:`Episode`.\n :param summary:\n See :meth:`Episode`.\n :param publish_date:\n See :meth:`Episode`.\n :param synthesizer:\n See :meth:`typecaster.utils.text_to_speech`.\n :param synth_args:\n See :meth:`typecaster.utils.text_to_speech`.\n :param sentence_break:\n See :meth:`typecaster.utils.text_to_speech`.\n \"\"\"\n if title in self.episodes:\n raise ValueError('\"' + title + '\" already exists as an episode title.')\n\n link = self.output_path + '/' + title.replace(' ', '_').lower() + '.mp3'\n episode_text = convert_to_ssml(text, text_format)\n new_episode = Episode(episode_text, text_format, title, author, link, summary, publish_date, synthesizer, synth_args, sentence_break)\n\n self.episodes[title] = new_episode"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd a new scheduled job to the scheduler.", "response": "def add_scheduled_job(self, text_source, cron_args, text_format, title, author, summary=None,\n synthesizer='watson', synth_args=None, sentence_break='. '):\n \"\"\"\n Add and start a new scheduled job to dynamically generate podcasts.\n\n Note: scheduling will end when the process ends. This works best when run\n inside an existing application.\n\n :param text_source:\n A function that generates podcast text. 
Examples: a function that\n opens a file with today's date as a filename or a function that\n requests a specific url and extracts the main text.\n Also see :meth:`Episode`.\n :param cron_args:\n A dictionary of cron parameters. Keys can be: 'year', 'month',\n 'day', 'week', 'day_of_week', 'hour', 'minute' and 'second'. Keys\n that are not specified will be parsed as 'any'/'*'.\n :param text_format:\n See :meth:`Episode`.\n :param title:\n See :meth:`Episode`. Since titles need to be unique, a\n timestamp will be appended to the title for each episode.\n :param author:\n See :meth:`Episode`.\n :param summary:\n See :meth:`Episode`.\n :param publish_date:\n See :meth:`Episode`.\n :param synthesizer:\n See :meth:`typecaster.utils.text_to_speech`.\n :param synth_args:\n See :meth:`typecaster.utils.text_to_speech`.\n :param sentence_break:\n See :meth:`typecaster.utils.text_to_speech`.\n \"\"\"\n if not callable(text_source):\n raise TypeError('Argument \"text\" must be a function')\n\n def add_episode():\n episode_text = text_source()\n episode_title = title + '_' + datetime.utcnow().strftime('%Y%m%d%H%M%S')\n\n self.add_episode(episode_text, text_format, episode_title, author, summary, datetime.utcnow(), synthesizer, synth_args, sentence_break)\n\n self.scheduled_jobs[title] = self._scheduler.add_job(add_episode, 'cron', id=title, **cron_args)\n\n if not self._scheduler.running:\n self._scheduler.start()"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\npublish a set of episodes to the Podcast s RSS feed.", "response": "def publish(self, titles):\n \"\"\"\n Publish a set of episodes to the Podcast's RSS feed.\n\n :param titles:\n Either a single episode title or a sequence of episode titles to\n publish.\n \"\"\"\n if isinstance(titles, Sequence) and not isinstance(titles, six.string_types):\n for title in titles:\n self.episodes[title].publish()\n elif isinstance(titles, six.string_types):\n self.episodes[titles].publish()\n 
else:\n raise TypeError('titles must be a string or a sequence of strings.')\n\n self.update_rss_feed()"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nrenders audio from the episode s text.", "response": "def render_audio(self):\n \"\"\"\n Synthesize audio from the episode's text.\n \"\"\"\n segment = text_to_speech(self._text, self.synthesizer, self.synth_args, self.sentence_break)\n\n milli = len(segment)\n seconds = '{0:.1f}'.format(float(milli) / 1000 % 60).zfill(2)\n minutes = '{0:.0f}'.format((milli / (1000 * 60)) % 60).zfill(2)\n hours = '{0:.0f}'.format((milli / (1000 * 60 * 60)) % 24).zfill(2)\n self.duration = hours + ':' + minutes + ':' + seconds\n\n segment.export(self.link, format='mp3')\n self.length = os.path.getsize(self.link)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef publish(self):\n if self.published is False:\n self.published = True\n else:\n raise Warning(self.title + ' is already published.')", "response": "Mark an episode as published."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmarking an episode as not published.", "response": "def unpublish(self):\n \"\"\"\n Mark an episode as not published.\n \"\"\"\n if self.published is True:\n self.published = False\n else:\n raise Warning(self.title + ' is already not published.')"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nover-writes keyword arguments with environment variables.", "response": "def _environment_variables(**kwargs):\n # type: (Any) -> Any\n \"\"\"\n Overwrite keyword arguments with environment variables\n\n Args:\n **kwargs: See below\n user_agent (str): User agent string.\n\n Returns:\n kwargs: Changed keyword arguments\n\n \"\"\"\n user_agent = os.getenv('USER_AGENT')\n if user_agent is not None:\n kwargs['user_agent'] = user_agent\n preprefix = os.getenv('PREPREFIX')\n if preprefix is not 
None:\n kwargs['preprefix'] = preprefix\n return kwargs"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _construct(configdict, prefix, ua):\n # type: (Dict, str, str) -> str\n \"\"\"\n Construct user agent\n\n Args:\n configdict (str): Additional configuration for user agent\n prefix (str): Text to put at start of user agent\n ua (str): Custom user agent text\n\n Returns:\n str: Full user agent string\n\n \"\"\"\n if not ua:\n raise UserAgentError(\"User_agent parameter missing. It can be your project's name for example.\")\n preprefix = configdict.get('preprefix')\n if preprefix:\n user_agent = '%s:' % preprefix\n else:\n user_agent = ''\n if prefix:\n user_agent = '%s%s-' % (user_agent, prefix)\n user_agent = '%s%s' % (user_agent, ua)\n return user_agent", "response": "Constructs user agent string from user agent configuration and user agent string."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _load(cls, prefix, user_agent_config_yaml, user_agent_lookup=None):\n # type: (str, str, Optional[str]) -> str\n \"\"\"\n Load user agent YAML file\n\n Args:\n prefix (str): Text to put at start of user agent\n user_agent_config_yaml (str): Path to user agent YAML file\n user_agent_lookup (Optional[str]): Lookup key for YAML. Ignored if user_agent supplied.\n\n Returns:\n str: user agent\n\n \"\"\"\n if not user_agent_config_yaml:\n user_agent_config_yaml = cls.default_user_agent_config_yaml\n logger.info(\n 'No user agent or user agent config file given. Using default user agent config file: %s.' % user_agent_config_yaml)\n if not isfile(user_agent_config_yaml):\n raise UserAgentError(\n \"User_agent should be supplied in a YAML config file. 
It can be your project's name for example.\")\n logger.info('Loading user agent config from: %s' % user_agent_config_yaml)\n user_agent_config_dict = load_yaml(user_agent_config_yaml)\n if user_agent_lookup:\n user_agent_config_dict = user_agent_config_dict.get(user_agent_lookup)\n if not user_agent_config_dict:\n raise UserAgentError(\"No user agent information read from: %s\" % user_agent_config_yaml)\n ua = user_agent_config_dict.get('user_agent')\n return cls._construct(user_agent_config_dict, prefix, ua)", "response": "Load user agent config file and construct new instance of class"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _create(cls, user_agent=None, user_agent_config_yaml=None,\n user_agent_lookup=None, **kwargs):\n # type: (Optional[str], Optional[str], Optional[str], Any) -> str\n \"\"\"\n Get full user agent string\n\n Args:\n user_agent (Optional[str]): User agent string. HDXPythonLibrary/X.X.X- is prefixed.\n user_agent_config_yaml (Optional[str]): Path to YAML user agent configuration. Ignored if user_agent supplied. Defaults to ~/.useragent.yml.\n user_agent_lookup (Optional[str]): Lookup key for YAML. 
Ignored if user_agent supplied.\n\n Returns:\n str: Full user agent string\n\n \"\"\"\n kwargs = UserAgent._environment_variables(**kwargs)\n if 'user_agent' in kwargs:\n user_agent = kwargs['user_agent']\n del kwargs['user_agent']\n prefix = kwargs.get('prefix')\n if prefix:\n del kwargs['prefix']\n else:\n prefix = 'HDXPythonUtilities/%s' % get_utils_version()\n if not user_agent:\n ua = cls._load(prefix, user_agent_config_yaml, user_agent_lookup)\n else:\n ua = cls._construct(kwargs, prefix, user_agent)\n return ua", "response": "Create a new user agent string."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the global user agent string for this class.", "response": "def set_global(cls, user_agent=None, user_agent_config_yaml=None,\n user_agent_lookup=None, **kwargs):\n # type: (Optional[str], Optional[str], Optional[str], Any) -> None\n \"\"\"\n Set global user agent string\n\n Args:\n user_agent (Optional[str]): User agent string. HDXPythonLibrary/X.X.X- is prefixed.\n user_agent_config_yaml (Optional[str]): Path to YAML user agent configuration. Ignored if user_agent supplied. Defaults to ~/.useragent.yml.\n user_agent_lookup (Optional[str]): Lookup key for YAML. Ignored if user_agent supplied.\n\n Returns:\n None\n \"\"\"\n cls.user_agent = cls._create(user_agent, user_agent_config_yaml, user_agent_lookup, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the full user agent string from user agent string from parameters if supplied falling back on global user agent if set.", "response": "def get(cls, user_agent=None, user_agent_config_yaml=None, user_agent_lookup=None, **kwargs):\n # type: (Optional[str], Optional[str], Optional[str], Any) -> str\n \"\"\"\n Get full user agent string from parameters if supplied falling back on global user agent if set.\n\n Args:\n user_agent (Optional[str]): User agent string. 
HDXPythonLibrary/X.X.X- is prefixed.\n user_agent_config_yaml (Optional[str]): Path to YAML user agent configuration. Ignored if user_agent supplied. Defaults to ~/.useragent.yml.\n user_agent_lookup (Optional[str]): Lookup key for YAML. Ignored if user_agent supplied.\n\n Returns:\n str: Full user agent string\n\n \"\"\"\n if user_agent or user_agent_config_yaml or 'user_agent' in UserAgent._environment_variables(**kwargs):\n return UserAgent._create(user_agent, user_agent_config_yaml, user_agent_lookup, **kwargs)\n if cls.user_agent:\n return cls.user_agent\n else:\n raise UserAgentError(\n 'You must either set the global user agent: UserAgent.set_global(...) or pass in user agent parameters!')"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef save_yaml(dictionary, path, pretty=False, sortkeys=False):\n # type: (Dict, str, bool, bool) -> None\n \"\"\"Save dictionary to YAML file preserving order if it is an OrderedDict\n\n Args:\n dictionary (Dict): Python dictionary to save\n path (str): Path to YAML file\n pretty (bool): Whether to pretty print. Defaults to False.\n sortkeys (bool): Whether to sort dictionary keys. 
Defaults to False.\n\n Returns:\n None\n \"\"\"\n if sortkeys:\n dictionary = dict(dictionary)\n with open(path, 'w') as f:\n if pretty:\n pyaml.dump(dictionary, f)\n else:\n yaml.dump(dictionary, f, default_flow_style=None, Dumper=yamlloader.ordereddict.CDumper)", "response": "Save dictionary to YAML file preserving order if it is an OrderedDict\n ArcGIS object"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsaving a Python dictionary to a JSON file preserving order if it is an OrderedDict AttributeNames", "response": "def save_json(dictionary, path, pretty=False, sortkeys=False):\n # type: (Dict, str, bool, bool) -> None\n \"\"\"Save dictionary to JSON file preserving order if it is an OrderedDict\n\n Args:\n dictionary (Dict): Python dictionary to save\n path (str): Path to JSON file\n pretty (bool): Whether to pretty print. Defaults to False.\n sortkeys (bool): Whether to sort dictionary keys. Defaults to False.\n\n Returns:\n None\n \"\"\"\n with open(path, 'w') as f:\n if pretty:\n indent = 2\n separators = (',', ': ')\n else:\n indent = None\n separators = (', ', ': ')\n json.dump(dictionary, f, indent=indent, sort_keys=sortkeys, separators=separators)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nloading a YAML file into an ordered dictionary", "response": "def load_yaml(path):\n # type: (str) -> OrderedDict\n \"\"\"Load YAML file into an ordered dictionary\n\n Args:\n path (str): Path to YAML file\n\n Returns:\n OrderedDict: Ordered dictionary containing loaded YAML file\n \"\"\"\n with open(path, 'rt') as f:\n yamldict = yaml.load(f.read(), Loader=yamlloader.ordereddict.CSafeLoader)\n if not yamldict:\n raise (LoadError('YAML file: %s is empty!' 
% path))\n return yamldict"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef load_json(path):\n # type: (str) -> OrderedDict\n \"\"\"Load JSON file into an ordered dictionary\n\n Args:\n path (str): Path to JSON file\n\n Returns:\n OrderedDict: Ordered dictionary containing loaded JSON file\n \"\"\"\n with open(path, 'rt') as f:\n jsondict = json.loads(f.read(), object_pairs_hook=OrderedDict)\n if not jsondict:\n raise (LoadError('JSON file: %s is empty!' % path))\n return jsondict", "response": "Load a JSON file into an ordered dictionary"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nloads file into a string removing newlines", "response": "def load_file_to_str(path):\n # type: (str) -> str\n \"\"\"\n Load file into a string removing newlines\n\n Args:\n path (str): Path to file\n\n Returns:\n str: String contents of file\n\n \"\"\"\n with open(path, 'rt') as f:\n string = f.read().replace(linesep, '')\n if not string:\n raise LoadError('%s file is empty!' 
% path)\n return string"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef contributors(lancet, output):\n sorting = pygit2.GIT_SORT_TIME | pygit2.GIT_SORT_REVERSE\n commits = lancet.repo.walk(lancet.repo.head.target, sorting)\n contributors = ((c.author.name, c.author.email) for c in commits)\n contributors = OrderedDict(contributors)\n\n template_content = content_from_path(\n lancet.config.get('packaging', 'contributors_template'))\n template = Template(template_content)\n output.write(template.render(contributors=contributors).encode('utf-8'))", "response": "List all contributors visible in the git history."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef text_to_speech(text, synthesizer, synth_args, sentence_break):\n if len(text.split()) < 50:\n if synthesizer == 'watson':\n with open('.temp.wav', 'wb') as temp:\n temp.write(watson_request(text=text, synth_args=synth_args).content)\n response = AudioSegment.from_wav('.temp.wav')\n os.remove('.temp.wav')\n return response\n else:\n raise ValueError('\"' + synthesizer + '\" synthesizer not found.')\n else:\n segments = []\n for i, sentence in enumerate(text.split(sentence_break)):\n if synthesizer == 'watson':\n with open('.temp' + str(i) + '.wav', 'wb') as temp:\n temp.write(watson_request(text=sentence, synth_args=synth_args).content)\n segments.append(AudioSegment.from_wav('.temp' + str(i) + '.wav'))\n os.remove('.temp' + str(i) + '.wav')\n else:\n raise ValueError('\"' + synthesizer + '\" synthesizer not found.')\n\n response = segments[0]\n for segment in segments[1:]:\n response = response + segment\n\n return response", "response": "Converts given text to a pydub AudioSegment using a specified speech synthesizer."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmake a single request to the IBM Watson text - to - speech API.", "response": "def watson_request(text, 
synth_args):\n \"\"\"\n Makes a single request to the IBM Watson text-to-speech API.\n\n :param text:\n The text that will be synthesized to audio.\n :param synth_args:\n A dictionary of arguments to add to the request. These should include\n username and password for authentication.\n \"\"\"\n params = {\n 'text': text,\n 'accept': 'audio/wav'\n }\n if synth_args is not None:\n params.update(synth_args)\n\n if 'username' in params:\n username = params.pop('username')\n else:\n raise Warning('The IBM Watson API requires credentials that should be passed as \"username\" and \"password\" in \"synth_args\"')\n if 'password' in params:\n password = params.pop('password')\n else:\n raise Warning('The IBM Watson API requires credentials that should be passed as \"username\" and \"password\" in \"synth_args\"')\n\n return requests.get(watson_url, auth=(username, password), params=params)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nbuilding a podcast RSS feed and returns an xml file.", "response": "def build_rss_feed(podcast):\n \"\"\"\n Builds a podcast RSS feed and returns an xml file.\n\n :param podcast:\n A Podcast model to build the RSS feed from.\n \"\"\"\n if not os.path.exists(podcast.output_path):\n os.makedirs(podcast.output_path)\n\n rss = ET.Element('rss', attrib={'xmlns:itunes': 'http://www.itunes.com/dtds/podcast-1.0.dtd', 'version': '2.0'})\n\n channel = ET.SubElement(rss, 'channel')\n ET.SubElement(channel, 'title').text = podcast.title\n ET.SubElement(channel, 'link').text = podcast.link\n ET.SubElement(channel, 'copyright').text = podcast.copyright\n ET.SubElement(channel, 'itunes:subtitle').text = podcast.subtitle\n ET.SubElement(channel, 'itunes:author').text = podcast.author\n ET.SubElement(channel, 'itunes:summary').text = podcast.description\n ET.SubElement(channel, 'description').text = podcast.description\n\n owner = ET.SubElement(channel, 'itunes:owner')\n ET.SubElement(owner, 'itunes:name').text = 
podcast.owner_name\n ET.SubElement(owner, 'itunes:email').text = podcast.owner_email\n\n ET.SubElement(channel, 'itunes:image').text = podcast.image\n\n for category in podcast.categories:\n ET.SubElement(channel, 'itunes:category').text = category\n\n for episode in sorted(podcast.episodes.values(), key=lambda x: x.publish_date):\n if episode.published is True:\n item = ET.SubElement(channel, 'item')\n ET.SubElement(item, 'title').text = episode.title\n ET.SubElement(item, 'author').text = episode.author\n ET.SubElement(item, 'summary').text = episode.summary\n ET.SubElement(item, 'enclosure', attrib={'url': podcast.link + '/' + episode.link, 'length': str(episode.length), 'type': 'audio/x-mp3'})\n ET.SubElement(item, 'guid').text = podcast.link + '/' + episode.link\n ET.SubElement(item, 'pubDate').text = episode.publish_date.strftime('%a, %d %b %Y %H:%M:%S UTC')\n ET.SubElement(item, 'itunes:duration').text = episode.duration\n\n tree = ET.ElementTree(rss)\n with open(podcast.output_path + '/feed.xml', 'wb') as feed:\n tree.write(feed)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_pages(url):\n while True:\n yield url\n doc = html.parse(url).find(\"body\")\n links = [a for a in doc.findall(\".//a\") if a.text and a.text.startswith(\"next \")]\n if not links:\n break\n url = urljoin(url, links[0].get('href'))", "response": "Get the pages from the starting url"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the articles from a page", "response": "def get_article_urls(url):\n \"\"\"\n Return the articles from a page\n Technically, look for a div with class mw-search-result-heading\n and get the first link from this div\n \"\"\"\n doc = html.parse(url).getroot()\n for div in doc.cssselect(\"div.mw-search-result-heading\"):\n href = div.cssselect(\"a\")[0].get('href')\n if \":\" in href:\n continue # skip Category: links\n href = urljoin(url, href)\n 
yield href"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting a single article as a amcat - ready dict", "response": "def get_article(url):\n \"\"\"\n Return a single article as a 'amcat-ready' dict\n Uses the 'export' function of wikinews to get an xml article\n \"\"\"\n a = html.parse(url).getroot()\n title = a.cssselect(\".firstHeading\")[0].text_content()\n date = a.cssselect(\".published\")[0].text_content()\n date = datetime.datetime.strptime(date, \"%A, %B %d, %Y\").isoformat()\n paras = a.cssselect(\"#mw-content-text p\")\n paras = paras[1:] # skip first paragraph, which contains date\n text = \"\\n\\n\".join(p.text_content().strip() for p in paras)\n\n return dict(headline=title,\n date=date,\n url=url,\n text=text,\n medium=\"Wikinews\")"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nscrapes wikinews articles from the given query", "response": "def scrape_wikinews(conn, project, articleset, query):\n \"\"\"\n Scrape wikinews articles from the given query\n @param conn: The AmcatAPI object\n @param articleset: The target articleset ID\n @param category: The wikinews category name\n \"\"\"\n url = \"http://en.wikinews.org/w/index.php?search={}&limit=50\".format(query)\n logging.info(url)\n for page in get_pages(url):\n urls = get_article_urls(page)\n arts = list(get_articles(urls))\n logging.info(\"Adding {} articles to set {}:{}\"\n .format(len(arts), project, articleset))\n conn.create_articles(project=project, articleset=articleset,\n json_data=arts)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nstart a service n times", "response": "def start_service(addr, n):\n \"\"\" Start a service \"\"\"\n s = Service(addr)\n\n started = time.time()\n for _ in range(n):\n msg = s.socket.recv()\n s.socket.send(msg)\n s.socket.close()\n duration = time.time() - started\n\n print('Raw REP service stats:')\n util.print_stats(n, duration)\n return"} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn a callable that can be used to transmit a message with a given topic.", "response": "def pub(self, topic=b'', embed_topic=False):\n \"\"\"\n Returns a callable that can be used to transmit a message, with a given\n ``topic``, in a publisher-subscriber fashion. Note that the sender\n function has a ``print`` like signature, with an infinite number of\n arguments. Each one being a part of the complete message.\n\n By default, no topic will be included into published messages. Being up\n to developers to include the topic, at the beginning of the first part\n (i.e. frame) of every published message, so that subscribers are able\n to receive them. For a different behaviour, check the embed_topic\n argument.\n\n :param topic: the topic that will be published to (default=b'')\n :type topic: bytes\n :param embed_topic: set for the topic to be automatically sent as the\n first part (i.e. frame) of every published message\n (default=False)\n :type embed_topic bool\n :rtype: function\n \"\"\"\n if not isinstance(topic, bytes):\n error = 'Topic must be bytes'\n log.error(error)\n raise TypeError(error)\n\n sock = self.__sock(zmq.PUB)\n return self.__send_function(sock, topic, embed_topic)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef sub(self, topics=(b'',)):\n sock = self.__sock(zmq.SUB)\n\n for topic in topics:\n if not isinstance(topic, bytes):\n error = 'Topics must be a list of bytes'\n log.error(error)\n raise TypeError(error)\n sock.setsockopt(zmq.SUBSCRIBE, topic)\n\n return self.__recv_generator(sock)", "response": "Returns an iterable that can be used to iterate over incoming messages from one of the specified topics."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef push(self):\n sock = self.__sock(zmq.PUSH)\n return self.__send_function(sock)", 
"response": "Returns a callable that can be used to transmit a message in a push - pull - noclean - mode fashion."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns an iterable that can be used to iterate over incoming messages from the push socket.", "response": "def pull(self):\n \"\"\"\n Returns an iterable that can be used to iterate over incoming messages,\n that were pushed by a push socket. Note that the iterable returns as\n many parts as sent by pushers.\n\n :rtype: generator\n \"\"\"\n sock = self.__sock(zmq.PULL)\n return self.__recv_generator(sock)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef request(self):\n sock = self.__sock(zmq.REQ)\n return self.__send_function(sock), self.__recv_generator(sock)", "response": "Returns a callable that returns a message and an iterable respectively."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef reply(self):\n sock = self.__sock(zmq.REP)\n return self.__send_function(sock), self.__recv_generator(sock)", "response": "Returns a callable and an iterable that returns a single message and a generator that yields all the messages that were requested by the request socket."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a callable and an iterable that returns a single message that was sent by a pair socket.", "response": "def pair(self):\n \"\"\"\n Returns a callable and an iterable respectively. Those can be used to\n both transmit a message and/or iterate over incoming messages, that were\n sent by a pair socket. Note that the iterable returns as many parts as\n sent by a pair. Also, the sender function has a ``print`` like signature,\n with an infinite number of arguments. 
Each one being a part of the\n complete message.\n\n :rtype: (function, generator)\n \"\"\"\n sock = self.__sock(zmq.PAIR)\n return self.__send_function(sock), self.__recv_generator(sock)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nconnecting to a server at the specified ip and port.", "response": "def connect(self, ip, port):\n \"\"\"\n Connects to a server at the specified ip and port.\n\n :param ip: an IP address\n :type ip: str or unicode\n :param port: port number from 1024 up to 65535\n :type port: int\n :rtype: self\n \"\"\"\n _check_valid_port_range(port)\n\n address = (ip, port)\n\n if address in self._addresses:\n error = 'Already connected to {0} on port {1}'.format(ip, port)\n log.exception(error)\n raise ValueError(error)\n\n self._addresses.append(address)\n\n if self._is_ready:\n _check_valid_num_connections(self._sock.socket_type,\n len(self._addresses))\n\n _connect_zmq_sock(self._sock, ip, port)\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndisconnect from a server at the specified ip and port.", "response": "def disconnect(self, ip, port):\n \"\"\"\n Disconnects from a server at the specified ip and port.\n\n :param ip: an IP address\n :type ip: str or unicode\n :param port: port number from 1024 up to 65535\n :type port: int\n :rtype: self\n \"\"\"\n _check_valid_port_range(port)\n address = (ip, port)\n\n try:\n self._addresses.remove(address)\n except ValueError:\n error = 'There was no connection to {0} on port {1}'.format(ip, port)\n log.exception(error)\n raise ValueError(error)\n\n if self._is_ready:\n _disconnect_zmq_sock(self._sock, ip, port)\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ndisconnects from all connected servers.", "response": "def disconnect_all(self):\n \"\"\"\n Disconnects from all connected servers.\n :rtype: self\n \"\"\"\n addresses = 
deepcopy(self._addresses)\n\n for ip, port in addresses:\n self.disconnect(ip, port)\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef remove_exponent(d):\n return d.quantize(Decimal(1)) if d == d.to_integral() else d.normalize()", "response": "Remove exponent from a sequence of decimal values."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nseparates with predefined separator.", "response": "def prettify(amount, separator=','):\n \"\"\"Separate with predefined separator.\"\"\"\n orig = str(amount)\n new = re.sub(\"^(-?\\d+)(\\d{3})\", \"\\g<1>{0}\\g<2>\".format(separator), str(amount))\n if orig == new:\n return new\n else:\n return prettify(new)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load_json(json_data, decoder=None):\n if decoder is None:\n decoder = DateTimeDecoder\n return json.loads(json_data, object_hook=decoder.decode)", "response": "Load data from json string"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nloading data from json file", "response": "def load_json_file(file, decoder=None):\n \"\"\"\n Load data from json file\n\n :param file: Readable object or path to file\n :type file: FileIO | str\n :param decoder: Use custom json decoder\n :type decoder: T <= DateTimeDecoder\n :return: Json data\n :rtype: None | int | float | str | list | dict\n \"\"\"\n if decoder is None:\n decoder = DateTimeDecoder\n if not hasattr(file, \"read\"):\n with io.open(file, \"r\", encoding=\"utf-8\") as f:\n return json.load(f, object_hook=decoder.decode)\n return json.load(file, object_hook=decoder.decode)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef save_json(val, pretty=False, sort=True, encoder=None):\n if encoder is None:\n encoder = DateTimeEncoder\n if pretty:\n data = 
json.dumps(\n val,\n indent=4,\n separators=(',', ': '),\n sort_keys=sort,\n cls=encoder\n )\n else:\n data = json.dumps(\n val,\n separators=(',', ':'),\n sort_keys=sort,\n cls=encoder\n )\n if not sys.version_info > (3, 0) and isinstance(data, str):\n data = data.decode(\"utf-8\")\n return data", "response": "Save data to json string"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef save_json_file(\n file, val,\n pretty=False, compact=True, sort=True, encoder=None\n):\n \"\"\"\n Save data to json file\n\n :param file: Writable object or path to file\n :type file: FileIO | str | unicode\n :param val: Value or struct to save\n :type val: None | int | float | str | list | dict\n :param pretty: Format data to be readable (default: False)\n :type pretty: bool\n :param compact: Format data to be compact (default: True)\n :type compact: bool\n :param sort: Sort keys (default: True)\n :type sort: bool\n :param encoder: Use custom json encoder\n :type encoder: T <= DateTimeEncoder\n :rtype: None\n \"\"\"\n # TODO: make pretty/compact into one bool?\n if encoder is None:\n encoder = DateTimeEncoder\n opened = False\n\n if not hasattr(file, \"write\"):\n file = io.open(file, \"w\", encoding=\"utf-8\")\n opened = True\n\n try:\n if pretty:\n data = json.dumps(\n val,\n indent=4,\n separators=(',', ': '),\n sort_keys=sort,\n cls=encoder\n )\n elif compact:\n data = json.dumps(\n val,\n separators=(',', ':'),\n sort_keys=sort,\n cls=encoder\n )\n else:\n data = json.dumps(val, sort_keys=sort, cls=encoder)\n if not sys.version_info > (3, 0) and isinstance(data, str):\n data = data.decode(\"utf-8\")\n file.write(data)\n finally:\n if opened:\n file.close()", "response": "Save data to json file"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load_yaml_file(file):\n if not hasattr(file, \"read\"):\n with io.open(file, \"r\", encoding=\"utf-8\") 
as f:\n return yaml.load(f, yaml.FullLoader)\n return yaml.load(file, yaml.FullLoader)", "response": "Load data from yaml file"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nsave data to a yaml file", "response": "def save_yaml_file(file, val):\n \"\"\"\n Save data to yaml file\n\n :param file: Writable object or path to file\n :type file: FileIO | str | unicode\n :param val: Value or struct to save\n :type val: None | int | float | str | unicode | list | dict\n \"\"\"\n opened = False\n\n if not hasattr(file, \"write\"):\n file = io.open(file, \"w\", encoding=\"utf-8\")\n opened = True\n\n try:\n yaml.dump(val, file)\n finally:\n if opened:\n file.close()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_file(path):\n res = {}\n\n if not path:\n raise IOError(\"No path specified to load\")\n\n if not os.path.isfile(path):\n raise IOError(\"File not found {}\".format(path))\n\n try:\n with io.open(path, \"r\", encoding=\"utf-8\") as f:\n if path.endswith(\".json\"):\n res = load_json_file(f)\n elif path.endswith(\".yaml\") or path.endswith(\".yml\"):\n res = load_yaml_file(f)\n except IOError:\n raise\n except Exception as e:\n raise IOError(e)\n return res", "response": "Loads the file at the specified path and returns the loaded data."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsaves data to file", "response": "def save_file(path, data, readable=False):\n \"\"\"\n Save to file\n\n :param path: File path to save\n :type path: str | unicode\n :param data: Data to save\n :type data: None | int | float | str | unicode | list | dict\n :param readable: Format file to be human readable (default: False)\n :type readable: bool\n :rtype: None\n :raises IOError: If empty path or error writing file\n \"\"\"\n if not path:\n raise IOError(\"No path specified to save\")\n\n try:\n with io.open(path, \"w\", encoding=\"utf-8\") as f:\n if 
path.endswith(\".json\"):\n save_json_file(\n f,\n data,\n pretty=readable,\n compact=(not readable),\n sort=True\n )\n elif path.endswith(\".yaml\") or path.endswith(\".yml\"):\n save_yaml_file(f, data)\n except IOError:\n raise\n except Exception as e:\n raise IOError(e)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef join_path_prefix(path, pre_path=None):\n if not path:\n return path\n\n if pre_path and not os.path.isabs(path):\n return os.path.join(pre_path, path)\n\n return path", "response": "Join path with pre_path if path is not absolute."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _load_json_file(self, file, decoder=None):\n try:\n res = load_json_file(file, decoder=decoder)\n except ValueError as e:\n if \"{}\".format(e) == \"No JSON object could be decoded\":\n raise IOError(\"Decoding JSON failed\")\n self.exception(\"Failed to load from {}\".format(file))\n raise IOError(\"Loading file failed\")\n except:\n self.exception(\"Failed to load from {}\".format(file))\n raise IOError(\"Loading file failed\")\n return res", "response": "Load data from json file"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _save_json_file(\n self, file, val,\n pretty=False, compact=True, sort=True, encoder=None\n ):\n \"\"\"\n Save data to json file\n\n :param file: Writable file or path to file\n :type file: FileIO | str | unicode\n :param val: Value or struct to save\n :type val: None | int | float | str | list | dict\n :param pretty: Format data to be readable (default: False)\n :type pretty: bool\n :param compact: Format data to be compact (default: True)\n :type compact: bool\n :param sort: Sort keys (default: True)\n :type sort: bool\n :param encoder: Use custom json encoder\n :type encoder: T <= flotils.loadable.DateTimeEncoder\n :rtype: None\n :raises IOError: Failed to save\n \"\"\"\n 
try:\n save_json_file(file, val, pretty, compact, sort, encoder)\n except:\n self.exception(\"Failed to save to {}\".format(file))\n raise IOError(\"Saving file failed\")", "response": "Save data to json file"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nload data from yaml file", "response": "def _load_yaml_file(self, file):\n \"\"\"\n Load data from yaml file\n\n :param file: Readable object or path to file\n :type file: FileIO | str | unicode\n :return: Yaml data\n :rtype: None | int | float | str | unicode | list | dict\n :raises IOError: Failed to load\n \"\"\"\n try:\n res = load_yaml_file(file)\n except:\n self.exception(\"Failed to load from {}\".format(file))\n raise IOError(\"Loading file failed\")\n return res"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _save_yaml_file(self, file, val):\n try:\n save_yaml_file(file, val)\n except:\n self.exception(\"Failed to save to {}\".format(file))\n raise IOError(\"Saving file failed\")", "response": "Save data to yaml file"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load_settings(self, path):\n res = self.load_file(path)\n if not isinstance(res, dict):\n raise TypeError(\"Expected settings to be dict\")\n return res", "response": "Load settings from file"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nsave the settings to file", "response": "def save_settings(self, path, settings, readable=False):\n \"\"\"\n Save settings to file\n\n :param path: File path to save\n :type path: str | unicode\n :param settings: Settings to save\n :type settings: dict\n :param readable: Format file to be human readable (default: False)\n :type readable: bool\n :rtype: None\n :raises IOError: If empty path or error writing file\n :raises TypeError: Settings is not a dict\n \"\"\"\n if not isinstance(settings, dict):\n raise 
TypeError(\"Expected settings to be dict\")\n return self.save_file(path, settings, readable)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef load_file(self, path):\n res = None\n\n if not path:\n IOError(\"No path specified to save\")\n\n if not os.path.isfile(path):\n raise IOError(\"File not found {}\".format(path))\n\n try:\n with io.open(path, \"r\", encoding=\"utf-8\") as f:\n if path.endswith(\".json\"):\n res = self._load_json_file(f)\n elif path.endswith(\".yaml\") or path.endswith(\".yml\"):\n res = self._load_yaml_file(f)\n except IOError:\n raise\n except Exception as e:\n self.exception(\"Failed reading {}\".format(path))\n raise IOError(e)\n return res", "response": "Loads the settings from a file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef save_file(self, path, data, readable=False):\n if not path:\n IOError(\"No path specified to save\")\n\n try:\n with io.open(path, \"w\", encoding=\"utf-8\") as f:\n if path.endswith(\".json\"):\n self._save_json_file(\n f,\n data,\n pretty=readable,\n compact=(not readable),\n sort=True\n )\n elif path.endswith(\".yaml\") or path.endswith(\".yml\"):\n self._save_yaml_file(f, data)\n except IOError:\n raise\n except Exception as e:\n self.exception(\"Failed writing {}\".format(path))\n raise IOError(e)", "response": "Save the data to file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef name(self):\n res = type(self).__name__\n if self._id:\n res += \".{}\".format(self._id)\n return res", "response": "Get the module name of the object."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget function name of calling method", "response": "def _get_function_name(self):\n \"\"\"\n Get function name of calling method\n\n :return: The name of the calling function\n (expected to be called in 
self.error/debug/..)\n :rtype: str | unicode\n \"\"\"\n fname = inspect.getframeinfo(inspect.stack()[2][0]).function\n if fname == \"\":\n return \"\"\n else:\n return fname"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nstarting the interface :param blocking: Should the call block until stop() is called (default: False) :type blocking: bool :rtype: None", "response": "def start(self, blocking=False):\n \"\"\"\n Start the interface\n\n :param blocking: Should the call block until stop() is called\n (default: False)\n :type blocking: bool\n :rtype: None\n \"\"\"\n super(StartStopable, self).start()\n self._is_running = True\n # blocking\n try:\n while blocking and self._is_running:\n time.sleep(self._start_block_timeout)\n except IOError as e:\n if not str(e).lower().startswith(\"[errno 4]\"):\n raise"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nextract YARA signatures embedded in Yara indicatorItem nodes.", "response": "def get_embedded_yara(self, iocid):\n \"\"\"\n Extract YARA signatures embedded in Yara/Yara indicatorItem nodes.\n This is done regardless of logic structure in the OpenIOC.\n \"\"\"\n ioc_obj = self.iocs[iocid]\n ids_to_process = set([])\n signatures = ''\n for elem in ioc_obj.top_level_indicator.xpath('.//IndicatorItem[Context/@search = \"Yara/Yara\"]'):\n signature = elem.findtext('Content')\n signatures = signatures + '\\n' + signature\n if signatures:\n signatures += '\\n'\n return signatures"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_yara_condition_string(self, indicator_node, parameters_node, ids_to_process, condition_string='',\n joining_value='or'):\n \"\"\"\n get_yara_condition_string\n\n input\n indicator_node: this is the node we walk down\n parameters_node: this contains all the parameters in the ioc, so we\n can look up parameters nodes as we walk them.\n ids_to_process: set of ids to 
upgrade\n condition_string: This represents the yara condition string. This\n string grows as we walk nodes.\n return\n returns True upon completion\n may raise YaraConversionError\n \"\"\"\n\n indicator_node_id = str(indicator_node.get('id'))\n if indicator_node_id not in ids_to_process:\n msg = 'Entered into get_yara_condition_string with an invalid node to walk [{}]'.format(indicator_node_id)\n raise YaraConversionError(msg)\n expected_tag = 'Indicator'\n if indicator_node.tag != expected_tag:\n raise YaraConversionError('indicator_node expected tag is [%s]' % expected_tag)\n is_set = None\n # print 'indicator node id [%s]' % str(indicator_node_id)\n for param in parameters_node.xpath('.//param[@ref-id=\"{}\"]'.format(indicator_node_id)):\n if param.attrib['name'] == 'yara/set':\n is_set = True\n set_count = param.findtext('value', None)\n try:\n temp = int(set_count)\n if temp < 1:\n raise YaraConversionError('yara/set parameter value was less than 1')\n if temp > len(indicator_node.getchildren()):\n msg = 'yara/set value is greater than the number of children' \\\n ' of Indicator node [%s]' % str(indicator_node_id)\n raise YaraConversionError(msg)\n except ValueError:\n raise YaraConversionError('yara/set parameter was not an integer')\n set_dict = {'set_count': set_count, 'set_ids': []}\n\n for node in indicator_node.getchildren():\n node_id = node.get('id')\n # XXX strip out '-' characters from the ids. 
If a guid is used as\n # the id, this will cause processing errors\n safe_node_id = node_id.replace('-', '')\n if node_id not in ids_to_process:\n continue\n if node.tag == 'IndicatorItem':\n # print 'handling indicatoritem: [%s]' % node_id\n if is_set:\n set_dict['set_ids'].append('$' + safe_node_id)\n else:\n # Default mapping\n mapping = {'prefix': '$', 'identifier': safe_node_id, 'condition': '', 'postfix': ''}\n # XXX: Alot of this could raise ValueError\n use_condition_template = False\n negation = node.get('negate')\n condition = node.get('condition')\n search = node.xpath('Context/@search')[0]\n content = node.findtext('Content')\n\n yara_condition = self.condition_to_yara_map[condition]\n if not yara_condition:\n msg = 'Invalid IndicatorItem condition encountered [%s][%s]' % (str(node_id), str(condition))\n raise YaraConversionError(msg)\n if negation.lower() == 'true':\n negation = True\n else:\n negation = False\n # parameters cannot modifier the condition of FileSize or Rule\n if search == 'Yara/FileSize':\n mapping['prefix'] = ''\n mapping['identifier'] = 'filesize'\n mapping['postfix'] = ' ' + content\n mapping['condition'] = yara_condition\n use_condition_template = True\n elif search == 'Yara/RuleName':\n if content not in self.ioc_names_set:\n if mangle_name(content) in self.ioc_names_mangled_set:\n msg = 'Yara/RuleName is present as a mangled name.[{}][{}]'.format(mangle_name(content),\n node_id)\n log.warning(msg)\n content = mangle_name(content)\n else:\n log.warning('Yara/RuleName points to a name [{}] that is not in the set of IOCs being'\n ' processed [{}]'.format(content, node_id))\n if mangle_name(content) != content:\n msg = 'Yara/RuleName contains characters which would cause libyara errors' \\\n ' [{}]'.format(node_id)\n raise YaraConversionError(msg)\n mapping['prefix'] = ''\n mapping['identifier'] = content\n # handle parameters\n else:\n xp = './/param[@ref-id=\"{}\" and (@name=\"yara/count\" or @name=\"yara/offset/at\" or' \\\n ' 
@name=\"yara/offset/in\")]'.format(node_id)\n params = parameters_node.xpath(xp)\n if len(params) > 1:\n msg = 'More than one condition parameters assigned to IndicatorItem [{}]'.format(node_id)\n raise YaraConversionError(msg)\n for param in params:\n param_name = param.get('name', None)\n if param_name == 'yara/count':\n log.debug('Found [%s] attached to [%s]' % (param.attrib['name'], node_id))\n mapping['prefix'] = '#'\n mapping['postfix'] = ' ' + param.findtext('value')\n mapping['condition'] = yara_condition\n use_condition_template = True\n break\n elif param_name == 'yara/offset/at':\n log.debug('Found [%s] attached to [%s]' % (param.attrib['name'], node_id))\n mapping['condition'] = 'at'\n mapping['postfix'] = ' ' + param.findtext('value')\n use_condition_template = True\n break\n elif param_name == 'yara/offset/in':\n log.debug('Found [%s] attached to [%s]' % (param.attrib['name'], node_id))\n mapping['condition'] = 'in'\n mapping['postfix'] = ' ' + param.findtext('value')\n use_condition_template = True\n break\n\n if use_condition_template:\n temp_string = self.yara_II_condition_template % mapping\n else:\n temp_string = self.yara_II_template % mapping\n\n if condition_string == '':\n condition_string = temp_string\n else:\n condition_string = ' '.join([condition_string, joining_value, temp_string])\n # print condition_string\n\n elif node.tag == 'Indicator':\n if is_set:\n raise YaraConversionError('Cannot have Indicator nodes underneath a Indicator node with yara/set')\n operator = node.get('operator').lower()\n if operator not in ['or', 'and']:\n raise YaraConversionError('Indicator@operator is not and/or. 
[%s] has [%s]' % (id, operator))\n # handle parameters\n # XXX Temp POC\n recursed_condition = self.get_yara_condition_string(node, parameters_node, ids_to_process, '', operator)\n xp = './/param[@ref-id=\"{}\" and @name=\"yara/set\"]'.format(node_id)\n if (not parameters_node.xpath(xp)) and has_siblings(node):\n recursed_condition = '(%s)' % recursed_condition\n\n if condition_string == '':\n condition_string = recursed_condition\n else:\n condition_string = ' '.join([condition_string, joining_value, recursed_condition])\n # print 'recursed got: [%s]' % condition_string\n else:\n # should never get here\n raise YaraConversionError('node.tag is not a Indicator/IndicatorItem [%s]' % str(id))\n\n if is_set:\n log.debug('Building set expression for [%s]' % indicator_node_id)\n if len(set_dict['set_ids']) == 0:\n raise YaraConversionError('yara/set processing did not yield any set ids')\n elif len(set_dict['set_ids']) == 1:\n log.warning('yara/set with 1 id found for node [%s]' % node_id)\n set_ids = ''.join(set_dict['set_ids'])\n else:\n set_ids = ','.join(set_dict['set_ids'])\n set_dict['set_ids'] = set_ids\n temp_set_string = self.yara_set_string_template % set_dict\n # print temp_set_string\n if condition_string == '':\n condition_string = temp_set_string\n else:\n condition_string = ' '.join(\n [condition_string, indicator_node.getparent().get('operator').lower(), temp_set_string])\n\n return condition_string", "response": "This function returns the string that can be used to build the yara condition string for the Yara object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef write_yara(self, output_file):\n fout = open(output_file, 'wb')\n fout.write('\\n')\n\n for iocid in self.yara_signatures:\n signature = self.yara_signatures[iocid]\n fout.write(signature)\n fout.write('\\n')\n\n fout.close()\n return True", "response": "Write out the yara signatures to a file."} {"SOURCE": "codesearchnet", "instruction": 
"Create a Python 3 function for\nmaking an arbitrary directory. This is safe to call for Python 2 users.", "response": "def safe_makedirs(fdir):\n \"\"\"\n Make an arbitrary directory. This is safe to call for Python 2 users.\n\n :param fdir: Directory path to make.\n :return:\n \"\"\"\n if os.path.isdir(fdir):\n pass\n # print 'dir already exists: %s' % str(dir)\n else:\n try:\n os.makedirs(fdir)\n except WindowsError as e:\n if 'Cannot create a file when that file already exists' in e:\n log.debug('relevant dir already exists')\n else:\n raise WindowsError(e)\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts the iocs in self. iocs from OpenIOC 1. 1 format to OpenIOC 1. 0 format.", "response": "def convert_to_10(self):\n \"\"\"\n converts the iocs in self.iocs from openioc 1.1 to openioc 1.0 format.\n the converted iocs are stored in the dictionary self.iocs_10\n :return: A list of iocid values which had errors downgrading.\n \"\"\"\n if len(self) < 1:\n log.error('no iocs available to modify')\n return False\n log.info('Converting IOCs from 1.1 to 1.0.')\n errors = []\n for iocid in self.iocs:\n pruned = False\n ioc_obj_11 = self.iocs[iocid]\n metadata = ioc_obj_11.metadata\n # record metadata\n name_11 = metadata.findtext('.//short_description')\n keywords_11 = metadata.findtext('.//keywords')\n description_11 = metadata.findtext('.//description')\n author_11 = metadata.findtext('.//authored_by')\n created_date_11 = metadata.findtext('.//authored_date')\n last_modified_date_11 = ioc_obj_11.root.get('last-modified')\n links_11 = []\n for link in metadata.xpath('//link'):\n link_rel = link.get('rel')\n link_text = link.text\n links_11.append((link_rel, None, link_text))\n # get ioc_logic\n try:\n ioc_logic = ioc_obj_11.root.xpath('.//criteria')[0]\n except IndexError:\n log.exception(\n 'Could not find criteria nodes for IOC [{}]. 
Did you attempt to convert OpenIOC 1.0 iocs?'.format(\n iocid))\n errors.append(iocid)\n continue\n try:\n tlo_11 = ioc_logic.getchildren()[0]\n except IndexError:\n log.exception(\n 'Could not find children for the top level criteria/children nodes for IOC [{}]'.format(iocid))\n errors.append(iocid)\n continue\n tlo_id = tlo_11.get('id')\n # record comment parameters\n comment_dict = {}\n for param in ioc_obj_11.parameters.xpath('//param[@name=\"comment\"]'):\n param_id = param.get('ref-id')\n param_text = param.findtext('value')\n comment_dict[param_id] = param_text\n # create a 1.1 indicator and populate it with the metadata from the existing 1.1\n # we will then modify this new IOC to conform to 1.1 schema\n ioc_obj_10 = ioc_api.IOC(name=name_11, description=description_11, author=author_11, links=links_11,\n keywords=keywords_11, iocid=iocid)\n ioc_obj_10.root.attrib['last-modified'] = last_modified_date_11\n authored_date_node = ioc_obj_10.metadata.find('authored_date')\n authored_date_node.text = created_date_11\n\n # convert 1.1 ioc object to 1.0\n # change xmlns\n ioc_obj_10.root.attrib['xmlns'] = 'http://schemas.mandiant.com/2010/ioc'\n # remove published data\n del ioc_obj_10.root.attrib['published-date']\n # remove parameters node\n ioc_obj_10.root.remove(ioc_obj_10.parameters)\n # change root tag\n ioc_obj_10.root.tag = 'ioc'\n # metadata underneath the root node\n metadata_node = ioc_obj_10.metadata\n criteria_node = ioc_obj_10.top_level_indicator.getparent()\n metadata_dictionary = {}\n for child in metadata_node:\n metadata_dictionary[child.tag] = child\n for tag in METADATA_REQUIRED_10:\n if tag not in metadata_dictionary:\n msg = 'IOC {} is missing required metadata: [{}]'.format(iocid, tag)\n raise DowngradeError(msg)\n for tag in METADATA_ORDER_10:\n if tag in metadata_dictionary:\n ioc_obj_10.root.append(metadata_dictionary.get(tag))\n ioc_obj_10.root.remove(metadata_node)\n ioc_obj_10.root.remove(criteria_node)\n criteria_node.tag = 
'definition'\n ioc_obj_10.root.append(criteria_node)\n\n ioc_obj_10.top_level_indicator.attrib['id'] = tlo_id\n # identify indicator items with 1.1 specific operators\n # we will skip them when converting IOC from 1.1 to 1.0.\n ids_to_skip = set()\n indicatoritems_to_remove = set()\n for condition_type in self.openioc_11_only_conditions:\n for elem in ioc_logic.xpath('//IndicatorItem[@condition=\"%s\"]' % condition_type):\n pruned = True\n indicatoritems_to_remove.add(elem)\n for elem in ioc_logic.xpath('//IndicatorItem[@preserve-case=\"true\"]'):\n pruned = True\n indicatoritems_to_remove.add(elem)\n # walk up from each indicatoritem\n # to build set of ids to skip when downconverting\n for elem in indicatoritems_to_remove:\n nid = None\n current = elem\n while nid != tlo_id:\n parent = current.getparent()\n nid = parent.get('id')\n if nid == tlo_id:\n current_id = current.get('id')\n ids_to_skip.add(current_id)\n else:\n current = parent\n # walk the 1.1 IOC to convert it into a 1.0 IOC\n # noinspection PyBroadException\n try:\n self.convert_branch(tlo_11, ioc_obj_10.top_level_indicator, ids_to_skip, comment_dict)\n except DowngradeError:\n log.exception('Problem converting IOC [{}]'.format(iocid))\n errors.append(iocid)\n continue\n except Exception:\n log.exception('Unknown error occured while converting [{}]'.format(iocid))\n errors.append(iocid)\n continue\n # bucket pruned iocs / null iocs\n if not ioc_obj_10.top_level_indicator.getchildren():\n self.null_pruned_iocs.add(iocid)\n elif pruned is True:\n self.pruned_11_iocs.add(iocid)\n # Check the original to see if there was a comment prior to the root node, and if so, copy it's content\n comment_node = ioc_obj_11.root.getprevious()\n while comment_node is not None:\n log.debug('found a comment node')\n c = et.Comment(comment_node.text)\n ioc_obj_10.root.addprevious(c)\n comment_node = comment_node.getprevious()\n # Record the IOC\n # ioc_10 = et.ElementTree(root_10)\n self.iocs_10[iocid] = ioc_obj_10\n 
return errors"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef convert_branch(self, old_node, new_node, ids_to_skip, comment_dict=None):\n expected_tag = 'Indicator'\n if old_node.tag != expected_tag:\n raise DowngradeError('old_node expected tag is [%s]' % expected_tag)\n if not comment_dict:\n comment_dict = {}\n for node in old_node.getchildren():\n node_id = node.get('id')\n if node_id in ids_to_skip:\n continue\n if node.tag == 'IndicatorItem':\n negation = node.get('negate')\n condition = node.get('condition')\n if 'true' in negation.lower():\n new_condition = condition + 'not'\n else:\n new_condition = condition\n document = node.xpath('Context/@document')[0]\n search = node.xpath('Context/@search')[0]\n content_type = node.xpath('Content/@type')[0]\n content = node.findtext('Content')\n context_type = node.xpath('Context/@type')[0]\n new_ii_node = ioc_api.make_indicatoritem_node(condition=condition,\n document=document,\n search=search,\n content_type=content_type,\n content=content,\n context_type=context_type,\n nid=node_id)\n # set condition\n new_ii_node.attrib['condition'] = new_condition\n # set comment\n if node_id in comment_dict:\n comment = comment_dict[node_id]\n comment_node = et.Element('Comment')\n comment_node.text = comment\n new_ii_node.append(comment_node)\n # remove preserver-case and negate\n del new_ii_node.attrib['negate']\n del new_ii_node.attrib['preserve-case']\n new_node.append(new_ii_node)\n elif node.tag == 'Indicator':\n operator = node.get('operator')\n if operator.upper() not in ['OR', 'AND']:\n raise DowngradeError('Indicator@operator is not AND/OR. 
[%s] has [%s]' % (node_id, operator))\n new_i_node = ioc_api.make_indicator_node(operator, node_id)\n new_node.append(new_i_node)\n self.convert_branch(node, new_i_node, ids_to_skip, comment_dict)\n else:\n # should never get here\n raise DowngradeError('node is not an Indicator/IndicatorItem')\n return True", "response": "Recursively walk a branch of the Indicator node and convert it to a new node."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef write_iocs(self, directory=None, source=None):\n \"\"\"\n Write the iocs to a directory.\n\n if directory is None, write the iocs to the current working directory\n source: allows specifying a different dictionary of ElementTree ioc objects\n \"\"\"\n if not source:\n source = self.iocs_10\n if len(source) < 1:\n log.error('no iocs available to write out')\n return False\n if not directory:\n directory = os.getcwd()\n if os.path.isfile(directory):\n log.error('cannot write iocs to a file path')\n return False\n source_iocs = set(source.keys())\n source_iocs = source_iocs.difference(self.pruned_11_iocs)\n source_iocs = source_iocs.difference(self.null_pruned_iocs)\n if not source_iocs:\n log.error('no iocs available to write out after removing pruned/null iocs')\n return False\n utils.safe_makedirs(directory)\n output_dir = os.path.abspath(directory)\n log.info('Writing IOCs to %s' % (str(output_dir)))\n # serialize the iocs\n for iocid in source_iocs:\n ioc_obj = source[iocid]\n ioc_obj.write_ioc_to_file(output_dir=output_dir, force=True)\n return True", "response": "Writes the IOCs to a directory."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef write_pruned_iocs(self, directory=None, pruned_source=None):\n \"\"\"\n write_pruned_iocs to a directory\n\n if directory is None, write the iocs to the current working directory\n \"\"\"\n if pruned_source is None:\n pruned_source = self.pruned_11_iocs\n if len(pruned_source) < 
1:\n log.error('no iocs available to write out')\n return False\n if not directory:\n directory = os.getcwd()\n if os.path.isfile(directory):\n log.error('cannot write iocs to a file path')\n return False\n utils.safe_makedirs(directory)\n output_dir = os.path.abspath(directory)\n # serialize the iocs\n for iocid in pruned_source:\n ioc_obj = self.iocs_10[iocid]\n ioc_obj.write_ioc_to_file(output_dir=output_dir, force=True)\n return True", "response": "Writes the pruned IOCs to a directory."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_dnsentryitem_recordname(dns_name, condition='contains', negate=False, preserve_case=False):\n document = 'DnsEntryItem'\n search = 'DnsEntryItem/RecordName'\n content_type = 'string'\n content = dns_name\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create an IndicatorItem node for DnsEntryItem/RecordName"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_driveritem_deviceitem_devicename(device_name, condition='is', negate=False, preserve_case=False):\n document = 'DriverItem'\n search = 'DriverItem/DeviceItem/DeviceName'\n content_type = 'string'\n content = device_name\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create an IndicatorItem node for DriverItem/DeviceItem/DeviceName"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef make_driveritem_drivername(driver_name, condition='contains', negate=False, preserve_case=False):\n document = 'DriverItem'\n search = 'DriverItem/DriverName'\n content_type = 'string'\n content = driver_name\n ii_node = 
ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for DriverItem and DriverName"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a IndicatorItem node for an EventLogItem with the given EID", "response": "def make_eventlogitem_eid(eid, condition='is', negate=False):\n \"\"\"\n Create a node for EventLogItem/EID\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'EventLogItem'\n search = 'EventLogItem/EID'\n content_type = 'int'\n content = eid\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef make_eventlogitem_log(log, condition='is', negate=False, preserve_case=False):\n document = 'EventLogItem'\n search = 'EventLogItem/log'\n content_type = 'string'\n content = log\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for EventLogItem and log"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_eventlogitem_message(message, condition='contains', negate=False, preserve_case=False):\n document = 'EventLogItem'\n search = 'EventLogItem/message'\n content_type = 'string'\n content = message\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for EventLogItem and a message node."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate a IndicatorItem node for FileItem / 
FileAttributes", "response": "def make_fileitem_fileattributes(attributes, condition='contains', negate=False, preserve_case=False):\n \"\"\"\n Create a node for FileItem/FileAttributes\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'FileItem'\n search = 'FileItem/FileAttributes'\n content_type = 'string'\n content = attributes\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a IndicatorItem node for FileItem and FileExtension node", "response": "def make_fileitem_fileextension(extension, condition='is', negate=False, preserve_case=False):\n \"\"\"\n Create a node for FileItem/FileExtension\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'FileItem'\n search = 'FileItem/FileExtension'\n content_type = 'string'\n content = extension\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate a IndicatorItem node for FileItem / FileName", "response": "def make_fileitem_filename(filename, condition='is', negate=False, preserve_case=False):\n \"\"\"\n Create a node for FileItem/FileName\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'FileItem'\n search = 'FileItem/FileName'\n content_type = 'string'\n content = filename\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_fileitem_filepath(filepath, condition='contains', negate=False, preserve_case=False):\n document = 
'FileItem'\n search = 'FileItem/FilePath'\n content_type = 'string'\n content = filepath\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for FileItem / FilePath\n \n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef make_fileitem_fullpath(fullpath, condition='contains', negate=False, preserve_case=False):\n document = 'FileItem'\n search = 'FileItem/FullPath'\n content_type = 'string'\n content = fullpath\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for FileItem / FullPath"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef make_fileitem_md5sum(md5, condition='is', negate=False):\n document = 'FileItem'\n search = 'FileItem/Md5sum'\n content_type = 'md5'\n content = md5\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate)\n return ii_node", "response": "Create a IndicatorItem node for FileItem / Md5sum"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_fileitem_peinfo_detectedanomalies_string(anomaly, condition='is', negate=False, preserve_case=False):\n document = 'FileItem'\n search = 'FileItem/PEInfo/DetectedAnomalies/string'\n content_type = 'string'\n content = anomaly\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for FileItem / PEInfo / DetectedAnomalies / string\n \n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following 
Python 3 function doing\ndef make_fileitem_peinfo_detectedentrypointsignature_name(entrypoint_name, condition='is', negate=False,\n preserve_case=False):\n \"\"\"\n Create a node for FileItem/PEInfo/DetectedEntryPointSignature/Name\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'FileItem'\n search = 'FileItem/PEInfo/DetectedEntryPointSignature/Name'\n content_type = 'string'\n content = entrypoint_name\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for FileItem / PEInfo / DetectedEntryPointSignature / Name\n \n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a IndicatorItem node for FileItem / PEInfo / DigitalSignature / SignatureExists node", "response": "def make_fileitem_peinfo_digitalsignature_signatureexists(sig_exists, condition='is', negate=False):\n \"\"\"\n Create a node for FileItem/PEInfo/DigitalSignature/SignatureExists\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'FileItem'\n search = 'FileItem/PEInfo/DigitalSignature/SignatureExists'\n content_type = 'bool'\n content = sig_exists\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef make_fileitem_peinfo_digitalsignature_signatureverified(sig_verified, condition='is', negate=False):\n document = 'FileItem'\n search = 'FileItem/PEInfo/DigitalSignature/SignatureVerified'\n content_type = 'bool'\n content = sig_verified\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate)\n return ii_node", "response": "Create a IndicatorItem node for FileItem / PEInfo / DigitalSignature / SignatureVerified\n \n "} 
{"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef make_fileitem_peinfo_exports_dllname(dll_name, condition='is', negate=False, preserve_case=False):\n document = 'FileItem'\n search = 'FileItem/PEInfo/Exports/DllName'\n content_type = 'string'\n content = dll_name\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Creates a IndicatorItem node for FileItem / PEInfo / Exports / DllName"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef make_fileitem_peinfo_exports_numberoffunctions(function_count, condition='is', negate=False):\n document = 'FileItem'\n search = 'FileItem/PEInfo/Exports/NumberOfFunctions'\n content_type = 'int'\n content = function_count\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate)\n return ii_node", "response": "Create a IndicatorItem node for FileItem / PEInfo / Exports / NumberOfFunctions"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_fileitem_peinfo_importedmodules_module_importedfunctions_string(imported_function, condition='is',\n negate=False, preserve_case=False):\n \"\"\"\n Create a node for FileItem/PEInfo/ImportedModules/Module/ImportedFunctions/string\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'FileItem'\n search = 'FileItem/PEInfo/ImportedModules/Module/ImportedFunctions/string'\n content_type = 'string'\n content = imported_function\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Creates a IndicatorItem node for FileItem / PEInfo / ImportedModules / Module / 
ImportedFunctions / string\n \n "} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a IndicatorItem node for FileItem / PEInfo / ImportedModules / Module / Name", "response": "def make_fileitem_peinfo_importedmodules_module_name(imported_module, condition='is', negate=False,\n preserve_case=False):\n \"\"\"\n Create a node for FileItem/PEInfo/ImportedModules/Module/Name\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'FileItem'\n search = 'FileItem/PEInfo/ImportedModules/Module/Name'\n content_type = 'string'\n content = imported_module\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a IndicatorItem node for FileItem / PEInfo / PETimeStamp", "response": "def make_fileitem_peinfo_petimestamp(compile_time, condition='is', negate=False):\n \"\"\"\n Create a node for FileItem/PEInfo/PETimeStamp\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'FileItem'\n search = 'FileItem/PEInfo/PETimeStamp'\n content_type = 'date'\n content = compile_time\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a IndicatorItem node for FileItem / PEInfo / ResourceInfoItem / Name", "response": "def make_fileitem_peinfo_resourceinfolist_resourceinfoitem_name(resource_name, condition='is', negate=False,\n preserve_case=False):\n \"\"\"\n Create a node for FileItem/PEInfo/ResourceInfoList/ResourceInfoItem/Name\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'FileItem'\n search = 'FileItem/PEInfo/ResourceInfoList/ResourceInfoItem/Name'\n content_type = 'string'\n content = resource_name\n 
ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_fileitem_peinfo_type(petype, condition='is', negate=False, preserve_case=False):\n document = 'FileItem'\n search = 'FileItem/PEInfo/Type'\n content_type = 'string'\n content = petype\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for FileItem / PEInfo / Type\n \n "} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating a IndicatorItem node for FileItem / SizeInBytes", "response": "def make_fileitem_sizeinbytes(filesize, condition='is', negate=False):\n \"\"\"\n Create a node for FileItem/SizeInBytes\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'FileItem'\n search = 'FileItem/SizeInBytes'\n content_type = 'int'\n content = filesize\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_fileitem_streamlist_stream_name(stream_name, condition='is', negate=False, preserve_case=False):\n document = 'FileItem'\n search = 'FileItem/StreamList/Stream/Name'\n content_type = 'string'\n content = stream_name\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for FileItem / StreamList / Stream / Name\n \n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
make_fileitem_stringlist_string(file_string, condition='contains', negate=False, preserve_case=False):\n document = 'FileItem'\n search = 'FileItem/StringList/string'\n content_type = 'string'\n content = file_string\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for FileItem / StringList / string\n \n "} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a IndicatorItem node for FileItem / Username", "response": "def make_fileitem_username(file_owner, condition='is', negate=False, preserve_case=False):\n \"\"\"\n Create a node for FileItem/Username\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'FileItem'\n search = 'FileItem/Username'\n content_type = 'string'\n content = file_owner\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a IndicatorItem node for a HookItem / HookedFunction node", "response": "def make_hookitem_hookedfunction(hooked_function, condition='is', negate=False, preserve_case=False):\n \"\"\"\n Create a node for HookItem/HookedFunction\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'HookItem'\n search = 'HookItem/HookedFunction'\n content_type = 'string'\n content = hooked_function\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a IndicatorItem node for HookItem / HookingModule", "response": "def make_hookitem_hookingmodule(hooking_module, 
condition='contains', negate=False, preserve_case=False):\n \"\"\"\n Create a node for HookItem/HookingModule\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'HookItem'\n search = 'HookItem/HookingModule'\n content_type = 'string'\n content = hooking_module\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_portitem_remoteport(remote_port, condition='is', negate=False):\n document = 'PortItem'\n search = 'PortItem/remotePort'\n content_type = 'int'\n content = remote_port\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate)\n return ii_node", "response": "Create a IndicatorItem node for PortItem and RemotePort"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef make_prefetchitem_accessedfilelist_accessedfile(accessed_file, condition='contains', negate=False,\n preserve_case=False):\n \"\"\"\n Create a node for PrefetchItem/AccessedFileList/AccessedFile\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'PrefetchItem'\n search = 'PrefetchItem/AccessedFileList/AccessedFile'\n content_type = 'string'\n content = accessed_file\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for PrefetchItem / AccessedFileList"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a IndicatorItem node for PrefetchItem and ApplicationFileName", "response": "def make_prefetchitem_applicationfilename(application_filename, condition='is', negate=False, preserve_case=False):\n \"\"\"\n Create a node 
for PrefetchItem/ApplicationFileName\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'PrefetchItem'\n search = 'PrefetchItem/ApplicationFileName'\n content_type = 'string'\n content = application_filename\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_prefetchitem_applicationfullpath(application_fullpath, condition='contains', negate=False,\n preserve_case=False):\n \"\"\"\n Create a node for PrefetchItem/ApplicationFullPath\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'PrefetchItem'\n search = 'PrefetchItem/ApplicationFullPath'\n content_type = 'string'\n content = application_fullpath\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for PrefetchItem and ApplicationFullPath"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a IndicatorItem node for a ProcessItem / HandleList / Handle / Name node", "response": "def make_processitem_handlelist_handle_name(handle_name, condition='contains', negate=False, preserve_case=False):\n \"\"\"\n Create a node for ProcessItem/HandleList/Handle/Name\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'ProcessItem'\n search = 'ProcessItem/HandleList/Handle/Name'\n content_type = 'string'\n content = handle_name\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
make_processitem_portlist_portitem_remoteip(remote_ip, condition='is', negate=False):\n document = 'ProcessItem'\n search = 'ProcessItem/PortList/PortItem/remoteIP'\n content_type = 'IP'\n content = remote_ip\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate)\n return ii_node", "response": "Create a IndicatorItem node for a ProcessItem / PortList / PortItem / remoteIP"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_processitem_sectionlist_memorysection_name(section_name, condition='contains', negate=False,\n preserve_case=False):\n \"\"\"\n Create a node for ProcessItem/SectionList/MemorySection/Name\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'ProcessItem'\n search = 'ProcessItem/SectionList/MemorySection/Name'\n content_type = 'string'\n content = section_name\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for a ProcessItem section list memory section name"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate a IndicatorItem node for Exported Functions in Memory Section List", "response": "def make_processitem_sectionlist_memorysection_peinfo_exports_exportedfunctions_string(export_function, condition='is',\n negate=False,\n preserve_case=False):\n \"\"\"\n Create a node for ProcessItem/SectionList/MemorySection/PEInfo/Exports/ExportedFunctions/string\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'ProcessItem'\n search = 'ProcessItem/SectionList/MemorySection/PEInfo/Exports/ExportedFunctions/string'\n content_type = 'string'\n content = export_function\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n 
return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef make_processitem_stringlist_string(string, condition='contains', negate=False, preserve_case=False):\n document = 'ProcessItem'\n search = 'ProcessItem/StringList/string'\n content_type = 'string'\n content = string\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for a ProcessItem / StringList"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef make_processitem_username(username, condition='contains', negate=False, preserve_case=False):\n document = 'ProcessItem'\n search = 'ProcessItem/Username'\n content_type = 'string'\n content = username\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for a ProcessItem with the given username"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef make_processitem_arguments(arguments, condition='contains', negate=False, preserve_case=False):\n document = 'ProcessItem'\n search = 'ProcessItem/arguments'\n content_type = 'string'\n content = arguments\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a node for ProcessItem / arguments"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef make_processitem_path(path, condition='contains', negate=False, preserve_case=False):\n document = 'ProcessItem'\n search = 'ProcessItem/path'\n content_type = 'string'\n content = path\n ii_node = 
ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for a ProcessItem with a given path"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef make_registryitem_keypath(keypath, condition='contains', negate=False, preserve_case=False):\n document = 'RegistryItem'\n search = 'RegistryItem/KeyPath'\n content_type = 'string'\n content = keypath\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for RegistryItem and KeyPath"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a node for RegistryItem and Text node for RegistryItem", "response": "def make_registryitem_text(text, condition='contains', negate=False, preserve_case=False):\n \"\"\"\n Create a node for RegistryItem/Text\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'RegistryItem'\n search = 'RegistryItem/Text'\n content_type = 'string'\n content = text\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating a IndicatorItem node for RegistryItem ValueName", "response": "def make_registryitem_valuename(valuename, condition='is', negate=False, preserve_case=False):\n \"\"\"\n Create a node for RegistryItem/ValueName\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'RegistryItem'\n search = 'RegistryItem/ValueName'\n content_type = 'string'\n content = valuename\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, 
preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_serviceitem_description(description, condition='contains', negate=False, preserve_case=False):\n document = 'ServiceItem'\n search = 'ServiceItem/description'\n content_type = 'string'\n content = description\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for ServiceItem and description"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a IndicatorItem node for ServiceItem and descriptiveName", "response": "def make_serviceitem_descriptivename(descriptive_name, condition='is', negate=False, preserve_case=False):\n \"\"\"\n Create a node for ServiceItem/descriptiveName\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'ServiceItem'\n search = 'ServiceItem/descriptiveName'\n content_type = 'string'\n content = descriptive_name\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating a IndicatorItem node for ServiceItem and name", "response": "def make_serviceitem_name(name, condition='is', negate=False, preserve_case=False):\n \"\"\"\n Create a node for ServiceItem/name\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'ServiceItem'\n search = 'ServiceItem/name'\n content_type = 'string'\n content = name\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef 
make_serviceitem_pathmd5sum(path_md5, condition='is', negate=False):\n document = 'ServiceItem'\n search = 'ServiceItem/pathmd5sum'\n content_type = 'md5'\n content = path_md5\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate)\n return ii_node", "response": "Create a IndicatorItem node for ServiceItem / pathmd5sum"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a IndicatorItem node for ServiceItem and serviceDLL", "response": "def make_serviceitem_servicedll(servicedll, condition='contains', negate=False, preserve_case=False):\n \"\"\"\n Create a node for ServiceItem/serviceDLL\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'ServiceItem'\n search = 'ServiceItem/serviceDLL'\n content_type = 'string'\n content = servicedll\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_serviceitem_servicedllsignatureexists(dll_sig_exists, condition='is', negate=False):\n document = 'ServiceItem'\n search = 'ServiceItem/serviceDLLSignatureExists'\n content_type = 'bool'\n content = dll_sig_exists\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate)\n return ii_node", "response": "Create a IndicatorItem node for ServiceItem / serviceDLLSignatureExists"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef make_serviceitem_servicedllsignatureverified(dll_sig_verified, condition='is', negate=False):\n document = 'ServiceItem'\n search = 'ServiceItem/serviceDLLSignatureVerified'\n content_type = 'bool'\n content = dll_sig_verified\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, 
content,\n negate=negate)\n return ii_node", "response": "Create a IndicatorItem node for ServiceItem / serviceDLLSignatureVerified"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_serviceitem_servicedllmd5sum(servicedll_md5, condition='is', negate=False):\n document = 'ServiceItem'\n search = 'ServiceItem/serviceDLLmd5sum'\n content_type = 'md5'\n content = servicedll_md5\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate)\n return ii_node", "response": "Create a IndicatorItem node for ServiceItem / serviceDLLmd5sum\n \n "} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a IndicatorItem node for a SystemInfoItem with the given hostname", "response": "def make_systeminfoitem_hostname(hostname, condition='contains', negate=False, preserve_case=False):\n \"\"\"\n Create a node for SystemInfoItem/hostname\n \n :return: A IndicatorItem represented as an Element node\n \"\"\"\n document = 'SystemInfoItem'\n search = 'SystemInfoItem/hostname'\n content_type = 'string'\n content = hostname\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_systemrestoreitem_originalfilename(original_filename, condition='contains', negate=False, preserve_case=False):\n document = 'SystemRestoreItem'\n search = 'SystemRestoreItem/OriginalFileName'\n content_type = 'string'\n content = original_filename\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for a SystemRestoreItem with the original filename."} {"SOURCE": "codesearchnet", "instruction": "Given the following 
Python 3 function, write the documentation\ndef make_fileitem_peinfo_versioninfoitem(key, value, condition='is', negate=False, preserve_case=False):\n document = 'FileItem'\n search = 'FileItem/PEInfo/VersionInfoList/VersionInfoItem/' + key # XXX: No validation of key done.\n content_type = 'string'\n content = value\n ii_node = ioc_api.make_indicatoritem_node(condition, document, search, content_type, content,\n negate=negate, preserve_case=preserve_case)\n return ii_node", "response": "Create a IndicatorItem node for FileItem / PEInfo / VersionInfoList / VersionInfoItem"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nfixes the ordering of children under the criteria node to ensure that IndicatorItem or Indicator order is preserved as per XML Schema.", "response": "def fix_schema_node_ordering(parent):\n \"\"\"\n Fix the ordering of children under the criteria node to ensure that IndicatorItem/Indicator order\n is preserved, as per XML Schema.\n :return:\n \"\"\"\n children = parent.getchildren()\n i_nodes = [node for node in children if node.tag == 'IndicatorItem']\n ii_nodes = [node for node in children if node.tag == 'Indicator']\n if not ii_nodes:\n return\n # Remove all the children\n for node in children:\n parent.remove(node)\n # Add the Indicator nodes back\n for node in i_nodes:\n parent.append(node)\n # Now add the IndicatorItem nodes back\n for node in ii_nodes:\n parent.append(node)\n # Now recurse\n for node in ii_nodes:\n fix_schema_node_ordering(node)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef make_indicator_node(operator, nid=None):\n if operator.upper() not in VALID_INDICATOR_OPERATORS:\n raise ValueError('Indicator operator must be in [{}].'.format(VALID_INDICATOR_OPERATORS))\n i_node = et.Element('Indicator')\n if nid:\n i_node.attrib['id'] = nid\n else:\n i_node.attrib['id'] = ioc_et.get_guid()\n i_node.attrib['operator'] = 
operator.upper()\n    return i_node", "response": "This function makes an Indicator node element."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_indicatoritem_node(condition,\n                            document,\n                            search,\n                            content_type,\n                            content,\n                            preserve_case=False,\n                            negate=False,\n                            context_type='mir',\n                            nid=None):\n    \"\"\"\n    This makes an IndicatorItem element. This contains the actual threat intelligence in the IOC.\n\n    :param condition: This is the condition of the item ('is', 'contains', 'matches', etc). The following constants in ioc_api may be used:\n==================== =====================================================\nConstant Meaning\n==================== =====================================================\nioc_api.IS Exact String match.\nioc_api.CONTAINS Substring match.\nioc_api.MATCHES Regex match.\nioc_api.STARTS_WITH String match at the beginning of a string.\nioc_api.ENDS_WITH String match at the end of a string.\nioc_api.GREATER_THAN Integer match indicating a greater than (>) operation.\nioc_api.LESS_THAN Integer match indicating a less than (<) operation.\n==================== =====================================================\n\n    :param document: Denotes the type of document to look for the encoded artifact in.\n    :param search: Specifies what attribute of the document type the encoded value is.\n    :param content_type: This is the display type of the item. This is normally derived from the iocterm for the search value.\n    :param content: The threat intelligence that is being encoded.\n    :param preserve_case: Specify that the content should be treated in a case sensitive manner.\n    :param negate: Specify that the condition is negated. An example of this is:\n        @condition = 'is' & @negate = 'true' would be equal to the\n        @condition = 'isnot' in OpenIOC 1.0.\n    :param context_type: Gives context to the document/search information.\n    :param nid: This is used to provide a GUID for the IndicatorItem. 
The ID should NOT be specified under normal\n circumstances.\n :return: an elementTree Element item\n \"\"\"\n # validate condition\n if condition not in VALID_INDICATORITEM_CONDITIONS:\n raise ValueError('Invalid IndicatorItem condition [{}]'.format(condition))\n ii_node = et.Element('IndicatorItem')\n if nid:\n ii_node.attrib['id'] = nid\n else:\n ii_node.attrib['id'] = ioc_et.get_guid()\n ii_node.attrib['condition'] = condition\n if preserve_case:\n ii_node.attrib['preserve-case'] = 'true'\n else:\n ii_node.attrib['preserve-case'] = 'false'\n if negate:\n ii_node.attrib['negate'] = 'true'\n else:\n ii_node.attrib['negate'] = 'false'\n context_node = ioc_et.make_context_node(document, search, context_type)\n content_node = ioc_et.make_content_node(content_type, content)\n ii_node.append(context_node)\n ii_node.append(content_node)\n return ii_node", "response": "This function creates an IndicatorItem element."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_top_level_indicator_node(root_node):\n if root_node.tag != 'OpenIOC':\n raise IOCParseError('Root tag is not \"OpenIOC\" [{}].'.format(root_node.tag))\n elems = root_node.xpath('criteria/Indicator')\n if len(elems) == 0:\n log.warning('No top level Indicator node found.')\n return None\n elif len(elems) > 1:\n log.warning('Multiple top level Indicator nodes found. This is not a valid MIR IOC.')\n return None\n else:\n top_level_indicator_node = elems[0]\n if top_level_indicator_node.get('operator').lower() != 'or':\n log.warning('Top level Indicator/@operator attribute is not \"OR\". This is not a valid MIR IOC.')\n return top_level_indicator_node", "response": "This returns the first top level Indicator node under the criteria node."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nserializes an IOC as defined by a set of etree Elements to a. 
IOC file.", "response": "def write_ioc(root, output_dir=None, force=False):\n \"\"\"\n Serialize an IOC, as defined by a set of etree Elements, to a .IOC file.\n\n :param root: etree Element to write out. Should have the tag 'OpenIOC'\n :param output_dir: Directory to write the ioc out to. default is current working directory.\n :param force: If set, skip the root node tag check.\n :return: True, unless an error occurs while writing the IOC.\n \"\"\"\n root_tag = 'OpenIOC'\n if not force and root.tag != root_tag:\n raise ValueError('Root tag is not \"{}\".'.format(root_tag))\n default_encoding = 'utf-8'\n tree = root.getroottree()\n # noinspection PyBroadException\n try:\n encoding = tree.docinfo.encoding\n except:\n log.debug('Failed to get encoding from docinfo')\n encoding = default_encoding\n ioc_id = root.attrib['id']\n fn = ioc_id + '.ioc'\n if output_dir:\n fn = os.path.join(output_dir, fn)\n else:\n fn = os.path.join(os.getcwd(), fn)\n try:\n with open(fn, 'wb') as fout:\n fout.write(et.tostring(tree, encoding=encoding, xml_declaration=True, pretty_print=True))\n except (IOError, OSError):\n log.exception('Failed to write out IOC')\n return False\n except:\n raise\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef write_ioc_string(root, force=False):\n root_tag = 'OpenIOC'\n if not force and root.tag != root_tag:\n raise ValueError('Root tag is not \"{}\".'.format(root_tag))\n default_encoding = 'utf-8'\n tree = root.getroottree()\n # noinspection PyBroadException\n try:\n encoding = tree.docinfo.encoding\n except:\n log.debug('Failed to get encoding from docinfo')\n encoding = default_encoding\n return et.tostring(tree, encoding=encoding, xml_declaration=True, pretty_print=True)", "response": "Serialize an IOC to a string."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nopening an IOC file or XML string and returns the root element top level Indicator 
element and Parameters element.", "response": "def open_ioc(fn):\n \"\"\"\n Opens an IOC file, or XML string. Returns the root element, top level\n indicator element, and parameters element. If the IOC or string fails\n to parse, an IOCParseError is raised.\n\n This is a helper function used by __init__.\n\n :param fn: This is a path to a file to open, or a string containing XML representing an IOC.\n :return: a tuple containing three elementTree Element objects\n The first element, the root, contains the entire IOC itself.\n The second element, the top level OR indicator, allows the user to add\n additional IndicatorItem or Indicator nodes to the IOC easily.\n The third element, the parameters node, allows the user to quickly\n parse the parameters.\n \"\"\"\n parsed_xml = xmlutils.read_xml_no_ns(fn)\n if not parsed_xml:\n raise IOCParseError('Error occured parsing XML')\n root = parsed_xml.getroot()\n metadata_node = root.find('metadata')\n top_level_indicator = get_top_level_indicator_node(root)\n parameters_node = root.find('parameters')\n if parameters_node is None:\n # parameters node is not required by schema; but we add it if it is not present\n parameters_node = ioc_et.make_parameters_node()\n root.append(parameters_node)\n return root, metadata_node, top_level_indicator, parameters_node"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef make_ioc(name=None,\n description='Automatically generated IOC',\n author='IOC_api',\n links=None,\n keywords=None,\n iocid=None):\n \"\"\"\n This generates all parts of an IOC, but without any definition.\n\n This is a helper function used by __init__.\n\n :param name: string, Name of the ioc\n :param description: string, description of the ioc\n :param author: string, author name/email address\n :param links: ist of tuples. Each tuple should be in the form (rel, href, value).\n :param keywords: string. 
This is normally a space delimited string of values that may be used as keywords\n    :param iocid: GUID for the IOC. This should not be specified under normal circumstances.\n    :return: a tuple containing four elementTree Element objects\n        The first element, the root, contains the entire IOC itself.\n        The second element, the metadata node, contains the IOC metadata.\n        The third element, the top level OR indicator, allows the user to add\n        additional IndicatorItem or Indicator nodes to the IOC easily.\n        The fourth element, the parameters node, allows the user to quickly\n        parse the parameters.\n    \"\"\"\n    root = ioc_et.make_ioc_root(iocid)\n    root.append(ioc_et.make_metadata_node(name, description, author, links, keywords))\n    metadata_node = root.find('metadata')\n    top_level_indicator = make_indicator_node('OR')\n    parameters_node = (ioc_et.make_parameters_node())\n    root.append(ioc_et.make_criteria_node(top_level_indicator))\n    root.append(parameters_node)\n    ioc_et.set_root_lastmodified(root)\n    return root, metadata_node, top_level_indicator, parameters_node", "response": "This function generates all parts of an IOC."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_lastmodified_date(self, date=None):\n    if date:\n        match = re.match(DATE_REGEX, date)\n        if not match:\n            raise IOCParseError('last-modified date is not valid. Must be in the form YYYY-MM-DDTHH:MM:SS')\n    ioc_et.set_root_lastmodified(self.root, date)\n    return True", "response": "Set the last modified date of an IOC to the current date."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the published date of an IOC to the current date.", "response": "def set_published_date(self, date=None):\n    \"\"\"\n    Set the published date of an IOC to the current date.\n    User may specify the date they want to set as well.\n\n    :param date: Date value to set the published date to. 
This should be in the xsdDate form.\n This defaults to the current date if it is not provided.\n xsdDate Form: YYYY-MM-DDTHH:MM:SS\n :return: True\n :raises: IOCParseError if date format is not valid.\n \"\"\"\n if date:\n match = re.match(DATE_REGEX, date)\n if not match:\n raise IOCParseError('Published date is not valid. Must be in the form YYYY-MM-DDTHH:MM:SS')\n ioc_et.set_root_published_date(self.root, date)\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the created date of a IOC to the current date.", "response": "def set_created_date(self, date=None):\n \"\"\"\n Set the created date of a IOC to the current date.\n User may specify the date they want to set as well.\n\n :param date: Date value to set the created date to. This should be in the xsdDate form.\n This defaults to the current date if it is not provided.\n xsdDate form: YYYY-MM-DDTHH:MM:SS\n :return: True\n :raises: IOCParseError if date format is not valid.\n \"\"\"\n if date:\n match = re.match(DATE_REGEX, date)\n if not match:\n raise IOCParseError('Created date is not valid. 
Must be in the form YYYY-MM-DDTHH:MM:SS')\n    # XXX can this use self.metadata?\n    ioc_et.set_root_created_date(self.root, date)\n    return True"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nadds a parameter to the IOC.", "response": "def add_parameter(self, indicator_id, content, name='comment', ptype='string'):\n    \"\"\"\n    Add a parameter to the IOC.\n\n    :param indicator_id: The unique Indicator/IndicatorItem id the parameter is associated with.\n    :param content: The value of the parameter.\n    :param name: The name of the parameter.\n    :param ptype: The type of the parameter content.\n    :return: True\n    :raises: IOCParseError if the indicator_id is not associated with an Indicator or IndicatorItem in the IOC.\n    \"\"\"\n    parameters_node = self.parameters\n    criteria_node = self.top_level_indicator.getparent()\n    # first check for duplicate id,name pairs \n    elems = parameters_node.xpath('.//param[@ref-id=\"{}\" and @name=\"{}\"]'.format(indicator_id, name))\n    if len(elems) > 0:\n        # there is no actual restriction on duplicate parameters\n        log.info('Duplicate (id,name) parameter pair will be inserted [{}][{}].'.format(indicator_id, name))\n    # now check to make sure the id is present in the IOC logic\n    elems = criteria_node.xpath(\n        './/IndicatorItem[@id=\"{}\"]|.//Indicator[@id=\"{}\"]'.format(indicator_id, indicator_id))\n    if len(elems) == 0:\n        raise IOCParseError('ID does not exist in the IOC [{}][{}].'.format(str(indicator_id), str(content)))\n    parameters_node.append(ioc_et.make_param_node(indicator_id, content, name, ptype))\n    return True"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef add_link(self, rel, value, href=None):\n    links_node = self.metadata.find('links')\n    if links_node is None:\n        links_node = ioc_et.make_links_node()\n        self.metadata.append(links_node)\n    link_node = ioc_et.make_link_node(rel, value, href)\n    links_node.append(link_node)\n    return True", 
"response": "Add a Link metadata element to the IOC."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef update_name(self, name):\n short_desc_node = self.metadata.find('short_description')\n if short_desc_node is None:\n log.debug('Could not find short description node for [{}].'.format(str(self.iocid)))\n log.debug('Creating & inserting the short description node')\n short_desc_node = ioc_et.make_short_description_node(name)\n self.metadata.insert(0, short_desc_node)\n else:\n short_desc_node.text = name\n return True", "response": "Update the name of an IOC."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nupdate the description node of an IOC.", "response": "def update_description(self, description):\n \"\"\"\n Update the description) of an IOC\n\n This creates the description node if it is not present.\n :param description: Value to set the description too\n :return: True\n \"\"\"\n desc_node = self.metadata.find('description')\n if desc_node is None:\n log.debug('Could not find short description node for [{}].'.format(str(self.iocid)))\n log.debug('Creating & inserting the short description node')\n desc_node = ioc_et.make_description_node(description)\n insert_index = 0\n for child in self.metadata.getchildren():\n if child.tag == 'short_description':\n index = self.metadata.index(child)\n insert_index = index + 1\n break\n self.metadata.insert(insert_index, desc_node)\n else:\n desc_node.text = description\n return True"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef update_link_rel_based(self, old_rel, new_rel=None, new_text=None, single_link=False):\n links = self.metadata.xpath('./links/link[@rel=\"{}\"]'.format(old_rel))\n if len(links) < 1:\n log.warning('No links with link/[@rel=\"{}\"]'.format(str(old_rel)))\n return False\n if new_rel and not new_text:\n # update link/@rel value\n for link 
in links:\n            link.attrib['rel'] = new_rel\n            if single_link:\n                break\n    elif not new_rel and new_text:\n        # update link/@text() value\n        for link in links:\n            link.text = new_text\n            if single_link:\n                break\n    elif new_rel and new_text:\n        log.warning('Cannot update rel and text at the same time')\n        return False\n    else:\n        log.warning('Must specify either new_rel or new_text arguments')\n        return False\n    return True", "response": "Update link nodes based on the existing link / @rel value."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrewrite the text() value of a link based on the link/@rel and link/text() value. This is similar to update_link_rel_based but uses link/@rel AND link/text() values to determine which links have their link/@text() values updated. :param old_rel: The link/@rel value used to select link nodes to update. :param old_text: The link/text() value used to select link nodes to update. :param new_text: The new link/text() value to set on link nodes. :param single_link: Determine if only the first, or multiple, links are modified. 
:return: True, unless there are no links with link/[@rel='old_rel' and text()='old_text']", "response": "def update_link_rewrite(self, old_rel, old_text, new_text, single_link=False):\n    \"\"\"\n    Rewrite the text() value of a link based on the link/@rel and link/text() value.\n\n    This is similar to update_link_rel_based but uses link/@rel AND link/text() values\n    to determine which links have their link/@text() values updated.\n\n    :param old_rel: The link/@rel value used to select link nodes to update.\n    :param old_text: The link/text() value used to select link nodes to update.\n    :param new_text: The new link/text() value to set on link nodes.\n    :param single_link: Determine if only the first, or multiple, links are modified.\n    :return: True, unless there are no links with link/[@rel='old_rel' and text()='old_text']\n    \"\"\"\n    links = self.metadata.xpath('./links/link[@rel=\"{}\" and text()=\"{}\"]'.format(old_rel, old_text))\n    if len(links) < 1:\n        log.warning('No links with link/[@rel=\"{}\" and text()=\"{}\"]'.format(str(old_rel), str(old_text)))\n        return False\n    for link in links:\n        link.text = new_text\n        if single_link:\n            break\n    return True"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef update_parameter(self, parameter_id, content=None, name=None, param_type=None):\n    if not (content or name or param_type):\n        log.warning('Must specify at least the value/text(), param/@name or the value/@type values to update.')\n        return False\n    parameters_node = self.parameters\n    elems = parameters_node.xpath('.//param[@id=\"{}\"]'.format(parameter_id))\n    if len(elems) != 1:\n        msg = 'Did not find a single parameter with the supplied ID[{}]. 
Found [{}] parameters'.format(parameter_id,\n len(elems))\n raise IOCParseError(msg)\n\n param_node = elems[0]\n value_node = param_node.find('value')\n\n if name:\n param_node.attrib['name'] = name\n\n if value_node is None:\n msg = 'No value node is associated with param [{}]. Not updating value node with content or tuple.' \\\n .format(parameter_id)\n log.warning(msg)\n else:\n if content:\n value_node.text = content\n if param_type:\n value_node.attrib['type'] = param_type\n return True", "response": "Updates the value of a single parameter in an Indicator or IndicatorItem node."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nremoves a link node based on the function arguments.", "response": "def remove_link(self, rel, value=None, href=None):\n \"\"\"\n Removes link nodes based on the function arguments.\n\n This can remove link nodes based on the following combinations of arguments:\n link/@rel\n link/@rel & link/text()\n link/@rel & link/@href\n link/@rel & link/text() & link/@href\n\n :param rel: link/@rel value to remove. Required.\n :param value: link/text() value to remove. This is used in conjunction with link/@rel.\n :param href: link/@href value to remove. 
This is used in conjunction with link/@rel.\n :return: Return the number of link nodes removed, or False if no nodes are removed.\n \"\"\"\n links_node = self.metadata.find('links')\n if links_node is None:\n log.warning('No links node present')\n return False\n counter = 0\n links = links_node.xpath('.//link[@rel=\"{}\"]'.format(rel))\n for link in links:\n if value and href:\n if link.text == value and link.attrib['href'] == href:\n links_node.remove(link)\n counter += 1\n elif value and not href:\n if link.text == value:\n links_node.remove(link)\n counter += 1\n elif not value and href:\n if link.attrib['href'] == href:\n links_node.remove(link)\n counter += 1\n else:\n links_node.remove(link)\n counter += 1\n return counter"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nremoves a Indicator or IndicatorItem node from the IOC.", "response": "def remove_indicator(self, nid, prune=False):\n \"\"\"\n Removes a Indicator or IndicatorItem node from the IOC. By default,\n if nodes are removed, any children nodes are inherited by the removed\n node. It has the ability to delete all children Indicator and\n IndicatorItem nodes underneath an Indicator node if the 'prune'\n argument is set.\n\n This will not remove the top level Indicator node from an IOC.\n If the id value has been reused within the IOC, this will remove the\n first node which contains the id value.\n\n This also removes any parameters associated with any nodes that are\n removed.\n\n :param nid: The Indicator/@id or IndicatorItem/@id value indicating a specific node to remove.\n :param prune: Remove all children of the deleted node. 
If an Indicator node is removed and prune is set to\n False, the children nodes will be promoted to be children of the removed nodes' parent.\n :return: True if nodes are removed, False otherwise.\n \"\"\"\n try:\n node_to_remove = self.top_level_indicator.xpath(\n '//IndicatorItem[@id=\"{}\"]|//Indicator[@id=\"{}\"]'.format(str(nid), str(nid)))[0]\n except IndexError:\n log.exception('Node [{}] not present'.format(nid))\n return False\n if node_to_remove.tag == 'IndicatorItem':\n node_to_remove.getparent().remove(node_to_remove)\n self.remove_parameter(ref_id=nid)\n return True\n elif node_to_remove.tag == 'Indicator':\n if node_to_remove == self.top_level_indicator:\n raise IOCParseError('Cannot remove the top level indicator')\n if prune:\n pruned_ids = node_to_remove.xpath('.//@id')\n node_to_remove.getparent().remove(node_to_remove)\n for pruned_id in pruned_ids:\n self.remove_parameter(ref_id=pruned_id)\n else:\n for child_node in node_to_remove.getchildren():\n node_to_remove.getparent().append(child_node)\n node_to_remove.getparent().remove(node_to_remove)\n self.remove_parameter(ref_id=nid)\n return True\n else:\n raise IOCParseError(\n 'Bad tag found. Expected \"IndicatorItem\" or \"Indicator\", got [{}]'.format(node_to_remove.tag))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nremoving parameters based on function arguments. This can remove parameters based on the following param values: param/@id param/@name param/@ref_id Each input is mutually exclusive. Calling this function with multiple values set will cause an IOCParseError exception. Calling this function without setting one value will raise an exception. :param param_id: The id of the parameter to remove. :param name: The name of the parameter to remove. :param ref_id: The IndicatorItem/Indicator id of the parameter to remove. 
:return: Number of parameters removed.", "response": "def remove_parameter(self, param_id=None, name=None, ref_id=None):\n \"\"\"\n Removes parameters based on function arguments.\n\n This can remove parameters based on the following param values:\n param/@id\n param/@name\n param/@ref_id\n\n Each input is mutually exclusive. Calling this function with multiple values set will cause an IOCParseError\n exception. Calling this function without setting one value will raise an exception.\n\n :param param_id: The id of the parameter to remove.\n :param name: The name of the parameter to remove.\n :param ref_id: The IndicatorItem/Indicator id of the parameter to remove.\n :return: Number of parameters removed.\n \"\"\"\n l = []\n if param_id:\n l.append('param_id')\n if name:\n l.append('name')\n if ref_id:\n l.append('ref_id')\n if len(l) > 1:\n raise IOCParseError('Must specify only param_id, name or ref_id. Specified {}'.format(str(l)))\n elif len(l) < 1:\n raise IOCParseError('Must specify a param_id, name or ref_id to remove a parameter')\n\n counter = 0\n parameters_node = self.parameters\n\n if param_id:\n params = parameters_node.xpath('//param[@id=\"{}\"]'.format(param_id))\n for param in params:\n parameters_node.remove(param)\n counter += 1\n elif name:\n params = parameters_node.xpath('//param[@name=\"{}\"]'.format(name))\n for param in params:\n parameters_node.remove(param)\n counter += 1\n elif ref_id:\n params = parameters_node.xpath('//param[@ref-id=\"{}\"]'.format(ref_id))\n for param in params:\n parameters_node.remove(param)\n counter += 1\n return counter"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nremoving the name from the metadata node.", "response": "def remove_name(self):\n \"\"\"\n Removes the name (short_description node) from the metadata node, if present.\n\n :return: True if the node is removed. 
False if the node is not present.\n \"\"\"\n short_description_node = self.metadata.find('short_description')\n if short_description_node is not None:\n self.metadata.remove(short_description_node)\n return True\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef remove_description(self):\n description_node = self.metadata.find('description')\n if description_node is not None:\n self.metadata.remove(description_node)\n return True\n return False", "response": "Removes the description node from the metadata node. Returns True if the description node is removed. Returns False if the description node is not present."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef write_ioc_to_file(self, output_dir=None, force=False):\n return write_ioc(self.root, output_dir, force=force)", "response": "Serialize the IOC to a .ioc file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget a string representation of an IOC.", "response": "def display_ioc(self, width=120, sep=' ', params=False):\n \"\"\"\n Get a string representation of an IOC.\n\n :param width: Width to wrap the description to.\n :param sep: Separator used for displaying the contents of the criteria nodes.\n :param params: Boolean, set to True in order to display node parameters.\n :return:\n \"\"\"\n s = 'Name: {}\\n'.format(self.metadata.findtext('short_description', default='No Name'))\n s += 'ID: {}\\n'.format(self.root.attrib.get('id'))\n s += 'Created: {}\\n'.format(self.metadata.findtext('authored_date', default='No authored_date'))\n s += 'Updated: {}\\n\\n'.format(self.root.attrib.get('last-modified', default='No last-modified attrib'))\n s += 'Author: {}\\n'.format(self.metadata.findtext('authored_by', default='No authored_by'))\n desc = self.metadata.findtext('description', default='No Description')\n desc = 
textwrap.wrap(desc, width=width)\n desc = '\\n'.join(desc)\n s += 'Description:\\n{}\\n\\n'.format(desc)\n links = self.link_text()\n if links:\n s += '{}'.format(links)\n content_text = self.criteria_text(sep=sep, params=params)\n s += '\\nCriteria:\\n{}'.format(content_text)\n return s"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef link_text(self):\n s = ''\n links_node = self.metadata.find('links')\n if links_node is None:\n return s\n links = links_node.getchildren()\n if links is None:\n return s\n s += 'IOC Links\\n'\n for link in links:\n rel = link.attrib.get('rel', 'No Rel')\n href = link.attrib.get('href')\n text = link.text\n lt = '{rel}{href}: {text}\\n'.format(rel=rel,\n href=' @ {}'.format(href) if href else '',\n text=text)\n s += lt\n return s", "response": "Get a text representation of the links node."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets a text representation of the criteria node.", "response": "def criteria_text(self, sep=' ', params=False):\n \"\"\"\n Get a text representation of the criteria node.\n\n :param sep: Separator used to indent the contents of the node.\n :param params: Boolean, set to True in order to display node parameters.\n :return:\n \"\"\"\n\n s = ''\n criteria_node = self.root.find('criteria')\n if criteria_node is None:\n return s\n node_texts = []\n for node in criteria_node.getchildren():\n nt = self.get_node_text(node, depth=0, sep=sep, params=params)\n node_texts.append(nt)\n s = '\\n'.join(node_texts)\n return s"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_node_text(self, node, depth, sep, params=False):\n indent = sep * depth\n s = ''\n tag = node.tag\n if tag == 'Indicator':\n node_text = self.get_i_text(node)\n elif tag == 'IndicatorItem':\n node_text = self.get_ii_text(node)\n else:\n raise IOCParseError('Invalid node encountered: 
{}'.format(tag))\n s += '{}{}\\n'.format(indent, node_text)\n if params:\n param_text = self.get_param_text(node.attrib.get('id'))\n for pt in param_text:\n s += '{}{}\\n'.format(indent+sep, pt)\n if node.tag == 'Indicator':\n for child in node.getchildren():\n s += self.get_node_text(node=child, depth=depth+1, sep=sep, params=params)\n return s", "response": "Get the text for a given Indicator or IndicatorItem node."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the text for an Indicator node.", "response": "def get_i_text(node):\n \"\"\"\n Get the text for an Indicator node.\n\n :param node: Indicator node.\n :return:\n \"\"\"\n if node.tag != 'Indicator':\n raise IOCParseError('Invalid tag: {}'.format(node.tag))\n s = node.get('operator').upper()\n return s"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the text for IndicatorItem node.", "response": "def get_ii_text(node):\n \"\"\"\n Get the text for IndicatorItem node.\n\n :param node: IndicatorItem node.\n :return:\n \"\"\"\n if node.tag != 'IndicatorItem':\n raise IOCParseError('Invalid tag: {}'.format(node.tag))\n condition = node.attrib.get('condition')\n preserve_case = node.attrib.get('preserve-case', '')\n negate = node.attrib.get('negate', '')\n content = node.findtext('Content')\n search = node.find('Context').get('search')\n if preserve_case.lower() == 'true':\n preserve_case = ' (Preserve Case)'\n else:\n preserve_case = ''\n if negate.lower() == 'true':\n negate = 'NOT '\n else:\n negate = ''\n s = '{negate}{search} {condition} \"{content}\"{preserve_case}'.format(negate=negate,\n search=search,\n condition=condition,\n content=content,\n preserve_case=preserve_case)\n return s"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_param_text(self, nid):\n r = []\n params = self.parameters.xpath('.//param[@ref-id=\"{}\"]'.format(nid))\n if not params:\n 
return r\n for param in params:\n vnode = param.find('value')\n s = 'Parameter: {}, type: {}, value: {}'.format(param.attrib.get('name'),\n vnode.attrib.get('type'),\n param.findtext('value', default='No Value'))\n r.append(s)\n return r", "response": "Get a list of parameters as text values for a given node id."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreading in an XML file or string into an lxml._elementTree object.", "response": "def read_xml(filename):\n \"\"\"\n Use et to read in an xml file, or string, into an Element object.\n\n :param filename: File to parse.\n :return: lxml._elementTree object or None\n \"\"\"\n parser = et.XMLParser(remove_blank_text=True)\n isfile = False\n try:\n isfile = os.path.exists(filename)\n except ValueError as e:\n if 'path too long for Windows' in str(e):\n pass\n else:\n raise\n try:\n if isfile:\n return et.parse(filename, parser)\n else:\n r = et.fromstring(filename, parser)\n return r.getroottree()\n except IOError:\n log.exception('unable to open file [{}]'.format(filename))\n except et.XMLSyntaxError:\n log.exception('unable to parse XML [{}]'.format(filename))\n return None\n return None"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves the specified namespace from all Element nodes within a document.", "response": "def remove_namespace(doc, namespace):\n \"\"\"\n Takes in an ElementTree object and namespace value. 
The length of that\n namespace value is removed from all Element nodes within the document.\n This effectively removes the namespace from that document.\n\n :param doc: lxml.etree\n :param namespace: Namespace that needs to be removed.\n :return: Returns the source document with namespaces removed.\n \"\"\"\n # http://homework.nwsnet.de/products/45be_remove-namespace-in-an-xml-document-using-elementtree\n #\n # Permission is hereby granted, free of charge, to any person obtaining\n # a copy of this software and associated documentation files (the\n # \"Software\"), to deal in the Software without restriction, including\n # without limitation the rights to use, copy, modify, merge, publish,\n # distribute, sublicense, and/or sell copies of the Software, and to\n # permit persons to whom the Software is furnished to do so, subject to\n # the following conditions:\n #\n # The above copyright notice and this permission notice shall be\n # included in all copies or substantial portions of the Software.\n #\n # THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND,\n # EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF\n # MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND\n # NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE\n # LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n # OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION\n # WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n ns = '{{{}}}'.format(namespace)\n nsl = len(ns)\n # print 'DEBUG: removing',ns\n for elem in doc.getiterator():\n if elem.tag.startswith(ns):\n elem.tag = elem.tag[nsl:]\n return doc"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef delete_namespace(parsed_xml):\n if parsed_xml.getroot().tag.startswith('{'):\n root = parsed_xml.getroot().tag\n end_ns = root.find('}')\n remove_namespace(parsed_xml, root[1:end_ns])\n return parsed_xml", "response": "Removes the namespace associated with the root node of an XML document and returns the original XML document."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse the specified file into memory and insert it into the class.", "response": "def insert(self, filename):\n \"\"\"\n Parses files to load them into memory and insert them into the class.\n\n :param filename: File or directory pointing to .ioc files.\n :return: A list of .ioc files which could not be parsed.\n \"\"\"\n errors = []\n if os.path.isfile(filename):\n log.info('loading IOC from: {}'.format(filename))\n if not self.parse(filename):\n log.warning('Failed to parse [{}]'.format(filename))\n errors.append(filename)\n elif os.path.isdir(filename):\n log.info('loading IOCs from: {}'.format(filename))\n for fn in glob.glob(filename + os.path.sep + '*.ioc'):\n if not os.path.isfile(fn):\n continue\n else:\n if not self.parse(fn):\n log.warning('Failed to parse [{}]'.format(fn))\n errors.append(fn)\n else:\n pass\n log.info('Parsed [%s] IOCs' % str(len(self)))\n return errors"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
parse(self, fn):\n ioc_xml = xmlutils.read_xml_no_ns(fn)\n if not ioc_xml:\n return False\n root = ioc_xml.getroot()\n iocid = root.get('id', None)\n if not iocid:\n return False\n self.iocs[iocid] = ioc_xml\n return True", "response": "Parses a file into a lxml. etree structure with namespaces removed."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef convert_to_11(self):\n if len(self) < 1:\n log.error('No iocs available to modify.')\n return False\n log.info('Converting IOCs from 1.0 to 1.1')\n errors = []\n for iocid in self.iocs:\n ioc_xml = self.iocs[iocid]\n root = ioc_xml.getroot()\n if root.tag != 'ioc':\n log.error('IOC root is not \"ioc\" [%s].' % str(iocid))\n errors.append(iocid)\n continue\n name_10 = root.findtext('.//short_description')\n keywords_10 = root.findtext('.//keywords')\n description_10 = root.findtext('.//description')\n author_10 = root.findtext('.//authored_by')\n created_date_10 = root.findtext('.//authored_date')\n last_modified_date_10 = root.get('last-modified', None)\n if last_modified_date_10:\n last_modified_date_10 = last_modified_date_10.rstrip('Z')\n created_date_10 = created_date_10.rstrip('Z')\n links_10 = []\n for link in root.xpath('//link'):\n link_rel = link.get('rel', None)\n link_text = link.text\n links_10.append((link_rel, link_text, None))\n # get ioc_logic\n try:\n ioc_logic = root.xpath('.//definition')[0]\n except IndexError:\n log.exception(\n 'Could not find definition nodes for IOC [%s]. Did you attempt to convert OpenIOC 1.1 iocs?' 
% str(\n iocid))\n errors.append(iocid)\n continue\n # create 1.1 ioc obj\n ioc_obj = ioc_api.IOC(name=name_10, description=description_10, author=author_10, links=links_10,\n keywords=keywords_10, iocid=iocid)\n ioc_obj.set_lastmodified_date(last_modified_date_10)\n ioc_obj.set_created_date(created_date_10)\n\n comment_dict = {}\n tlo_10 = ioc_logic.getchildren()[0]\n try:\n self.convert_branch(tlo_10, ioc_obj.top_level_indicator, comment_dict)\n except UpgradeError:\n log.exception('Problem converting IOC [{}]'.format(iocid))\n errors.append(iocid)\n continue\n for node_id in comment_dict:\n ioc_obj.add_parameter(node_id, comment_dict[node_id])\n self.iocs_11[iocid] = ioc_obj\n return errors", "response": "Convert the iocs in self. iocs from OpenIOC 1. 0 to OpenIOC 1. 1 format."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef convert_branch(self, old_node, new_node, comment_dict=None):\n expected_tag = 'Indicator'\n if old_node.tag != expected_tag:\n raise UpgradeError('old_node expected tag is [%s]' % expected_tag)\n if not comment_dict:\n comment_dict = {}\n for node in old_node.getchildren():\n node_id = node.get('id')\n if node.tag == 'IndicatorItem':\n condition = node.get('condition')\n negation = False\n if condition.endswith('not'):\n negation = True\n condition = condition[:-3]\n document = node.xpath('Context/@document')[0]\n search = node.xpath('Context/@search')[0]\n content_type = node.xpath('Content/@type')[0]\n content = node.findtext('Content')\n context_type = node.xpath('Context/@type')[0]\n new_ii_node = ioc_api.make_indicatoritem_node(condition=condition,\n document=document,\n search=search,\n content_type=content_type,\n content=content,\n context_type=context_type,\n negate=negation,\n nid=node_id)\n # set comment\n comment = node.find('Comment')\n if comment is not None:\n comment_dict[node_id] = comment.text\n new_node.append(new_ii_node)\n elif node.tag == 
'Indicator':\n operator = node.get('operator')\n if operator.upper() not in ['OR', 'AND']:\n raise UpgradeError('Indicator@operator is not AND/OR. [%s] has [%s]' % (node_id, operator))\n new_i_node = ioc_api.make_indicator_node(operator, node_id)\n new_node.append(new_i_node)\n self.convert_branch(node, new_i_node, comment_dict)\n else:\n # should never get here\n raise UpgradeError('node is not a Indicator/IndicatorItem')\n return True", "response": "This function recursively walks a branch of the Indicator node and converts it to a new node."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nparsing a file or directory containing IOCs and inserts them into the class.", "response": "def insert(self, filename):\n \"\"\"\n Parses files to load them into memory and insert them into the class.\n\n :param filename: File or directory pointing to .ioc files.\n :return: A list of .ioc files which could not be parsed.\n \"\"\"\n errors = []\n if os.path.isfile(filename):\n log.info('loading IOC from: {}'.format(filename))\n try:\n self.parse(ioc_api.IOC(filename))\n except ioc_api.IOCParseError:\n log.exception('Parse Error')\n errors.append(filename)\n elif os.path.isdir(filename):\n log.info('loading IOCs from: {}'.format(filename))\n for fn in glob.glob(filename + os.path.sep + '*.ioc'):\n if not os.path.isfile(fn):\n continue\n else:\n try:\n self.parse(ioc_api.IOC(fn))\n except ioc_api.IOCParseError:\n log.exception('Parse Error')\n errors.append(fn)\n else:\n pass\n log.info('Parsed [{}] IOCs'.format(len(self)))\n return errors"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse an ioc to populate self. iocs and self. 
ioc_name .", "response": "def parse(self, ioc_obj):\n \"\"\"\n parses an ioc to populate self.iocs and self.ioc_name\n\n :param ioc_obj:\n :return:\n \"\"\"\n if ioc_obj is None:\n return\n iocid = ioc_obj.iocid\n try:\n sd = ioc_obj.metadata.xpath('.//short_description/text()')[0]\n except IndexError:\n sd = 'NoName'\n if iocid in self.iocs:\n msg = 'duplicate IOC UUID [{}] [orig_shortName: {}][new_shortName: {}]'.format(iocid,\n self.ioc_name[iocid],\n sd)\n log.warning(msg)\n self.iocs[iocid] = ioc_obj\n self.ioc_name[iocid] = sd\n if self.parser_callback:\n self.parser_callback(ioc_obj)\n return True"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register_parser_callback(self, func):\n if hasattr(func, '__call__'):\n self.parser_callback = func\n log.debug('Set callback to {}'.format(func))\n else:\n raise TypeError('Provided function is not callable: {}'.format(func))", "response": "Register a callback function that is called after self. 
ioc_name is populated."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates single permission for the given object.", "response": "def create(self, permission):\n \"\"\"\n Create single permission for the given object.\n\n :param Permission permission: A single Permission object to be set.\n \"\"\"\n parent_url = self.client.get_url(self.parent_object._manager._URL_KEY, 'GET', 'single', {'id': self.parent_object.id})\n target_url = parent_url + self.client.get_url_path(self._URL_KEY, 'POST', 'single')\n r = self.client.request('POST', target_url, json=permission._serialize())\n return permission._deserialize(r.json(), self)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef set(self, permissions):\n parent_url = self.client.get_url(self.parent_object._manager._URL_KEY, 'GET', 'single', {'id': self.parent_object.id})\n target_url = parent_url + self.client.get_url_path(self._URL_KEY, 'PUT', 'multi')\n r = self.client.request('PUT', target_url, json=permissions)\n if r.status_code != 201:\n raise exceptions.ServerError(\"Expected 201 response, got %s: %s\" % (r.status_code, target_url))\n return self.list()", "response": "Sets the permissions for the object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef list(self):\n parent_url = self.client.get_url(self.parent_object._manager._URL_KEY, 'GET', 'single', {'id': self.parent_object.id})\n target_url = parent_url + self.client.get_url_path(self._URL_KEY, 'GET', 'multi')\n return base.Query(self, target_url)", "response": "List permissions for the given object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get(self, permission_id, expand=[]):\n parent_url = self.client.get_url(self.parent_object._manager._URL_KEY, 'GET', 'single', {'id': self.parent_object.id})\n 
target_url = parent_url + self.client.get_url_path(\n self._URL_KEY, 'GET', 'single', {'permission_id': permission_id})\n return self._get(target_url, expand=expand)", "response": "Get a specific permisison for the given object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_config():\n configpath = get_configpath()\n if not configpath.exists():\n raise IOError(\"Config file {} not found.\".format(str(configpath)))\n else:\n config = configparser.ConfigParser()\n config.read(str(configpath))\n return config", "response": "Read the configfile and return a dictionary with the content of the configpath file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nuse to write the database path into the config.", "response": "def set_database_path(dbfolder):\n \"\"\"Use to write the database path into the config.\n\n Parameters\n ----------\n dbfolder : str or pathlib.Path\n Path to where pyciss will store the ISS images it downloads and receives.\n \"\"\"\n configpath = get_configpath()\n try:\n d = get_config()\n except IOError:\n d = configparser.ConfigParser()\n d['pyciss_db'] = {}\n d['pyciss_db']['path'] = dbfolder\n with configpath.open('w') as f:\n d.write(f)\n print(\"Saved database path into {}.\".format(configpath))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_db_root():\n \"Read dbroot folder from config and mkdir if required.\"\n d = get_config()\n dbroot = Path(d['pyciss_db']['path'])\n dbroot.mkdir(exist_ok=True)\n return dbroot", "response": "Read dbroot folder from config and mkdir if required."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nprints database stats. Returns ------- pd. DataFrame pd. 
DataFrame", "response": "def print_db_stats():\n \"\"\"Print database stats.\n\n Returns\n -------\n pd.DataFrame\n Table with the found data items per type.\n \"\"\"\n dbroot = get_db_root()\n n_ids = len(list(dbroot.glob(\"[N,W]*\")))\n print(\"Number of WACs and NACs in database: {}\".format(n_ids))\n print(\"These kind of data are in the database: (returning pd.DataFrame)\")\n d = {}\n for key, val in PathManager.extensions.items():\n d[key] = [len(list(dbroot.glob(\"**/*\" + val)))]\n return pd.DataFrame(d)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nchecking if the label file is LOSSY.", "response": "def is_lossy(label):\n \"\"\"Check Label file for the compression type. \"\"\"\n val = getkey(from_=label, keyword='INST_CMPRS_TYPE').decode().strip()\n if val == 'LOSSY':\n return True\n else:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef download_and_calibrate_parallel(list_of_ids, n=None):\n setup_cluster(n_cores=n)\n c = Client()\n lbview = c.load_balanced_view()\n lbview.map_async(download_and_calibrate, list_of_ids)\n subprocess.Popen([\"ipcluster\", \"stop\", \"--quiet\"])", "response": "Download and calibrate in parallel."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ndownloading and calibrate one or more image ids in parallel.", "response": "def download_and_calibrate(img_id=None, overwrite=False, recalibrate=False, **kwargs):\n \"\"\"Download and calibrate one or more image ids, in parallel.\n\n Parameters\n ----------\n img_id : str or io.PathManager, optional\n If more than one item is in img_id, a parallel process is started\n overwrite: bool, optional\n If the pm.cubepath exists, this switch controls if it is being overwritten.\n Default: False\n \"\"\"\n if isinstance(img_id, io.PathManager):\n pm = img_id\n else:\n # get a PathManager object that knows where your data is or should be\n 
logger.debug(\"Creating Pathmanager object\")\n pm = io.PathManager(img_id)\n\n if not pm.raw_image.exists() or overwrite is True:\n logger.debug(\"Downloading file %s\" % pm.img_id)\n download_file_id(pm.img_id)\n pm = io.PathManager(img_id) # refresh, to get proper PDS version id.\n else:\n logger.info(\"Found \")\n\n if not (pm.cubepath.exists() and pm.undestriped.exists()) or overwrite is True:\n calib = pipeline.Calibrator(img_id, **kwargs)\n calib.standard_calib()\n else:\n print(\"All files exist. Use overwrite=True to redownload and calibrate.\")"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nperforms either normal spiceinit or outer spiceinit.", "response": "def spiceinit(self):\n \"\"\"Perform either normal spiceinit or one for ringdata.\n\n Note how Python name-spacing can distinguish between the method\n and the function with the same name. `spiceinit` from the outer\n namespace is the one imported from pysis.\n \"\"\"\n shape = \"ringplane\" if self.is_ring_data else None\n spiceinit(from_=self.pm.raw_cub, cksmithed=\"yes\", spksmithed=\"yes\", shape=shape)\n logger.info(\"spiceinit done.\")"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef check_label(self):\n if not self.is_ring_data:\n return\n targetname = getkey(\n from_=self.pm.raw_cub, grp=\"instrument\", keyword=\"targetname\"\n )\n\n if targetname.lower() != \"saturn\":\n editlab(\n from_=self.pm.raw_cub,\n options=\"modkey\",\n keyword=\"TargetName\",\n value=\"Saturn\",\n grpname=\"Instrument\",\n )", "response": "Check label for target name and fix if necessary."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking and sets the supported content types configuration value.", "response": "def _set_supported_content_type(self, content_types_supported):\n \"\"\" Checks and sets the supported content types configuration value.\n \"\"\"\n if not 
isinstance(content_types_supported, list):\n raise TypeError((\"Settings 'READTIME_CONTENT_SUPPORT' must be \"\n \"a list of content types.\"))\n\n self.content_type_supported = content_types_supported"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks and sets the per language WPM singular and plural values.", "response": "def _set_lang_settings(self, lang_settings):\n \"\"\" Checks and sets the per language WPM, singular and plural values.\n \"\"\"\n is_int = isinstance(lang_settings, int)\n is_dict = isinstance(lang_settings, dict)\n if not is_int and not is_dict:\n raise TypeError((\"Settings 'READTIME_WPM' must be either an int, \"\n \"or a dict with settings per language.\"))\n\n # For backwards compatibility reasons we'll allow the\n # READTIME_WPM setting to be set as an int to override just the default\n # set WPM.\n if is_int:\n self.lang_settings['default']['wpm'] = lang_settings\n elif is_dict:\n for lang, conf in lang_settings.items():\n if 'wpm' not in conf:\n raise KeyError(('Missing wpm value for the'\n ' language: {}'.format(lang)))\n\n if not isinstance(conf['wpm'], int):\n raise TypeError(('WPM is not an integer for'\n ' the language: {}'.format(lang)))\n\n if \"min_singular\" not in conf:\n raise KeyError(('Missing singular form for \"minute\" for'\n ' the language: {}'.format(lang)))\n\n if \"min_plural\" not in conf:\n raise KeyError(('Missing plural form for \"minutes\" for'\n ' the language: {}'.format(lang)))\n\n if \"sec_singular\" not in conf:\n raise KeyError(('Missing singular form for \"second\" for'\n ' the language: {}'.format(lang)))\n\n if \"sec_plural\" not in conf:\n raise KeyError(('Missing plural form for \"seconds\" for'\n ' the language: {}'.format(lang)))\n\n self.lang_settings = lang_settings"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ninitializing the ReadTimeParser with the configuration values set by the readtime plugin.", "response": "def 
initialize_settings(self, sender):\n \"\"\" Initializes ReadTimeParser with configuration values set by the\n site author.\n \"\"\"\n try:\n self.initialized = True\n\n settings_content_types = sender.settings.get(\n 'READTIME_CONTENT_SUPPORT', self.content_type_supported)\n self._set_supported_content_type(settings_content_types)\n\n lang_settings = sender.settings.get(\n 'READTIME_WPM', self.lang_settings)\n self._set_lang_settings(lang_settings)\n except Exception as e:\n raise Exception(\"ReadTime Plugin: %s\" % str(e))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef read_time(self, content):\n if get_class_name(content) in self.content_type_supported:\n # Exit if readtime is already set\n if hasattr(content, 'readtime'):\n return None\n\n default_lang_conf = self.lang_settings['default']\n lang_conf = self.lang_settings.get(content.lang, default_lang_conf)\n avg_reading_wpm = lang_conf['wpm']\n\n num_words = len(content._content.split())\n # Floor division so we don't have to convert float -> int\n minutes = num_words // avg_reading_wpm\n # Get seconds to read, then subtract our minutes as seconds from\n # the time to get remainder seconds\n seconds = int((num_words / avg_reading_wpm * 60) - (minutes * 60))\n\n minutes_str = self.pluralize(\n minutes,\n lang_conf['min_singular'],\n lang_conf['min_plural']\n )\n\n seconds_str = self.pluralize(\n seconds,\n lang_conf['sec_singular'],\n lang_conf['sec_plural']\n )\n\n content.readtime = minutes\n content.readtime_string = minutes_str\n content.readtime_with_seconds = (minutes, seconds,)\n content.readtime_string_with_seconds = \"{}, {}\".format(\n minutes_str, seconds_str)", "response": "This function generates the read time for the content."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a string that contains the measure and its plural form depending on the amount.", "response": "def pluralize(self, measure, 
singular, plural):\n \"\"\" Returns a string that contains the measure (amount) and its plural\n or singular form depending on the amount.\n\n Parameters:\n :param measure: Amount, value, always a numerical value\n :param singular: The singular form of the chosen word\n :param plural: The plural form of the chosen word\n\n Returns:\n String\n \"\"\"\n if measure == 1:\n return \"{} {}\".format(measure, singular)\n else:\n return \"{} {}\".format(measure, plural)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef list_datasources(self, source_id):\n target_url = self.client.get_url('DATASOURCE', 'GET', 'multi', {'source_id': source_id})\n return base.Query(self.client.get_manager(Datasource), target_url)", "response": "Filterable list of Datasources for a Source."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_datasource(self, source_id, datasource_id):\n target_url = self.client.get_url('DATASOURCE', 'GET', 'single', {'source_id': source_id, 'datasource_id': datasource_id})\n return self.client.get_manager(Datasource)._get(target_url)", "response": "Get a Datasource object"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlist all Scans for a Source.", "response": "def list_scans(self, source_id=None):\n \"\"\"\n Filterable list of Scans for a Source.\n Ordered newest to oldest by default\n \"\"\"\n if source_id:\n target_url = self.client.get_url('SCAN', 'GET', 'multi', {'source_id': source_id})\n else:\n target_url = self.client.get_url('SCAN', 'GET', 'all')\n return base.Query(self.client.get_manager(Scan), target_url)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget a Scan object from the given source and scan ID.", "response": "def get_scan(self, source_id, scan_id):\n \"\"\"\n Get a Scan object\n\n :rtype: Scan\n \"\"\"\n target_url = 
self.client.get_url('SCAN', 'GET', 'single', {'source_id': source_id, 'scan_id': scan_id})\n return self.client.get_manager(Scan)._get(target_url)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_scan_log_lines(self, source_id, scan_id):\n return self.client.get_manager(Scan).get_log_lines(source_id=source_id, scan_id=scan_id)", "response": "Get the log lines for a Scan"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nstarts a new scan of a Source.", "response": "def start_scan(self, source_id):\n \"\"\"\n Start a new scan of a Source.\n\n :rtype: Scan\n \"\"\"\n target_url = self.client.get_url('SCAN', 'POST', 'create', {'source_id': source_id})\n r = self.client.request('POST', target_url, json={})\n return self.client.get_manager(Scan).create_from_result(r.json())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef save(self, with_data=False):\n r = self._client.request('PUT', self.url, json=self._serialize(with_data=with_data))\n return self._deserialize(r.json(), self._manager)", "response": "Saves this Source into the server."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a single file or archive to the upload list.", "response": "def add_file(self, fp, upload_path=None, content_type=None):\n \"\"\"\n Add a single file or archive to upload.\n\n To add metadata records with a file, add a .xml file with the same upload path basename\n eg. ``points-with-metadata.geojson`` & ``points-with-metadata.xml``\n Datasource XML must be in one of these three formats:\n\n - ISO 19115/19139\n - FGDC CSDGM\n - Dublin Core (OAI-PMH)\n\n :param fp: File to upload into this source, can be a path or a file-like object.\n :type fp: str or file\n :param str upload_path: relative path to store the file as within the source (eg. ``folder/0001.tif``). 
\\\n By default it will use ``fp``, either the filename from a path or the ``.name`` \\\n attribute of a file-like object.\n :param str content_type: Content-Type of the file. By default it will attempt to auto-detect from the \\\n file/upload_path.\n \"\"\"\n if isinstance(fp, six.string_types):\n # path\n if not os.path.isfile(fp):\n raise ClientValidationError(\"Invalid file: %s\" % fp)\n if not upload_path:\n upload_path = os.path.split(fp)[1]\n else:\n # file-like object\n if not upload_path:\n upload_path = os.path.split(fp.name)[1]\n\n content_type = content_type or mimetypes.guess_type(upload_path, strict=False)[0]\n if upload_path in self._files:\n raise ClientValidationError(\"Duplicate upload path: %s\" % upload_path)\n\n self._files[upload_path] = (fp, content_type)\n logger.debug(\"UploadSource.add_file: %s -> %s (%s)\", repr(fp), upload_path, content_type)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the log lines for a given source and scan.", "response": "def get_log_lines(self, source_id, scan_id):\n \"\"\"\n Get the log text for a scan object\n :rtype: Iterator over log lines.\n \"\"\"\n target_url = self.client.get_url('SCAN', 'GET', 'log', {'source_id': source_id, 'scan_id': scan_id})\n r = self.client.request('GET', target_url, headers={'Accept': 'text/plain'}, stream=True)\n return r.iter_lines(decode_unicode=True)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_log_lines(self):\n rel = self._client.reverse_url('SCAN', self.url)\n return self._manager.get_log_lines(**rel)", "response": "Get the log lines for a scan object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_creative_commons(self, slug, jurisdiction=None):\n if not slug.startswith('cc-by'):\n raise exceptions.ClientValidationError(\"slug needs to start with 'cc-by'\")\n\n if jurisdiction is None:\n jurisdiction = 
''\n\n target_url = self.client.get_url(self._URL_KEY, 'GET', 'cc', {'slug': slug, 'jurisdiction': jurisdiction})\n return self._get(target_url)", "response": "Returns the Creative Commons license for the given attributes."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef mad(arr, relative=True):\n with warnings.catch_warnings():\n warnings.simplefilter(\"ignore\")\n med = np.nanmedian(arr, axis=1)\n mad = np.nanmedian(np.abs(arr - med[:, np.newaxis]), axis=1)\n if relative:\n return mad / med\n else:\n return mad", "response": "Median Absolute Deviation of a sample."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef calc_offset(cube):\n i = 0\n while pd.Series(cube.img[:, i]).count() < 200:\n i += 1\n return max(i, 20)", "response": "Calculate an offset from the side of data so that at least 200 image pixels are in the MAD stats."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nplotting the image of the current object.", "response": "def imshow(\n self,\n data=None,\n save=False,\n ax=None,\n interpolation=\"none\",\n extra_title=None,\n show_resonances=\"some\",\n set_extent=True,\n equalized=False,\n rmin=None,\n rmax=None,\n savepath=\".\",\n **kwargs,\n ):\n \"\"\"Powerful default display.\n\n show_resonances can be True, a list, 'all', or 'some'\n \"\"\"\n if data is None:\n data = self.img\n if self.resonance_axis is not None:\n logger.debug(\"removing resonance_axis\")\n self.resonance_axis.remove()\n if equalized:\n data = np.nan_to_num(data)\n data[data < 0] = 0\n data = exposure.equalize_hist(data)\n self.plotted_data = data\n\n extent_val = self.extent if set_extent else None\n min_, max_ = self.plot_limits\n self.min_ = min_\n self.max_ = max_\n if ax is None:\n if not _SEABORN_INSTALLED:\n fig, ax = plt.subplots(figsize=calc_4_3(8))\n else:\n fig, ax = plt.subplots()\n else:\n fig = 
ax.get_figure()\n\n with quantity_support():\n im = ax.imshow(\n data,\n extent=extent_val,\n cmap=\"gray\",\n vmin=min_,\n vmax=max_,\n interpolation=interpolation,\n origin=\"lower\",\n aspect=\"auto\",\n **kwargs,\n )\n if any([rmin is not None, rmax is not None]):\n ax.set_ylim(rmin, rmax)\n self.mpl_im = im\n ax.set_xlabel(\"Longitude [deg]\")\n ax.set_ylabel(\"Radius [Mm]\")\n ax.ticklabel_format(useOffset=False)\n # ax.grid('on')\n title = self.plot_title\n if extra_title:\n title += \", \" + extra_title\n ax.set_title(title, fontsize=12)\n if show_resonances:\n self.set_resonance_axis(ax, show_resonances, rmin, rmax)\n if save:\n savename = self.plotfname\n if extra_title:\n savename = savename[:-4] + \"_\" + extra_title + \".png\"\n p = Path(savename)\n fullpath = Path(savepath) / p.name\n fig.savefig(fullpath, dpi=150)\n logging.info(\"Created %s\", fullpath)\n self.im = im\n return im"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nupdates the query count property from the X - Resource - Range response header.", "response": "def _update_range(self, response):\n \"\"\" Update the query count property from the `X-Resource-Range` response header \"\"\"\n header_value = response.headers.get('x-resource-range', '')\n m = re.match(r'\\d+-\\d+/(\\d+)$', header_value)\n if m:\n self._count = int(m.group(1))\n else:\n self._count = None"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nserialising this query into a request - able URL including parameters", "response": "def _to_url(self):\n \"\"\" Serialises this query into a request-able URL including parameters \"\"\"\n url = self._target_url\n\n params = collections.defaultdict(list, copy.deepcopy(self._filters))\n if self._order_by is not None:\n params['sort'] = self._order_by\n for k, vl in self._extra.items():\n params[k] += vl\n\n if params:\n url += \"?\" + urllib.parse.urlencode(params, doseq=True)\n\n return url"} {"SOURCE": 
"codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a new Query object with extra query parameters added.", "response": "def extra(self, **params):\n \"\"\"\n Set extra query parameters (eg. filter expressions/attributes that don't validate).\n Appends to any previous extras set.\n\n :rtype: Query\n \"\"\"\n q = self._clone()\n for key, value in params.items():\n q._extra[key].append(value)\n return q"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef filter(self, **filters):\n\n q = self._clone()\n for key, value in filters.items():\n filter_key = re.split('__', key)\n filter_attr = filter_key[0]\n if filter_attr not in self._valid_filter_attrs:\n raise ClientValidationError(\"Invalid filter attribute: %s\" % key)\n\n # we use __ as a separator in the Python library, the APIs use '.'\n q._filters['.'.join(filter_key)].append(value)\n return q", "response": "Add a filter to this query."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef order_by(self, sort_key=None):\n if sort_key is not None:\n sort_attr = re.match(r'(-)?(.*)$', sort_key).group(2)\n if sort_attr not in self._valid_sort_attrs:\n raise ClientValidationError(\"Invalid ordering attribute: %s\" % sort_key)\n\n q = self._clone()\n q._order_by = sort_key\n return q", "response": "Sets the ordering for this query."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _serialize(self, skip_empty=True):\n skip = set(getattr(self._meta, 'serialize_skip', []))\n\n r = {}\n for k, v in self.__dict__.items():\n if k.startswith('_'):\n continue\n elif k in skip:\n continue\n elif v is None and skip_empty:\n continue\n elif isinstance(v, (dict, list, tuple, set)) and len(v) == 0 and skip_empty:\n continue\n else:\n r[k] = self._serialize_value(v)\n return r", "response": "Serialise 
this instance into JSON - style request data."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _serialize_value(self, value):\n if isinstance(value, (list, tuple, set)):\n return [self._serialize_value(v) for v in value]\n elif isinstance(value, dict):\n return dict([(k, self._serialize_value(v)) for k, v in value.items()])\n elif isinstance(value, ModelBase):\n return value._serialize()\n elif isinstance(value, datetime.date): # includes datetime.datetime\n return value.isoformat()\n else:\n return value", "response": "Serialize a single value."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrefresh the current model s attributes with the server - defined values.", "response": "def refresh(self):\n \"\"\"\n Refresh this model from the server.\n\n Updates attributes with the server-defined values. This is useful where the Model\n instance came from a partial response (eg. a list query) and additional details\n are required.\n\n Existing attribute values will be overwritten.\n \"\"\"\n r = self._client.request('GET', self.url)\n return self._deserialize(r.json(), self._manager)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_feature(self, croplayer_id, cropfeature_id):\n target_url = self.client.get_url('CROPFEATURE', 'GET', 'single', {'croplayer_id': croplayer_id, 'cropfeature_id': cropfeature_id})\n return self.client.get_manager(CropFeature)._get(target_url)", "response": "Gets a crop feature"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ncreating and start processing a new Export.", "response": "def create(self, export):\n \"\"\"\n Create and start processing a new Export.\n\n :param Export export: The Export to create.\n :rtype: Export\n \"\"\"\n target_url = self.client.get_url(self._URL_KEY, 'POST', 'create')\n r = self.client.request('POST', target_url, 
json=export._serialize())\n return export._deserialize(r.json(), self)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validate(self, export):\n target_url = self.client.get_url(self._URL_KEY, 'POST', 'validate')\n response_object = ExportValidationResponse()\n r = self.client.request('POST', target_url, json=export._serialize())\n return response_object._deserialize(r.json())", "response": "Validates an Export.\n\n\n :param Export export:\n :rtype: ExportValidationResponse"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _options(self):\n if self._options_cache is None:\n target_url = self.client.get_url(self._URL_KEY, 'OPTIONS', 'options')\n r = self.client.request('OPTIONS', target_url)\n self._options_cache = r.json()\n return self._options_cache", "response": "Returns a raw options object"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a dictionary of format options keyed by data kind.", "response": "def get_formats(self):\n \"\"\"\n Returns a dictionary of format options keyed by data kind.\n\n .. 
code-block:: python\n\n {\n \"vector\": {\n \"application/x-ogc-gpkg\": \"GeoPackage\",\n \"application/x-zipped-shp\": \"Shapefile\",\n #...\n },\n \"table\": {\n \"text/csv\": \"CSV (text/csv)\",\n \"application/x-ogc-gpkg\": \"GeoPackage\",\n #...\n },\n \"raster\": {\n \"image/jpeg\": \"JPEG\",\n \"image/jp2\": \"JPEG2000\",\n #...\n },\n \"grid\": {\n \"application/x-ogc-aaigrid\": \"ASCII Grid\",\n \"image/tiff;subtype=geotiff\": \"GeoTIFF\",\n #...\n },\n \"rat\": {\n \"application/x-erdas-hfa\": \"ERDAS Imagine\",\n #...\n }\n }\n\n :rtype: dict\n \"\"\"\n format_opts = self._options()['actions']['POST']['formats']['children']\n r = {}\n for kind, kind_opts in format_opts.items():\n r[kind] = {c['value']: c['display_name'] for c in kind_opts['choices']}\n return r"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a new item to the export.", "response": "def add_item(self, item, **options):\n \"\"\"\n Add a layer or table item to the export.\n\n :param Layer|Table item: The Layer or Table to add\n :rtype: self\n \"\"\"\n export_item = {\n \"item\": item.url,\n }\n export_item.update(options)\n self.items.append(export_item)\n return self"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef download(self, path, progress_callback=None, chunk_size=1024**2):\n if not self.download_url or self.state != 'complete':\n raise DownloadError(\"Download not available\")\n\n # ignore parsing the Content-Disposition header, since we know the name\n download_filename = \"{}.zip\".format(self.name)\n fd = None\n\n if isinstance(getattr(path, 'write', None), collections.Callable):\n # already open file-like object\n fd = path\n elif os.path.isdir(path):\n # directory to download to, using the export name\n path = os.path.join(path, download_filename)\n # do not allow overwriting\n if os.path.exists(path):\n raise DownloadError(\"Download file already exists: %s\" % path)\n elif 
path:\n # fully qualified file path\n # allow overwriting\n pass\n elif not path:\n raise DownloadError(\"Empty download file path\")\n\n with contextlib.ExitStack() as stack:\n if not fd:\n fd = open(path, 'wb')\n # only close a file we open\n stack.callback(fd.close)\n\n r = self._manager.client.request('GET', self.download_url, stream=True)\n stack.callback(r.close)\n\n bytes_written = 0\n try:\n bytes_total = int(r.headers.get('content-length', None))\n except TypeError:\n bytes_total = None\n\n if progress_callback:\n # initial callback (0%)\n progress_callback(bytes_written, bytes_total)\n\n for chunk in r.iter_content(chunk_size=chunk_size):\n fd.write(chunk)\n bytes_written += len(chunk)\n if progress_callback:\n progress_callback(bytes_written, bytes_total)\n\n return download_filename", "response": "Download the export archive."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating an instance of the class from a requests. ServerError.", "response": "def from_requests_error(cls, err):\n \"\"\"\n Raises a subclass of ServerError based on the HTTP response code.\n \"\"\"\n import requests\n if isinstance(err, requests.HTTPError):\n status_code = err.response.status_code\n return HTTP_ERRORS.get(status_code, cls)(error=err, response=err.response)\n else:\n return cls(error=err)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncommanding line tool (``koordinates-create-token``) used to create an API token.", "response": "def console_create():\n \"\"\"\n Command line tool (``koordinates-create-token``) used to create an API token.\n \"\"\"\n import argparse\n import getpass\n import re\n import sys\n import requests\n from six.moves import input\n from koordinates.client import Client\n\n parser = argparse.ArgumentParser(description=\"Command line tool to create a Koordinates API Token.\")\n parser.add_argument('site', help=\"Domain (eg. 
labs.koordinates.com) for the Koordinates site.\", metavar='SITE')\n parser.add_argument('email', help=\"User account email address\", metavar='EMAIL')\n parser.add_argument('name', help=\"Description for the key\", metavar='NAME')\n parser.add_argument('--scopes', help=\"Scopes for the new API token\", nargs='+', metavar=\"SCOPE\")\n parser.add_argument('--referrers', help=\"Restrict the request referrers for the token. You can use * as a wildcard, eg. *.example.com\", nargs='+', metavar='HOST')\n parser.add_argument('--expires', help=\"Expiry time in ISO 8601 (YYYY-MM-DD) format\", metavar=\"DATE\")\n args = parser.parse_args()\n\n # check we have a valid-ish looking domain name\n if not re.match(r'(?=^.{4,253}$)(^((?!-)[a-zA-Z0-9-]{1,63}(? Check your email address, password, and site domain carefully.\")\n\n sys.exit(1)\n\n print(\"Token created successfully.\\n Key: %s\\n Scopes: %s\" % (token.key, token.scope))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates a new token in Koordinates user account.", "response": "def create(self, token, email, password):\n \"\"\"\n Create a new token\n\n :param Token token: Token instance to create.\n :param str email: Email address of the Koordinates user account.\n :param str password: Koordinates user account password.\n \"\"\"\n target_url = self.client.get_url('TOKEN', 'POST', 'create')\n post_data = {\n 'grant_type': 'password',\n 'username': email,\n 'password': password,\n 'name': token.name,\n }\n if getattr(token, 'scope', None):\n post_data['scope'] = token.scope\n if getattr(token, 'expires_at', None):\n post_data['expires_at'] = token.expires_at\n if getattr(token, 'referrers', None):\n post_data['referrers'] = token.referrers\n\n r = self.client._raw_request('POST', target_url, json=post_data, headers={'Content-type': 'application/json'})\n return self.create_from_result(r.json())"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 
code does\ndef read_cumulative_iss_index():\n \"Read in the whole cumulative index and return dataframe.\"\n indexdir = get_index_dir()\n\n path = indexdir / \"COISS_2999_index.hdf\"\n try:\n df = pd.read_hdf(path, \"df\")\n except FileNotFoundError:\n path = indexdir / \"cumindex.hdf\"\n df = pd.read_hdf(path, \"df\")\n # replace PDS Nan values (-1e32) with real NaNs\n df = df.replace(-1.000000e32, np.nan)\n return df.replace(-999.0, np.nan)", "response": "Read in the whole cumulative index and return dataframe."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nfilter cumulative index for ring images.", "response": "def read_ring_images_index():\n \"\"\"Filter cumulative index for ring images.\n\n This is done by matching the column TARGET_DESC to contain the string 'ring'\n\n Returns\n -------\n pandas.DataFrame\n data table containing only meta-data for ring images\n \"\"\"\n meta = read_cumulative_iss_index()\n ringfilter = meta.TARGET_DESC.str.contains(\"ring\", case=False)\n return meta[ringfilter]"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef filter_for_ringspan(clearnacs, spanlimit):\n \"filter for covered ringspan, giver in km.\"\n delta = clearnacs.MAXIMUM_RING_RADIUS - clearnacs.MINIMUM_RING_RADIUS\n f = delta < spanlimit\n ringspan = clearnacs[f].copy()\n return ringspan", "response": "filter for covered ringspan giver in km."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create(self, publish):\n target_url = self.client.get_url('PUBLISH', 'POST', 'create')\n r = self.client.request('POST', target_url, json=publish._serialize())\n return self.create_from_result(r.json())", "response": "Creates a new publish group."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncancels a pending publish task", "response": "def cancel(self):\n \"\"\" Cancel a pending publish task 
\"\"\"\n target_url = self._client.get_url('PUBLISH', 'DELETE', 'single', {'id': self.id})\n r = self._client.request('DELETE', target_url)\n logger.info(\"cancel(): %s\", r.status_code)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef get_items(self):\n from .layers import Layer\n\n # no expansion support, just URLs\n results = []\n for url in self.items:\n if '/layers/' in url:\n r = self._client.request('GET', url)\n results.append(self._client.get_manager(Layer).create_from_result(r.json()))\n else:\n raise NotImplementedError(\"No support for %s\" % url)\n return results", "response": "Returns the item models associated with this Publish group."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a Layer to the publish group.", "response": "def add_layer_item(self, layer):\n \"\"\"\n Adds a Layer to the publish group.\n \"\"\"\n if not layer.is_draft_version:\n raise ValueError(\"Layer isn't a draft version\")\n\n self.items.append(layer.latest_version)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef add_table_item(self, table):\n if not table.is_draft_version:\n raise ValueError(\"Table isn't a draft version\")\n\n self.items.append(table.latest_version)", "response": "Adds a Table to the publish group."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd new ticks to an axis.", "response": "def add_ticks_to_x(ax, newticks, newnames):\n \"\"\"Add new ticks to an axis.\n\n I use this for the right-hand plotting of resonance names in my plots.\n \"\"\"\n ticks = list(ax.get_xticks())\n ticks.extend(newticks)\n ax.set_xticks(ticks)\n\n names = list(ax.get_xticklabels())\n names.extend(newnames)\n ax.set_xticklabels(names)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a new Set.", "response": "def 
create(self, set):\n \"\"\"\n Creates a new Set.\n \"\"\"\n target_url = self.client.get_url('SET', 'POST', 'create')\n r = self.client.request('POST', target_url, json=set._serialize())\n return set._deserialize(r.json(), self)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting the XML metadata on a set.", "response": "def set_metadata(self, set_id, fp):\n \"\"\"\n Set the XML metadata on a set.\n\n :param file fp: file-like object to read the XML metadata from.\n \"\"\"\n base_url = self.client.get_url('SET', 'GET', 'single', {'id': set_id})\n self._metadata.set(base_url, fp)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef set_metadata(self, fp):\n base_url = self._client.get_url('SET', 'GET', 'single', {'id': self.id})\n self._manager._metadata.set(base_url, fp)\n\n # reload myself\n r = self._client.request('GET', base_url)\n return self._deserialize(r.json(), self._manager)", "response": "Set the XML metadata on a set."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set(self, parent_url, fp):\n url = parent_url + self.client.get_url_path('METADATA', 'POST', 'set', {})\n r = self.client.request('POST', url, data=fp, headers={'Content-Type': 'text/xml'})\n if r.status_code not in [200, 201]:\n raise exceptions.ServerError(\"Expected success response, got %s: %s\" % (r.status_code, url))", "response": "Set the XML metadata for the current object."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the XML metadata for this source converted to the requested format.", "response": "def get_xml(self, fp, format=FORMAT_NATIVE):\n \"\"\"\n Returns the XML metadata for this source, converted to the requested format.\n Converted metadata may not contain all the same information as the native format.\n\n :param file fp: A path, or an open file-like object which the content should be 
written to.\n :param str format: desired format for the output. This should be one of the available\n formats from :py:meth:`.get_formats`, or :py:attr:`.FORMAT_NATIVE` for the native format.\n\n If you pass this function an open file-like object as the fp parameter, the function will\n not close that file for you.\n \"\"\"\n r = self._client.request('GET', getattr(self, format), stream=True)\n filename = stream.stream_response_to_file(r, path=fp)\n return filename"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_formats(self):\n formats = []\n for key in (self.FORMAT_DC, self.FORMAT_FGDC, self.FORMAT_ISO):\n if hasattr(self, key):\n formats.append(key)\n return formats", "response": "Return the available format names for this metadata"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef is_bound(method):\n @functools.wraps(method)\n def wrapper(self, *args, **kwargs):\n if not self._is_bound:\n raise ValueError(\"%r must be bound to call %s()\" % (self, method.__name__))\n return method(self, *args, **kwargs)\n return wrapper", "response": "Decorator that asserts the model instance is bound."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef query_image_id(self, image_id):\n myquery = {\"primaryfilespec\": image_id}\n self.create_files_request(myquery, fmt=\"json\")\n self.unpack_json_response()\n return self.obsids", "response": "Query OPUS via the image_id. 
This method returns a list of URLs into the database_path attribute."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a request to get the relevant data from the API.", "response": "def create_request_with_query(self, kind, query, size=\"thumb\", fmt=\"json\"):\n \"\"\"api/data.[fmt], api/images/[size].[fmt] api/files.[fmt]\n\n kind = ['data', 'images', 'files']\n\n\n \"\"\"\n if kind == \"data\" or kind == \"files\":\n url = \"{}/{}.{}\".format(base_url, kind, fmt)\n elif kind == \"images\":\n url = \"{}/images/{}.{}\".format(base_url, size, fmt)\n self.url = url\n self.r = requests.get(url, params=unquote(urlencode(query)))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_between_times(self, t1, t2, target=None):\n try:\n # checking if times have isoformat() method (datetimes have)\n t1 = t1.isoformat()\n t2 = t2.isoformat()\n except AttributeError:\n # if not, should already be a string, so do nothing.\n pass\n myquery = self._get_time_query(t1, t2)\n if target is not None:\n myquery[\"target\"] = target\n self.create_files_request(myquery, fmt=\"json\")\n self.unpack_json_response()", "response": "Query OPUS data between times t1 and t2."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef show_images(self, size=\"small\"):\n d = dict(small=256, med=512, thumb=100, full=1024)\n try:\n width = d[size]\n except KeyError:\n print(\"Allowed keys:\", d.keys())\n return\n img_urls = [i._get_img_url(size) for i in self.obsids]\n imagesList = \"\".join(\n [\n \"<img style='width: {}px' src='{}' />\".format(width, s)\n for s in img_urls\n ]\n )\n display(HTML(imagesList))", "response": "Shows preview images using the Jupyter notebook HTML display."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndownload the previously found and stored Opus obsids.", "response": "def download_results(self, savedir=None, 
raw=True, calib=False, index=None):\n \"\"\"Download the previously found and stored Opus obsids.\n\n Parameters\n ==========\n savedir: str or pathlib.Path, optional\n If the database root folder as defined by the config.ini should not be used,\n provide a different savedir here. It will be handed to PathManager.\n \"\"\"\n obsids = self.obsids if index is None else [self.obsids[index]]\n for obsid in obsids:\n pm = io.PathManager(obsid.img_id, savedir=savedir)\n pm.basepath.mkdir(exist_ok=True)\n to_download = []\n if raw is True:\n to_download.extend(obsid.raw_urls)\n if calib is True:\n to_download.extend(obsid.calib_urls)\n for url in to_download:\n basename = Path(url).name\n print(\"Downloading\", basename)\n store_path = str(pm.basepath / basename)\n try:\n urlretrieve(url, store_path)\n except Exception as e:\n urlretrieve(url.replace(\"https\", \"http\"), store_path)\n return str(pm.basepath)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef download_previews(self, savedir=None):\n for obsid in self.obsids:\n pm = io.PathManager(obsid.img_id, savedir=savedir)\n pm.basepath.mkdir(exist_ok=True)\n basename = Path(obsid.medium_img_url).name\n print(\"Downloading\", basename)\n urlretrieve(obsid.medium_img_url, str(pm.basepath / basename))", "response": "Download preview files for previously found Opus obsids."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef which_epi_janus_resonance(name, time):\n t1 = Time('2002-01-21').to_datetime()\n delta = Time(time).to_datetime() - t1\n yearfraction = delta.days / 365\n if int(yearfraction / 4) % 2 == 0:\n return name + '2'\n else:\n return name + '1'", "response": "Find which swap situtation we are in by time."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef list_drafts(self):\n target_url = self.client.get_url('LAYER', 'GET', 'multidraft')\n return base.Query(self, 
target_url)", "response": "A filterable list view of layers returning the draft version of each layer or table."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef list_versions(self, layer_id):\n target_url = self.client.get_url('VERSION', 'GET', 'multi', {'layer_id': layer_id})\n return base.Query(self, target_url, valid_filter_attributes=('data',), valid_sort_attributes=())", "response": "Filterable list of versions of a layer."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a specific version of a layer.", "response": "def get_version(self, layer_id, version_id, expand=[]):\n \"\"\"\n Get a specific version of a layer.\n \"\"\"\n target_url = self.client.get_url('VERSION', 'GET', 'single', {'layer_id': layer_id, 'version_id': version_id})\n return self._get(target_url, expand=expand)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_draft(self, layer_id, expand=[]):\n target_url = self.client.get_url('VERSION', 'GET', 'draft', {'layer_id': layer_id})\n return self._get(target_url, expand=expand)", "response": "Get the current draft version of a layer."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting the latest published version of this layer.", "response": "def get_published(self, layer_id, expand=[]):\n \"\"\"\n Get the latest published version of this layer.\n :raises NotFound: if there is no published version.\n \"\"\"\n target_url = self.client.get_url('VERSION', 'GET', 'published', {'layer_id': layer_id})\n return self._get(target_url, expand=expand)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef create_draft(self, layer_id):\n target_url = self.client.get_url('VERSION', 'POST', 'create', {'layer_id': layer_id})\n r = self.client.request('POST', target_url, json={})\n return 
self.create_from_result(r.json())", "response": "Creates a new draft version for the given layer."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef start_import(self, layer_id, version_id):\n target_url = self.client.get_url('VERSION', 'POST', 'import', {'layer_id': layer_id, 'version_id': version_id})\n r = self.client.request('POST', target_url, json={})\n return self.create_from_result(r.json())", "response": "Starts importing the specified draft version."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef start_update(self, layer_id):\n target_url = self.client.get_url('LAYER', 'POST', 'update', {'layer_id': layer_id})\n r = self.client.request('POST', target_url, json={})\n return self.parent.create_from_result(r.json())", "response": "Start importing a new version of the layer."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_metadata(self, layer_id, version_id, fp):\n base_url = self.client.get_url('VERSION', 'GET', 'single', {'layer_id': layer_id, 'version_id': version_id})\n self._metadata.set(base_url, fp)", "response": "Set the XML metadata on a layer draft version."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_published_version(self):\n pub_ver = getattr(self, 'published_version', None)\n this_ver = getattr(self, 'this_version', None)\n return this_ver and pub_ver and (this_ver == pub_ver)", "response": "Return True if this version is the published version of a layer"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn True if this version is the draft version of a layer", "response": "def is_draft_version(self):\n \"\"\" Return if this version is the draft version of a layer \"\"\"\n pub_ver = getattr(self, 'published_version', None)\n latest_ver = getattr(self, 'latest_version', 
None)\n this_ver = getattr(self, 'this_version', None)\n return this_ver and latest_ver and (this_ver == latest_ver) and (latest_ver != pub_ver)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef list_versions(self):\n target_url = self._client.get_url('VERSION', 'GET', 'multi', {'layer_id': self.id})\n return base.Query(self._manager, target_url, valid_filter_attributes=('data',), valid_sort_attributes=())", "response": "Return a filterable list of versions of this layer."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets a specific version of this layer.", "response": "def get_version(self, version_id, expand=[]):\n \"\"\"\n Get a specific version of this layer\n \"\"\"\n target_url = self._client.get_url('VERSION', 'GET', 'single', {'layer_id': self.id, 'version_id': version_id})\n return self._manager._get(target_url, expand=expand)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the current draft version of this layer.", "response": "def get_draft_version(self, expand=[]):\n \"\"\"\n Get the current draft version of this layer.\n :raises NotFound: if there is no draft version.\n \"\"\"\n target_url = self._client.get_url('VERSION', 'GET', 'draft', {'layer_id': self.id})\n return self._manager._get(target_url, expand=expand)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef start_import(self, version_id=None):\n if not version_id:\n version_id = self.version.id\n\n target_url = self._client.get_url('VERSION', 'POST', 'import', {'layer_id': self.id, 'version_id': version_id})\n r = self._client.request('POST', target_url, json={})\n return self._deserialize(r.json(), self._manager)", "response": "Starts importing this draft layer version."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function 
does\ndef start_update(self):\n target_url = self._client.get_url('LAYER', 'POST', 'update', {'layer_id': self.id})\n r = self._client.request('POST', target_url, json={})\n return self._manager.create_from_result(r.json())", "response": "A shortcut to create a new version and start importing it."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef publish(self, version_id=None):\n if not version_id:\n version_id = self.version.id\n\n target_url = self._client.get_url('VERSION', 'POST', 'publish', {'layer_id': self.id, 'version_id': version_id})\n r = self._client.request('POST', target_url, json={})\n return self._client.get_manager(Publish).create_from_result(r.json())", "response": "Creates a publish task just for this version, which publishes as soon as any import is complete."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef save(self, with_data=False):\n target_url = self._client.get_url('VERSION', 'PUT', 'edit', {'layer_id': self.id, 'version_id': self.version.id})\n r = self._client.request('PUT', target_url, json=self._serialize(with_data=with_data))\n return self._deserialize(r.json(), self._manager)", "response": "Save this draft layer version."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete_version(self, version_id=None):\n if not version_id:\n version_id = self.version.id\n\n target_url = self._client.get_url('VERSION', 'DELETE', 'single', {'layer_id': self.id, 'version_id': version_id})\n r = self._client.request('DELETE', target_url)\n logger.info(\"delete_version(): %s\", r.status_code)", "response": "Deletes this draft version."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_item_class(self, url):\n if '/layers/' in url:\n return Layer\n elif '/tables/' in url:\n return Table\n elif '/sets/' in 
url:\n return Set\n # elif '/documents/' in url:\n # return Document\n else:\n raise NotImplementedError(\"No support for catalog results of type %s\" % url)", "response": "Return the model class matching a URL"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncalculates the fraction of the year since moon swap.", "response": "def get_year_since_resonance(ringcube):\n \"Calculate the fraction of the year since moon swap.\"\n t0 = dt(2006, 1, 21)\n td = ringcube.imagetime - t0\n return td.days / 365.25"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncreating and returning a dict of polynomial objects for the soliton prediction.", "response": "def create_polynoms():\n \"\"\"Create and return poly1d objects.\n\n Uses the parameters from Morgan to create poly1d objects for\n calculations.\n \"\"\"\n fname = pr.resource_filename('pyciss', 'data/soliton_prediction_parameters.csv')\n res_df = pd.read_csv(fname)\n polys = {}\n for resorder, row in zip('65 54 43 21'.split(),\n range(4)):\n p = poly1d([res_df.loc[row, 'Slope (km/yr)'], res_df.loc[row, 'Intercept (km)']])\n polys['janus ' + ':'.join(resorder)] = p\n return polys"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef check_for_soliton(img_id):\n pm = io.PathManager(img_id)\n try:\n ringcube = RingCube(pm.cubepath)\n except FileNotFoundError:\n ringcube = RingCube(pm.undestriped)\n polys = create_polynoms()\n minrad = ringcube.minrad.to(u.km)\n maxrad = ringcube.maxrad.to(u.km)\n delta_years = get_year_since_resonance(ringcube)\n soliton_radii = {}\n for k, p in polys.items():\n current_r = p(delta_years) * u.km\n if minrad < current_r < maxrad:\n soliton_radii[k] = current_r\n return soliton_radii if soliton_radii else None", "response": "Workhorse function. 
Returns a dict with all solitons found in a single ISS image."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_manager(self, model):\n if isinstance(model, six.string_types):\n # undocumented string lookup\n for k, m in self._manager_map.items():\n if k.__name__ == model:\n return m\n else:\n raise KeyError(model)\n\n return self._manager_map[model]", "response": "Return the manager instance for the given model."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _assemble_headers(self, method, user_headers=None):\n\n headers = copy.deepcopy(user_headers or {})\n\n if method not in ('GET', 'HEAD'):\n headers.setdefault('Content-Type', 'application/json')\n\n return headers", "response": "Assemble the headers for the current request."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef reverse_url(self, datatype, url, verb='GET', urltype='single', api_version=None):\n api_version = api_version or 'v1'\n templates = getattr(self, 'URL_TEMPLATES__%s' % api_version)\n\n # this is fairly simplistic, if necessary we could use the parse lib\n template_url = r\"https://(?P.+)/services/api/(?P.+)\"\n template_url += re.sub(r'{([^}]+)}', r'(?P<\\1>.+)', templates[datatype][verb][urltype])\n # /foo/{foo_id}/bar/{id}/\n m = re.match(template_url, url or '')\n if not m:\n raise KeyError(\"No reverse match from '%s' to %s.%s.%s\" % (url, datatype, verb, urltype))\n\n r = m.groupdict()\n del r['api_host']\n if r.pop('api_version') != api_version:\n raise ValueError(\"API version mismatch\")\n return r", "response": "Returns a dictionary of parameters extracted from a populated URL."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_url(self, datatype, verb, urltype, params={}, api_host=None, api_version=None):\n api_version = api_version or 'v1'\n 
api_host = api_host or self.host\n\n subst = params.copy()\n subst['api_host'] = api_host\n subst['api_version'] = api_version\n\n url = \"https://{api_host}/services/api/{api_version}\"\n url += self.get_url_path(datatype, verb, urltype, params, api_version)\n return url.format(**subst)", "response": "Returns a fully formed url for the current session."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef open_store_variable(self, name, var):\n data = indexing.LazilyOuterIndexedArray(CDMArrayWrapper(name, self))\n return Variable(var.dimensions, data, {a: getattr(var, a) for a in var.ncattrs()})", "response": "Turn CDMRemote variable into something like a numpy. ndarray."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the global attributes from underlying data set.", "response": "def get_attrs(self):\n \"\"\"Get the global attributes from underlying data set.\"\"\"\n return FrozenOrderedDict((a, getattr(self.ds, a)) for a in self.ds.ncattrs())"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_dimensions(self):\n return FrozenOrderedDict((k, len(v)) for k, v in self.ds.dimensions.items())", "response": "Get the dimensions from underlying data set."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _find_base_tds_url(catalog_url):\n url_components = urlparse(catalog_url)\n if url_components.path:\n return catalog_url.split(url_components.path)[0]\n else:\n return catalog_url", "response": "Identify the base URL of the THREDDS server from the catalog URL."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef filter_time_nearest(self, time, regex=None):\n return min(self._get_datasets_with_times(regex),\n key=lambda i: abs((i[0] - time).total_seconds()))[-1]", "response": 
"Filter keys for an item closest to the desired time."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nfiltering keys for all items within the specified time range.", "response": "def filter_time_range(self, start, end, regex=None):\n \"\"\"Filter keys for all items within the desired time range.\n\n Loops over all keys in the collection and uses `regex` to extract and build\n `datetime`s. From the collection of `datetime`s, all values within `start` and `end`\n (inclusive) are returned. If none of the keys in the collection match the regex,\n indicating that the keys are not date/time-based, a ``ValueError`` is raised.\n\n Parameters\n ----------\n start : ``datetime.datetime``\n The start of the desired time range, inclusive\n end : ``datetime.datetime``\n The end of the desired time range, inclusive\n regex : str, optional\n The regular expression to use to extract date/time information from the key. If\n given, this should contain named groups: 'year', 'month', 'day', 'hour', 'minute',\n 'second', and 'microsecond', as appropriate. When a match is found, any of those\n groups missing from the pattern will be assigned a value of 0. 
The default pattern\n looks for patterns like: 20171118_2356.\n\n Returns\n -------\n All values corresponding to times within the specified range\n\n \"\"\"\n return [item[-1] for item in self._get_datasets_with_times(regex)\n if start <= item[0] <= end]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nremoving and return the value associated with case - insensitive key.", "response": "def pop(self, key, *args, **kwargs):\n \"\"\"Remove and return the value associated with case-insensitive ``key``.\"\"\"\n return super(CaseInsensitiveDict, self).pop(CaseInsensitiveStr(key))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _keys_to_lower(self):\n for k in list(self.keys()):\n val = super(CaseInsensitiveDict, self).__getitem__(k)\n super(CaseInsensitiveDict, self).__delitem__(k)\n self.__setitem__(CaseInsensitiveStr(k), val)", "response": "Convert key set to lowercase."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef resolve_url(self, catalog_url):\n if catalog_url != '':\n resolver_base = catalog_url.split('catalog.xml')[0]\n resolver_url = resolver_base + self.url_path\n resolver_xml = session_manager.urlopen(resolver_url)\n tree = ET.parse(resolver_xml)\n root = tree.getroot()\n if 'name' in root.attrib:\n self.catalog_name = root.attrib['name']\n else:\n self.catalog_name = 'No name found'\n resolved_url = ''\n found = False\n for child in root.iter():\n if not found:\n tag_type = child.tag.split('}')[-1]\n if tag_type == 'dataset':\n if 'urlPath' in child.attrib:\n ds = Dataset(child)\n resolved_url = ds.url_path\n found = True\n if found:\n return resolved_url\n else:\n log.warning('no dataset url path found in latest.xml!')", "response": "Resolve the url of the dataset when reading latest. 
xml."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nmakes fully qualified urls for the access methods enabled on the dataset.", "response": "def make_access_urls(self, catalog_url, all_services, metadata=None):\n \"\"\"Make fully qualified urls for the access methods enabled on the dataset.\n\n Parameters\n ----------\n catalog_url : str\n The top level server url\n all_services : List[SimpleService]\n list of :class:`SimpleService` objects associated with the dataset\n metadata : dict\n Metadata from the :class:`TDSCatalog`\n\n \"\"\"\n all_service_dict = CaseInsensitiveDict({})\n for service in all_services:\n all_service_dict[service.name] = service\n if isinstance(service, CompoundService):\n for subservice in service.services:\n all_service_dict[subservice.name] = subservice\n\n service_name = metadata.get('serviceName', None)\n\n access_urls = CaseInsensitiveDict({})\n server_url = _find_base_tds_url(catalog_url)\n\n # process access urls for datasets that reference top\n # level catalog services (individual or compound service\n # types).\n if service_name in all_service_dict:\n service = all_service_dict[service_name]\n if service.service_type != 'Resolver':\n # if service is a CompoundService, create access url\n # for each SimpleService\n if isinstance(service, CompoundService):\n for subservice in service.services:\n server_base = urljoin(server_url, subservice.base)\n access_urls[subservice.service_type] = urljoin(server_base,\n self.url_path)\n else:\n server_base = urljoin(server_url, service.base)\n access_urls[service.service_type] = urljoin(server_base, self.url_path)\n\n # process access children of dataset elements\n for service_type in self.access_element_info:\n url_path = self.access_element_info[service_type]\n if service_type in all_service_dict:\n server_base = urljoin(server_url, all_service_dict[service_type].base)\n access_urls[service_type] = urljoin(server_base, url_path)\n\n 
self.access_urls = access_urls"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef add_access_element_info(self, access_element):\n service_name = access_element.attrib['serviceName']\n url_path = access_element.attrib['urlPath']\n self.access_element_info[service_name] = url_path", "response": "Create an access method from a catalog element."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ndownloads the dataset to a local file.", "response": "def download(self, filename=None):\n \"\"\"Download the dataset to a local file.\n\n Parameters\n ----------\n filename : str, optional\n The full path to which the dataset will be saved\n\n \"\"\"\n if filename is None:\n filename = self.name\n with self.remote_open() as infile:\n with open(filename, 'wb') as outfile:\n outfile.write(infile.read())"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef remote_access(self, service=None, use_xarray=None):\n if service is None:\n service = 'CdmRemote' if 'CdmRemote' in self.access_urls else 'OPENDAP'\n\n if service not in (CaseInsensitiveStr('CdmRemote'), CaseInsensitiveStr('OPENDAP')):\n raise ValueError(service + ' is not a valid service for remote_access')\n\n return self.access_with_service(service, use_xarray)", "response": "Open the remote dataset and get a netCDF4 - compatible Dataset object providing index - based subsetting capabilities."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef subset(self, service=None):\n if service is None:\n for serviceName in self.ncssServiceNames:\n if serviceName in self.access_urls:\n service = serviceName\n break\n else:\n raise RuntimeError('Subset access is not available for this dataset.')\n elif service not in self.ncssServiceNames:\n raise ValueError(service + ' is not a valid service for subset. 
Options are: '\n + ', '.join(self.ncssServiceNames))\n\n return self.access_with_service(service)", "response": "Subset the dataset.\n\n Open the remote dataset and get a client for talking to ``service``.\n\n Parameters\n ----------\n service : str, optional\n The name of the service for subsetting the dataset. Defaults to 'NetcdfSubset'\n or 'NetcdfServer', in that order, depending on the services listed in the\n catalog.\n\n Returns\n -------\n a client for communicating using ``service``"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef access_with_service(self, service, use_xarray=None):\n service = CaseInsensitiveStr(service)\n if service == 'CdmRemote':\n if use_xarray:\n from .cdmr.xarray_support import CDMRemoteStore\n try:\n import xarray as xr\n provider = lambda url: xr.open_dataset(CDMRemoteStore(url)) # noqa: E731\n except ImportError:\n raise ImportError('CdmRemote access needs xarray to be installed.')\n else:\n from .cdmr import Dataset as CDMRDataset\n provider = CDMRDataset\n elif service == 'OPENDAP':\n if use_xarray:\n try:\n import xarray as xr\n provider = xr.open_dataset\n except ImportError:\n raise ImportError('xarray to be installed if `use_xarray` is True.')\n else:\n try:\n from netCDF4 import Dataset as NC4Dataset\n provider = NC4Dataset\n except ImportError:\n raise ImportError('OPENDAP access needs netCDF4-python to be installed.')\n elif service in self.ncssServiceNames:\n from .ncss import NCSS\n provider = NCSS\n elif service == 'HTTPServer':\n provider = session_manager.urlopen\n else:\n raise ValueError(service + ' is not an access method supported by Siphon')\n\n try:\n return provider(self.access_urls[service])\n except KeyError:\n raise ValueError(service + ' is not available for this dataset')", "response": "Access the dataset using a particular service."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets header 
information and store as metadata for the endpoint.", "response": "def _get_metadata(self):\n \"\"\"Get header information and store as metadata for the endpoint.\"\"\"\n self.metadata = self.fetch_header()\n self.variables = {g.name for g in self.metadata.grids}"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmaking a header request to the endpoint.", "response": "def fetch_header(self):\n \"\"\"Make a header request to the endpoint.\"\"\"\n query = self.query().add_query_parameter(req='header')\n return self._parse_messages(self.get_query(query).content)[0]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef fetch_feature_type(self):\n query = self.query().add_query_parameter(req='featureType')\n return self.get_query(query).content", "response": "Request the featureType from the endpoint."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\npull down coordinate data from the endpoint.", "response": "def fetch_coords(self, query):\n \"\"\"Pull down coordinate data from the endpoint.\"\"\"\n q = query.add_query_parameter(req='coord')\n return self._parse_messages(self.get_query(q).content)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_data(self):\n # Split the list of times into begin and end dates. 
If only\n # one date is supplied, set both begin and end dates equal to that date.\n\n body, header, dates_long, dates = self._get_data_raw()\n\n params = self._get_fwf_params()\n\n df_body = pd.read_fwf(StringIO(body), **params['body'])\n df_header = pd.read_fwf(StringIO(header), **params['header'])\n df_body['date'] = dates_long\n\n df_body = self._clean_body_df(df_body)\n df_header = self._clean_header_df(df_header)\n df_header['date'] = dates\n\n return df_body, df_header", "response": "Process the IGRA2 text file for observations at site_id matching time."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndownload observations matching the time range.", "response": "def _get_data_raw(self):\n \"\"\"Download observations matching the time range.\n\n Returns a tuple with a string for the body, string for the headers,\n and a list of dates.\n \"\"\"\n # Import need to be here so we can monkeypatch urlopen for testing and avoid\n # downloading live data for testing\n try:\n from urllib.request import urlopen\n except ImportError:\n from urllib2 import urlopen\n\n with closing(urlopen(self.ftpsite + self.site_id + self.suffix + '.zip')) as url:\n f = ZipFile(BytesIO(url.read()), 'r').open(self.site_id + self.suffix)\n\n lines = [line.decode('utf-8') for line in f.readlines()]\n\n body, header, dates_long, dates = self._select_date_range(lines)\n\n return body, header, dates_long, dates"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nselects the time range of the IGRA2 data file.", "response": "def _select_date_range(self, lines):\n \"\"\"Identify lines containing headers within the range begin_date to end_date.\n\n Parameters\n -----\n lines: list\n list of lines from the IGRA2 data file.\n\n \"\"\"\n headers = []\n num_lev = []\n dates = []\n\n # Get indices of headers, and make a list of dates and num_lev\n for idx, line in enumerate(lines):\n if line[0] == '#':\n year, month, day, 
hour = map(int, line[13:26].split())\n\n # All soundings have YMD, most have hour\n try:\n date = datetime.datetime(year, month, day, hour)\n except ValueError:\n date = datetime.datetime(year, month, day)\n\n # Check date\n if self.begin_date <= date <= self.end_date:\n headers.append(idx)\n num_lev.append(int(line[32:36]))\n dates.append(date)\n if date > self.end_date:\n break\n\n if len(dates) == 0:\n # Break if no matched dates.\n # Could improve this later by showing the date range for the station.\n raise ValueError('No dates match selection.')\n\n # Compress body of data into a string\n begin_idx = min(headers)\n end_idx = max(headers) + num_lev[-1]\n\n # Make a boolean vector that selects only list indices within the time range\n selector = np.zeros(len(lines), dtype=bool)\n selector[begin_idx:end_idx + 1] = True\n selector[headers] = False\n body = ''.join([line for line in itertools.compress(lines, selector)])\n\n selector[begin_idx:end_idx + 1] = ~selector[begin_idx:end_idx + 1]\n header = ''.join([line for line in itertools.compress(lines, selector)])\n\n # expand date vector to match length of the body dataframe.\n dates_long = np.repeat(dates, num_lev)\n\n return body, header, dates_long, dates"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_fwf_params(self):\n def _cdec(power=1):\n \"\"\"Make a function to convert string 'value*10^power' to float.\"\"\"\n def _cdec_power(val):\n if val in ['-9999', '-8888', '-99999']:\n return np.nan\n else:\n return float(val) / 10**power\n return _cdec_power\n\n def _cflag(val):\n \"\"\"Replace alphabetic flags A and B with numeric.\"\"\"\n if val == 'A':\n return 1\n elif val == 'B':\n return 2\n else:\n return 0\n\n def _ctime(strformat='MMMSS'):\n \"\"\"Return a function converting a string from MMMSS or HHMM to seconds.\"\"\"\n def _ctime_strformat(val):\n time = val.strip().zfill(5)\n\n if int(time) < 0:\n return np.nan\n elif 
int(time) == 9999:\n return np.nan\n else:\n if strformat == 'MMMSS':\n minutes = int(time[0:3])\n seconds = int(time[3:5])\n time_seconds = minutes * 60 + seconds\n elif strformat == 'HHMM':\n hours = int(time[0:2])\n minutes = int(time[2:4])\n time_seconds = hours * 3600 + minutes * 60\n else:\n sys.exit('Unrecognized time format')\n\n return time_seconds\n return _ctime_strformat\n\n def _clatlon(x):\n n = len(x)\n deg = x[0:n - 4]\n dec = x[n - 4:]\n return float(deg + '.' + dec)\n\n if self.suffix == '-drvd.txt':\n names_body = ['pressure', 'reported_height', 'calculated_height',\n 'temperature', 'temperature_gradient', 'potential_temperature',\n 'potential_temperature_gradient', 'virtual_temperature',\n 'virtual_potential_temperature', 'vapor_pressure',\n 'saturation_vapor_pressure', 'reported_relative_humidity',\n 'calculated_relative_humidity', 'u_wind', 'u_wind_gradient',\n 'v_wind', 'v_wind_gradient', 'refractive_index']\n\n colspecs_body = [(0, 7), (8, 15), (16, 23), (24, 31), (32, 39),\n (40, 47), (48, 55), (56, 63), (64, 71), (72, 79),\n (80, 87), (88, 95), (96, 103), (104, 111), (112, 119),\n (120, 127), (128, 135), (137, 143), (144, 151)]\n\n conv_body = {'pressure': _cdec(power=2),\n 'reported_height': int,\n 'calculated_height': int,\n 'temperature': _cdec(),\n 'temperature_gradient': _cdec(),\n 'potential_temperature': _cdec(),\n 'potential_temperature_gradient': _cdec(),\n 'virtual_temperature': _cdec(),\n 'virtual_potential_temperature': _cdec(),\n 'vapor_pressure': _cdec(power=3),\n 'saturation_vapor_pressure': _cdec(power=3),\n 'reported_relative_humidity': _cdec(),\n 'calculated_relative_humidity': _cdec(),\n 'u_wind': _cdec(),\n 'u_wind_gradient': _cdec(),\n 'v_wind': _cdec(),\n 'v_wind_gradient': _cdec(),\n 'refractive_index': int}\n\n names_header = ['site_id', 'year', 'month', 'day', 'hour', 'release_time',\n 'number_levels', 'precipitable_water', 'inv_pressure',\n 'inv_height', 'inv_strength', 'mixed_layer_pressure',\n 
'mixed_layer_height', 'freezing_point_pressure',\n 'freezing_point_height', 'lcl_pressure', 'lcl_height',\n 'lfc_pressure', 'lfc_height', 'lnb_pressure', 'lnb_height',\n 'lifted_index', 'showalter_index', 'k_index', 'total_totals_index',\n 'cape', 'convective_inhibition']\n\n colspecs_header = [(1, 12), (13, 17), (18, 20), (21, 23), (24, 26),\n (27, 31), (31, 36), (37, 43), (43, 48), (49, 55),\n (55, 61), (61, 67), (67, 73), (73, 79), (79, 85),\n (85, 91), (91, 97), (97, 103), (103, 109), (109, 115),\n (115, 121), (121, 127), (127, 133), (133, 139),\n (139, 145), (145, 151), (151, 157)]\n\n conv_header = {'site_id': str,\n 'year': int,\n 'month': int,\n 'day': int,\n 'hour': int,\n 'release_time': _ctime(strformat='HHMM'),\n 'number_levels': int,\n 'precipitable_water': _cdec(power=2),\n 'inv_pressure': _cdec(power=2),\n 'inv_height': int,\n 'inv_strength': _cdec(),\n 'mixed_layer_pressure': _cdec(power=2),\n 'mixed_layer_height': int,\n 'freezing_point_pressure': _cdec(power=2),\n 'freezing_point_height': int,\n 'lcl_pressure': _cdec(power=2),\n 'lcl_height': int,\n 'lfc_pressure': _cdec(power=2),\n 'lfc_height': int,\n 'lnb_pressure': _cdec(power=2),\n 'lnb_height': int,\n 'lifted_index': int,\n 'showalter_index': int,\n 'k_index': int,\n 'total_totals_index': int,\n 'cape': int,\n 'convective_inhibition': int}\n\n na_vals = ['-99999']\n\n else:\n names_body = ['lvltyp1', 'lvltyp2', 'etime', 'pressure',\n 'pflag', 'height', 'zflag', 'temperature', 'tflag',\n 'relative_humidity', 'dewpoint_depression',\n 'direction', 'speed']\n\n colspecs_body = [(0, 1), (1, 2), (3, 8), (9, 15), (15, 16),\n (16, 21), (21, 22), (22, 27), (27, 28),\n (28, 33), (34, 39), (40, 45), (46, 51)]\n\n conv_body = {'lvltyp1': int,\n 'lvltyp2': int,\n 'etime': _ctime(strformat='MMMSS'),\n 'pressure': _cdec(power=2),\n 'pflag': _cflag,\n 'height': int,\n 'zflag': _cflag,\n 'temperature': _cdec(),\n 'tflag': _cflag,\n 'relative_humidity': _cdec(),\n 'dewpoint_depression': _cdec(),\n 
'direction': int,\n 'speed': _cdec()}\n\n names_header = ['site_id', 'year', 'month', 'day', 'hour', 'release_time',\n 'number_levels', 'pressure_source_code',\n 'non_pressure_source_code',\n 'latitude', 'longitude']\n\n colspecs_header = [(1, 12), (13, 17), (18, 20), (21, 23), (24, 26),\n (27, 31), (32, 36), (37, 45), (46, 54), (55, 62), (63, 71)]\n\n na_vals = ['-8888', '-9999']\n\n conv_header = {'release_time': _ctime(strformat='HHMM'),\n 'number_levels': int,\n 'latitude': _clatlon,\n 'longitude': _clatlon}\n\n return {'body': {'names': names_body,\n 'colspecs': colspecs_body,\n 'converters': conv_body,\n 'na_values': na_vals,\n 'index_col': False},\n 'header': {'names': names_header,\n 'colspecs': colspecs_header,\n 'converters': conv_header,\n 'na_values': na_vals,\n 'index_col': False}}", "response": "Return a dictionary with names, colspecs, and dtype for IGRA2 data."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nformats the dataframe, removes empty rows, and adds a units attribute.", "response": "def _clean_body_df(self, df):\n \"\"\"Format the dataframe, remove empty rows, and add units attribute.\"\"\"\n if self.suffix == '-drvd.txt':\n df = df.dropna(subset=('temperature', 'reported_relative_humidity',\n 'u_wind', 'v_wind'), how='all').reset_index(drop=True)\n\n df.units = {'pressure': 'hPa',\n 'reported_height': 'meter',\n 'calculated_height': 'meter',\n 'temperature': 'Kelvin',\n 'temperature_gradient': 'Kelvin / kilometer',\n 'potential_temperature': 'Kelvin',\n 'potential_temperature_gradient': 'Kelvin / kilometer',\n 'virtual_temperature': 'Kelvin',\n 'virtual_potential_temperature': 'Kelvin',\n 'vapor_pressure': 'Pascal',\n 'saturation_vapor_pressure': 'Pascal',\n 'reported_relative_humidity': 'percent',\n 'calculated_relative_humidity': 'percent',\n 'u_wind': 'meter / second',\n 'u_wind_gradient': '(meter / second) / kilometer',\n 'v_wind': 'meter / second',\n 'v_wind_gradient': '(meter / second) / kilometer',\n 
'refractive_index': 'unitless'}\n\n else:\n df['u_wind'], df['v_wind'] = get_wind_components(df['speed'],\n np.deg2rad(df['direction']))\n df['u_wind'] = np.round(df['u_wind'], 1)\n df['v_wind'] = np.round(df['v_wind'], 1)\n\n df = df.dropna(subset=('temperature', 'direction', 'speed',\n 'dewpoint_depression', 'u_wind', 'v_wind'),\n how='all').reset_index(drop=True)\n\n df['dewpoint'] = df['temperature'] - df['dewpoint_depression']\n\n df.drop('dewpoint_depression', axis=1, inplace=True)\n\n df.units = {'etime': 'second',\n 'pressure': 'hPa',\n 'height': 'meter',\n 'temperature': 'degC',\n 'dewpoint': 'degC',\n 'direction': 'degrees',\n 'speed': 'meter / second',\n 'u_wind': 'meter / second',\n 'v_wind': 'meter / second'}\n\n return df"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nformat the header dataframe and add units.", "response": "def _clean_header_df(self, df):\n \"\"\"Format the header dataframe and add units.\"\"\"\n if self.suffix == '-drvd.txt':\n df.units = {'release_time': 'second',\n 'precipitable_water': 'millimeter',\n 'inv_pressure': 'hPa',\n 'inv_height': 'meter',\n 'inv_strength': 'Kelvin',\n 'mixed_layer_pressure': 'hPa',\n 'mixed_layer_height': 'meter',\n 'freezing_point_pressure': 'hPa',\n 'freezing_point_height': 'meter',\n 'lcl_pressure': 'hPa',\n 'lcl_height': 'meter',\n 'lfc_pressure': 'hPa',\n 'lfc_height': 'meter',\n 'lnb_pressure': 'hPa',\n 'lnb_height': 'meter',\n 'lifted_index': 'degC',\n 'showalter_index': 'degC',\n 'k_index': 'degC',\n 'total_totals_index': 'degC',\n 'cape': 'Joule / kilogram',\n 'convective_inhibition': 'Joule / kilogram'}\n\n else:\n df.units = {'release_time': 'second',\n 'latitude': 'degrees',\n 'longitude': 'degrees'}\n\n return df"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef realtime_observations(cls, buoy, data_type='txt'):\n endpoint = cls()\n parsers = {'txt': endpoint._parse_met,\n 'drift': 
endpoint._parse_drift,\n 'cwind': endpoint._parse_cwind,\n 'spec': endpoint._parse_spec,\n 'ocean': endpoint._parse_ocean,\n 'srad': endpoint._parse_srad,\n 'dart': endpoint._parse_dart,\n 'supl': endpoint._parse_supl,\n 'rain': endpoint._parse_rain}\n\n if data_type not in parsers:\n raise KeyError('Data type must be txt, drift, cwind, spec, ocean, srad, dart, '\n 'supl, or rain for parsed realtime data.')\n\n raw_data = endpoint.raw_buoy_data(buoy, data_type=data_type)\n return parsers[data_type](raw_data)", "response": "Retrieve the realtime buoy data from NDBC."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nparse the standard meteorological data from NDBC buoys.", "response": "def _parse_met(content):\n \"\"\"Parse standard meteorological data from NDBC buoys.\n\n Parameters\n ----------\n content : str\n Data to parse\n\n Returns\n -------\n :class:`pandas.DataFrame` containing the data\n\n \"\"\"\n col_names = ['year', 'month', 'day', 'hour', 'minute',\n 'wind_direction', 'wind_speed', 'wind_gust',\n 'wave_height', 'dominant_wave_period', 'average_wave_period',\n 'dominant_wave_direction', 'pressure',\n 'air_temperature', 'water_temperature', 'dewpoint',\n 'visibility', '3hr_pressure_tendency', 'water_level_above_mean']\n\n col_units = {'wind_direction': 'degrees',\n 'wind_speed': 'meters/second',\n 'wind_gust': 'meters/second',\n 'wave_height': 'meters',\n 'dominant_wave_period': 'seconds',\n 'average_wave_period': 'seconds',\n 'dominant_wave_direction': 'degrees',\n 'pressure': 'hPa',\n 'air_temperature': 'degC',\n 'water_temperature': 'degC',\n 'dewpoint': 'degC',\n 'visibility': 'nautical_mile',\n '3hr_pressure_tendency': 'hPa',\n 'water_level_above_mean': 'feet',\n 'time': None}\n\n df = pd.read_table(StringIO(content), comment='#', na_values='MM',\n names=col_names, sep=r'\\s+')\n df['time'] = pd.to_datetime(df[['year', 'month', 'day', 'hour', 'minute']], utc=True)\n df = df.drop(columns=['year', 'month', 'day', 'hour', 
'minute'])\n df.units = col_units\n return df"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _parse_supl(content):\n col_names = ['year', 'month', 'day', 'hour', 'minute',\n 'hourly_low_pressure', 'hourly_low_pressure_time',\n 'hourly_high_wind', 'hourly_high_wind_direction',\n 'hourly_high_wind_time']\n\n col_units = {'hourly_low_pressure': 'hPa',\n 'hourly_low_pressure_time': None,\n 'hourly_high_wind': 'meters/second',\n 'hourly_high_wind_direction': 'degrees',\n 'hourly_high_wind_time': None,\n 'time': None}\n\n df = pd.read_table(StringIO(content), comment='#', na_values='MM',\n names=col_names, sep=r'\\s+')\n\n df['time'] = pd.to_datetime(df[['year', 'month', 'day', 'hour', 'minute']], utc=True)\n\n df['hours'] = np.floor(df['hourly_low_pressure_time'] / 100)\n df['minutes'] = df['hourly_low_pressure_time'] - df['hours'] * 100\n df['hours'] = df['hours'].replace(99, np.nan)\n df['minutes'] = df['minutes'].replace(99, np.nan)\n df['hourly_low_pressure_time'] = pd.to_datetime(df[['year', 'month', 'day', 'hours',\n 'minutes']], utc=True)\n\n df['hours'] = np.floor(df['hourly_high_wind_time'] / 100)\n df['minutes'] = df['hourly_high_wind_time'] - df['hours'] * 100\n df['hours'] = df['hours'].replace(99, np.nan)\n df['minutes'] = df['minutes'].replace(99, np.nan)\n df['hourly_high_wind_time'] = pd.to_datetime(df[['year', 'month', 'day',\n 'hours', 'minutes']], utc=True)\n df = df.drop(columns=['year', 'month', 'day', 'hour', 'minute', 'hours', 'minutes'])\n df.units = col_units\n return df", "response": "Parse supplemental measurements data."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncheck if a url is valid or not.", "response": "def _check_if_url_valid(url):\n \"\"\"Check if a url is valid (returns 200) or not.\n\n Parameters\n ----------\n url : str\n URL to check\n\n Returns\n -------\n bool if url is valid\n\n \"\"\"\n r = 
requests.head(url)\n if r.status_code == 200:\n return True\n else:\n return False"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndetermine which types of data are available for a given buoy.", "response": "def buoy_data_types(cls, buoy):\n \"\"\"Determine which types of data are available for a given buoy.\n\n Parameters\n ----------\n buoy : str\n Buoy name\n\n Returns\n -------\n dict of valid file extensions and their descriptions\n\n \"\"\"\n endpoint = cls()\n file_types = {'txt': 'standard meteorological data',\n 'drift': 'meteorological data from drifting buoys and limited moored '\n 'buoy data mainly from international partners',\n 'cwind': 'continuous wind data (10 minute average)',\n 'spec': 'spectral wave summaries',\n 'data_spec': 'raw spectral wave data',\n 'swdir': 'spectral wave data (alpha1)',\n 'swdir2': 'spectral wave data (alpha2)',\n 'swr1': 'spectral wave data (r1)',\n 'swr2': 'spectral wave data (r2)',\n 'adcp': 'acoustic doppler current profiler',\n 'ocean': 'oceanographic data',\n 'tide': 'tide data',\n 'srad': 'solar radiation data',\n 'dart': 'water column height',\n 'supl': 'supplemental measurements data',\n 'rain': 'hourly rain data'}\n available_data = {}\n buoy_url = 'https://www.ndbc.noaa.gov/data/realtime2/' + buoy + '.'\n for key in file_types:\n if endpoint._check_if_url_valid(buoy_url + key):\n available_data[key] = file_types[key]\n return available_data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef raw_buoy_data(cls, buoy, data_type='txt'):\n endpoint = cls()\n resp = endpoint.get_path('data/realtime2/{}.{}'.format(buoy, data_type))\n return resp.text", "response": "Retrieve the raw buoy data from NDBC."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef create_session(self):\n ret = requests.Session()\n ret.headers['User-Agent'] = self.user_agent\n for k, v in 
self.options.items():\n setattr(ret, k, v)\n return ret", "response": "Create a new HTTP session with our user-agent set."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef urlopen(self, url, **kwargs):\n return BytesIO(self.create_session().get(url, **kwargs).content)", "response": "A thin wrapper around requests.Session.get that returns a file-like object wrapped around the resulting content."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nadd a latitude and longitude bounding box to the query.", "response": "def lonlat_box(self, west, east, south, north):\n \"\"\"Add a latitude/longitude bounding box to the query.\n\n This adds a request for a spatial bounding box, bounded by ('north', 'south')\n for latitude and ('east', 'west') for the longitude. This modifies the query\n in-place, but returns `self` so that multiple queries can be chained together\n on one line.\n\n This replaces any existing spatial queries that have been set.\n\n Parameters\n ----------\n west : float\n The bounding longitude to the west, in degrees east of the prime meridian\n east : float\n The bounding longitude to the east, in degrees east of the prime meridian\n south : float\n The bounding latitude to the south, in degrees north of the equator\n north : float\n The bounding latitude to the north, in degrees north of the equator\n\n Returns\n -------\n self : DataQuery\n Returns self for chaining calls\n\n \"\"\"\n self._set_query(self.spatial_query, west=west, east=east, south=south,\n north=north)\n return self"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef lonlat_point(self, lon, lat):\n self._set_query(self.spatial_query, longitude=lon, latitude=lat)\n return self", "response": "Add a latitude/longitude point to the query."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the 
following Python 3 function\ndef time(self, time):\n self._set_query(self.time_query, time=self._format_time(time))\n return self", "response": "Add a request for a specific time to the query."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef time_range(self, start, end):\n self._set_query(self.time_query, time_start=self._format_time(start),\n time_end=self._format_time(end))\n return self", "response": "Add a request for a time range to the query."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmaking a GET request, including a query, to the endpoint.", "response": "def get_query(self, query):\n \"\"\"Make a GET request, including a query, to the endpoint.\n\n The path of the request is to the base URL assigned to the endpoint.\n\n Parameters\n ----------\n query : DataQuery\n The query to pass when making the request\n\n Returns\n -------\n resp : requests.Response\n The server's response to the request\n\n See Also\n --------\n get_path, get\n\n \"\"\"\n url = self._base[:-1] if self._base[-1] == '/' else self._base\n return self.get(url, query)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_path(self, path, query=None):\n return self.get(self.url_path(path), query)", "response": "Make a GET request, optionally including a query, to a relative path."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmake a GET request to the server.", "response": "def get(self, path, params=None):\n \"\"\"Make a GET request, optionally including parameters, to a path.\n\n The path of the request is the full URL.\n\n Parameters\n ----------\n path : str\n The URL to request\n params : DataQuery, optional\n The query to pass when making the request\n\n Returns\n -------\n resp : requests.Response\n The server's response to the request\n\n Raises\n ------\n HTTPError\n If the server returns anything 
other than a 200 (OK) code\n\n See Also\n --------\n get_query, get\n\n \"\"\"\n resp = self._session.get(path, params=params)\n if resp.status_code != 200:\n if resp.headers.get('Content-Type', '').startswith('text/html'):\n text = resp.reason\n else:\n text = resp.text\n raise requests.HTTPError('Error accessing {0}\\n'\n 'Server Error ({1:d}: {2})'.format(resp.request.url,\n resp.status_code,\n text))\n return resp"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef path(self):\n # If root, return '/'\n if self.dataset is self:\n return ''\n else: # Otherwise recurse\n return self.dataset.path + '/' + self.name", "response": "Return the full path to the Group including any parent Groups."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef load_from_stream(self, group):\n self._unpack_attrs(group.atts)\n self.name = group.name\n\n for dim in group.dims:\n new_dim = Dimension(self, dim.name)\n self.dimensions[dim.name] = new_dim\n new_dim.load_from_stream(dim)\n\n for var in group.vars:\n new_var = Variable(self, var.name)\n self.variables[var.name] = new_var\n new_var.load_from_stream(var)\n\n for grp in group.groups:\n new_group = Group(self)\n self.groups[grp.name] = new_group\n new_group.load_from_stream(grp)\n\n for struct in group.structs:\n new_var = Variable(self, struct.name)\n self.variables[struct.name] = new_var\n new_var.load_from_stream(struct)\n\n if group.enumTypes:\n for en in group.enumTypes:\n self.types[en.name] = enum.Enum(en.name,\n [(typ.value, typ.code) for typ in en.map])", "response": "Load a Group from an NCStream object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef load_from_stream(self, var):\n dims = []\n for d in var.shape:\n dim = Dimension(None, d.name)\n dim.load_from_stream(d)\n dims.append(dim)\n\n self.dimensions = tuple(dim.name for dim in dims)\n self.shape = 
tuple(dim.size for dim in dims)\n self.ndim = len(var.shape)\n self._unpack_attrs(var.atts)\n\n data, dt, type_name = unpack_variable(var)\n if data is not None:\n data = data.reshape(self.shape)\n self._data = data\n self.dtype = dt\n self.datatype = type_name\n\n if hasattr(var, 'enumType') and var.enumType:\n self.datatype = var.enumType\n self._enum = True", "response": "Populate the Variable from an NCStream object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef load_from_stream(self, dim):\n self.unlimited = dim.isUnlimited\n self.private = dim.isPrivate\n self.vlen = dim.isVlen\n if not self.vlen:\n self.size = dim.length", "response": "Populate the Dimension from an NCStream object."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _read_header(self):\n self._header = self.cdmrf.fetch_header()\n self.load_from_stream(self._header)", "response": "Get the needed header information to initialize the dataset."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef load_from_stream(self, header):\n self._unpack_attrs(header.atts)\n self.name = header.name\n self.lon_lat_domain = header.latlonRect\n self.proj_domain = header.projRect\n self.date_range = header.dateRange\n self.type = header.coverageType\n\n for sys in header.coordSys:\n self.coord_systems[sys.name] = sys\n\n for trans in header.coordTransforms:\n self.transforms[trans.name] = trans\n\n for ax in header.coordAxes:\n self.axes[ax.name] = ax\n\n for cov in header.grids:\n self.grids[cov.name] = cov", "response": "Populate the CoverageDataset from the protobuf information."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nrequesting data from Iowa State's archive for a single station.", "response": "def request_data(cls, time, site_id, **kwargs):\n \"\"\"Retrieve upper air observations from Iowa State's 
archive for a single station.\n\n Parameters\n ----------\n time : datetime\n The date and time of the desired observation.\n\n site_id : str\n The three-letter ICAO identifier of the station for which data should be\n downloaded.\n\n kwargs\n Arbitrary keyword arguments to use to initialize source\n\n Returns\n -------\n :class:`pandas.DataFrame` containing the data\n\n \"\"\"\n endpoint = cls()\n df = endpoint._get_data(time, site_id, None, **kwargs)\n return df"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef request_all_data(cls, time, pressure=None, **kwargs):\n endpoint = cls()\n df = endpoint._get_data(time, None, pressure, **kwargs)\n return df", "response": "Request all data from Iowa State's archive for all stations."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _get_data(self, time, site_id, pressure=None):\n json_data = self._get_data_raw(time, site_id, pressure)\n data = {}\n for profile in json_data['profiles']:\n for pt in profile['profile']:\n for field in ('drct', 'dwpc', 'hght', 'pres', 'sknt', 'tmpc'):\n data.setdefault(field, []).append(np.nan if pt[field] is None\n else pt[field])\n for field in ('station', 'valid'):\n data.setdefault(field, []).append(np.nan if profile[field] is None\n else profile[field])\n\n # Make sure that the first entry has a valid temperature and dewpoint\n idx = np.argmax(~(np.isnan(data['tmpc']) | np.isnan(data['dwpc'])))\n\n # Stuff data into a pandas dataframe\n df = pd.DataFrame()\n df['pressure'] = ma.masked_invalid(data['pres'][idx:])\n df['height'] = ma.masked_invalid(data['hght'][idx:])\n df['temperature'] = ma.masked_invalid(data['tmpc'][idx:])\n df['dewpoint'] = ma.masked_invalid(data['dwpc'][idx:])\n df['direction'] = ma.masked_invalid(data['drct'][idx:])\n df['speed'] = ma.masked_invalid(data['sknt'][idx:])\n df['station'] = data['station'][idx:]\n df['time'] = 
[datetime.strptime(valid, '%Y-%m-%dT%H:%M:%SZ')\n for valid in data['valid'][idx:]]\n\n # Calculate the u and v winds\n df['u_wind'], df['v_wind'] = get_wind_components(df['speed'],\n np.deg2rad(df['direction']))\n\n # Drop any rows with all NaN values for T, Td, winds\n df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed',\n 'u_wind', 'v_wind'), how='all').reset_index(drop=True)\n\n # Add unit dictionary\n df.units = {'pressure': 'hPa',\n 'height': 'meter',\n 'temperature': 'degC',\n 'dewpoint': 'degC',\n 'direction': 'degrees',\n 'speed': 'knot',\n 'u_wind': 'knot',\n 'v_wind': 'knot',\n 'station': None,\n 'time': None}\n return df", "response": "Download data from Iowa State's upper air archive."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndownloading data from the Iowa State's upper air archive.", "response": "def _get_data_raw(self, time, site_id, pressure=None):\n r\"\"\"Download data from the Iowa State's upper air archive.\n\n Parameters\n ----------\n time : datetime\n Date and time for which data should be downloaded\n site_id : str\n Site id for which data should be downloaded\n pressure : float, optional\n Mandatory pressure level at which to request data (in hPa).\n\n Returns\n -------\n list of json data\n\n \"\"\"\n query = {'ts': time.strftime('%Y%m%d%H00')}\n if site_id is not None:\n query['station'] = site_id\n if pressure is not None:\n query['pressure'] = pressure\n\n resp = self.get_path('raob.py', query)\n json_data = json.loads(resp.text)\n\n # See if the return is valid, but has no data\n if not (json_data['profiles'] and json_data['profiles'][0]['profile']):\n message = 'No data available '\n if time is not None:\n message += 'for {time:%Y-%m-%d %HZ} '.format(time=time)\n if site_id is not None:\n message += 'for station {stid} '.format(stid=site_id)\n if pressure is not None:\n message += 'for pressure {pres} '.format(pres=pressure)\n message = message[:-1] + '.'\n raise 
ValueError(message)\n return json_data"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses a station list XML file.", "response": "def parse_station_table(root):\n \"\"\"Parse station list XML file.\"\"\"\n stations = [parse_xml_station(elem) for elem in root.findall('station')]\n return {st.id: st for st in stations}"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef parse_xml_station(elem):\n stid = elem.attrib['id']\n name = elem.find('name').text\n lat = float(elem.find('latitude').text)\n lon = float(elem.find('longitude').text)\n elev = float(elem.find('elevation').text)\n return Station(id=stid, elevation=elev, latitude=lat, longitude=lon, name=name)", "response": "Create a :class:`Station` instance from an XML tag."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef stations(self, *stns):\n self._set_query(self.spatial_query, stn=stns)\n return self", "response": "Set one or more stations for the query."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nvalidating a query. Determines whether `query` is well-formed. This includes checking for all required parameters, as well as checking parameters for valid values. Parameters ---------- query : RadarQuery The query to validate Returns ------- valid : bool Whether `query` is valid.", "response": "def validate_query(self, query):\n \"\"\"Validate a query.\n\n Determines whether `query` is well-formed. 
This includes checking for all\n required parameters, as well as checking parameters for valid values.\n\n Parameters\n ----------\n query : RadarQuery\n The query to validate\n\n Returns\n -------\n valid : bool\n Whether `query` is valid.\n\n \"\"\"\n valid = True\n # Make sure all stations are in the table\n if 'stn' in query.spatial_query:\n valid = valid and all(stid in self.stations\n for stid in query.spatial_query['stn'])\n\n if query.var:\n valid = valid and all(var in self.variables for var in query.var)\n\n return valid"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nfetch a parsed THREDDS catalog from the radar server given the parameters in query and return a TDSCatalog instance.", "response": "def get_catalog(self, query):\n \"\"\"Fetch a parsed THREDDS catalog from the radar server.\n\n Requests a catalog of radar data files from the radar server given the\n parameters in `query` and returns a :class:`~siphon.catalog.TDSCatalog` instance.\n\n Parameters\n ----------\n query : RadarQuery\n The parameters to send to the radar server\n\n Returns\n -------\n catalog : TDSCatalog\n The catalog of matching data files\n\n Raises\n ------\n :class:`~siphon.http_util.BadQueryError`\n When the query cannot be handled by the server\n\n See Also\n --------\n get_catalog_raw\n\n \"\"\"\n # TODO: Refactor TDSCatalog so we don't need two requests, or to do URL munging\n try:\n url = self._base[:-1] if self._base[-1] == '/' else self._base\n url += '?' 
+ str(query)\n return TDSCatalog(url)\n except ET.ParseError:\n raise BadQueryError(self.get_catalog_raw(query))", "response": "Explain what the following Python 3 code does\ndef acis_request(method, params):\n base_url = 'http://data.rcc-acis.org/' # ACIS Web API URL\n\n timeout = 300 if method == 'MultiStnData' else 60\n\n try:\n response = session_manager.create_session().post(base_url + method, json=params,\n timeout=timeout)\n return response.json()\n except requests.exceptions.Timeout:\n raise AcisApiException('Connection Timeout')\n except requests.exceptions.TooManyRedirects:\n raise AcisApiException('Bad URL. Check your ACIS connection method string.')\n except ValueError:\n raise AcisApiException('No data returned! The ACIS parameter dictionary '\n 'may be incorrectly formatted')", "response": "Make a request to the ACIS Web Services API for data, based on a given method and parameters."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nparses XML data returned by NCSS.", "response": "def parse_xml(data, handle_units):\n \"\"\"Parse XML data returned by NCSS.\"\"\"\n root = ET.fromstring(data)\n return squish(parse_xml_dataset(root, handle_units))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef parse_xml_point(elem):\n point = {}\n units = {}\n for data in elem.findall('data'):\n name = data.get('name')\n unit = data.get('units')\n point[name] = float(data.text) if name != 'date' else parse_iso_date(data.text)\n if unit:\n units[name] = unit\n return point, units", "response": "Parse an XML point tag."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncombines multiple Point tags into an array.", "response": "def combine_xml_points(l, units, handle_units):\n \"\"\"Combine multiple Point tags into an array.\"\"\"\n ret = {}\n for item in l:\n for 
key, value in item.items():\n ret.setdefault(key, []).append(value)\n\n for key, value in ret.items():\n if key != 'date':\n ret[key] = handle_units(value, units.get(key, None))\n\n return ret"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates a netCDF-like dataset from XML data.", "response": "def parse_xml_dataset(elem, handle_units):\n \"\"\"Create a netCDF-like dataset from XML data.\"\"\"\n points, units = zip(*[parse_xml_point(p) for p in elem.findall('point')])\n # Group points by the contents of each point\n datasets = {}\n for p in points:\n datasets.setdefault(tuple(p), []).append(p)\n\n all_units = combine_dicts(units)\n return [combine_xml_points(d, all_units, handle_units) for d in datasets.values()]"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nhandle CSV-formatted HTTP responses.", "response": "def parse_csv_response(data, unit_handler):\n \"\"\"Handle CSV-formatted HTTP responses.\"\"\"\n return squish([parse_csv_dataset(d, unit_handler) for d in data.split(b'\\n\\n')])"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nparse the CSV header returned by TDS.", "response": "def parse_csv_header(line):\n \"\"\"Parse the CSV header returned by TDS.\"\"\"\n units = {}\n names = []\n for var in line.split(','):\n start = var.find('[')\n if start < 0:\n names.append(str(var))\n continue\n else:\n names.append(str(var[:start]))\n end = var.find(']', start)\n unitstr = var[start + 1:end]\n eq = unitstr.find('=')\n if eq >= 0:\n # go past = and \", skip final \"\n units[names[-1]] = unitstr[eq + 2:-1]\n return names, units"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parse_csv_dataset(data, handle_units):\n fobj = BytesIO(data)\n names, units = parse_csv_header(fobj.readline().decode('utf-8'))\n arrs = np.genfromtxt(fobj, dtype=None, names=names, delimiter=',', 
unpack=True,\n converters={'date': lambda s: parse_iso_date(s.decode('utf-8'))})\n d = {}\n for f in arrs.dtype.fields:\n dat = arrs[f]\n if dat.dtype == np.object:\n dat = dat.tolist()\n d[f] = handle_units(dat, units.get(f, None))\n return d", "response": "Parse CSV data into a netCDF-like dataset."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nvalidate a query. Determines whether `query` is well-formed. This includes checking for all required parameters, as well as checking parameters for valid values. Parameters ---------- query : NCSSQuery The query to validate Returns ------- valid : bool Whether `query` is valid.", "response": "def validate_query(self, query):\n \"\"\"Validate a query.\n\n Determines whether `query` is well-formed. This includes checking for all\n required parameters, as well as checking parameters for valid values.\n\n Parameters\n ----------\n query : NCSSQuery\n The query to validate\n\n Returns\n -------\n valid : bool\n Whether `query` is valid.\n\n \"\"\"\n # Make sure all variables are in the dataset\n return bool(query.var) and all(var in self.variables for var in query.var)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_data(self, query):\n resp = self.get_query(query)\n return response_handlers(resp, self.unit_handler)", "response": "Fetch parsed data from a THREDDS server using NCSS."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a spatial bounding box in projected (native) coordinates to the query.", "response": "def projection_box(self, min_x, min_y, max_x, max_y):\n \"\"\"Add a bounding box in projected (native) coordinates to the query.\n\n This adds a request for a spatial bounding box, bounded by (`min_x`, `max_x`) for\n x direction and (`min_y`, `max_y`) for the y direction. 
This modifies the query\n in-place, but returns ``self`` so that multiple queries can be chained together\n on one line.\n\n This replaces any existing spatial queries that have been set.\n\n Parameters\n ----------\n min_x : float\n The left edge of the bounding box\n min_y : float\n The bottom edge of the bounding box\n max_x : float\n The right edge of the bounding box\n max_y: float\n The top edge of the bounding box\n\n Returns\n -------\n self : NCSSQuery\n Returns self for chaining calls\n\n \"\"\"\n self._set_query(self.spatial_query, minx=min_x, miny=min_y,\n maxx=max_x, maxy=max_y)\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef strides(self, time=None, spatial=None):\n if time:\n self.add_query_parameter(timeStride=time)\n if spatial:\n self.add_query_parameter(horizStride=spatial)\n return self", "response": "Sets time and spatial strides for the NCSSQuery object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef register(self, mimetype):\n def dec(func):\n self._reg[mimetype] = func\n return func\n return dec", "response": "Register a function to handle a particular mimetype."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ntranslates typed values into python objects.", "response": "def handle_typed_values(val, type_name, value_type):\n \"\"\"Translate typed values into the appropriate python object.\n\n Takes an element name, value, and type and returns a list\n with the string value(s) properly converted to a python type.\n\n TypedValues are handled in ucar.ma2.DataType in netcdfJava\n in the DataType enum. 
Possibilities are:\n\n \"boolean\"\n \"byte\"\n \"char\"\n \"short\"\n \"int\"\n \"long\"\n \"float\"\n \"double\"\n \"Sequence\"\n \"String\"\n \"Structure\"\n \"enum1\"\n \"enum2\"\n \"enum4\"\n \"opaque\"\n \"object\"\n\n All of these are values written as strings in the xml, so simply\n applying int, float to the values will work in most cases (i.e.\n the TDS encodes them as string values properly).\n\n Example XML element:\n\n \n\n Parameters\n ----------\n val : string\n The string representation of the value attribute of the xml element\n\n type_name : string\n The string representation of the name attribute of the xml element\n\n value_type : string\n The string representation of the type attribute of the xml element\n\n Returns\n -------\n val : list\n A list containing the properly typed python values.\n\n \"\"\"\n if value_type in ['byte', 'short', 'int', 'long']:\n try:\n val = [int(v) for v in re.split('[ ,]', val) if v]\n except ValueError:\n log.warning('Cannot convert \"%s\" to int. Keeping type as str.', val)\n elif value_type in ['float', 'double']:\n try:\n val = [float(v) for v in re.split('[ ,]', val) if v]\n except ValueError:\n log.warning('Cannot convert \"%s\" to float. Keeping type as str.', val)\n elif value_type == 'boolean':\n try:\n # special case for boolean type\n val = val.split()\n # values must be either true or false\n for potential_bool in val:\n if potential_bool not in ['true', 'false']:\n raise ValueError\n val = [True if item == 'true' else False for item in val]\n except ValueError:\n msg = 'Cannot convert values %s to boolean.'\n msg += ' Keeping type as str.'\n log.warning(msg, val)\n elif value_type == 'String':\n # nothing special for String type\n pass\n else:\n # possibilities - Sequence, Structure, enum, opaque, object,\n # and char.\n # Not sure how to handle these as I do not have an example\n # of how they would show up in dataset.xml\n log.warning('%s type %s not understood. 
Keeping as String.',\n type_name, value_type)\n\n if not isinstance(val, list):\n val = [val]\n\n return val"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndownload and parse upper air observations from an online archive and return a pandas.DataFrame containing the data.", "response": "def _get_data(self, time, site_id):\n r\"\"\"Download and parse upper air observations from an online archive.\n\n Parameters\n ----------\n time : datetime\n The date and time of the desired observation.\n\n site_id : str\n The three letter ICAO identifier of the station for which data should be\n downloaded.\n\n Returns\n -------\n :class:`pandas.DataFrame` containing the data\n\n \"\"\"\n raw_data = self._get_data_raw(time, site_id)\n soup = BeautifulSoup(raw_data, 'html.parser')\n tabular_data = StringIO(soup.find_all('pre')[0].contents[0])\n col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']\n df = pd.read_fwf(tabular_data, skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)\n df['u_wind'], df['v_wind'] = get_wind_components(df['speed'],\n np.deg2rad(df['direction']))\n\n # Drop any rows with all NaN values for T, Td, winds\n df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed',\n 'u_wind', 'v_wind'), how='all').reset_index(drop=True)\n\n # Parse metadata\n meta_data = soup.find_all('pre')[1].contents[0]\n lines = meta_data.splitlines()\n\n # If the station doesn't have a name identified we need to insert a\n # record showing this for parsing to proceed.\n if 'Station number' in lines[1]:\n lines.insert(1, 'Station identifier: ')\n\n station = lines[1].split(':')[1].strip()\n station_number = int(lines[2].split(':')[1].strip())\n sounding_time = datetime.strptime(lines[3].split(':')[1].strip(), '%y%m%d/%H%M')\n latitude = float(lines[4].split(':')[1].strip())\n longitude = float(lines[5].split(':')[1].strip())\n elevation = float(lines[6].split(':')[1].strip())\n\n df['station'] 
= station\n df['station_number'] = station_number\n df['time'] = sounding_time\n df['latitude'] = latitude\n df['longitude'] = longitude\n df['elevation'] = elevation\n\n # Add unit dictionary\n df.units = {'pressure': 'hPa',\n 'height': 'meter',\n 'temperature': 'degC',\n 'dewpoint': 'degC',\n 'direction': 'degrees',\n 'speed': 'knot',\n 'u_wind': 'knot',\n 'v_wind': 'knot',\n 'station': None,\n 'station_number': None,\n 'time': None,\n 'latitude': 'degrees',\n 'longitude': 'degrees',\n 'elevation': 'meter'}\n return df"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ndownload data from the University of Wyoming's upper air archive.", "response": "def _get_data_raw(self, time, site_id):\n \"\"\"Download data from the University of Wyoming's upper air archive.\n\n Parameters\n ----------\n time : datetime\n Date and time for which data should be downloaded\n site_id : str\n Site id for which data should be downloaded\n\n Returns\n -------\n text of the server response\n\n \"\"\"\n path = ('?region=naconf&TYPE=TEXT%3ALIST'\n '&YEAR={time:%Y}&MONTH={time:%m}&FROM={time:%d%H}&TO={time:%d%H}'\n '&STNM={stid}').format(time=time, stid=site_id)\n\n resp = self.get_path(path)\n # See if the return is valid, but has no data\n if resp.text.find('Can\\'t') != -1:\n raise ValueError(\n 'No data available for {time:%Y-%m-%d %HZ} '\n 'for station {stid}.'.format(time=time, stid=site_id))\n\n return resp.text"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nhandle reading an NcStream v1 data block from a file-like object.", "response": "def read_ncstream_data(fobj):\n \"\"\"Handle reading an NcStream v1 data block from a file-like object.\"\"\"\n data = read_proto_object(fobj, stream.Data)\n if data.dataType in (stream.STRING, stream.OPAQUE) or data.vdata:\n log.debug('Reading string/opaque/vlen')\n num_obj = read_var_int(fobj)\n log.debug('Num objects: %d', num_obj)\n blocks = [read_block(fobj) for _ in 
range(num_obj)]\n if data.dataType == stream.STRING:\n blocks = [b.decode('utf-8', errors='ignore') for b in blocks]\n\n # Again endian isn't coded properly\n dt = data_type_to_numpy(data.dataType).newbyteorder('>')\n if data.vdata:\n return np.array([np.frombuffer(b, dtype=dt) for b in blocks])\n else:\n return np.array(blocks, dtype=dt)\n elif data.dataType in _dtypeLookup:\n log.debug('Reading array data')\n bin_data = read_block(fobj)\n log.debug('Binary data: %s', bin_data)\n\n # Hard code to big endian for now since it's not encoded correctly\n dt = data_type_to_numpy(data.dataType).newbyteorder('>')\n\n # Handle decompressing the bytes\n if data.compress == stream.DEFLATE:\n bin_data = zlib.decompress(bin_data)\n assert len(bin_data) == data.uncompressedSize\n elif data.compress != stream.NONE:\n raise NotImplementedError('Compression type {0} not implemented!'.format(\n data.compress))\n\n # Turn bytes into an array\n return reshape_array(data, np.frombuffer(bin_data, dtype=dt))\n elif data.dataType == stream.STRUCTURE:\n sd = read_proto_object(fobj, stream.StructureData)\n\n # Make a datatype appropriate to the rows of struct\n endian = '>' if data.bigend else '<'\n dt = np.dtype([(endian, np.void, sd.rowLength)])\n\n # Turn bytes into an array\n return reshape_array(data, np.frombuffer(sd.data, dtype=dt))\n elif data.dataType == stream.SEQUENCE:\n log.debug('Reading sequence')\n blocks = []\n magic = read_magic(fobj)\n while magic != MAGIC_VEND:\n if magic == MAGIC_VDATA:\n log.error('Bad magic for struct/seq data!')\n blocks.append(read_proto_object(fobj, stream.StructureData))\n magic = read_magic(fobj)\n return data, blocks\n else:\n raise NotImplementedError(\"Don't know how to handle data type: {0}\".format(\n data.dataType))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nhandles reading an NcStream error from a file-like object and raises it as an error.", "response": "def read_ncstream_err(fobj):\n 
\"\"\"Handle reading an NcStream error from a file-like object and raise as error.\"\"\"\n err = read_proto_object(fobj, stream.Error)\n raise RuntimeError(err.message)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef read_messages(fobj, magic_table):\n messages = []\n\n while True:\n magic = read_magic(fobj)\n if not magic:\n break\n\n func = magic_table.get(magic)\n if func is not None:\n messages.append(func(fobj))\n else:\n log.error('Unknown magic: ' + str(' '.join('{0:02x}'.format(b)\n for b in bytearray(magic))))\n\n return messages", "response": "Read messages from a file-like object until the stream is exhausted."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreads a block of data and parses it using the given protobuf object.", "response": "def read_proto_object(fobj, klass):\n \"\"\"Read a block of data and parse using the given protobuf object.\"\"\"\n log.debug('%s chunk', klass.__name__)\n obj = klass()\n obj.ParseFromString(read_block(fobj))\n log.debug('Header: %s', str(obj))\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nread a block from a file-like object.", "response": "def read_block(fobj):\n \"\"\"Read a block.\n\n Reads a block from a file object by first reading the number of bytes to read, which must\n be encoded as a variable-byte length integer.\n\n Parameters\n ----------\n fobj : file-like object\n The file to read from.\n\n Returns\n -------\n bytes\n block of bytes read\n\n \"\"\"\n num = read_var_int(fobj)\n log.debug('Next block: %d bytes', num)\n return fobj.read(num)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef process_vlen(data_header, array):\n source = iter(array)\n return np.array([np.fromiter(itertools.islice(source, size), dtype=array.dtype)\n for size in data_header.vlens])", "response": "This function processes vlen data 
coming back from NCStream v2."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nconverts a DataCol from NCStream v2 into an array with appropriate type.", "response": "def datacol_to_array(datacol):\n \"\"\"Convert DataCol from NCStream v2 into an array with appropriate type.\n\n Depending on the data type specified, this extracts data from the appropriate members\n and packs into a :class:`numpy.ndarray`, recursing as necessary for compound data types.\n\n Parameters\n ----------\n datacol : DataCol\n\n Returns\n -------\n ndarray\n array containing extracted data\n\n \"\"\"\n if datacol.dataType == stream.STRING:\n arr = np.array(datacol.stringdata, dtype=np.object)\n elif datacol.dataType == stream.OPAQUE:\n arr = np.array(datacol.opaquedata, dtype=np.object)\n elif datacol.dataType == stream.STRUCTURE:\n members = OrderedDict((mem.name, datacol_to_array(mem))\n for mem in datacol.structdata.memberData)\n log.debug('Struct members:\\n%s', str(members))\n\n # str() around name necessary because protobuf gives unicode names, but dtype doesn't\n # support them on Python 2\n dt = np.dtype([(str(name), arr.dtype) for name, arr in members.items()])\n log.debug('Struct dtype: %s', str(dt))\n\n arr = np.empty((datacol.nelems,), dtype=dt)\n for name, arr_data in members.items():\n arr[name] = arr_data\n else:\n # Make an appropriate datatype\n endian = '>' if datacol.bigend else '<'\n dt = data_type_to_numpy(datacol.dataType).newbyteorder(endian)\n\n # Turn bytes into an array\n arr = np.frombuffer(datacol.primdata, dtype=dt)\n if arr.size != datacol.nelems:\n log.warning('Array size %d does not agree with nelems %d',\n arr.size, datacol.nelems)\n if datacol.isVlen:\n arr = process_vlen(datacol, arr)\n if arr.dtype == np.object_:\n arr = reshape_array(datacol, arr)\n else:\n # In this case, the array collapsed, need different resize that\n # correctly sizes from elements\n shape = tuple(r.size for r in datacol.section.range) + 
(datacol.vlens[0],)\n arr = arr.reshape(*shape)\n else:\n arr = reshape_array(datacol, arr)\n return arr"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nextracts the appropriate array shape from the header.", "response": "def reshape_array(data_header, array):\n \"\"\"Extract the appropriate array shape from the header.\n\n Can handle taking a data header and either bytes containing data or a StructureData\n instance, which will have binary data as well as some additional information.\n\n Parameters\n ----------\n array : :class:`numpy.ndarray`\n data_header : Data\n\n \"\"\"\n shape = tuple(r.size for r in data_header.section.range)\n if shape:\n return array.reshape(*shape)\n else:\n return array"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef data_type_to_numpy(datatype, unsigned=False):\n basic_type = _dtypeLookup[datatype]\n\n if datatype in (stream.STRING, stream.OPAQUE):\n return np.dtype(basic_type)\n\n if unsigned:\n basic_type = basic_type.replace('i', 'u')\n return np.dtype('=' + basic_type)", "response": "Convert an ncstream datatype to a numpy one."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef struct_to_dtype(struct):\n # str() around name necessary because protobuf gives unicode names, but dtype doesn't\n # support them on Python 2\n fields = [(str(var.name), data_type_to_numpy(var.dataType, var.unsigned))\n for var in struct.vars]\n for s in struct.structs:\n fields.append((str(s.name), struct_to_dtype(s)))\n\n log.debug('Structure fields: %s', fields)\n dt = np.dtype(fields)\n return dt", "response": "Convert a Structure specification to a numpy structured dtype."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef unpack_variable(var):\n # If we actually get a structure instance, handle turning that into a variable\n if 
var.dataType == stream.STRUCTURE:\n return None, struct_to_dtype(var), 'Structure'\n elif var.dataType == stream.SEQUENCE:\n log.warning('Sequence support not implemented!')\n\n dt = data_type_to_numpy(var.dataType, var.unsigned)\n if var.dataType == stream.OPAQUE:\n type_name = 'opaque'\n elif var.dataType == stream.STRING:\n type_name = 'string'\n else:\n type_name = dt.name\n\n if var.data:\n log.debug('Storing variable data: %s %s', dt, var.data)\n if var.dataType == stream.STRING:\n data = var.data\n else:\n # Always sent big endian\n data = np.frombuffer(var.data, dtype=dt.newbyteorder('>'))\n else:\n data = None\n\n return data, dt, type_name", "response": "Unpack an NCStream Variable into a numpy array and a type name."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef unpack_attribute(att):\n if att.unsigned:\n log.warning('Unsupported unsigned attribute!')\n\n # TDS 5.0 now has a dataType attribute that takes precedence\n if att.len == 0: # Empty\n val = None\n elif att.dataType == stream.STRING: # Then look for new datatype string\n val = att.sdata\n elif att.dataType: # Then a non-zero new data type\n val = np.frombuffer(att.data,\n dtype='>' + _dtypeLookup[att.dataType], count=att.len)\n elif att.type: # Then non-zero old-data type0\n val = np.frombuffer(att.data,\n dtype=_attrConverters[att.type], count=att.len)\n elif att.sdata: # This leaves both 0, try old string\n val = att.sdata\n else: # Assume new datatype is Char (0)\n val = np.array(att.data, dtype=_dtypeLookup[att.dataType])\n\n if att.len == 1:\n val = val[0]\n\n return att.name, val", "response": "Unpack an embedded attribute into a python or numpy object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef read_var_int(file_obj):\n # Read all bytes from here, stopping with the first one that does not have\n # the MSB set. 
Save the lower 7 bits, and keep stacking to the *left*.\n val = 0\n shift = 0\n while True:\n # Read next byte\n next_val = ord(file_obj.read(1))\n val |= ((next_val & 0x7F) << shift)\n shift += 7\n if not next_val & 0x80:\n break\n\n return val", "response": "Reads a variable-length integer from the specified file-like object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef fetch_data(self, **var):\n varstr = ','.join(name + self._convert_indices(ind)\n for name, ind in var.items())\n query = self.query().add_query_parameter(req='data', var=varstr)\n return self._fetch(query)", "response": "Retrieve data from CDMRemote for one or more variables."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef query(self):\n q = super(CDMRemote, self).query()\n\n # Turn on compression if it's been set on the object\n if self.deflate:\n q.add_query_parameter(deflate=self.deflate)\n\n return q", "response": "Generate a new query for the object containing the object's ID."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef check_token(func):\n @wraps(func)\n def wrapper(*args, **kwargs):\n response = func(*args, **kwargs)\n if response.status_code == 401:\n raise InvalidToken('Access token invalid or no longer valid')\n else:\n return response\n return wrapper", "response": "Check if access token is valid."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nuploads a file to the remote server.", "response": "def upload(self, remote_path, file_content, ondup=None, **kwargs):\n \"\"\"\u4e0a\u4f20\u5355\u4e2a\u6587\u4ef6\uff08<2G\uff09.\n\n | \u767e\u5ea6PCS\u670d\u52a1\u76ee\u524d\u652f\u6301\u6700\u59272G\u7684\u5355\u4e2a\u6587\u4ef6\u4e0a\u4f20\u3002\n | 
\u5982\u9700\u652f\u6301\u8d85\u5927\u6587\u4ef6\uff08>2G\uff09\u7684\u65ad\u70b9\u7eed\u4f20\uff0c\u8bf7\u53c2\u8003\u4e0b\u9762\u7684\u201c\u5206\u7247\u6587\u4ef6\u4e0a\u4f20\u201d\u65b9\u6cd5\u3002\n\n :param remote_path: \u7f51\u76d8\u4e2d\u6587\u4ef6\u7684\u4fdd\u5b58\u8def\u5f84\uff08\u5305\u542b\u6587\u4ef6\u540d\uff09\u3002\n \u5fc5\u987b\u4ee5 /apps/ \u5f00\u5934\u3002\n\n .. warning::\n * \u8def\u5f84\u957f\u5ea6\u9650\u5236\u4e3a1000\uff1b\n * \u5f84\u4e2d\u4e0d\u80fd\u5305\u542b\u4ee5\u4e0b\u5b57\u7b26\uff1a``\\\\\\\\ ? | \" > < : *``\uff1b\n * \u6587\u4ef6\u540d\u6216\u8def\u5f84\u540d\u5f00\u5934\u7ed3\u5c3e\u4e0d\u80fd\u662f ``.``\n \u6216\u7a7a\u767d\u5b57\u7b26\uff0c\u7a7a\u767d\u5b57\u7b26\u5305\u62ec\uff1a\n ``\\\\r, \\\\n, \\\\t, \u7a7a\u683c, \\\\0, \\\\x0B`` \u3002\n :param file_content: \u4e0a\u4f20\u6587\u4ef6\u7684\u5185\u5bb9/\u6587\u4ef6\u5bf9\u8c61 \u3002\n (e.g. ``open('foobar', 'rb')`` )\n :param ondup: \uff08\u53ef\u9009\uff09\n\n * 'overwrite'\uff1a\u8868\u793a\u8986\u76d6\u540c\u540d\u6587\u4ef6\uff1b\n * 'newcopy'\uff1a\u8868\u793a\u751f\u6210\u6587\u4ef6\u526f\u672c\u5e76\u8fdb\u884c\u91cd\u547d\u540d\uff0c\u547d\u540d\u89c4\u5219\u4e3a\u201c\n \u6587\u4ef6\u540d_\u65e5\u671f.\u540e\u7f00\u201d\u3002\n :return: Response \u5bf9\u8c61\n \"\"\"\n\n params = {\n 'path': remote_path,\n 'ondup': ondup\n }\n files = {'file': ('file', file_content, '')}\n url = 'https://c.pcs.baidu.com/rest/2.0/pcs/file'\n return self._request('file', 'upload', url=url, extra_params=params,\n files=files, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef upload_tmpfile(self, file_content, **kwargs):\n\n params = {\n 'type': 'tmpfile'\n }\n files = {'file': ('file', file_content, '')}\n url = 'https://c.pcs.baidu.com/rest/2.0/pcs/file'\n return self._request('file', 'upload', url=url, extra_params=params,\n files=files, **kwargs)", "response": "Upload a file to a temporary file."} 
{"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef upload_superfile(self, remote_path, block_list, ondup=None, **kwargs):\n\n params = {\n 'path': remote_path,\n 'ondup': ondup\n }\n data = {\n 'param': json.dumps({'block_list': block_list}),\n }\n return self._request('file', 'createsuperfile', extra_params=params,\n data=data, **kwargs)", "response": "Upload a superfile to the remote path."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef mkdir(self, remote_path, **kwargs):\n\n data = {\n 'path': remote_path\n }\n return self._request('file', 'mkdir', data=data, **kwargs)", "response": "Create a new directory."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget file meta data", "response": "def meta(self, remote_path, **kwargs):\n \"\"\"\u83b7\u53d6\u5355\u4e2a\u6587\u4ef6\u6216\u76ee\u5f55\u7684\u5143\u4fe1\u606f.\n\n :param remote_path: \u7f51\u76d8\u4e2d\u6587\u4ef6/\u76ee\u5f55\u7684\u8def\u5f84\uff0c\u5fc5\u987b\u4ee5 /apps/ \u5f00\u5934\u3002\n\n .. warning::\n * \u8def\u5f84\u957f\u5ea6\u9650\u5236\u4e3a1000\uff1b\n * \u5f84\u4e2d\u4e0d\u80fd\u5305\u542b\u4ee5\u4e0b\u5b57\u7b26\uff1a``\\\\\\\\ ? 
| \" > < : *``\uff1b\n * \u6587\u4ef6\u540d\u6216\u8def\u5f84\u540d\u5f00\u5934\u7ed3\u5c3e\u4e0d\u80fd\u662f ``.``\n \u6216\u7a7a\u767d\u5b57\u7b26\uff0c\u7a7a\u767d\u5b57\u7b26\u5305\u62ec\uff1a\n ``\\\\r, \\\\n, \\\\t, \u7a7a\u683c, \\\\0, \\\\x0B`` \u3002\n :return: Response \u5bf9\u8c61\n \"\"\"\n\n params = {\n 'path': remote_path\n }\n return self._request('file', 'meta', extra_params=params, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef move(self, from_path, to_path, **kwargs):\n\n data = {\n 'from': from_path,\n 'to': to_path,\n }\n return self._request('file', 'move', data=data, **kwargs)", "response": "move file from one location to another"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef copy(self, from_path, to_path, **kwargs):\n\n data = {\n 'from': from_path,\n 'to': to_path,\n }\n return self._request('file', 'copy', data=data, **kwargs)", "response": "Copy a file from one location to another."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef delete(self, remote_path, **kwargs):\n\n data = {\n 'path': remote_path\n }\n return self._request('file', 'delete', data=data, **kwargs)", "response": "Delete a file from the server."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef multi_delete(self, path_list, **kwargs):\n\n data = {\n 'param': json.dumps({\n 'list': [{'path': path} for path in path_list]\n }),\n }\n return self._request('file', 'delete', data=data, **kwargs)", "response": "Delete a list of files."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef thumbnail(self, remote_path, height, width, quality=100, **kwargs):\n\n params = {\n 'path': remote_path,\n 'height': height,\n 'width': width,\n 'quality': quality,\n }\n return self._request('thumbnail', 
'generate', extra_params=params,\n **kwargs)", "response": "Get a thumbnail of a file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef video_convert(self, remote_path, video_type, **kwargs):\n\n params = {\n 'path': remote_path,\n 'type': video_type,\n }\n return self._request('file', 'streaming', extra_params=params,\n **kwargs)", "response": "Convert a file to a video type."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef download_stream(self, remote_path, **kwargs):\n\n params = {\n 'path': remote_path,\n }\n url = 'https://d.pcs.baidu.com/rest/2.0/pcs/file'\n return self._request('stream', 'download', url=url,\n extra_params=params, **kwargs)", "response": "Download a file from the Baidu server."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef rapid_upload(self, remote_path, content_length, content_md5,\n content_crc32, slice_md5, ondup=None, **kwargs):\n \"\"\"\u79d2\u4f20\u4e00\u4e2a\u6587\u4ef6.\n\n .. warning::\n * \u88ab\u79d2\u4f20\u6587\u4ef6\u5fc5\u987b\u5927\u4e8e256KB\uff08\u5373 256*1024 B\uff09\u3002\n * \u6821\u9a8c\u6bb5\u4e3a\u6587\u4ef6\u7684\u524d256KB\uff0c\u79d2\u4f20\u63a5\u53e3\u9700\u8981\u63d0\u4f9b\u6821\u9a8c\u6bb5\u7684MD5\u3002\n (\u975e\u5f3a\u4e00\u81f4\u63a5\u53e3\uff0c\u4e0a\u4f20\u540e\u8bf7\u7b49\u5f851\u79d2\u540e\u518d\u8bfb\u53d6)\n\n :param remote_path: \u4e0a\u4f20\u6587\u4ef6\u7684\u5168\u8def\u5f84\u540d\u3002\n\n .. warning::\n * \u8def\u5f84\u957f\u5ea6\u9650\u5236\u4e3a1000\uff1b\n * \u5f84\u4e2d\u4e0d\u80fd\u5305\u542b\u4ee5\u4e0b\u5b57\u7b26\uff1a``\\\\\\\\ ? 
| \" > < : *``\uff1b\n * \u6587\u4ef6\u540d\u6216\u8def\u5f84\u540d\u5f00\u5934\u7ed3\u5c3e\u4e0d\u80fd\u662f ``.``\n \u6216\u7a7a\u767d\u5b57\u7b26\uff0c\u7a7a\u767d\u5b57\u7b26\u5305\u62ec\uff1a\n ``\\\\r, \\\\n, \\\\t, \u7a7a\u683c, \\\\0, \\\\x0B`` \u3002\n :param content_length: \u5f85\u79d2\u4f20\u6587\u4ef6\u7684\u957f\u5ea6\u3002\n :param content_md5: \u5f85\u79d2\u4f20\u6587\u4ef6\u7684MD5\u3002\n :param content_crc32: \u5f85\u79d2\u4f20\u6587\u4ef6\u7684CRC32\u3002\n :param slice_md5: \u5f85\u79d2\u4f20\u6587\u4ef6\u6821\u9a8c\u6bb5\u7684MD5\u3002\n :param ondup: \uff08\u53ef\u9009\uff09\n\n * 'overwrite'\uff1a\u8868\u793a\u8986\u76d6\u540c\u540d\u6587\u4ef6\uff1b\n * 'newcopy'\uff1a\u8868\u793a\u751f\u6210\u6587\u4ef6\u526f\u672c\u5e76\u8fdb\u884c\u91cd\u547d\u540d\uff0c\u547d\u540d\u89c4\u5219\u4e3a\u201c\n \u6587\u4ef6\u540d_\u65e5\u671f.\u540e\u7f00\u201d\u3002\n :return: Response \u5bf9\u8c61\n \"\"\"\n data = {\n 'path': remote_path,\n 'content-length': content_length,\n 'content-md5': content_md5,\n 'content-crc32': content_crc32,\n 'slice-md5': slice_md5,\n 'ondup': ondup,\n }\n return self._request('file', 'rapidupload', data=data, **kwargs)", "response": "Upload a file to a rapid server."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef add_download_task(self, source_url, remote_path,\n rate_limit=None, timeout=60 * 60,\n expires=None, callback='', **kwargs):\n \"\"\"\u6dfb\u52a0\u79bb\u7ebf\u4e0b\u8f7d\u4efb\u52a1\uff0c\u5b9e\u73b0\u5355\u4e2a\u6587\u4ef6\u79bb\u7ebf\u4e0b\u8f7d.\n\n :param source_url: \u6e90\u6587\u4ef6\u7684URL\u3002\n :param remote_path: \u4e0b\u8f7d\u540e\u7684\u6587\u4ef6\u4fdd\u5b58\u8def\u5f84\u3002\n\n .. warning::\n * \u8def\u5f84\u957f\u5ea6\u9650\u5236\u4e3a1000\uff1b\n * \u5f84\u4e2d\u4e0d\u80fd\u5305\u542b\u4ee5\u4e0b\u5b57\u7b26\uff1a``\\\\\\\\ ? 
| \" > < : *``\uff1b\n * \u6587\u4ef6\u540d\u6216\u8def\u5f84\u540d\u5f00\u5934\u7ed3\u5c3e\u4e0d\u80fd\u662f ``.``\n \u6216\u7a7a\u767d\u5b57\u7b26\uff0c\u7a7a\u767d\u5b57\u7b26\u5305\u62ec\uff1a\n ``\\\\r, \\\\n, \\\\t, \u7a7a\u683c, \\\\0, \\\\x0B`` \u3002\n :param rate_limit: \u4e0b\u8f7d\u9650\u901f\uff0c\u9ed8\u8ba4\u4e0d\u9650\u901f\u3002\n :type rate_limit: int or long\n :param timeout: \u4e0b\u8f7d\u8d85\u65f6\u65f6\u95f4\uff0c\u9ed8\u8ba43600\u79d2\u3002\n :param expires: \u8bf7\u6c42\u5931\u6548\u65f6\u95f4\uff0c\u5982\u679c\u6709\uff0c\u5219\u4f1a\u6821\u9a8c\u3002\n :type expires: int\n :param callback: \u4e0b\u8f7d\u5b8c\u6bd5\u540e\u7684\u56de\u8c03\uff0c\u9ed8\u8ba4\u4e3a\u7a7a\u3002\n :type callback: str\n :return: Response \u5bf9\u8c61\n \"\"\"\n\n data = {\n 'source_url': source_url,\n 'save_path': remote_path,\n 'expires': expires,\n 'rate_limit': rate_limit,\n 'timeout': timeout,\n 'callback': callback,\n }\n return self._request('services/cloud_dl', 'add_task',\n data=data, **kwargs)", "response": "Add a download task to the cache."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nqueries Cloud DL Task", "response": "def query_download_tasks(self, task_ids, operate_type=1,\n expires=None, **kwargs):\n \"\"\"\u6839\u636e\u4efb\u52a1ID\u53f7\uff0c\u67e5\u8be2\u79bb\u7ebf\u4e0b\u8f7d\u4efb\u52a1\u4fe1\u606f\u53ca\u8fdb\u5ea6\u4fe1\u606f\u3002\n\n :param task_ids: \u8981\u67e5\u8be2\u7684\u4efb\u52a1ID\u5217\u8868\n :type task_ids: list or tuple\n :param operate_type:\n * 0\uff1a\u67e5\u4efb\u52a1\u4fe1\u606f\n * 1\uff1a\u67e5\u8fdb\u5ea6\u4fe1\u606f\uff0c\u9ed8\u8ba4\u4e3a1\n :param expires: \u8bf7\u6c42\u5931\u6548\u65f6\u95f4\uff0c\u5982\u679c\u6709\uff0c\u5219\u4f1a\u6821\u9a8c\u3002\n :type expires: int\n :return: Response \u5bf9\u8c61\n \"\"\"\n\n params = {\n 'task_ids': ','.join(map(str, task_ids)),\n 'op_type': operate_type,\n 'expires': expires,\n }\n return self._request('services/cloud_dl', 
'query_task',\n extra_params=params, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nlist all the tasks in the specified base URL.", "response": "def list_download_tasks(self, need_task_info=1, start=0, limit=10, asc=0,\n create_time=None, status=None, source_url=None,\n remote_path=None, expires=None, **kwargs):\n \"\"\"\u67e5\u8be2\u79bb\u7ebf\u4e0b\u8f7d\u4efb\u52a1ID\u5217\u8868\u53ca\u4efb\u52a1\u4fe1\u606f.\n\n :param need_task_info: \u662f\u5426\u9700\u8981\u8fd4\u56de\u4efb\u52a1\u4fe1\u606f:\n\n * 0\uff1a\u4e0d\u9700\u8981\n * 1\uff1a\u9700\u8981\uff0c\u9ed8\u8ba4\u4e3a1\n :param start: \u67e5\u8be2\u4efb\u52a1\u8d77\u59cb\u4f4d\u7f6e\uff0c\u9ed8\u8ba4\u4e3a0\u3002\n :param limit: \u8bbe\u5b9a\u8fd4\u56de\u4efb\u52a1\u6570\u91cf\uff0c\u9ed8\u8ba4\u4e3a10\u3002\n :param asc:\n\n * 0\uff1a\u964d\u5e8f\uff0c\u9ed8\u8ba4\u503c\n * 1\uff1a\u5347\u5e8f\n :param create_time: \u4efb\u52a1\u521b\u5efa\u65f6\u95f4\uff0c\u9ed8\u8ba4\u4e3a\u7a7a\u3002\n :type create_time: int\n :param status: \u4efb\u52a1\u72b6\u6001\uff0c\u9ed8\u8ba4\u4e3a\u7a7a\u3002\n 0:\u4e0b\u8f7d\u6210\u529f\uff0c1:\u4e0b\u8f7d\u8fdb\u884c\u4e2d 2:\u7cfb\u7edf\u9519\u8bef\uff0c3:\u8d44\u6e90\u4e0d\u5b58\u5728\uff0c\n 4:\u4e0b\u8f7d\u8d85\u65f6\uff0c5:\u8d44\u6e90\u5b58\u5728\u4f46\u4e0b\u8f7d\u5931\u8d25, 6:\u5b58\u50a8\u7a7a\u95f4\u4e0d\u8db3,\n 7:\u76ee\u6807\u5730\u5740\u6570\u636e\u5df2\u5b58\u5728, 8:\u4efb\u52a1\u53d6\u6d88.\n :type status: int\n :param source_url: \u6e90\u5730\u5740URL\uff0c\u9ed8\u8ba4\u4e3a\u7a7a\u3002\n :param remote_path: \u6587\u4ef6\u4fdd\u5b58\u8def\u5f84\uff0c\u9ed8\u8ba4\u4e3a\u7a7a\u3002\n\n .. warning::\n * \u8def\u5f84\u957f\u5ea6\u9650\u5236\u4e3a1000\uff1b\n * \u5f84\u4e2d\u4e0d\u80fd\u5305\u542b\u4ee5\u4e0b\u5b57\u7b26\uff1a``\\\\\\\\ ? 
| \" > < : *``\uff1b\n * \u6587\u4ef6\u540d\u6216\u8def\u5f84\u540d\u5f00\u5934\u7ed3\u5c3e\u4e0d\u80fd\u662f ``.``\n \u6216\u7a7a\u767d\u5b57\u7b26\uff0c\u7a7a\u767d\u5b57\u7b26\u5305\u62ec\uff1a\n ``\\\\r, \\\\n, \\\\t, \u7a7a\u683c, \\\\0, \\\\x0B`` \u3002\n :param expires: \u8bf7\u6c42\u5931\u6548\u65f6\u95f4\uff0c\u5982\u679c\u6709\uff0c\u5219\u4f1a\u6821\u9a8c\u3002\n :type expires: int\n :return: Response \u5bf9\u8c61\n \"\"\"\n\n data = {\n 'expires': expires,\n 'start': start,\n 'limit': limit,\n 'asc': asc,\n 'source_url': source_url,\n 'save_path': remote_path,\n 'create_time': create_time,\n 'status': status,\n 'need_task_info': need_task_info,\n }\n return self._request('services/cloud_dl', 'list_task',\n data=data, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cancel_download_task(self, task_id, expires=None, **kwargs):\n\n data = {\n 'expires': expires,\n 'task_id': task_id,\n }\n return self._request('services/cloud_dl', 'cancle_task',\n data=data, **kwargs)", "response": "\u53d6\u6d88\u79bb\u7ebf\u4e0b\u8f7d\u4efb\u52a1.\n\n :param task_id: \u8981\u53d6\u6d88\u7684\u4efb\u52a1ID\u53f7\u3002\n :type task_id: str\n :param expires: \u8bf7\u6c42\u5931\u6548\u65f6\u95f4\uff0c\u5982\u679c\u6709\uff0c\u5219\u4f1a\u6821\u9a8c\u3002\n :type expires: int\n :return: Response \u5bf9\u8c61"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef list_recycle_bin(self, start=0, limit=1000, **kwargs):\n\n params = {\n 'start': start,\n 'limit': limit,\n }\n return self._request('file', 'listrecycle',\n extra_params=params, **kwargs)", "response": "List all the files in the current user s archive."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef restore_recycle_bin(self, fs_id, **kwargs):\n\n data = {\n 'fs_id': fs_id,\n }\n return self._request('file', 'restore', data=data, **kwargs)", "response": 
"Restore a single file or directory (not a strongly consistent interface; sleep 1 second after the call before reading).\n\n :param fs_id: the temporary unique identifier ID on PCS of the restored file or directory.\n :type fs_id: str\n :return: Response object"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrestore multiple files or directories from the recycle bin", "response": "def multi_restore_recycle_bin(self, fs_ids, **kwargs):\n \"\"\"Batch-restore files or directories (not a strongly consistent interface; sleep 1 second after the call).\n\n :param fs_ids: a list of the temporary unique identifier IDs on PCS of the restored files or directories.\n :type fs_ids: list or tuple\n :return: Response object\n \"\"\"\n\n data = {\n 'param': json.dumps({\n 'list': [{'fs_id': fs_id} for fs_id in fs_ids]\n }),\n }\n return self._request('file', 'restore', data=data, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets a new Access Token.", "response": "def get_new_access_token(refresh_token, client_id, client_secret,\n scope=None, **kwargs):\n \"\"\"Refresh with a Refresh Token to obtain a new Access Token.\n\n :param refresh_token: the Refresh Token used to refresh the Access Token;\n :param client_id: the application's API Key;\n :param client_secret: the application's Secret Key;\n :param scope: a space-separated list of permissions. If this parameter is not\n passed, the requested data access permissions are the same as when the\n Access Token was last obtained. When refreshing an Access Token with a\n Refresh Token, the requested scope must be a subset of the scope granted\n when the Access Token was last obtained. For details about permissions,\n see the \u201c `Permission List`__ \u201d.\n :return: Response object\n\n For the meaning of the contents of the ``response.json()`` dictionary,\n see the `related Baidu help documentation`__ .\n\n __ http://developer.baidu.com/wiki/index.php?title=docs/oauth/baiduoauth/list\n __ http://developer.baidu.com/wiki/index.php?title=docs/oauth/refresh\n \"\"\"\n data = {\n 'grant_type': 'refresh_token',\n 'refresh_token': refresh_token,\n 'client_id': client_id,\n 'client_secret': client_secret,\n }\n if scope:\n data['scope'] = scope\n url = 'https://openapi.baidu.com/oauth/2.0/token'\n return requests.post(url, data=data)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nlogs in to the Verisure app API; login before calling any read or write commands", "response": "def login(self):\n \"\"\" Login to verisure app api\n\n Login before calling any read or write commands\n\n \"\"\"\n if os.path.exists(self._cookieFileName):\n with open(self._cookieFileName, 'r') as cookieFile:\n self._vid = cookieFile.read().strip()\n\n try:\n self._get_installations()\n except ResponseError:\n self._vid = None\n os.remove(self._cookieFileName)\n\n if self._vid is None:\n self._create_cookie()\n with open(self._cookieFileName, 'w') as cookieFile:\n cookieFile.write(self._vid)\n self._get_installations()\n\n self._giid = self.installations[0]['giid']"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting information about installations", "response": "def _get_installations(self):\n \"\"\" Get information about installations \"\"\"\n response = None\n for base_url in urls.BASE_URLS:\n urls.BASE_URL = 
base_url\n try:\n response = requests.get(\n urls.get_installations(self._username),\n headers={\n 'Cookie': 'vid={}'.format(self._vid),\n 'Accept': 'application/json,'\n 'text/javascript, */*; q=0.01',\n })\n if 2 == response.status_code // 100:\n break\n elif 503 == response.status_code:\n continue\n else:\n raise ResponseError(response.status_code, response.text)\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n\n _validate_response(response)\n self.installations = json.loads(response.text)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget overview for installation", "response": "def get_overview(self):\n \"\"\" Get overview for installation \"\"\"\n response = None\n try:\n response = requests.get(\n urls.overview(self._giid),\n headers={\n 'Accept': 'application/json, text/javascript, */*; q=0.01',\n 'Accept-Encoding': 'gzip, deflate',\n 'Content-Type': 'application/json',\n 'Cookie': 'vid={}'.format(self._vid)})\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)\n return json.loads(response.text)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nturns on or off smartplug on the specified smartplug device label.", "response": "def set_smartplug_state(self, device_label, state):\n \"\"\" Turn on or off smartplug\n\n Args:\n device_label (str): Smartplug device label\n state (boolean): new status, 'True' or 'False'\n \"\"\"\n response = None\n try:\n response = requests.post(\n urls.smartplug(self._giid),\n headers={\n 'Content-Type': 'application/json',\n 'Cookie': 'vid={}'.format(self._vid)},\n data=json.dumps([{\n \"deviceLabel\": device_label,\n \"state\": state}]))\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets recent events from 
the object.", "response": "def get_history(self, filters=(), pagesize=15, offset=0):\n \"\"\" Get recent events\n\n Args:\n filters (string set): 'ARM', 'DISARM', 'FIRE', 'INTRUSION',\n 'TECHNICAL', 'SOS', 'WARNING', 'LOCK',\n 'UNLOCK'\n pagesize (int): Number of events to display\n offset (int): Skip pagesize * offset first events\n \"\"\"\n response = None\n try:\n response = requests.get(\n urls.history(self._giid),\n headers={\n 'Accept': 'application/json, text/javascript, */*; q=0.01',\n 'Cookie': 'vid={}'.format(self._vid)},\n params={\n \"offset\": int(offset),\n \"pagesize\": int(pagesize),\n \"notificationCategories\": filters})\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)\n return json.loads(response.text)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_climate(self, device_label):\n response = None\n try:\n response = requests.get(\n urls.climate(self._giid),\n headers={\n 'Accept': 'application/json, text/javascript, */*; q=0.01',\n 'Cookie': 'vid={}'.format(self._vid)},\n params={\n \"deviceLabel\": device_label})\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)\n return json.loads(response.text)", "response": "Get climate history from the device."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_lock_state(self, code, device_label, state):\n response = None\n try:\n response = requests.put(\n urls.set_lockstate(self._giid, device_label, state),\n headers={\n 'Accept': 'application/json, text/javascript, */*; q=0.01',\n 'Content-Type': 'application/json',\n 'Cookie': 'vid={}'.format(self._vid)},\n data=json.dumps({\"code\": str(code)}))\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)\n return json.loads(response.text)", "response": "Set lock state of a lock in a 
specific device."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets lock state transaction status", "response": "def get_lock_state_transaction(self, transaction_id):\n \"\"\" Get lock state transaction status\n\n Args:\n transaction_id: Transaction ID received from set_lock_state\n \"\"\"\n response = None\n try:\n response = requests.get(\n urls.get_lockstate_transaction(self._giid, transaction_id),\n headers={\n 'Accept': 'application/json, text/javascript, */*; q=0.01',\n 'Cookie': 'vid={}'.format(self._vid)})\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)\n return json.loads(response.text)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_lock_config(self, device_label):\n response = None\n try:\n response = requests.get(\n urls.lockconfig(self._giid, device_label),\n headers={\n 'Accept': 'application/json, text/javascript, */*; q=0.01',\n 'Cookie': 'vid={}'.format(self._vid)})\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)\n return json.loads(response.text)", "response": "Get lock configuration of a specific lock in a specific device."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_lock_config(self, device_label, volume=None, voice_level=None,\n auto_lock_enabled=None):\n \"\"\" Set lock configuration\n\n Args:\n device_label (str): device label of lock\n volume (str): 'SILENCE', 'LOW' or 'HIGH'\n voice_level (str): 'ESSENTIAL' or 'NORMAL'\n auto_lock_enabled (boolean): auto lock enabled\n \"\"\"\n response = None\n data = {}\n if volume:\n data['volume'] = volume\n if voice_level:\n data['voiceLevel'] = voice_level\n if auto_lock_enabled is not None:\n data['autoLockEnabled'] = auto_lock_enabled\n try:\n response = requests.put(\n urls.lockconfig(self._giid, device_label),\n 
headers={\n 'Content-Type': 'application/json',\n 'Cookie': 'vid={}'.format(self._vid)},\n data=json.dumps(data))\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)", "response": "Set lock configuration for a specific lock device."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef capture_image(self, device_label):\n response = None\n try:\n response = requests.post(\n urls.imagecapture(self._giid, device_label),\n headers={\n 'Content-Type': 'application/json',\n 'Cookie': 'vid={}'.format(self._vid)})\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)", "response": "Capture smartcam image of a specific device label."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_camera_imageseries(self, number_of_imageseries=10, offset=0):\n response = None\n try:\n response = requests.get(\n urls.get_imageseries(self._giid),\n headers={\n 'Accept': 'application/json, text/javascript, */*; q=0.01',\n 'Cookie': 'vid={}'.format(self._vid)},\n params={\n \"numberOfImageSeries\": int(number_of_imageseries),\n \"offset\": int(offset),\n \"fromDate\": \"\",\n \"toDate\": \"\",\n \"onlyNotViewed\": \"\",\n \"_\": self._giid})\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)\n return json.loads(response.text)", "response": "Get smartcam image series"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndownload image taken by a smartcam", "response": "def download_image(self, device_label, image_id, file_name):\n \"\"\" Download image taken by a smartcam\n\n Args:\n device_label (str): device label of camera\n image_id (str): image id from image series\n file_name (str): path to file\n \"\"\"\n response = None\n try:\n response = requests.get(\n 
urls.download_image(self._giid, device_label, image_id),\n headers={\n 'Cookie': 'vid={}'.format(self._vid)},\n stream=True)\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)\n with open(file_name, 'wb') as image_file:\n for chunk in response.iter_content(chunk_size=1024):\n if chunk:\n image_file.write(chunk)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef logout(self):\n response = None\n try:\n response = requests.delete(\n urls.login(),\n headers={\n 'Cookie': 'vid={}'.format(self._vid)})\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)", "response": "Logout and remove vid"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the heat pump mode of a specific heat pump device.", "response": "def set_heat_pump_mode(self, device_label, mode):\n \"\"\" Set heatpump mode\n Args:\n mode (str): 'HEAT', 'COOL', 'FAN' or 'AUTO'\n \"\"\"\n response = None\n try:\n response = requests.put(\n urls.set_heatpump_state(self._giid, device_label),\n headers={\n 'Accept': 'application/json',\n 'Content-Type': 'application/json',\n 'Cookie': 'vid={}'.format(self._vid)},\n data=json.dumps({'mode': mode}))\n except requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)\n return json.loads(response.text)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nset a heat pump feature on a specific device.", "response": "def set_heat_pump_feature(self, device_label, feature):\n \"\"\" Set heatpump feature\n Args:\n feature: 'QUIET', 'ECONAVI', or 'POWERFUL'\n \"\"\"\n response = None\n try:\n response = requests.put(\n urls.set_heatpump_feature(self._giid, device_label, feature),\n headers={\n 'Accept': 'application/json',\n 'Content-Type': 'application/json',\n 'Cookie': 'vid={}'.format(self._vid)})\n except 
requests.exceptions.RequestException as ex:\n raise RequestError(ex)\n _validate_response(response)\n return json.loads(response.text)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef print_result(overview, *names):\n if names:\n for name in names:\n toprint = overview\n for part in name.split('/'):\n toprint = toprint[part]\n print(json.dumps(toprint, indent=4, separators=(',', ': ')))\n else:\n print(json.dumps(overview, indent=4, separators=(',', ': ')))", "response": "Print the result of a verisure request."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef main():\n parser = argparse.ArgumentParser(\n description='Read or change status of verisure devices')\n parser.add_argument(\n 'username',\n help='MyPages username')\n parser.add_argument(\n 'password',\n help='MyPages password')\n parser.add_argument(\n '-i', '--installation',\n help='Installation number',\n type=int,\n default=1)\n parser.add_argument(\n '-c', '--cookie',\n help='File to store cookie in',\n default='~/.verisure-cookie')\n\n commandsparser = parser.add_subparsers(\n help='commands',\n dest='command')\n\n # installations command\n commandsparser.add_parser(\n COMMAND_INSTALLATIONS,\n help='Get information about installations')\n\n # overview command\n overview_parser = commandsparser.add_parser(\n COMMAND_OVERVIEW,\n help='Read status of one or many device types')\n overview_parser.add_argument(\n 'filter',\n nargs='*',\n help='Read status for device type')\n\n # armstate command\n commandsparser.add_parser(\n COMMAND_ARMSTATE,\n help='Get arm state')\n\n # Set command\n set_parser = commandsparser.add_parser(\n COMMAND_SET,\n help='Set status of a device')\n set_device = set_parser.add_subparsers(\n help='device',\n dest='device')\n\n # Set smartplug\n set_smartplug = set_device.add_parser(\n 'smartplug',\n help='set smartplug value')\n set_smartplug.add_argument(\n 
'device_label',\n help='device label')\n set_smartplug.add_argument(\n 'new_value',\n choices=[\n 'on',\n 'off'],\n help='new value')\n\n # Set alarm\n set_alarm = set_device.add_parser(\n 'alarm',\n help='set alarm status')\n set_alarm.add_argument(\n 'code',\n help='alarm code')\n set_alarm.add_argument(\n 'new_status',\n choices=[\n 'ARMED_HOME',\n 'ARMED_AWAY',\n 'DISARMED'],\n help='new status')\n\n # Set lock\n set_lock = set_device.add_parser(\n 'lock',\n help='set lock status')\n set_lock.add_argument(\n 'code',\n help='alarm code')\n set_lock.add_argument(\n 'serial_number',\n help='serial number')\n set_lock.add_argument(\n 'new_status',\n choices=[\n 'lock',\n 'unlock'],\n help='new status')\n\n # Get climate history\n history_climate = commandsparser.add_parser(\n COMMAND_CLIMATE,\n help='get climate history')\n history_climate.add_argument(\n 'device_label',\n help='device label')\n\n # Event log command\n eventlog_parser = commandsparser.add_parser(\n COMMAND_EVENTLOG,\n help='Get event log')\n eventlog_parser.add_argument(\n '-p', '--pagesize',\n type=int,\n default=15,\n help='Number of elements on one page')\n eventlog_parser.add_argument(\n '-o', '--offset',\n type=int,\n default=0,\n help='Page offset')\n eventlog_parser.add_argument(\n '-f', '--filter',\n nargs='*',\n default=[],\n choices=[\n 'ARM',\n 'DISARM',\n 'FIRE',\n 'INTRUSION',\n 'TECHNICAL',\n 'SOS',\n 'WARNING',\n 'LOCK',\n 'UNLOCK'],\n help='Filter event log')\n\n # Capture command\n capture_parser = commandsparser.add_parser(\n COMMAND_CAPTURE,\n help='Capture image')\n capture_parser.add_argument(\n 'device_label',\n help='Device label')\n\n # Image series command\n commandsparser.add_parser(\n COMMAND_IMAGESERIES,\n help='Get image series')\n\n # Get image command\n getimage_parser = commandsparser.add_parser(\n COMMAND_GETIMAGE,\n help='Download image')\n getimage_parser.add_argument(\n 'device_label',\n help='Device label')\n getimage_parser.add_argument(\n 'image_id',\n 
help='image ID')\n getimage_parser.add_argument(\n 'file_name',\n help='Output file name')\n\n # Vacation mode command\n commandsparser.add_parser(\n COMMAND_VACATIONMODE,\n help='Get vacation mode info')\n\n # Door window status command\n commandsparser.add_parser(\n COMMAND_DOOR_WINDOW,\n help='Get door/window status')\n\n # Test ethernet command\n commandsparser.add_parser(\n COMMAND_TEST_ETHERNET,\n help='Update ethernet status')\n\n args = parser.parse_args()\n session = verisure.Session(args.username, args.password, args.cookie)\n session.login()\n try:\n session.set_giid(session.installations[args.installation - 1]['giid'])\n if args.command == COMMAND_INSTALLATIONS:\n print_result(session.installations)\n if args.command == COMMAND_OVERVIEW:\n print_result(session.get_overview(), *args.filter)\n if args.command == COMMAND_ARMSTATE:\n print_result(session.get_arm_state())\n if args.command == COMMAND_SET:\n if args.device == 'smartplug':\n session.set_smartplug_state(\n args.device_label,\n args.new_value == 'on')\n if args.device == 'alarm':\n print_result(session.set_arm_state(\n args.code,\n args.new_status))\n if args.device == 'lock':\n print_result(session.set_lock_state(\n args.code,\n args.serial_number,\n args.new_status))\n if args.command == COMMAND_CLIMATE:\n print_result(session.get_climate(args.device_label))\n if args.command == COMMAND_EVENTLOG:\n print_result(\n session.get_history(\n args.filter,\n pagesize=args.pagesize,\n offset=args.offset))\n if args.command == COMMAND_CAPTURE:\n session.capture_image(args.device_label)\n if args.command == COMMAND_IMAGESERIES:\n print_result(session.get_camera_imageseries())\n if args.command == COMMAND_GETIMAGE:\n session.download_image(\n args.device_label,\n args.image_id,\n args.file_name)\n if args.command == COMMAND_VACATIONMODE:\n print_result(session.get_vacation_mode())\n if args.command == COMMAND_DOOR_WINDOW:\n print_result(session.get_door_window())\n if args.command == 
COMMAND_TEST_ETHERNET:\n session.test_ethernet()\n except verisure.session.ResponseError as ex:\n print(ex.text)", "response": "Start verisure command line"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the ContentType id of the model.", "response": "def type_id(self):\n \"\"\"\n Shortcut to retrieving the ContentType id of the model.\n \"\"\"\n try:\n return ContentType.objects.get_for_model(self.model, for_concrete_model=False).id\n except DatabaseError as e:\n raise DatabaseError(\"Unable to fetch ContentType object, is a plugin being registered before the initial syncdb? (original error: {0})\".format(str(e)))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the cache key which is used to store a rendered item.", "response": "def get_output_cache_key(self, placeholder_name, instance):\n \"\"\"\n .. versionadded:: 0.9\n Return the default cache key which is used to store a rendered item.\n By default, this function generates the cache key using :func:`get_output_cache_base_key`.\n \"\"\"\n cachekey = self.get_output_cache_base_key(placeholder_name, instance)\n if self.cache_output_per_site:\n cachekey = \"{0}-s{1}\".format(cachekey, settings.SITE_ID)\n\n # Append language code\n if self.cache_output_per_language:\n # NOTE: Not using self.language_code, but using the current language instead.\n # That is what the {% trans %} tags are rendered as after all.\n # The render_placeholder() code can switch the language if needed.\n user_language = get_language()\n if user_language not in self.cache_supported_language_codes:\n user_language = 'unsupported'\n cachekey = \"{0}.{1}\".format(cachekey, user_language)\n\n return cachekey"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_output_cache_keys(self, placeholder_name, instance):\n base_key = self.get_output_cache_base_key(placeholder_name, instance)\n 
cachekeys = [\n base_key,\n ]\n\n if self.cache_output_per_site:\n site_ids = list(Site.objects.values_list('pk', flat=True))\n if settings.SITE_ID not in site_ids:\n site_ids.append(settings.SITE_ID)\n\n base_key = get_rendering_cache_key(placeholder_name, instance)\n cachekeys = [\"{0}-s{1}\".format(base_key, site_id) for site_id in site_ids]\n\n if self.cache_output_per_language or self.render_ignore_item_language:\n # Append language code to all keys,\n # have to invalidate a lot more items in memcache.\n # Also added \"None\" suffix, since get_parent_language_code() may return that.\n # TODO: ideally for render_ignore_item_language, only invalidate all when the fallback language changed.\n total_list = []\n cache_languages = list(self.cache_supported_language_codes) + ['unsupported', 'None']\n\n # All variants of the Placeholder (for full page caching)\n placeholder = instance.placeholder\n total_list.extend(get_placeholder_cache_key(placeholder, lc) for lc in cache_languages)\n\n # All variants of the ContentItem in different languages\n for user_language in cache_languages:\n total_list.extend(\"{0}.{1}\".format(base, user_language) for base in cachekeys)\n cachekeys = total_list\n\n return cachekeys", "response": "Return the possible cache keys for a rendered item."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the cached output for a rendered item.", "response": "def get_cached_output(self, placeholder_name, instance):\n \"\"\"\n .. versionadded:: 0.9\n Return the cached output for a rendered item, or ``None`` if no output is cached.\n\n This method can be overwritten to implement custom caching mechanisms.\n By default, this function generates the cache key using :func:`get_output_cache_key`\n and retrieves the results from the configured Django cache backend (e.g. 
memcached).\n \"\"\"\n cachekey = self.get_output_cache_key(placeholder_name, instance)\n return cache.get(cachekey)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nsets the cached output for a rendered item.", "response": "def set_cached_output(self, placeholder_name, instance, output):\n \"\"\"\n .. versionadded:: 0.9\n Store the cached output for a rendered item.\n\n This method can be overwritten to implement custom caching mechanisms.\n By default, this function generates the cache key using :func:`~fluent_contents.cache.get_rendering_cache_key`\n and stores the results in the configured Django cache backend (e.g. memcached).\n\n When custom cache keys are used, also include those in :func:`get_output_cache_keys`\n so the cache will be cleared when needed.\n\n .. versionchanged:: 1.0\n The received data is no longer a HTML string, but :class:`~fluent_contents.models.ContentItemOutput` object.\n \"\"\"\n cachekey = self.get_output_cache_key(placeholder_name, instance)\n if self.cache_timeout is not DEFAULT_TIMEOUT:\n cache.set(cachekey, output, self.cache_timeout)\n else:\n # Don't want to mix into the default 0/None issue.\n cache.set(cachekey, output)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef render(self, request, instance, **kwargs):\n render_template = self.get_render_template(request, instance, **kwargs)\n if not render_template:\n return str(_(u\"{No rendering defined for class '%s'}\" % self.__class__.__name__))\n\n context = self.get_context(request, instance, **kwargs)\n return self.render_to_string(request, render_template, context)", "response": "Returns a string that can be used to render a single item."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef render_to_string(self, request, template, context, content_instance=None):\n if not content_instance:\n content_instance = 
PluginContext(request)\n\n content_instance.update(context)\n return render_to_string(template, content_instance.flatten(), request=request)", "response": "Render a custom template with the context as context instance."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nregister a Media class to the frontend.", "response": "def register_frontend_media(request, media):\n \"\"\"\n Add a :class:`~django.forms.Media` class to the current request.\n This will be rendered by the ``render_plugin_media`` template tag.\n \"\"\"\n if not hasattr(request, '_fluent_contents_frontend_media'):\n request._fluent_contents_frontend_media = Media()\n\n add_media(request._fluent_contents_frontend_media, media)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_rendering_cache_key(placeholder_name, contentitem):\n if not contentitem.pk:\n return None\n return \"contentitem.@{0}.{1}.{2}\".format(\n placeholder_name,\n contentitem.plugin.type_name, # always returns the upcasted name.\n contentitem.pk, # already unique per language_code\n )", "response": "Returns a cache key for the content item output."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a cache key for an existing placeholder object.", "response": "def get_placeholder_cache_key(placeholder, language_code):\n \"\"\"\n Return a cache key for an existing placeholder object.\n\n This key is used to cache the entire output of a placeholder.\n \"\"\"\n return _get_placeholder_cache_key_for_id(\n placeholder.parent_type_id,\n placeholder.parent_id,\n placeholder.slot,\n language_code\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_placeholder_cache_key_for_parent(parent_object, placeholder_name, language_code):\n parent_type = ContentType.objects.get_for_model(parent_object)\n return _get_placeholder_cache_key_for_id(\n parent_type.id,\n 
parent_object.pk,\n placeholder_name,\n language_code\n )", "response": "Return a cache key for a placeholder."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef remove_stale_items(self, stale_cts):\n stale_ct_ids = list(stale_cts.keys())\n items = (ContentItem.objects\n .non_polymorphic() # very important, or polymorphic skips them on fetching derived data\n .filter(polymorphic_ctype__in=stale_ct_ids)\n .order_by('polymorphic_ctype', 'pk')\n )\n if not items:\n self.stdout.write(\"No stale items found.\")\n return\n\n if self.dry_run:\n self.stdout.write(\"The following content items are stale:\")\n else:\n self.stdout.write(\"The following content items were stale:\")\n\n for item in items:\n ct = stale_cts[item.polymorphic_ctype_id]\n self.stdout.write(\"- #{id} points to removed {app_label}.{model}\".format(\n id=item.pk, app_label=ct.app_label, model=ct.model\n ))\n\n if not self.dry_run:\n try:\n item.delete()\n except PluginNotFound:\n Model.delete(item)", "response": "Remove stale items from the database."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nremoving items that no longer point to an existing parent.", "response": "def remove_unreferenced_items(self, stale_cts):\n \"\"\"\n See if there are items that no longer point to an existing parent.\n \"\"\"\n stale_ct_ids = list(stale_cts.keys())\n parent_types = (ContentItem.objects.order_by()\n .exclude(polymorphic_ctype__in=stale_ct_ids)\n .values_list('parent_type', flat=True).distinct())\n\n num_unreferenced = 0\n\n for ct_id in parent_types:\n parent_ct = ContentType.objects.get_for_id(ct_id)\n unreferenced_items = (ContentItem.objects\n .filter(parent_type=ct_id)\n .order_by('polymorphic_ctype', 'pk'))\n\n if parent_ct.model_class() is not None:\n # Only select the items that are part of removed pages,\n # unless the parent type was removed - then removing all is correct.\n unreferenced_items = 
unreferenced_items.exclude(\n parent_id__in=parent_ct.get_all_objects_for_this_type()\n )\n\n if unreferenced_items:\n for item in unreferenced_items:\n self.stdout.write(\n \"- {cls}#{id} points to nonexisting {app_label}.{model}\".format(\n cls=item.__class__.__name__, id=item.pk,\n app_label=parent_ct.app_label, model=parent_ct.model\n ))\n num_unreferenced += 1\n if not self.dry_run and self.remove_unreferenced:\n item.delete()\n\n if not num_unreferenced:\n self.stdout.write(\"No unreferenced items found.\")\n else:\n self.stdout.write(\"{0} unreferenced items found.\".format(num_unreferenced))\n if not self.remove_unreferenced:\n self.stdout.write(\"Re-run this command with --remove-unreferenced to remove these items\")"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the list of elements that are not already in the queryset", "response": "def __initial_minus_queryset(self):\n \"\"\"\n Gives all elements from self._initial having a slot value that is not already\n in self.get_queryset()\n \"\"\"\n queryset = self.get_queryset()\n\n def initial_not_in_queryset(initial):\n for x in queryset:\n if x.slot == initial['slot']:\n return False\n\n return True\n\n return list(filter(initial_not_in_queryset, self._initial))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nvalidating and return the Placeholder object that points to the given placeholder object.", "response": "def _get_placeholder_arg(arg_name, placeholder):\n \"\"\"\n Validate and return the Placeholder object that the template variable points to.\n \"\"\"\n if placeholder is None:\n raise RuntimeWarning(u\"placeholder object is None\")\n elif isinstance(placeholder, Placeholder):\n return placeholder\n elif isinstance(placeholder, Manager):\n manager = placeholder\n try:\n parent_object = manager.instance # read RelatedManager code\n except AttributeError:\n parent_object = None\n\n try:\n placeholder = manager.all()[0]\n if 
parent_object is not None:\n placeholder.parent = parent_object # Fill GFK cache\n return placeholder\n except IndexError:\n raise RuntimeWarning(u\"No placeholders found for query '{0}.all.0'\".format(arg_name))\n else:\n raise ValueError(u\"The field '{0}' does not refer to a placeholder object!\".format(arg_name))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _split_js(media, domain):\n # Read internal property without creating new Media instance.\n if not media._js:\n return ImmutableMedia.empty_instance\n\n needs_local = domain == 'local'\n new_js = []\n for url in media._js:\n if needs_local == _is_local(url):\n new_js.append(url)\n\n if not new_js:\n return ImmutableMedia.empty_instance\n else:\n return Media(js=new_js)", "response": "Split the local or external URLs from a Media object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _split_css(media, domain):\n # Read internal property without creating new Media instance.\n if not media._css:\n return ImmutableMedia.empty_instance\n\n needs_local = domain == 'local'\n new_css = {}\n for medium, url in six.iteritems(media._css):\n if needs_local == _is_local(url):\n new_css.setdefault(medium, []).append(url)\n\n if not new_css:\n return ImmutableMedia.empty_instance\n else:\n return Media(css=new_css)", "response": "Extract the local or external URLs from a Media object."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nparses the placeholder template tag node syntax.", "response": "def parse(cls, parser, token):\n \"\"\"\n Parse the node syntax:\n\n .. 
code-block:: html+django\n\n {% page_placeholder parentobj slotname title=\"test\" role=\"m\" %}\n \"\"\"\n bits, as_var = parse_as_var(parser, token)\n tag_name, args, kwargs = parse_token_kwargs(parser, bits, allowed_kwargs=cls.allowed_kwargs, compile_args=True, compile_kwargs=True)\n\n # Play with the arguments\n if len(args) == 2:\n parent_expr = args[0]\n slot_expr = args[1]\n elif len(args) == 1:\n # Allow 'page' by default. Works with most CMS'es, including django-fluent-pages.\n parent_expr = Variable('page')\n slot_expr = args[0]\n else:\n raise TemplateSyntaxError(\"\"\"{0} tag allows two arguments: 'parent object' 'slot name' and optionally: title=\"..\" role=\"..\".\"\"\".format(tag_name))\n\n cls.validate_args(tag_name, *args, **kwargs)\n return cls(\n tag_name=tag_name,\n as_var=as_var,\n parent_expr=parent_expr,\n slot_expr=slot_expr,\n **kwargs\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the string literal that is used in the template screens.", "response": "def get_title(self):\n \"\"\"\n Return the string literal that is used in the template.\n The title is used in the admin screens.\n \"\"\"\n try:\n return extract_literal(self.meta_kwargs['title'])\n except KeyError:\n slot = self.get_slot()\n if slot is not None:\n return slot.replace('_', ' ').title()\n\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef extract_literal(templatevar):\n # FilterExpression contains another 'var' that either contains a Variable or SafeData object.\n if hasattr(templatevar, 'var'):\n templatevar = templatevar.var\n if isinstance(templatevar, SafeData):\n # Literal in FilterExpression, can return.\n return templatevar\n else:\n # Variable in FilterExpression, not going to work here.\n return None\n\n if templatevar[0] in ('\"', \"'\") and templatevar[-1] in ('\"', \"'\"):\n return templatevar[1:-1]\n else:\n return None", "response": "Returns 
the value of a Literal in a FilterExpression."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning True if a template FilterExpression holds a literal boolean value.", "response": "def extract_literal_bool(templatevar):\n \"\"\"\n See if a template FilterExpression holds a literal boolean value.\n\n :type templatevar: django.template.FilterExpression\n :rtype: bool|None\n \"\"\"\n # FilterExpression contains another 'var' that either contains a Variable or SafeData object.\n if hasattr(templatevar, 'var'):\n templatevar = templatevar.var\n if isinstance(templatevar, SafeData):\n # Literal in FilterExpression, can return.\n return is_true(templatevar)\n else:\n # Variable in FilterExpression, not going to work here.\n return None\n\n return is_true(templatevar)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _create_markup_plugin(language, model):\n form = type(\"{0}MarkupItemForm\".format(language.capitalize()), (MarkupItemForm,), {\n 'default_language': language,\n })\n\n classname = \"{0}MarkupPlugin\".format(language.capitalize())\n PluginClass = type(classname, (MarkupPluginBase,), {\n 'model': model,\n 'form': form,\n })\n\n return PluginClass", "response": "Create a new MarkupPlugin class that represents the plugin type."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a dictionary of keyword arguments for the generic parent_type / parent_id fields.", "response": "def get_parent_lookup_kwargs(parent_object):\n \"\"\"\n Return lookup arguments for the generic ``parent_type`` / ``parent_id`` fields.\n\n :param parent_object: The parent object.\n :type parent_object: :class:`~django.db.models.Model`\n \"\"\"\n if parent_object is None:\n return dict(\n parent_type__isnull=True,\n parent_id=0\n )\n elif isinstance(parent_object, models.Model):\n return dict(\n 
parent_type=ContentType.objects.get_for_model(parent_object),\n parent_id=parent_object.pk\n )\n else:\n raise ValueError(\"parent_object is not a model!\")"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_parent_active_language_choices(parent_object, exclude_current=False):\n assert parent_object is not None, \"Missing parent_object!\"\n\n from .db import ContentItem\n qs = ContentItem.objects \\\n .parent(parent_object, limit_parent_language=False) \\\n .values_list('language_code', flat=True).distinct()\n\n languages = set(qs)\n\n if exclude_current:\n parent_lang = get_parent_language_code(parent_object)\n languages.discard(parent_lang)\n\n if parler_appsettings.PARLER_LANGUAGES and not parler_appsettings.PARLER_SHOW_EXCLUDED_LANGUAGE_TABS:\n site_id = get_parent_site_id(parent_object)\n try:\n lang_dict = parler_appsettings.PARLER_LANGUAGES[site_id]\n except KeyError:\n lang_dict = ()\n\n allowed_languages = set(item['code'] for item in lang_dict)\n languages &= allowed_languages\n\n # No multithreading issue here, object is instantiated for this user only.\n choices = [(lang, str(get_language_title(lang))) for lang in languages if lang]\n choices.sort(key=lambda tup: tup[1])\n return choices", "response": "Get the currently active languages of a parent object."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn a placeholder by slot.", "response": "def get_by_slot(self, parent_object, slot):\n \"\"\"\n Return a placeholder by key.\n \"\"\"\n placeholder = self.parent(parent_object).get(slot=slot)\n placeholder.parent = parent_object # fill the reverse cache\n return placeholder"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncreates a new object with the given parameters and return it.", "response": "def create_for_object(self, parent_object, slot, role='m', title=None):\n \"\"\"\n Create a 
placeholder with the given parameters\n \"\"\"\n from .db import Placeholder\n parent_attrs = get_parent_lookup_kwargs(parent_object)\n obj = self.create(\n slot=slot,\n role=role or Placeholder.MAIN,\n title=title or slot.title().replace('_', ' '),\n **parent_attrs\n )\n obj.parent = parent_object # fill the reverse cache\n return obj"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef translated(self, *language_codes):\n # this API has the same semantics as django-parler's .translated() for familiarity.\n # However, since this package doesn't filter in a related field, the ORM limitations don't apply.\n if not language_codes:\n language_codes = (get_language(),)\n else:\n # Since some code operates on a True/str switch, make sure that doesn't drip into this low level code.\n for language_code in language_codes:\n if not isinstance(language_code, six.string_types) or language_code.lower() in ('1', '0', 'true', 'false'):\n raise ValueError(\"ContentItemQuerySet.translated() expected language_code to be an ISO code\")\n\n if len(language_codes) == 1:\n return self.filter(language_code=language_codes[0])\n else:\n return self.filter(language_code__in=language_codes)", "response": "Return a new QuerySet with translated objects."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning all content items which are associated with a given parent object.", "response": "def parent(self, parent_object, limit_parent_language=True):\n \"\"\"\n Return all content items which are associated with a given parent object.\n \"\"\"\n lookup = get_parent_lookup_kwargs(parent_object)\n\n # Filter the items by default, giving the expected \"objects for this parent\" items\n # when the parent already holds the language state.\n if limit_parent_language:\n language_code = get_parent_language_code(parent_object)\n if language_code:\n lookup['language_code'] = language_code\n\n return self.filter(**lookup)"} {"SOURCE": 
"codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef move_to_placeholder(self, placeholder, sort_order=None):\n qs = self.all() # Get clone\n for item in qs:\n # Change the item directly in the resultset.\n item.move_to_placeholder(placeholder, sort_order=sort_order)\n if sort_order is not None:\n sort_order += 1\n\n return qs", "response": "Move the entire queryset to a new object."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncopies the entire queryset to a new object.", "response": "def copy_to_placeholder(self, placeholder, sort_order=None):\n \"\"\"\n .. versionadded: 1.0 Copy the entire queryset to a new object.\n\n Returns a queryset with the newly created objects.\n \"\"\"\n qs = self.all() # Get clone\n for item in qs:\n # Change the item directly in the resultset.\n item.copy_to_placeholder(placeholder, sort_order=sort_order, in_place=True)\n if sort_order is not None:\n sort_order += 1\n\n return qs"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef parent(self, parent_object, limit_parent_language=True):\n return self.all().parent(parent_object, limit_parent_language)", "response": "Return all content items which are associated with a given parent object."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef create_for_placeholder(self, placeholder, sort_order=1, language_code=None, **kwargs):\n if language_code is None:\n # Could also use get_language() or appsettings.FLUENT_CONTENTS_DEFAULT_LANGUAGE_CODE\n # thus avoid the risk of performing an extra query here to the parent.\n # However, this identical behavior to BaseContentItemFormSet,\n # and the parent can be set already via Placeholder.objects.create_for_object()\n language_code = get_parent_language_code(placeholder.parent)\n\n obj = self.create(\n placeholder=placeholder,\n 
parent_type_id=placeholder.parent_type_id,\n parent_id=placeholder.parent_id,\n sort_order=sort_order,\n language_code=language_code,\n **kwargs\n )\n\n # Fill the reverse caches\n obj.placeholder = placeholder\n parent = getattr(placeholder, '_parent_cache', None) # by GenericForeignKey (_meta.virtual_fields[0].cache_attr)\n if parent is not None:\n obj.parent = parent\n\n return obj", "response": "Create a new Content Item for a given Placeholder."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the data of the placeholder fields.", "response": "def get_placeholder_data(self, request, obj=None):\n \"\"\"\n Return the data of the placeholder fields.\n \"\"\"\n # Return all placeholder fields in the model.\n if not hasattr(self.model, '_meta_placeholder_fields'):\n return []\n\n data = []\n for name, field in self.model._meta_placeholder_fields.items():\n assert isinstance(field, PlaceholderField)\n data.append(PlaceholderData(\n slot=field.slot,\n title=field.verbose_name.capitalize(),\n fallback_language=None, # Information cant' be known by \"render_placeholder\" in the template.\n ))\n\n return data"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_all_allowed_plugins(self):\n # Get all allowed plugins of the various placeholders together.\n if not hasattr(self.model, '_meta_placeholder_fields'):\n # No placeholder fields in the model, no need for inlines.\n return []\n\n plugins = []\n for name, field in self.model._meta_placeholder_fields.items():\n assert isinstance(field, PlaceholderField)\n if field.plugins is None:\n # no limitations, so all is allowed\n return extensions.plugin_pool.get_plugins()\n else:\n plugins += field.plugins\n\n return list(set(plugins))", "response": "Return all allowed plugins for the current model."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef 
_create_markup_model(fixed_language):\n title = backend.LANGUAGE_NAMES.get(fixed_language) or fixed_language\n\n objects = MarkupLanguageManager(fixed_language)\n\n def save(self, *args, **kwargs):\n self.language = fixed_language\n MarkupItem.save(self, *args, **kwargs)\n\n class Meta:\n verbose_name = title\n verbose_name_plural = _('%s items') % title\n proxy = True\n\n classname = \"{0}MarkupItem\".format(fixed_language.capitalize())\n\n new_class = type(str(classname), (MarkupItem,), {\n '__module__': MarkupItem.__module__,\n 'objects': objects,\n 'save': save,\n 'Meta': Meta,\n })\n\n # Make easily browsable\n return new_class", "response": "Create a new MarkupItem model that saves itself in a single language."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a PlaceholderFormField instance for this database Field.", "response": "def formfield(self, **kwargs):\n \"\"\"\n Returns a :class:`PlaceholderFormField` instance for this database Field.\n \"\"\"\n defaults = {\n 'label': capfirst(self.verbose_name),\n 'help_text': self.help_text,\n 'required': not self.blank,\n }\n defaults.update(kwargs)\n return PlaceholderFormField(slot=self.slot, plugins=self._plugins, **defaults)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nget the set of plugins that this field may display.", "response": "def plugins(self):\n \"\"\"\n Get the set of plugins that this field may display.\n \"\"\"\n from fluent_contents import extensions\n if self._plugins is None:\n return extensions.plugin_pool.get_plugins()\n else:\n try:\n return extensions.plugin_pool.get_plugins_by_name(*self._plugins)\n except extensions.PluginNotFound as e:\n raise extensions.PluginNotFound(str(e) + \" Update the plugin list of '{0}.{1}' field or FLUENT_CONTENTS_PLACEHOLDER_CONFIG['{2}'] setting.\".format(self.model._meta.object_name, self.name, self.slot))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 
3 function doing\ndef value_from_object(self, obj):\n try:\n # not using self.attname, access the descriptor instead.\n placeholder = getattr(obj, self.name)\n except Placeholder.DoesNotExist:\n return None # Still allow ModelForm / admin to open and create a new Placeholder if the table was truncated.\n\n return placeholder.id if placeholder else None", "response": "Internal method used to return the value of the object instance."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef can_use_cached_output(self, contentitem):\n return contentitem.plugin.search_output and not contentitem.plugin.search_fields \\\n and super(SearchRenderingPipe, self).can_use_cached_output(contentitem)", "response": "Return True if the contentitem can be cached output."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef render_item(self, contentitem):\n plugin = contentitem.plugin\n if not plugin.search_output and not plugin.search_fields:\n # Only render items when the item was output will be indexed.\n raise SkipItem\n\n if not plugin.search_output:\n output = ContentItemOutput('', cacheable=False)\n else:\n output = super(SearchRenderingPipe, self).render_item(contentitem)\n\n if plugin.search_fields:\n # Just add the results into the output, but avoid caching that somewhere.\n output.html += plugin.get_search_text(contentitem)\n output.cacheable = False\n\n return output", "response": "Render the item - but render as search text instead."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_content_item_inlines(plugins=None, base=BaseContentItemInline):\n COPY_FIELDS = (\n 'form', 'raw_id_fields', 'filter_vertical', 'filter_horizontal',\n 'radio_fields', 'prepopulated_fields', 'formfield_overrides', 'readonly_fields',\n )\n if plugins is None:\n plugins = extensions.plugin_pool.get_plugins()\n\n inlines = []\n for 
plugin in plugins: # self.model._supported_...()\n # Avoid errors that are hard to trace\n if not isinstance(plugin, extensions.ContentPlugin):\n raise TypeError(\"get_content_item_inlines() expects to receive ContentPlugin instances, not {0}\".format(plugin))\n\n ContentItemType = plugin.model\n\n # Create a new Type that inherits CmsPageItemInline\n # Read the static fields of the ItemType to override default appearance.\n # This code is based on FeinCMS, (c) Simon Meers, BSD licensed\n class_name = '%s_AutoInline' % ContentItemType.__name__\n attrs = {\n '__module__': plugin.__class__.__module__,\n 'model': ContentItemType,\n\n # Add metadata properties for template\n 'name': plugin.verbose_name,\n 'plugin': plugin,\n 'type_name': plugin.type_name,\n 'extra_fieldsets': plugin.fieldsets,\n 'cp_admin_form_template': plugin.admin_form_template,\n 'cp_admin_init_template': plugin.admin_init_template,\n }\n\n # Copy a restricted set of admin fields to the inline model too.\n for name in COPY_FIELDS:\n if getattr(plugin, name):\n attrs[name] = getattr(plugin, name)\n\n inlines.append(type(class_name, (base,), attrs))\n\n # For consistency, enforce ordering\n inlines.sort(key=lambda inline: inline.name.lower())\n\n return inlines", "response": "Dynamically generate genuine django inlines for all registered content item types."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_oembed_providers():\n global _provider_list, _provider_lock\n if _provider_list is not None:\n return _provider_list\n\n # Allow only one thread to build the list, or make request to embed.ly.\n _provider_lock.acquire()\n try:\n # And check whether that already succeeded when the lock is granted.\n if _provider_list is None:\n _provider_list = _build_provider_list()\n finally:\n # Always release if there are errors\n _provider_lock.release()\n\n return _provider_list", "response": "Get the list of OEmbed providers."} 
{"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nbuilding the provider list from the app settings.", "response": "def _build_provider_list():\n \"\"\"\n Construct the provider registry, using the app settings.\n \"\"\"\n registry = None\n if appsettings.FLUENT_OEMBED_SOURCE == 'basic':\n registry = bootstrap_basic()\n elif appsettings.FLUENT_OEMBED_SOURCE == 'embedly':\n params = {}\n if appsettings.MICAWBER_EMBEDLY_KEY:\n params['key'] = appsettings.MICAWBER_EMBEDLY_KEY\n registry = bootstrap_embedly(**params)\n elif appsettings.FLUENT_OEMBED_SOURCE == 'noembed':\n registry = bootstrap_noembed(nowrap=1)\n elif appsettings.FLUENT_OEMBED_SOURCE == 'list':\n # Fill list manually in the settings, e.g. to have a fixed set of supported secure providers.\n registry = ProviderRegistry()\n for regex, provider in appsettings.FLUENT_OEMBED_PROVIDER_LIST:\n registry.register(regex, Provider(provider))\n else:\n raise ImproperlyConfigured(\"Invalid value of FLUENT_OEMBED_SOURCE, only 'basic', 'list', 'noembed' or 'embedly' is supported.\")\n\n # Add any extra providers defined in the settings\n for regex, provider in appsettings.FLUENT_OEMBED_EXTRA_PROVIDERS:\n registry.register(regex, Provider(provider))\n\n return registry"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nfetch the OEmbed object from the registry and return the response as dictionary.", "response": "def get_oembed_data(url, max_width=None, max_height=None, **params):\n \"\"\"\n Fetch the OEmbed object, return the response as dictionary.\n \"\"\"\n if max_width: params['maxwidth'] = max_width\n if max_height: params['maxheight'] = max_height\n\n registry = get_oembed_providers()\n return registry.request(url, **params)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns all plugins grouped by category.", "response": "def group_plugins_into_categories(plugins):\n \"\"\"\n Return all plugins, grouped by 
category.\n The structure is a {\"Categoryname\": [list of plugin classes]}\n \"\"\"\n if not plugins:\n return {}\n plugins = sorted(plugins, key=lambda p: p.verbose_name)\n categories = {}\n\n for plugin in plugins:\n title = str(plugin.category or u\"\") # enforce resolving ugettext_lazy proxies.\n if title not in categories:\n categories[title] = []\n categories[title].append(plugin)\n\n return categories"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef plugin_categories_to_choices(categories):\n choices = []\n for category, items in categories.items():\n if items:\n plugin_tuples = tuple((plugin.type_name, plugin.verbose_name) for plugin in items)\n if category:\n choices.append((category, plugin_tuples))\n else:\n choices += plugin_tuples\n\n choices.sort(key=lambda item: item[0])\n return choices", "response": "Return a tuple of plugin model choices suitable for a select field."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nAdd a new Media object to a new Media object.", "response": "def add_media(dest, media):\n \"\"\"\n Optimized version of django.forms.Media.__add__() that doesn't create new objects.\n \"\"\"\n if django.VERSION >= (2, 2):\n dest._css_lists += media._css_lists\n dest._js_lists += media._js_lists\n elif django.VERSION >= (2, 0):\n combined = dest + media\n dest._css = combined._css\n dest._js = combined._js\n else:\n dest.add_css(media._css)\n dest.add_js(media._js)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_dummy_request(language=None):\n\n if settings.ALLOWED_HOSTS and settings.ALLOWED_HOSTS != \"*\":\n host = settings.ALLOWED_HOSTS[0]\n else:\n host = Site.objects.get_current().domain\n\n request = RequestFactory().get(\"/\", HTTP_HOST=host)\n request.session = {}\n request.LANGUAGE_CODE = language or settings.LANGUAGE_CODE\n # Needed for plugin rendering.\n request.current_page = 
None\n\n if 'django.contrib.auth' in settings.INSTALLED_APPS:\n from django.contrib.auth.models import AnonymousUser\n request.user = AnonymousUser()\n\n return request", "response": "Returns a dummy Request instance populated with cms specific attributes."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_render_language(contentitem):\n plugin = contentitem.plugin\n\n if plugin.render_ignore_item_language \\\n or (plugin.cache_output and plugin.cache_output_per_language):\n # Render the template in the current language.\n # The cache also stores the output under the current language code.\n #\n # It would make sense to apply this for fallback content too,\n # but that would be ambiguous however because the parent_object could also be a fallback,\n # and that case can't be detected here. Hence, better be explicit when desiring multi-lingual content.\n return get_language() # Avoid switching the content,\n else:\n # Render the template in the ContentItem language.\n # This makes sure that {% trans %} tag output matches the language of the model field data.\n return contentitem.language_code", "response": "Returns the language code that should be used to render the content item."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef optimize_logger_level(logger, log_level):\n function_name = _log_functions[log_level]\n if getattr(logger, function_name) is _dummy_log:\n return False\n\n is_level_logged = logger.isEnabledFor(log_level)\n if not is_level_logged:\n setattr(logger, function_name, _dummy_log)\n\n return is_level_logged", "response": "This function is used to optimize the logging level of a logger."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef get_search_field_values(contentitem):\n plugin = contentitem.plugin\n values = []\n for field_name in plugin.search_fields:\n value = 
getattr(contentitem, field_name)\n\n # Just assume all strings may contain HTML.\n # Not checking for just the PluginHtmlField here.\n if value and isinstance(value, six.string_types):\n value = get_cleaned_string(value)\n\n values.append(value)\n\n return values", "response": "Extract the search fields from the model."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_layout_view(self, request):\n template_name = request.GET['name']\n\n # Check if template is allowed, avoid parsing random templates\n templates = dict(appconfig.SIMPLECMS_TEMPLATE_CHOICES)\n if template_name not in templates:\n jsondata = {'success': False, 'error': 'Template not found'}\n status = 404\n else:\n # Extract placeholders from the template, and pass to the client.\n template = get_template(template_name)\n placeholders = get_template_placeholder_data(template)\n\n jsondata = {\n 'placeholders': [p.as_dict() for p in placeholders],\n }\n status = 200\n\n jsonstr = json.dumps(jsondata)\n return HttpResponse(jsonstr, content_type='application/json', status=status)", "response": "Return the metadata about a layout"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\napply soft hyphenation to the text.", "response": "def softhyphen_filter(textitem, html):\n \"\"\"\n Apply soft hyphenation to the text, which inserts ``­`` markers.\n \"\"\"\n language = textitem.language_code\n\n # Make sure the Django language code gets converted to what django-softhypen 1.0.2 needs.\n if language == 'en':\n language = 'en-us'\n elif '-' not in language:\n language = \"{0}-{0}\".format(language)\n\n return hyphenate(html, language=language)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_cached_placeholder_output(parent_object, placeholder_name):\n if not PlaceholderRenderingPipe.may_cache_placeholders():\n return None\n\n language_code = 
get_parent_language_code(parent_object)\n cache_key = get_placeholder_cache_key_for_parent(parent_object, placeholder_name, language_code)\n return cache.get(cache_key)", "response": "Get the cached output for a placeholder."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nrenders a placeholder and returns the rendered HTML output.", "response": "def render_placeholder(request, placeholder, parent_object=None, template_name=None, cachable=None, limit_parent_language=True, fallback_language=None):\n \"\"\"\n Render a :class:`~fluent_contents.models.Placeholder` object.\n Returns a :class:`~fluent_contents.models.ContentItemOutput` object\n which contains the HTML output and :class:`~django.forms.Media` object.\n\n This function also caches the complete output of the placeholder\n when all individual items are cacheable.\n\n :param request: The current request object.\n :type request: :class:`~django.http.HttpRequest`\n :param placeholder: The placeholder object.\n :type placeholder: :class:`~fluent_contents.models.Placeholder`\n :param parent_object: Optional, the parent object of the placeholder (already implied by the placeholder)\n :param template_name: Optional template name used to concatenate the placeholder output.\n :type template_name: str | None\n :param cachable: Whether the output is cachable, otherwise the full output will not be cached.\n Default: False when using a template, True otherwise.\n :type cachable: bool | None\n :param limit_parent_language: Whether the items should be limited to the parent language.\n :type limit_parent_language: bool\n :param fallback_language: The fallback language to use if there are no items in the current language. 
Passing ``True`` uses the default :ref:`FLUENT_CONTENTS_DEFAULT_LANGUAGE_CODE`.\n :type fallback_language: bool/str\n :rtype: :class:`~fluent_contents.models.ContentItemOutput`\n \"\"\"\n output = PlaceholderRenderingPipe(request).render_placeholder(\n placeholder=placeholder,\n parent_object=parent_object,\n template_name=template_name,\n cachable=cachable,\n limit_parent_language=limit_parent_language,\n fallback_language=fallback_language\n )\n\n # Wrap the result after it's stored in the cache.\n if markers.is_edit_mode(request):\n output.html = markers.wrap_placeholder_output(output.html, placeholder)\n\n return output"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nrenders a list of items as HTML string.", "response": "def render_content_items(request, items, template_name=None, cachable=None):\n \"\"\"\n Render a list of :class:`~fluent_contents.models.ContentItem` objects as HTML string.\n This is a variation of the :func:`render_placeholder` function.\n\n Note that the items are not filtered in any way by parent or language.\n The items are rendered as-is.\n\n :param request: The current request object.\n :type request: :class:`~django.http.HttpRequest`\n :param items: The list or queryset of objects to render. 
Passing a queryset is preferred.\n :type items: list or queryset of :class:`~fluent_contents.models.ContentItem`.\n :param template_name: Optional template name used to concatenate the placeholder output.\n :type template_name: Optional[str]\n :param cachable: Whether the output is cachable, otherwise the full output will not be cached.\n Default: False when using a template, True otherwise.\n :type cachable: Optional[bool]\n\n :rtype: :class:`~fluent_contents.models.ContentItemOutput`\n \"\"\"\n if not items:\n output = ContentItemOutput(mark_safe(u\"\"))\n else:\n output = RenderingPipe(request).render_items(\n placeholder=None,\n items=items,\n parent_object=None,\n template_name=template_name,\n cachable=cachable\n )\n\n # Wrap the result after it's stored in the cache.\n if markers.is_edit_mode(request):\n output.html = markers.wrap_anonymous_output(output.html)\n\n return output"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nrender a placeholder object to search text.", "response": "def render_placeholder_search_text(placeholder, fallback_language=None):\n \"\"\"\n Render a :class:`~fluent_contents.models.Placeholder` object to search text.\n This text can be used by an indexer (e.g. 
haystack) to produce content search for a parent object.\n\n :param placeholder: The placeholder object.\n :type placeholder: :class:`~fluent_contents.models.Placeholder`\n :param fallback_language: The fallback language to use if there are no items in the current language.\n Passing ``True`` uses the default :ref:`FLUENT_CONTENTS_DEFAULT_LANGUAGE_CODE`.\n :type fallback_language: bool|str\n :rtype: str\n \"\"\"\n parent_object = placeholder.parent # this is a cached lookup thanks to PlaceholderFieldDescriptor\n language = get_parent_language_code(parent_object)\n output = SearchRenderingPipe(language).render_placeholder(\n placeholder=placeholder,\n parent_object=parent_object,\n fallback_language=fallback_language\n )\n return output.html"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef render(self, name, value, attrs=None, renderer=None):\n other_instance_languages = None\n if value and value != \"-DUMMY-\":\n if get_parent_language_code(self.parent_object):\n # Parent is a multilingual object, provide information\n # for the copy dialog.\n other_instance_languages = get_parent_active_language_choices(\n self.parent_object, exclude_current=True)\n\n context = {\n 'cp_plugin_list': list(self.plugins),\n 'placeholder_id': '',\n 'placeholder_slot': self.slot,\n 'other_instance_languages': other_instance_languages,\n }\n return mark_safe(render_to_string('admin/fluent_contents/placeholderfield/widget.html', context))", "response": "Render the placeholder field."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the set of plugins that this widget should display.", "response": "def plugins(self):\n \"\"\"\n Get the set of plugins that this widget should display.\n \"\"\"\n from fluent_contents import extensions # Avoid circular reference because __init__.py imports subfolders too\n if self._plugins is None:\n return extensions.plugin_pool.get_plugins()\n else:\n return 
extensions.plugin_pool.get_plugins_by_name(*self._plugins)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn a redirect response for the exception.", "response": "def process_exception(self, request, exception):\n \"\"\"\n Return a redirect response for the :class:`~fluent_contents.extensions.HttpRedirectRequest`\n \"\"\"\n if isinstance(exception, HttpRedirectRequest):\n return HttpResponseRedirect(exception.url, status=exception.status)\n else:\n return None"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef render_text(text, language=None):\n # Get the filter\n text_filter = SUPPORTED_LANGUAGES.get(language, None)\n if not text_filter:\n raise ImproperlyConfigured(\"markup filter does not exist: {0}. Valid options are: {1}\".format(\n language, ', '.join(list(SUPPORTED_LANGUAGES.keys()))\n ))\n\n # Convert.\n return text_filter(text)", "response": "Render the text using the template filters provided by Django."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreading the derived table data for all objects tracked as remaining.", "response": "def fetch_remaining_instances(self):\n \"\"\"Read the derived table data for all objects tracked as remaining (=not found in the cache).\"\"\"\n if self.remaining_items:\n self.remaining_items = ContentItem.objects.get_real_instances(self.remaining_items)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the output in the correct ordering.", "response": "def get_output(self, include_exceptions=False):\n \"\"\"\n Return the output in the correct ordering.\n :rtype: list[Tuple[contentitem, O]]\n \"\"\"\n # Order all rendered items in the correct sequence.\n # Don't assume the derived tables are in perfect shape, hence the dict + KeyError handling.\n # The derived tables could be truncated/reset or store_output() could be omitted.\n ordered_output = 
[]\n for item_id in self.output_ordering:\n contentitem = self.item_source[item_id]\n try:\n output = self.item_output[item_id]\n except KeyError:\n # The item was not rendered!\n if not include_exceptions:\n continue\n\n output = self.MISSING\n else:\n # Filter exceptions out.\n if not include_exceptions:\n if isinstance(output, Exception) or output is self.SKIPPED:\n continue\n\n ordered_output.append((contentitem, output))\n\n return ordered_output"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef render_items(self, placeholder, items, parent_object=None, template_name=None, cachable=None):\n # Unless it was done before, disable polymorphic effects.\n is_queryset = False\n if hasattr(items, \"non_polymorphic\"):\n is_queryset = True\n if not items.polymorphic_disabled and items._result_cache is None:\n items = items.non_polymorphic()\n\n # See if the queryset contained anything.\n # This test is moved here, to prevent earlier query execution.\n if not items:\n logger.debug(\"- no items in placeholder '%s'\", get_placeholder_debug_name(placeholder))\n return ContentItemOutput(mark_safe(u\"<!-- no items in placeholder '{0}' -->\".format(escape(get_placeholder_name(placeholder)))), cacheable=True)\n\n # Tracked data during rendering:\n result = self.result_class(\n request=self.request,\n parent_object=parent_object,\n placeholder=placeholder,\n items=items,\n all_cacheable=self._can_cache_merged_output(template_name, cachable),\n )\n if self.edit_mode:\n result.set_uncachable()\n\n if is_queryset:\n # Phase 1: get cached output\n self._fetch_cached_output(items, result=result)\n result.fetch_remaining_instances()\n else:\n # The items is either a list of manually created items, or it's a QuerySet.\n # Can't prevent reading the subclasses only, so don't bother with caching here.\n result.add_remaining_list(items)\n\n # Start the actual rendering of remaining items.\n if result.remaining_items:\n # Phase 2: render remaining items\n 
self._render_uncached_items(result.remaining_items, result=result)\n\n # And merge all items together.\n return self.merge_output(result, items, template_name)", "response": "Render the items in the given placeholder."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfetches all items from the cache and store them in the result.", "response": "def _fetch_cached_output(self, items, result):\n \"\"\"\n First try to fetch all items from the cache.\n The items are 'non-polymorphic', so only point to their base class.\n If these are found, there is no need to query the derived data from the database.\n \"\"\"\n if not appsettings.FLUENT_CONTENTS_CACHE_OUTPUT or not self.use_cached_output:\n result.add_remaining_list(items)\n return\n\n for contentitem in items:\n result.add_ordering(contentitem)\n output = None\n\n try:\n plugin = contentitem.plugin\n except PluginNotFound as ex:\n result.store_exception(contentitem, ex) # Will deal with that later.\n logger.debug(\"- item #%s has no matching plugin: %s\", contentitem.pk, str(ex))\n continue\n\n # Respect the cache output setting of the plugin\n if self.can_use_cached_output(contentitem):\n result.add_plugin_timeout(plugin)\n output = plugin.get_cached_output(result.placeholder_name, contentitem)\n\n # Support transition to new output format.\n if output is not None and not isinstance(output, ContentItemOutput):\n output = None\n logger.debug(\"Flushed cached output of {0}#{1} to store new ContentItemOutput format (key: {2})\".format(\n plugin.type_name,\n contentitem.pk,\n get_placeholder_name(contentitem.placeholder)\n ))\n\n # For debugging, ignore cached values when the template is updated.\n if output and settings.DEBUG:\n cachekey = get_rendering_cache_key(result.placeholder_name, contentitem)\n if is_template_updated(self.request, contentitem, cachekey):\n output = None\n\n if output:\n result.store_output(contentitem, output)\n else:\n 
result.add_remaining(contentitem)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ntell whether the code should try reading cached output for the given contentitem.", "response": "def can_use_cached_output(self, contentitem):\n \"\"\"\n Tell whether the code should try reading cached output\n \"\"\"\n plugin = contentitem.plugin\n return appsettings.FLUENT_CONTENTS_CACHE_OUTPUT and plugin.cache_output and contentitem.pk"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _render_uncached_items(self, items, result):\n for contentitem in items:\n # Render the item.\n # Allow derived classes to skip it.\n try:\n output = self.render_item(contentitem)\n except PluginNotFound as ex:\n result.store_exception(contentitem, ex)\n logger.debug(\"- item #%s has no matching plugin: %s\", contentitem.pk, str(ex))\n continue\n except SkipItem:\n result.set_skipped(contentitem)\n continue\n\n # Try caching it.\n self._try_cache_output(contentitem, output, result=result)\n if self.edit_mode:\n output.html = markers.wrap_contentitem_output(output.html, contentitem)\n\n result.store_output(contentitem, output)", "response": "Render a list of items that didn t exist in the cache yet."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nrenders the individual item.", "response": "def render_item(self, contentitem):\n \"\"\"\n Render the individual item.\n May raise :class:`SkipItem` to ignore an item.\n \"\"\"\n render_language = get_render_language(contentitem)\n with smart_override(render_language):\n # Plugin output is likely HTML, but it should be placed in mark_safe() to raise awareness about escaping.\n # This is just like Django's Input.render() and unlike Node.render().\n return contentitem.plugin._render_contentitem(self.request, contentitem)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 
that\ncombines all rendered items with a template.", "response": "def merge_output(self, result, items, template_name):\n \"\"\"\n Combine all rendered items. Allow rendering the items with a template,\n to inserting separators or nice start/end code.\n \"\"\"\n html_output, media = self.get_html_output(result, items)\n\n if not template_name:\n merged_html = mark_safe(u''.join(html_output))\n else:\n context = {\n 'contentitems': list(zip(items, html_output)),\n 'parent_object': result.parent_object, # Can be None\n 'edit_mode': self.edit_mode,\n }\n\n context = PluginContext(self.request, context)\n merged_html = render_to_string(template_name, context.flatten())\n\n return ContentItemOutput(merged_html, media, cacheable=result.all_cacheable, cache_timeout=result.all_timeout)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncollecting all HTML from the rendered items in the correct ordering.", "response": "def get_html_output(self, result, items):\n \"\"\"\n Collect all HTML from the rendered items, in the correct ordering.\n The media is also collected in the same ordering, in case it's handled by django-compressor for example.\n \"\"\"\n html_output = []\n merged_media = Media()\n for contentitem, output in result.get_output(include_exceptions=True):\n if output is ResultTracker.MISSING:\n # Likely get_real_instances() didn't return an item for it.\n # The get_real_instances() didn't return an item for the derived table. This happens when either:\n # 1. that table is truncated/reset, while there is still an entry in the base ContentItem table.\n # A query at the derived table happens every time the page is being rendered.\n # 2. 
the model was completely removed which means there is also a stale ContentType object.\n class_name = _get_stale_item_class_name(contentitem)\n html_output.append(mark_safe(u\"<!-- missing derived model for ContentItem #{id}: {cls} -->\\n\".format(id=contentitem.pk, cls=class_name)))\n logger.warning(\"Missing derived model for ContentItem #{id}: {cls}.\".format(id=contentitem.pk, cls=class_name))\n elif isinstance(output, Exception):\n html_output.append(u'<!-- error while rendering: {0} -->\\n'.format(str(output)))\n else:\n html_output.append(output.html)\n add_media(merged_media, output.media)\n\n return html_output, merged_media"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef render_placeholder(self, placeholder, parent_object=None, template_name=None, cachable=None, limit_parent_language=True, fallback_language=None):\n placeholder_name = get_placeholder_debug_name(placeholder)\n logger.debug(\"Rendering placeholder '%s'\", placeholder_name)\n\n # Determine whether the placeholder can be cached.\n cachable = self._can_cache_merged_output(template_name, cachable)\n try_cache = cachable and self.may_cache_placeholders()\n logger.debug(\"- try_cache=%s cachable=%s template_name=%s\", try_cache, cachable, template_name)\n\n if parent_object is None:\n # To support filtering the placeholders by parent language, the parent object needs to be known.\n # Fortunately, the PlaceholderFieldDescriptor makes sure this doesn't require an additional query.\n parent_object = placeholder.parent\n\n # Fetch the placeholder output from cache.\n language_code = get_parent_language_code(parent_object)\n cache_key = None\n output = None\n if try_cache:\n cache_key = get_placeholder_cache_key_for_parent(parent_object, placeholder.slot, language_code)\n output = cache.get(cache_key)\n if output:\n logger.debug(\"- fetched cached output\")\n\n if output is None:\n # Get the items, and render them\n items, is_fallback = self._get_placeholder_items(placeholder, parent_object, limit_parent_language, fallback_language, 
try_cache)\n output = self.render_items(placeholder, items, parent_object, template_name, cachable)\n\n if is_fallback:\n # Caching fallbacks is not supported yet,\n # content could be rendered in a different gettext language domain.\n output.cacheable = False\n\n # Store the full-placeholder contents in the cache.\n if try_cache and output.cacheable:\n if output.cache_timeout is not DEFAULT_TIMEOUT:\n # The timeout is based on the minimal timeout used in plugins.\n cache.set(cache_key, output, output.cache_timeout)\n else:\n # Don't want to mix into the default 0/None issue.\n cache.set(cache_key, output)\n\n return output", "response": "Render a single placeholder."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nregister a plugin with the CMS.", "response": "def register(self, plugin):\n \"\"\"\n Make a plugin known to the CMS.\n\n :param plugin: The plugin class, deriving from :class:`ContentPlugin`.\n :type plugin: :class:`ContentPlugin`\n\n The plugin will be instantiated once, just like Django does this with :class:`~django.contrib.admin.ModelAdmin` classes.\n If a plugin is already registered, this will raise a :class:`PluginAlreadyRegistered` exception.\n \"\"\"\n # Duck-Typing does not suffice here, avoid hard to debug problems by upfront checks.\n assert issubclass(plugin, ContentPlugin), \"The plugin must inherit from `ContentPlugin`\"\n assert plugin.model, \"The plugin has no model defined\"\n assert issubclass(plugin.model, ContentItem), \"The plugin model must inherit from `ContentItem`\"\n assert issubclass(plugin.form, ContentItemForm), \"The plugin form must inherit from `ContentItemForm`\"\n\n name = plugin.__name__ # using class here, no instance created yet.\n name = name.lower()\n if name in self.plugins:\n raise PluginAlreadyRegistered(\"{0}: a plugin with this name is already registered\".format(name))\n\n # Avoid registering 2 plugins to the exact same model. 
If you want to reuse code, use proxy models.\n if plugin.model in self._name_for_model:\n # Having 2 plugins for one model breaks ContentItem.plugin and the frontend code\n # that depends on using inline-model names instead of plugins. Good luck fixing that.\n # Better leave the model==plugin dependency for now.\n existing_plugin = self.plugins[self._name_for_model[plugin.model]]\n raise ModelAlreadyRegistered(\"Can't register the model {0} to {2}, it's already registered to {1}!\".format(\n plugin.model.__name__,\n existing_plugin.name,\n plugin.__name__\n ))\n\n # Make a single static instance, similar to ModelAdmin.\n plugin_instance = plugin()\n self.plugins[name] = plugin_instance\n self._name_for_model[plugin.model] = name # Track reverse for model.plugin link\n\n # Only update lazy indexes if already created\n if self._name_for_ctype_id is not None:\n self._name_for_ctype_id[plugin.type_id] = name\n\n return plugin"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_allowed_plugins(self, placeholder_slot):\n # See if there is a limit imposed.\n slot_config = appsettings.FLUENT_CONTENTS_PLACEHOLDER_CONFIG.get(placeholder_slot) or {}\n plugins = slot_config.get('plugins')\n if not plugins:\n return self.get_plugins()\n else:\n try:\n return self.get_plugins_by_name(*plugins)\n except PluginNotFound as e:\n raise PluginNotFound(str(e) + \" Update the plugin list of the FLUENT_CONTENTS_PLACEHOLDER_CONFIG['{0}'] setting.\".format(placeholder_slot))", "response": "Return the plugins which are supported in the given placeholder name."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning a list of plugins by name or class name.", "response": "def get_plugins_by_name(self, *names):\n \"\"\"\n Return a list of plugins by plugin class, or name.\n \"\"\"\n self._import_plugins()\n plugin_instances = []\n for name in names:\n if isinstance(name, six.string_types):\n 
try:\n plugin_instances.append(self.plugins[name.lower()])\n except KeyError:\n raise PluginNotFound(\"No plugin named '{0}'.\".format(name))\n elif isinstance(name, type) and issubclass(name, ContentPlugin):\n # Will also allow classes instead of strings.\n plugin_instances.append(self.plugins[self._name_for_model[name.model]])\n else:\n raise TypeError(\"get_plugins_by_name() expects a plugin name or class, not: {0}\".format(name))\n return plugin_instances"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning all model classes which are exposed by plugins.", "response": "def get_model_classes(self):\n \"\"\"\n Return all :class:`~fluent_contents.models.ContentItem` model classes which are exposed by plugins.\n \"\"\"\n self._import_plugins()\n return [plugin.model for plugin in self.plugins.values()]"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_plugin_by_model(self, model_class):\n self._import_plugins() # could happen during rendering that no plugin scan happened yet.\n assert issubclass(model_class, ContentItem) # avoid confusion between model instance and class here!\n\n try:\n name = self._name_for_model[model_class]\n except KeyError:\n raise PluginNotFound(\"No plugin found for model '{0}'.\".format(model_class.__name__))\n return self.plugins[name]", "response": "Returns the corresponding plugin for a given model class."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _import_plugins(self):\n if self.detected:\n return\n\n # In some cases, plugin scanning may start during a request.\n # Make sure there is only one thread scanning for plugins.\n self.scanLock.acquire()\n if self.detected:\n return # previous threaded released + completed\n\n try:\n import_apps_submodule(\"content_plugins\")\n self.detected = True\n finally:\n self.scanLock.release()", "response": "Internal function 
to import all plugin packages."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\napplying all filters for a given HTML snippet.", "response": "def apply_filters(instance, html, field_name):\n \"\"\"\n Run all filters for a given HTML snippet.\n Returns the results of the pre-filter and post-filter as tuple.\n\n This function can be called from the :meth:`~django.db.models.Model.full_clean` method in the model.\n That function is called when the form values are assigned.\n For example:\n\n .. code-block:: python\n\n def full_clean(self, *args, **kwargs):\n super(TextItem, self).full_clean(*args, **kwargs)\n self.html, self.html_final = apply_filters(self, self.html, field_name='html')\n\n :type instance: fluent_contents.models.ContentItem\n :raise ValidationError: when one of the filters detects a problem.\n \"\"\"\n try:\n html = apply_pre_filters(instance, html)\n\n # Perform post processing. This does not effect the original 'html'\n html_final = apply_post_filters(instance, html)\n except ValidationError as e:\n if hasattr(e, 'error_list'):\n # The filters can raise a \"dump\" ValidationError with a single error.\n # However, during post_clean it's expected that the fields are named.\n raise ValidationError({\n field_name: e.error_list\n })\n raise\n\n return html, html_final"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef apply_pre_filters(instance, html):\n # Allow pre processing. 
Typical use-case is HTML syntax correction.\n for post_func in appsettings.PRE_FILTER_FUNCTIONS:\n html = post_func(instance, html)\n\n return html", "response": "Apply pre filters to the HTML source code."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\napply post processing functions to the given HTML.", "response": "def apply_post_filters(instance, html):\n \"\"\"\n Allow post processing functions to change the text.\n This change is not saved in the original text.\n\n :type instance: fluent_contents.models.ContentItem\n :raise ValidationError: when one of the filters detects a problem.\n \"\"\"\n for post_func in appsettings.POST_FILTER_FUNCTIONS:\n html = post_func(instance, html)\n\n return html"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef clear_commentarea_cache(comment):\n parent = comment.content_object\n for instance in CommentsAreaItem.objects.parent(parent):\n instance.clear_cache()", "response": "Clean the plugin output cache of a rendered plugin."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef parent_site(self, site):\n # Avoid auto filter if site is already set.\n self._parent_site = site\n\n if sharedcontent_appsettings.FLUENT_SHARED_CONTENT_ENABLE_CROSS_SITE:\n # Allow content to be shared between all sites:\n return self.filter(Q(parent_site=site) | Q(is_cross_site=True))\n else:\n return self.filter(parent_site=site)", "response": "Filter to the given site only give content relevant for that site."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns a single site instance.", "response": "def _single_site(self):\n \"\"\"\n Make sure the queryset is filtered on a parent site, if that didn't happen already.\n \"\"\"\n if appsettings.FLUENT_CONTENTS_FILTER_SITE_ID and self._parent_site is None:\n return self.parent_site(settings.SITE_ID)\n else:\n 
return self"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_template_placeholder_data(template):\n # Find the instances.\n nodes = get_node_instances(template, PagePlaceholderNode)\n\n # Avoid duplicates, wrap in a class.\n names = set()\n result = []\n for pageplaceholdernode in nodes:\n data = PlaceholderData(\n slot=pageplaceholdernode.get_slot(),\n title=pageplaceholdernode.get_title(),\n role=pageplaceholdernode.get_role(),\n fallback_language=pageplaceholdernode.get_fallback_language(),\n )\n\n if data.slot not in names:\n result.append(data)\n names.add(data.slot)\n\n return result", "response": "Returns the list of PlaceholderData objects found in a template."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef inspect_model(self, model):\n # See which interesting fields the model holds.\n url_fields = sorted(f for f in model._meta.fields if isinstance(f, (PluginUrlField, models.URLField)))\n file_fields = sorted(f for f in model._meta.fields if isinstance(f, (PluginImageField, models.FileField)))\n html_fields = sorted(f for f in model._meta.fields if isinstance(f, (models.TextField, PluginHtmlField)))\n all_fields = [f.name for f in (file_fields + html_fields + url_fields)]\n if not all_fields:\n return []\n\n if model.__name__ in self.exclude:\n self.stderr.write(\"Skipping {0} ({1})\\n\".format(model.__name__, \", \".join(all_fields)))\n return []\n\n sys.stderr.write(\"Inspecting {0} ({1})\\n\".format(model.__name__, \", \".join(all_fields)))\n\n q_notnull = reduce(operator.or_, (Q(**{\"{0}__isnull\".format(f): False}) for f in all_fields))\n qs = model.objects.filter(q_notnull).order_by('pk')\n\n urls = []\n for object in qs:\n # HTML fields need proper html5lib parsing\n for field in html_fields:\n value = getattr(object, field.name)\n if value:\n html_images = self.extract_html_urls(value)\n urls += html_images\n\n for image 
in html_images:\n self.show_match(object, image)\n\n # Picture fields take the URL from the storage class.\n for field in file_fields:\n value = getattr(object, field.name)\n if value:\n value = unquote_utf8(value.url)\n urls.append(value)\n self.show_match(object, value)\n\n # URL fields can be read directly.\n for field in url_fields:\n value = getattr(object, field.name)\n if value:\n if isinstance(value, six.text_type):\n value = force_text(value)\n else:\n value = value.to_db_value() # AnyUrlValue\n\n urls.append(value)\n self.show_match(object, value)\n return urls", "response": "Inspect a single model."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ntakes all image and link URLs from the HTML.", "response": "def extract_html_urls(self, html):\n \"\"\"\n Take all URLs from the HTML: ``img src``, ``source srcset`` and ``a href`` attributes.\n \"\"\"\n p = HTMLParser(tree=treebuilders.getTreeBuilder(\"dom\"))\n dom = p.parse(html)\n urls = []\n\n for img in dom.getElementsByTagName('img'):\n src = img.getAttribute('src')\n if src:\n urls.append(unquote_utf8(src))\n\n srcset = img.getAttribute('srcset')\n if srcset:\n urls += self.extract_srcset(srcset)\n\n for source in dom.getElementsByTagName('source'):\n srcset = source.getAttribute('srcset')\n if srcset:\n urls += self.extract_srcset(srcset)\n\n for source in dom.getElementsByTagName('a'):\n href = source.getAttribute('href')\n if href:\n urls.append(unquote_utf8(href))\n\n return urls"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nextract the image URLs from a ``srcset`` attribute.", "response": "def extract_srcset(self, srcset):\n \"\"\"\n Handle ``srcset=\"image.png 1x, image@2x.jpg 2x\"``\n \"\"\"\n urls = []\n for item in srcset.split(','):\n if item:\n urls.append(unquote_utf8(item.rsplit(' ', 1)[0]))\n return urls"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndeleting the ContentItems when a translation is deleted.", "response": "def on_delete_model_translation(instance, **kwargs):\n \"\"\"\n Make sure ContentItems are deleted when a translation is deleted.\n 
\"\"\"\n translation = instance\n\n parent_object = translation.master\n parent_object.set_current_language(translation.language_code)\n\n # Also delete any associated plugins\n # Placeholders are shared between languages, so these are not affected.\n for item in ContentItem.objects.parent(parent_object, limit_parent_language=True):\n item.delete()"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ntake an HTML fragment and process it using html5lib to ensure that the HTML is well-formed.", "response": "def clean_html(input, sanitize=False):\n \"\"\"\n Takes an HTML fragment and processes it using html5lib to ensure that the HTML is well-formed.\n\n :param sanitize: Remove unwanted HTML tags and attributes.\n \"\"\"\n parser_kwargs = {}\n serializer_kwargs = {}\n if sanitize:\n if HTMLSanitizer is None:\n # new syntax as of 0.99999999/1.0b9 (Released on July 14, 2016)\n serializer_kwargs['sanitize'] = True\n else:\n parser_kwargs['tokenizer'] = HTMLSanitizer\n\n p = HTMLParser(tree=treebuilders.getTreeBuilder(\"dom\"), **parser_kwargs)\n dom_tree = p.parseFragment(input)\n walker = treewalkers.getTreeWalker(\"dom\")\n stream = walker(dom_tree)\n\n s = HTMLSerializer(omit_optional_tags=False, **serializer_kwargs)\n return \"\".join(s.serialize(stream))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn the contents as a dictionary for client-side export.", "response": "def as_dict(self):\n \"\"\"\n Return the contents as dictionary, for client-side export.\n The dictionary contains the fields:\n\n * ``slot``\n * ``title``\n * ``role``\n * ``fallback_language``\n * ``allowed_plugins``\n \"\"\"\n plugins = self.get_allowed_plugins()\n return {\n 'slot': self.slot,\n 'title': self.title,\n 'role': self.role,\n 'fallback_language': self.fallback_language,\n 'allowed_plugins': [plugin.name for plugin in plugins],\n }"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nforwarding the MarkupItem model to the correct proxy model that implements the given language.", "response": "def _forwards(apps, schema_editor):\n \"\"\"\n Make sure that the MarkupItem model actually points\n to the correct proxy model, that implements the given language.\n \"\"\"\n # Need to work on the actual models here.\n from fluent_contents.plugins.markup.models import LANGUAGE_MODEL_CLASSES\n from fluent_contents.plugins.markup.models import MarkupItem\n from django.contrib.contenttypes.models import ContentType\n\n ctype = ContentType.objects.get_for_model(MarkupItem)\n\n for language, proxy_model in LANGUAGE_MODEL_CLASSES.items():\n proxy_ctype = ContentType.objects.get_for_model(proxy_model, for_concrete_model=False)\n 
MarkupItem.objects.filter(\n polymorphic_ctype=ctype, language=language\n ).update(\n polymorphic_ctype=proxy_ctype\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef get_formset(self, request, obj=None, **kwargs):\n def _placeholder_initial(p):\n # p.as_dict() returns allowed_plugins too for the client-side API.\n return {\n 'slot': p.slot,\n 'title': p.title,\n 'role': p.role,\n }\n\n # Note this method is called twice, the second time in get_fieldsets() as `get_formset(request).form`\n initial = []\n if request.method == \"GET\":\n placeholder_admin = self._get_parent_modeladmin()\n\n # Grab the initial data from the parent PlaceholderEditorBaseMixin\n data = placeholder_admin.get_placeholder_data(request, obj)\n initial = [_placeholder_initial(d) for d in data]\n\n # Order initial properly,\n\n # Inject as default parameter to the constructor\n # This is the BaseExtendedGenericInlineFormSet constructor\n FormSetClass = super(PlaceholderEditorInline, self).get_formset(request, obj, **kwargs)\n FormSetClass.__init__ = curry(FormSetClass.__init__, initial=initial)\n return FormSetClass", "response": "Returns a FormSet subclass with initial data for the current object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_inline_instances(self, request, *args, **kwargs):\n inlines = super(PlaceholderEditorAdmin, self).get_inline_instances(request, *args, **kwargs)\n\n extra_inline_instances = []\n inlinetypes = self.get_extra_inlines()\n for InlineType in inlinetypes:\n inline_instance = InlineType(self.model, self.admin_site)\n extra_inline_instances.append(inline_instance)\n\n return extra_inline_instances + inlines", "response": "Create the inlines for the admin including the placeholder and contentitem inlines."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nreturning the placeholder data as 
dictionary.", "response": "def get_placeholder_data_view(self, request, object_id):\n \"\"\"\n Return the placeholder data as dictionary.\n This is used in the client for the \"copy\" functionality.\n \"\"\"\n language = 'en' #request.POST['language']\n with translation.override(language): # Use generic solution here, don't assume django-parler is used now.\n obj = self.get_object(request, object_id)\n\n if obj is None:\n json = {'success': False, 'error': 'Page not found'}\n status = 404\n elif not self.has_change_permission(request, obj):\n json = {'success': False, 'error': 'No access to page'}\n status = 403\n else:\n # Fetch the forms that would be displayed,\n # return the data as serialized form data.\n status = 200\n json = {\n 'success': True,\n 'object_id': object_id,\n 'language_code': language,\n 'formset_forms': self._get_object_formset_data(request, obj),\n }\n\n return JsonResponse(json, status=status)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_args(method_or_func):\n try:\n # Python 3.0+\n args = list(inspect.signature(method_or_func).parameters.keys())\n except AttributeError:\n # Python 2.7\n args = inspect.getargspec(method_or_func).args\n return args", "response": "Returns method or function arguments."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef jwt_optional(fn):\n @wraps(fn)\n def wrapper(*args, **kwargs):\n try:\n jwt_data = _decode_jwt_from_headers()\n ctx_stack.top.jwt = jwt_data\n except (NoAuthorizationError, InvalidHeaderError):\n pass\n return fn(*args, **kwargs)\n return wrapper", "response": "Decorator that will decorate a view function with this decorator that will check the request for a valid JWT and put it into the application context."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef decode_jwt(encoded_token):\n secret = 
config.decode_key\n algorithm = config.algorithm\n audience = config.audience\n return jwt.decode(encoded_token, secret, algorithms=[algorithm], audience=audience)", "response": "Decodes an encoded JWT string and returns its claims."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef init_app(self, app):\n # Save this so we can use it later in the extension\n if not hasattr(app, 'extensions'): # pragma: no cover\n app.extensions = {}\n app.extensions['flask-jwt-simple'] = self\n\n # Set all the default configurations for this extension\n self._set_default_configuration_options(app)\n self._set_error_handler_callbacks(app)\n\n # Set propagate exceptions, so all of our error handlers properly\n # work in production\n app.config['PROPAGATE_EXCEPTIONS'] = True", "response": "Register this extension with the Flask app."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the error handler callbacks used by this extension.", "response": "def _set_error_handler_callbacks(self, app):\n \"\"\"\n Sets the error handler callbacks used by this extension\n \"\"\"\n @app.errorhandler(NoAuthorizationError)\n def handle_no_auth_error(e):\n return self._unauthorized_callback(str(e))\n\n @app.errorhandler(InvalidHeaderError)\n def handle_invalid_header_error(e):\n return self._invalid_token_callback(str(e))\n\n @app.errorhandler(jwt.ExpiredSignatureError)\n def handle_expired_error(e):\n return self._expired_token_callback()\n\n @app.errorhandler(jwt.InvalidTokenError)\n def handle_invalid_token_error(e):\n return self._invalid_token_callback(str(e))"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _set_default_configuration_options(app):\n # Options for JWTs when the TOKEN_LOCATION is headers\n app.config.setdefault('JWT_HEADER_NAME', 'Authorization')\n app.config.setdefault('JWT_HEADER_TYPE', 'Bearer')\n\n # How long a token created 
with 'create_jwt' will last before\n # it expires (when using the default jwt_data_callback function).\n app.config.setdefault('JWT_EXPIRES', datetime.timedelta(hours=1))\n\n # What algorithm to use to sign the token. See here for a list of options:\n # https://github.com/jpadilla/pyjwt/blob/master/jwt/api_jwt.py\n app.config.setdefault('JWT_ALGORITHM', 'HS256')\n\n # Key that acts as the identity for the JWT\n app.config.setdefault('JWT_IDENTITY_CLAIM', 'sub')\n\n # Expected value of the audience claim\n app.config.setdefault('JWT_DECODE_AUDIENCE', None)\n\n # Secret key to sign JWTs with. Only used if a symmetric algorithm is\n # used (such as the HS* algorithms).\n app.config.setdefault('JWT_SECRET_KEY', None)\n\n # Keys to sign JWTs with when use when using an asymmetric\n # (public/private key) algorithms, such as RS* or EC*\n app.config.setdefault('JWT_PRIVATE_KEY', None)\n app.config.setdefault('JWT_PUBLIC_KEY', None)", "response": "Sets the default configuration options used by this extension."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef releases(release_id=None, **kwargs):\n if not 'id' in kwargs and release_id is not None:\n kwargs['release_id'] = release_id\n return Fred().release(**kwargs)\n return Fred().releases(**kwargs)", "response": "Get all releases of economic data."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets an economic data series.", "response": "def series(identifier=None, **kwargs):\n \"\"\"Get an economic data series.\"\"\"\n if identifier:\n kwargs['series_id'] = identifier\n if 'release' in kwargs:\n kwargs.pop('release')\n path = 'release'\n elif 'releases' in kwargs:\n kwargs.pop('releases')\n path = 'release'\n else:\n path = None\n return Fred().series(path, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef source(source_id=None, **kwargs):\n if source_id 
is not None:\n kwargs['source_id'] = source_id\n elif 'id' in kwargs:\n source_id = kwargs.pop('id')\n kwargs['source_id'] = source_id\n if 'releases' in kwargs:\n kwargs.pop('releases')\n path = 'releases'\n else:\n path = None\n return Fred().source(path, **kwargs)", "response": "Get a source of economic data."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the sources of economic data.", "response": "def sources(source_id=None, **kwargs):\n \"\"\"Get the sources of economic data.\"\"\"\n if source_id or 'id' in kwargs:\n return source(source_id, **kwargs)\n return Fred().sources(**kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate the URL path with the Fred endpoint and given arguments.", "response": "def _create_path(self, *args):\n \"\"\"Create the URL path with the Fred endpoint and given arguments.\"\"\"\n args = filter(None, args)\n path = self.endpoint + '/'.join(args)\n return path"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nperforms a GET request against the Fred API endpoint.", "response": "def get(self, *args, **kwargs):\n \"\"\"Perform a GET request against the Fred API endpoint.\"\"\"\n location = args[0]\n params = self._get_keywords(location, kwargs)\n url = self._create_path(*args)\n request = requests.get(url, params=params)\n content = request.content\n self._request = request\n return self._output(content)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _get_keywords(self, location, keywords):\n if 'xml' in keywords:\n keywords.pop('xml')\n self.xml = True\n else:\n keywords['file_type'] = 'json'\n if 'id' in keywords:\n if location != 'series':\n location = location.rstrip('s')\n key = '%s_id' % location\n value = keywords.pop('id')\n keywords[key] = value\n if 'start' in keywords:\n time = keywords.pop('start')\n keywords['realtime_start'] = 
time\n if 'end' in keywords:\n time = keywords.pop('end')\n keywords['realtime_end'] = time\n if 'sort' in keywords:\n order = keywords.pop('sort')\n keywords['sort_order'] = order\n keywords['api_key'] = self.api_key\n return keywords", "response": "Format GET request's parameters from keywords."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef item_extra_kwargs(self, item):\n if use_feed_image:\n feed_image = item.feed_image\n if feed_image:\n image_complete_url = urljoin(\n self.get_site_url(), feed_image.file.url\n )\n else:\n image_complete_url = \"\"\n\n content_field = getattr(item, self.item_content_field)\n try:\n content = expand_db_html(content_field)\n except:\n content = content_field.__html__()\n\n soup = BeautifulSoup(content, 'html.parser')\n # Remove style attribute to remove large bottom padding\n for div in soup.find_all(\"div\", {'class': 'responsive-object'}):\n del div['style']\n # Add site url to image source\n for img_tag in soup.findAll('img'):\n if img_tag.has_attr('src'):\n img_tag['src'] = urljoin(self.get_site_url(), img_tag['src'])\n\n fields_to_add = {\n 'content': soup.prettify(formatter=\"html\"),\n }\n\n if use_feed_image:\n fields_to_add['image'] = image_complete_url\n else:\n fields_to_add['image'] = \"\"\n\n return fields_to_add", "response": "Returns a dictionary that is used with\n the add_item call of the custom feed generator."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef treenav_undefined_url(request, item_slug):\n item = get_object_or_404(treenav.MenuItem, slug=item_slug) # noqa\n # do something with item here and return an HttpResponseRedirect\n raise Http404", "response": "This is a sample view that can be used to provide custom handlers for undefined menu items on a per-item basis."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function 
does\ndef refresh_hrefs(self, request):\n for item in treenav.MenuItem.objects.all():\n item.save() # refreshes the HREF\n self.message_user(request, _('Menu item HREFs refreshed successfully.'))\n info = self.model._meta.app_label, self.model._meta.model_name\n changelist_url = reverse('admin:%s_%s_changelist' % info, current_app=self.admin_site.name)\n return redirect(changelist_url)", "response": "Refresh all the cached menu item HREFs in the database."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nremoves all MenuItems from Cache.", "response": "def clean_cache(self, request):\n \"\"\"\n Remove all MenuItems from Cache.\n \"\"\"\n treenav.delete_cache()\n self.message_user(request, _('Cache menuitem cache cleaned successfully.'))\n info = self.model._meta.app_label, self.model._meta.model_name\n changelist_url = reverse('admin:%s_%s_changelist' % info, current_app=self.admin_site.name)\n return redirect(changelist_url)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef rebuild_tree(self, request):\n '''\n Rebuilds the tree and clears the cache.\n '''\n self.model.objects.rebuild()\n self.message_user(request, _('Menu Tree Rebuilt.'))\n return self.clean_cache(request)", "response": "Rebuilds the tree and clears the cache."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nrebuild the tree after saving items related to parent.", "response": "def save_related(self, request, form, formsets, change):\n \"\"\"\n Rebuilds the tree after saving items related to parent.\n \"\"\"\n super(MenuItemAdmin, self).save_related(request, form, formsets, change)\n self.model.objects.rebuild()"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _calculate_dispersion(X: Union[pd.DataFrame, np.ndarray], labels: np.ndarray, centroids: np.ndarray) -> float:\n disp = 
np.sum(np.sum([np.abs(inst - centroids[label]) ** 2 for inst, label in zip(X, labels)])) # type: float\n return disp", "response": "Calculate the dispersion between actual points and their assigned centroids"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncalculates the gap value of the given data, n_refs, and n_clusters, and returns the resulting gap value and n_clusters.", "response": "def _calculate_gap(self, X: Union[pd.DataFrame, np.ndarray], n_refs: int, n_clusters: int) -> Tuple[float, int]:\n \"\"\"\n Calculate the gap value of the given data, n_refs, and number of clusters.\n Return the resulting gap value and n_clusters\n \"\"\"\n # Holder for reference dispersion results\n ref_dispersions = np.zeros(n_refs) # type: np.ndarray\n\n # For n_references, generate random sample and perform kmeans getting resulting dispersion of each loop\n for i in range(n_refs):\n\n # Create new random reference set\n random_data = np.random.random_sample(size=X.shape) # type: np.ndarray\n\n # Fit to it, getting the centroids and labels, and add to accumulated reference dispersions array.\n centroids, labels = kmeans2(data=random_data,\n k=n_clusters,\n iter=10,\n minit='points') # type: Tuple[np.ndarray, np.ndarray]\n dispersion = self._calculate_dispersion(X=random_data, labels=labels, centroids=centroids) # type: float\n ref_dispersions[i] = dispersion\n\n # Fit cluster to original data and create dispersion calc.\n centroids, labels = kmeans2(data=X, k=n_clusters, iter=10, minit='points')\n dispersion = self._calculate_dispersion(X=X, labels=labels, centroids=centroids)\n\n # Calculate gap statistic\n gap_value = np.mean(np.log(ref_dispersions)) - np.log(dispersion)\n\n return gap_value, int(n_clusters)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _process_with_rust(self, X: Union[pd.DataFrame, np.ndarray], n_refs: int, cluster_array: np.ndarray):\n from 
gap_statistic.rust import gapstat\n for label, gap_value in gapstat.optimal_k(X, list(cluster_array)):\n yield (gap_value, label)", "response": "Process gap stat using pure Rust."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _process_with_joblib(self, X: Union[pd.DataFrame, np.ndarray], n_refs: int, cluster_array: np.ndarray):\n if Parallel is None:\n raise EnvironmentError('joblib is not installed; cannot use joblib as the parallel backend!')\n\n with Parallel(n_jobs=self.n_jobs) as parallel:\n for gap_value, n_clusters in parallel(delayed(self._calculate_gap)(X, n_refs, n_clusters)\n for n_clusters in cluster_array):\n yield (gap_value, n_clusters)", "response": "Process calling of the _calculate_gap method using joblib."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _process_with_multiprocessing(self, X: Union[pd.DataFrame, np.ndarray], n_refs: int, cluster_array: np.ndarray):\n with ProcessPoolExecutor(max_workers=self.n_jobs) as executor:\n\n jobs = [executor.submit(self._calculate_gap, X, n_refs, n_clusters)\n for n_clusters in cluster_array\n ]\n\n for future in as_completed(jobs):\n gap_value, k = future.result()\n yield (gap_value, k)", "response": "Process calling of the _calculate_gap method using the multiprocessing library."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _process_non_parallel(self, X: Union[pd.DataFrame, np.ndarray], n_refs: int, cluster_array: np.ndarray):\n for gap_value, n_clusters in [self._calculate_gap(X, n_refs, n_clusters)\n for n_clusters in cluster_array]:\n yield (gap_value, n_clusters)", "response": "Process calling of the _calculate_gap method using no parallel backend; a simple for loop."}
{"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the last migration batch.", "response": "def get_last(self):\n \"\"\"\n Get the last migration batch.\n\n :rtype: list\n \"\"\"\n query = self.table().where('batch', self.get_last_batch_number())\n\n return query.order_by('migration', 'desc').get()"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef compile_insert(self, query, values):\n # Essentially we will force every insert to be treated as a batch insert which\n # simply makes creating the SQL easier for us since we can utilize the same\n # basic routine regardless of an amount of records given to us to insert.\n table = self.wrap_table(query.from__)\n\n if not isinstance(values, list):\n values = [values]\n\n columns = self.columnize(values[0].keys())\n\n # We need to build a list of parameter place-holders of values that are bound\n # to the query. 
Each insert should have the exact same amount of parameter\n # bindings so we can just go off the first list of values in this array.\n parameters = self.parameterize(values[0].values())\n\n value = ['(%s)' % parameters] * len(values)\n\n parameters = ', '.join(value)\n\n return 'INSERT INTO %s (%s) VALUES %s' % (table, columns, parameters)", "response": "Compile an insert SQL statement for the given values."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_alter_table_sql(self, diff):\n sql = []\n\n for column_diff in diff.changed_columns.values():\n if self.is_unchanged_binary_column(column_diff):\n continue\n\n old_column_name = column_diff.old_column_name\n column = column_diff.column\n\n if any([column_diff.has_changed('type'),\n column_diff.has_changed('precision'),\n column_diff.has_changed('scale'),\n column_diff.has_changed('fixed')]):\n query = 'ALTER ' + old_column_name + ' TYPE ' + self.get_sql_type_declaration(column.to_dict())\n sql.append('ALTER TABLE ' + diff.name + ' ' + query)\n\n if column_diff.has_changed('default') or column_diff.has_changed('type'):\n if column.get_default() is None:\n default_clause = ' DROP DEFAULT'\n else:\n default_clause = ' SET' + self.get_default_value_declaration_sql(column.to_dict())\n\n query = 'ALTER ' + old_column_name + default_clause\n sql.append('ALTER TABLE ' + diff.name + ' ' + query)\n\n if column_diff.has_changed('notnull'):\n op = 'DROP'\n if column.get_notnull():\n op = 'SET'\n\n query = 'ALTER ' + old_column_name + ' ' + op + ' NOT NULL'\n sql.append('ALTER TABLE ' + diff.name + ' ' + query)\n\n if column_diff.has_changed('autoincrement'):\n if column.get_autoincrement():\n seq_name = self.get_identity_sequence_name(diff.name, old_column_name)\n\n sql.append('CREATE SEQUENCE ' + seq_name)\n sql.append('SELECT setval(\\'' + seq_name + '\\', '\n '(SELECT MAX(' + old_column_name + ') FROM ' + diff.name + '))')\n query = 'ALTER ' + 
old_column_name + ' SET DEFAULT nextval(\\'' + seq_name + '\\')'\n sql.append('ALTER TABLE ' + diff.name + ' ' + query)\n else:\n query = 'ALTER ' + old_column_name + ' DROP DEFAULT'\n sql.append('ALTER TABLE ' + diff.name + ' ' + query)\n\n if column_diff.has_changed('length'):\n query = 'ALTER ' + old_column_name + ' TYPE ' + self.get_sql_type_declaration(column.to_dict())\n sql.append('ALTER TABLE ' + diff.name + ' ' + query)\n\n for old_column_name, column in diff.renamed_columns.items():\n sql.append('ALTER TABLE ' + diff.name + ' '\n 'RENAME COLUMN ' + old_column_name + ' TO ' + column.get_name())\n\n return sql", "response": "Get the ALTER TABLE SQL statement for the table diff."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _date_based_where(self, type, query, where):\n value = str(where['value']).zfill(2)\n value = self.parameter(value)\n\n return 'strftime(\\'%s\\', %s) %s %s'\\\n % (type, self.wrap(where['column']),\n where['operator'], value)", "response": "Compile a date based where clause"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef compile_select(self, query):\n sql = super(MySqlQueryGrammar, self).compile_select(query)\n\n if query.unions:\n sql = '(%s) %s' % (sql, self._compile_unions(query))\n\n return sql", "response": "Compile a select query into SQL\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncompiling the lock into SQL", "response": "def _compile_lock(self, query, value):\n \"\"\"\n Compile the lock into SQL\n\n :param query: A QueryBuilder instance\n :type query: QueryBuilder\n\n :param value: The lock value\n :type value: bool or str\n\n :return: The compiled lock\n :rtype: str\n \"\"\"\n if isinstance(value, basestring):\n return value\n\n if value is True:\n return 'FOR UPDATE'\n elif value is False:\n return 'LOCK IN SHARE MODE'"} {"SOURCE": "codesearchnet", "instruction": "Can 
you generate the documentation for the following Python 3 function\ndef compile_update(self, query, values):\n sql = super(MySqlQueryGrammar, self).compile_update(query, values)\n\n if query.orders:\n sql += ' %s' % self._compile_orders(query, query.orders)\n\n if query.limit_:\n sql += ' %s' % self._compile_limit(query, query.limit_)\n\n return sql.rstrip()", "response": "Compile an update statement into SQL"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef collapse(self):\n results = []\n\n if isinstance(self._items, dict):\n items = self._items.values()\n\n for values in items:\n if isinstance(values, Collection):\n values = values.all()\n\n results += values\n\n return Collection(results)", "response": "Collapse the collection items into a single element"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ndetermine if an element is in the collection.", "response": "def contains(self, key, value=None):\n \"\"\"\n Determine if an element is in the collection\n\n :param key: The element\n :type key: int or str\n\n :param value: The value of the element\n :type value: mixed\n\n :return: Whether the element is in the collection\n :rtype: bool\n \"\"\"\n if value is not None:\n if isinstance(self._items, list):\n return key in self._items and self._items[self._items.index(key)] == value\n\n return self._items.get(key) == value\n\n return key in self._items"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget a list with the values of a given key.", "response": "def lists(self, value, key=None):\n \"\"\"\n Get a list with the values of a given key\n\n :rtype: list\n \"\"\"\n results = map(lambda x: x[value], self._items)\n\n return list(results)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nruns a map over each of the items in the collection.", "response": "def map(self, callback):\n \"\"\"\n Run a map over 
each of the item.\n\n :param callback: The map function\n :type callback: callable\n\n :rtype: Collection\n \"\"\"\n if isinstance(self._items, dict):\n return Collection(list(map(callback, self._items.values())))\n\n return Collection(list(map(callback, self._items)))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef unique(self):\n seen = set()\n seen_add = seen.add\n\n return Collection([x for x in self._items if not (x in seen or seen_add(x))])", "response": "Return only unique items from the collection list."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nload a set of relationships onto the collection.", "response": "def load(self, *relations):\n \"\"\"\n Load a set of relationships onto the collection.\n \"\"\"\n if len(self._items) > 0:\n query = self.first().new_query().with_(*relations)\n\n self._items = query.eager_load_relations(self._items)\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncompiles a foreign key command.", "response": "def compile_foreign(self, blueprint, command, _):\n \"\"\"\n Compile a foreign key command.\n\n :param blueprint: The blueprint\n :type blueprint: Blueprint\n\n :param command: The command\n :type command: Fluent\n\n :rtype: str\n \"\"\"\n table = self.wrap_table(blueprint)\n\n on = self.wrap_table(command.on)\n\n columns = self.columnize(command.columns)\n\n on_columns = self.columnize(command.references\n if isinstance(command.references, list)\n else [command.references])\n\n sql = 'ALTER TABLE %s ADD CONSTRAINT %s ' % (table, command.index)\n\n sql += 'FOREIGN KEY (%s) REFERENCES %s (%s)' % (columns, on, on_columns)\n\n if command.get('on_delete'):\n sql += ' ON DELETE %s' % command.on_delete\n\n if command.get('on_update'):\n sql += ' ON UPDATE %s' % command.on_update\n\n return sql"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following 
Python 3 function does\ndef _get_columns(self, blueprint):\n columns = []\n\n for column in blueprint.get_added_columns():\n sql = self.wrap(column) + ' ' + self._get_type(column)\n\n columns.append(self._add_modifiers(sql, blueprint, column))\n\n return columns", "response": "Get the blueprint's column definitions."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _add_modifiers(self, sql, blueprint, column):\n for modifier in self._modifiers:\n method = '_modify_%s' % modifier\n\n if hasattr(self, method):\n sql += getattr(self, method)(blueprint, column)\n\n return sql", "response": "Add the column modifiers to the SQL."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the pivot attributes from a model.", "response": "def _clean_pivot_attributes(self, model):\n \"\"\"\n Get the pivot attributes from a model.\n\n :type model: eloquent.Model\n \"\"\"\n values = {}\n delete_keys = []\n\n for key, value in model.get_attributes().items():\n if key.find('pivot_') == 0:\n values[key[6:]] = value\n\n delete_keys.append(key)\n\n for key in delete_keys:\n delattr(model, key)\n\n return values"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nreturn a QuerySet that will add the constraints for a relationship count query on the same table.", "response": "def get_relation_count_query_for_self_join(self, query, parent):\n \"\"\"\n Add the constraints for a relationship count query on the same table.\n\n :type query: eloquent.orm.Builder\n :type parent: eloquent.orm.Builder\n\n :rtype: eloquent.orm.Builder\n \"\"\"\n query.select(QueryExpression('COUNT(*)'))\n\n table_prefix = self._query.get_query().get_connection().get_table_prefix()\n\n hash_ = self.get_relation_count_hash()\n query.from_('%s AS %s%s' % (self._table, table_prefix, hash_))\n\n key = self.wrap(self.get_qualified_parent_key_name())\n\n return query.where('%s.%s' % (hash_, 
self._foreign_key), '=', QueryExpression(key))"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _get_select_columns(self, columns=None):\n if columns == ['*'] or columns is None:\n columns = ['%s.*' % self._related.get_table()]\n\n return columns + self._get_aliased_pivot_columns()", "response": "Set the select clause for the relation query."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the pivot columns for the relation.", "response": "def _get_aliased_pivot_columns(self):\n \"\"\"\n Get the pivot columns for the relation.\n\n :rtype: list\n \"\"\"\n defaults = [self._foreign_key, self._other_key]\n\n columns = []\n\n for column in defaults + self._pivot_columns:\n value = '%s.%s AS pivot_%s' % (self._table, column, column)\n if value not in columns:\n columns.append('%s.%s AS pivot_%s' % (self._table, column, column))\n\n return columns"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsetting the join clause for the relation query.", "response": "def _set_join(self, query=None):\n \"\"\"\n Set the join clause for the relation query.\n\n :param query: The query builder\n :type query: eloquent.orm.Builder\n\n :return: self\n :rtype: BelongsToMany\n \"\"\"\n if not query:\n query = self._query\n\n base_table = self._related.get_table()\n\n key = '%s.%s' % (base_table, self._related.get_key_name())\n\n query.join(self._table, key, '=', self.get_other_key())\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nmatch the eagerly loaded results to their parents.", "response": "def match(self, models, results, relation):\n \"\"\"\n Match the eagerly loaded results to their parents.\n\n :type models: list\n :type results: Collection\n :type relation: str\n \"\"\"\n dictionary = self._build_dictionary(results)\n\n for model in models:\n key = model.get_key()\n\n if key in dictionary:\n 
collection = self._related.new_collection(dictionary[key])\n\n model.set_relation(relation, collection)\n\n return models"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef save(self, model, joining=None, touch=True):\n if joining is None:\n joining = {}\n\n model.save({'touch': False})\n\n self.attach(model.get_key(), joining, touch)\n\n return model", "response": "Save a new model and attach it to the parent model."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _attach_new(self, records, current, touch=True):\n changes = {\n 'attached': [],\n 'updated': []\n }\n\n for id, attributes in records.items():\n if id not in current:\n self.attach(id, attributes, touch)\n\n changes['attached'].append(id)\n elif len(attributes) > 0 and self.update_existing_pivot(id, attributes, touch):\n changes['updated'].append(id)\n\n return changes", "response": "Attach all of the IDs that aren't in the current dict."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates an existing pivot record on the table.", "response": "def update_existing_pivot(self, id, attributes, touch=True):\n \"\"\"\n Update an existing pivot record on the table.\n \"\"\"\n if self.updated_at() in self._pivot_columns:\n attributes = self.set_timestamps_on_attach(attributes, True)\n\n updated = self._new_pivot_statement_for_id(id).update(attributes)\n\n if touch:\n self.touch_if_touching()\n\n return updated"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef query(self):\n query = QueryBuilder(self, self._query_grammar, self._post_processor)\n\n return query", "response": "Begin a fluent query"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nforce a hard delete on a soft deleted model.", "response": "def force_delete(self):\n \"\"\"\n Force a hard delete on 
a soft deleted model.\n \"\"\"\n self._force_deleting = True\n\n self.delete()\n\n self._force_deleting = False"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nperform the actual delete query on this instance.", "response": "def _do_perform_delete_on_model(self):\n \"\"\"\n Perform the actual delete query on this model instance.\n \"\"\"\n if self._force_deleting:\n return self.with_trashed().where(self.get_key_name(), self.get_key()).force_delete()\n\n return self._run_soft_delete()"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef restore(self):\n setattr(self, self.get_deleted_at_column(), None)\n\n self.set_exists(True)\n\n result = self.save()\n\n return result", "response": "Restore a soft - deleted model instance."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds a join clause to the query", "response": "def join(self, table, one=None,\n operator=None, two=None, type='inner', where=False):\n \"\"\"\n Add a join clause to the query\n\n :param table: The table to join with, can also be a JoinClause instance\n :type table: str or JoinClause\n\n :param one: The first column of the join condition\n :type one: str\n\n :param operator: The operator of the join condition\n :type operator: str\n\n :param two: The second column of the join condition\n :type two: str\n\n :param type: The join type\n :type type: str\n\n :param where: Whether to use a \"where\" rather than a \"on\"\n :type where: bool\n\n :return: The current QueryBuilder instance\n :rtype: QueryBuilder\n \"\"\"\n if isinstance(table, JoinClause):\n self.joins.append(table)\n else:\n if one is None:\n raise ArgumentError('Missing \"one\" argument')\n\n join = JoinClause(table, type)\n\n self.joins.append(join.on(\n one, operator, two, 'and', where\n ))\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the 
following Python 3 code\ndef join_where(self, table, one, operator, two, type='inner'):\n return self.join(table, one, operator, two, type, True)", "response": "Add a join where clause to the current QueryBuilder instance"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nadds a where in with a sub select to the query", "response": "def _where_in_sub(self, column, query, boolean, negate=False):\n \"\"\"\n Add a where in with a sub select to the query\n\n :param column: The column\n :type column: str\n\n :param query: A QueryBuilder instance\n :type query: QueryBuilder\n\n :param boolean: The boolean operator\n :type boolean: str\n\n :param negate: Whether it is a not where in\n :param negate: bool\n\n :return: The current QueryBuilder instance\n :rtype: QueryBuilder\n \"\"\"\n if negate:\n type = 'not_in_sub'\n else:\n type = 'in_sub'\n\n self.wheres.append({\n 'type': type,\n 'column': column,\n 'query': query,\n 'boolean': boolean\n })\n\n self.merge_bindings(query)\n\n return self"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef or_having(self, column, operator=None, value=None):\n return self.having(column, operator, value, 'or')", "response": "Add a having clause to the current QueryBuilder instance"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef union(self, query, all=False):\n self.unions.append({\n 'query': query,\n 'all': all\n })\n\n return self.merge_bindings(query)", "response": "Adds a union statement to the query"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nexecuting a query for a single record by id", "response": "def find(self, id, columns=None):\n \"\"\"\n Execute a query for a single record by id\n\n :param id: The id of the record to retrieve\n :type id: mixed\n\n :param columns: The columns of the record to retrive\n :type columns: list\n\n :return: 
mixed\n :rtype: mixed\n \"\"\"\n if not columns:\n columns = ['*']\n\n return self.where('id', '=', id).first(1, columns)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef first(self, limit=1, columns=None):\n if not columns:\n columns = ['*']\n\n results = self.take(limit).get(columns)\n\n if len(results) > 0:\n return results[0]\n\n return", "response": "Execute the query and get the first result"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_fresh(self, columns=None):\n if not columns:\n columns = ['*']\n\n if not self.columns:\n self.columns = columns\n\n return self._processor.process_select(self, self._run_select())", "response": "Execute the query as a fresh select statement and return the result."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef _get_list_select(self, column, key=None):\n if key is None:\n elements = [column]\n else:\n elements = [column, key]\n\n select = []\n for elem in elements:\n dot = elem.find('.')\n\n if dot >= 0:\n select.append(elem[dot + 1:])\n else:\n select.append(elem)\n\n return select", "response": "Returns the select columns used to build a list of key-value pairs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef increment(self, column, amount=1, extras=None):\n wrapped = self._grammar.wrap(column)\n\n if extras is None:\n extras = {}\n\n columns = {\n column: self.raw('%s + %s' % (wrapped, amount))\n }\n columns.update(extras)\n\n return self.update(**columns)", "response": "Increment a column's value by a given amount"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef delete(self, id=None):\n if id is not None:\n self.where('id', '=', id)\n\n sql = self._grammar.compile_delete(self)\n\n return 
self._connection.delete(sql, self.get_bindings())", "response": "Delete a record from the database."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nmerging a list of where clauses and bindings into the current where list", "response": "def merge_wheres(self, wheres, bindings):\n \"\"\"\n Merge a list of where clauses and bindings\n\n :param wheres: A list of where clauses\n :type wheres: list\n\n :param bindings: A list of bindings\n :type bindings: list\n\n :rtype: None\n \"\"\"\n self.wheres = self.wheres + wheres\n self._bindings['where'] = self._bindings['where'] + bindings"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nexecutes the migrate make command.", "response": "def execute(self, i, o):\n \"\"\"\n Executes the command.\n\n :type i: cleo.inputs.input.Input\n :type o: cleo.outputs.output.Output\n \"\"\"\n super(MigrateMakeCommand, self).execute(i, o)\n\n creator = MigrationCreator()\n\n name = i.get_argument('name')\n table = i.get_option('table')\n create = bool(i.get_option('create'))\n\n if not table and create is not False:\n table = create\n\n path = i.get_option('path')\n if path is None:\n path = self._get_migration_path()\n\n file_ = self._write_migration(creator, name, table, create, path)\n\n o.writeln('Create migration: %s' % file_)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef remove(self, builder, model):\n column = model.get_qualified_deleted_at_column()\n\n query = builder.get_query()\n\n wheres = []\n for where in query.wheres:\n # If the where clause is a soft delete date constraint,\n # we will remove it from the query and reset the keys\n # on the wheres. 
This allows the developer to include\n # deleted model in a relationship result set that is lazy loaded.\n if not self._is_soft_delete_constraint(where, column):\n wheres.append(where)\n\n query.wheres = wheres", "response": "Removes the scope from a given builder."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nextend the query builder with the needed functions.", "response": "def extend(self, builder):\n \"\"\"\n Extend the query builder with the needed functions.\n\n :param builder: The query builder\n :type builder: eloquent.orm.builder.Builder\n \"\"\"\n for extension in self._extensions:\n getattr(self, '_add_%s' % extension)(builder)\n\n builder.on_delete(self._on_delete)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _only_trashed(self, builder):\n model = builder.get_model()\n\n self.remove(builder, model)\n\n builder.get_query().where_not_null(model.get_qualified_deleted_at_column())", "response": "The only - trashed extension."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _add_fluent_indexes(self):\n for column in self._columns:\n for index in ['primary', 'unique', 'index']:\n column_index = column.get(index)\n\n if column_index is True:\n getattr(self, index)(column.name)\n\n break\n elif column_index:\n getattr(self, index)(column.name, column_index)\n\n break", "response": "Add the index commands fluently specified on columns"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef big_integer(self, column, auto_increment=False, unsigned=False):\n return self._add_column('big_integer', column,\n auto_increment=auto_increment,\n unsigned=unsigned)", "response": "Create a new big integer column on the table."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 
function\ndef medium_integer(self, column, auto_increment=False, unsigned=False):\n return self._add_column('medium_integer', column,\n auto_increment=auto_increment,\n unsigned=unsigned)", "response": "Create a new medium integer column on the table."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a new tiny integer column on the table.", "response": "def tiny_integer(self, column, auto_increment=False, unsigned=False):\n \"\"\"\n Create a new tiny integer column on the table.\n\n :param column: The column\n :type column: str\n\n :type auto_increment: bool\n\n :type unsigned: bool\n\n :rtype: Fluent\n \"\"\"\n return self._add_column('tiny_integer', column,\n auto_increment=auto_increment,\n unsigned=unsigned)"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a new column to the blueprint.", "response": "def _add_column(self, type, name, **parameters):\n \"\"\"\n Add a new column to the blueprint.\n\n :param type: The column type\n :type type: str\n\n :param name: The column name\n :type name: str\n\n :param parameters: The column parameters\n :type parameters: dict\n\n :rtype: Fluent\n \"\"\"\n parameters.update({\n 'type': type,\n 'name': name\n })\n\n column = Fluent(**parameters)\n self._columns.append(column)\n\n return column"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget the ALTER TABLE SQL statement for the given table diff.", "response": "def get_alter_table_sql(self, diff):\n \"\"\"\n Get the ALTER TABLE SQL statement\n\n :param diff: The table diff\n :type diff: eloquent.dbal.table_diff.TableDiff\n\n :rtype: list\n \"\"\"\n #sql = self._get_simple_alter_table_sql(diff)\n\n from_table = diff.from_table\n if not isinstance(from_table, Table):\n raise Exception('SQLite platform requires for the alter table the table diff '\n 'referencing the original table')\n\n table = from_table.clone()\n columns = {}\n old_column_names = {}\n 
new_column_names = {}\n column_sql = []\n for column_name, column in table.get_columns().items():\n column_name = column_name.lower()\n columns[column_name] = column\n old_column_names[column_name] = column.get_name()\n new_column_names[column_name] = column.get_name()\n\n for column_name, column in diff.removed_columns.items():\n column_name = column_name.lower()\n if column_name in columns:\n del columns[column_name]\n del old_column_names[column_name]\n del new_column_names[column_name]\n\n for old_column_name, column in diff.renamed_columns.items():\n old_column_name = old_column_name.lower()\n if old_column_name in columns:\n del columns[old_column_name]\n\n columns[column.get_name().lower()] = column\n\n if old_column_name in new_column_names:\n new_column_names[old_column_name] = column.get_name()\n\n for old_column_name, column_diff in diff.changed_columns.items():\n if old_column_name in columns:\n del columns[old_column_name]\n\n columns[column_diff.column.get_name().lower()] = column_diff.column\n\n if old_column_name in new_column_names:\n new_column_names[old_column_name] = column_diff.column.get_name()\n\n for column_name, column in diff.added_columns.items():\n columns[column_name.lower()] = column\n\n sql = []\n table_sql = []\n\n data_table = Table('__temp__' + table.get_name())\n\n new_table = Table(table.get_name(), columns,\n self.get_primary_index_in_altered_table(diff),\n self.get_foreign_keys_in_altered_table(diff))\n new_table.add_option('alter', True)\n\n sql = self.get_pre_alter_table_index_foreign_key_sql(diff)\n sql.append('CREATE TEMPORARY TABLE %s AS SELECT %s FROM %s'\n % (data_table.get_name(), ', '.join(old_column_names.values()), table.get_name()))\n sql.append(self.get_drop_table_sql(from_table))\n\n sql += self.get_create_table_sql(new_table)\n sql.append('INSERT INTO %s (%s) SELECT %s FROM %s'\n % (new_table.get_name(),\n ', '.join(new_column_names.values()),\n ', '.join(old_column_names.values()),\n data_table.get_name()))\n 
sql.append(self.get_drop_table_sql(data_table))\n\n sql += self.get_post_alter_table_index_foreign_key_sql(diff)\n\n return sql"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_foreign_keys_in_altered_table(self, diff):\n foreign_keys = diff.from_table.get_foreign_keys()\n column_names = self.get_column_names_in_altered_table(diff)\n\n for key, constraint in foreign_keys.items():\n changed = False\n local_columns = []\n for column_name in constraint.get_local_columns():\n normalized_column_name = column_name.lower()\n if normalized_column_name not in column_names:\n del foreign_keys[key]\n break\n else:\n local_columns.append(column_names[normalized_column_name])\n if column_name != column_names[normalized_column_name]:\n changed = True\n\n if changed:\n pass\n\n return foreign_keys", "response": "Returns a list of foreign keys that are not in the table that have been modified."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds the constraints for a relationship count query.", "response": "def get_relation_count_query(self, query, parent):\n \"\"\"\n Add the constraints for a relationship count query.\n\n :type query: eloquent.orm.Builder\n :type parent: eloquent.orm.Builder\n\n :rtype: Builder\n \"\"\"\n query.select(QueryExpression('COUNT(*)'))\n\n other_key = self.wrap('%s.%s' % (query.get_model().get_table(), self._other_key))\n\n return query.where(self.get_qualified_foreign_key(), '=', QueryExpression(other_key))"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nmatch the eagerly loaded results to their parents.", "response": "def match(self, models, results, relation):\n \"\"\"\n Match the eagerly loaded results to their parents.\n\n :type models: list\n :type results: Collection\n :type relation: str\n \"\"\"\n foreign = self._foreign_key\n\n other = self._other_key\n\n dictionary = {}\n\n for result in results:\n 
dictionary[result.get_attribute(other)] = result\n\n for model in models:\n value = model.get_attribute(foreign)\n\n if value in dictionary:\n model.set_relation(relation, dictionary[value])\n\n return models"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef associate(self, model):\n self._parent.set_attribute(self._foreign_key, model.get_attribute(self._other_key))\n\n return self._parent.set_relation(self._relation, model)", "response": "Associate the given model instance with the given parent."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef dissociate(self):\n self._parent.set_attribute(self._foreign_key, None)\n\n return self._parent.set_relation(self._relation, None)", "response": "Dissociate previously associated model from the given parent."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nexecuting the command. :type i: cleo.inputs.input.Input :type o: cleo.outputs.output.Output", "response": "def execute(self, i, o):\n \"\"\"\n Executes the command.\n\n :type i: cleo.inputs.input.Input\n :type o: cleo.outputs.output.Output\n \"\"\"\n super(StatusCommand, self).execute(i, o)\n\n database = i.get_option('database')\n repository = DatabaseMigrationRepository(self._resolver, 'migrations')\n\n migrator = Migrator(repository, self._resolver)\n\n if not migrator.repository_exists():\n return o.writeln('No migrations found')\n\n self._prepare_database(migrator, database, i, o)\n\n path = i.get_option('path')\n\n if path is None:\n path = self._get_migration_path()\n\n ran = migrator.get_repository().get_ran()\n\n migrations = []\n for migration in migrator._get_migration_files(path):\n if migration in ran:\n migrations.append(['%s' % migration, 'Yes'])\n else:\n migrations.append(['%s' % migration, 'No'])\n\n if migrations:\n table = self.get_helper('table')\n table.set_headers(['Migration', 'Ran?'])\n 
table.set_rows(migrations)\n table.render(o)\n else:\n return o.writeln('No migrations found')\n\n for note in migrator.get_notes():\n o.writeln(note)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef compile_rename_column(self, blueprint, command, connection):\n # The code is a little complex. It will probably change\n # if we support complete diffs in dbal\n sql = []\n\n schema = connection.get_schema_manager()\n table = self.get_table_prefix() + blueprint.get_table()\n\n column = connection.get_column(table, command.from_)\n\n columns = schema.list_table_columns(table)\n indexes = schema.list_table_indexes(table)\n foreign_keys = schema.list_table_foreign_keys(table)\n\n diff = self._get_renamed_diff(blueprint, command, column, schema)\n renamed_columns = diff.renamed_columns\n\n old_column_names = list(map(lambda x: x.get_name(), columns))\n\n # We build the new column names\n new_column_names = []\n for column_name in old_column_names:\n if column_name in renamed_columns:\n new_column_names.append(renamed_columns[column_name].get_name())\n else:\n new_column_names.append(column_name)\n\n # We create a temporary table and insert the data into it\n temp_table = '__temp__' + self.get_table_prefix() + blueprint.get_table()\n sql.append('CREATE TEMPORARY TABLE %s AS SELECT %s FROM %s'\n % (temp_table, self.columnize(old_column_names), table))\n\n # We drop the current table\n sql += Blueprint(table).drop().to_sql(None, self)\n\n # Building the list of new columns\n new_columns = []\n for column in columns:\n for column_name, changed_column in renamed_columns.items():\n if column_name == column.get_name():\n new_columns.append(changed_column)\n\n # Here we will try to rebuild a new blueprint to create a new table\n # with the original name\n new_blueprint = Blueprint(table)\n new_blueprint.create()\n primary = []\n for column in columns:\n # Mapping the database type to the blueprint type\n type = 
schema.get_database_platform().TYPE_MAPPING[column.get_type().lower()]\n\n # If the column is a primary, we will add it to the blueprint later\n if column.get_platform_option('pk'):\n primary.append(column.get_name())\n\n # If the column is not one that's been renamed we reinsert it into the blueprint\n if column.get_name() not in renamed_columns.keys():\n col = getattr(new_blueprint, type)(column.get_name())\n\n # If the column is nullable, we flag it\n if not column.get_notnull():\n col.nullable()\n\n # If the column has a default value, we add it\n if column.get_default() is not None:\n col.default(QueryExpression(column.get_default()))\n\n # Inserting the renamed columns into the blueprint\n for column in new_columns:\n type = schema.get_database_platform().TYPE_MAPPING[column.get_type().lower()]\n\n col = getattr(new_blueprint, type)(column.get_name())\n if not column.get_notnull():\n col.nullable()\n\n if column.get_default() is not None:\n col.default(QueryExpression(column.get_default()))\n\n # We add the primary keys\n if primary:\n new_blueprint.primary(primary)\n\n # We rebuild the indexes\n for index in indexes:\n index_columns = index['columns']\n new_index_columns = []\n index_name = index['name']\n\n for column_name in index_columns:\n if column_name in renamed_columns:\n new_index_columns.append(renamed_columns[column_name].get_name())\n else:\n new_index_columns.append(column_name)\n\n if index_columns != new_index_columns:\n index_name = None\n\n if index['unique']:\n new_blueprint.unique(new_index_columns, index_name)\n else:\n new_blueprint.index(index['columns'], index_name)\n\n for foreign_key in foreign_keys:\n fkey_from = foreign_key['from']\n if fkey_from in renamed_columns:\n fkey_from = renamed_columns[fkey_from].get_name()\n\n new_blueprint.foreign(fkey_from)\\\n .references(foreign_key['to'])\\\n .on(foreign_key['table'])\\\n .on_delete(foreign_key['on_delete'])\\\n .on_update(foreign_key['on_update'])\n\n # We create the table\n sql += 
new_blueprint.to_sql(None, self)\n\n # We reinsert the data into the new table\n sql.append('INSERT INTO %s (%s) SELECT %s FROM %s'\n % (self.wrap_table(table),\n ', '.join(new_column_names),\n self.columnize(old_column_names),\n self.wrap_table(temp_table)\n ))\n\n # Finally we drop the temporary table\n sql += Blueprint(temp_table).drop().to_sql(None, self)\n\n return sql", "response": "Compile a rename column command."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncompiles a change column command into a series of SQL statements.", "response": "def compile_change(self, blueprint, command, connection):\n \"\"\"\n Compile a change column command into a series of SQL statement.\n\n :param blueprint: The blueprint\n :type blueprint: eloquent.schema.Blueprint\n\n :param command: The command\n :type command: Fluent\n\n :param connection: The connection\n :type connection: eloquent.connections.Connection\n\n :rtype: list\n \"\"\"\n sql = []\n\n schema = connection.get_schema_manager()\n table = self.get_table_prefix() + blueprint.get_table()\n\n columns = schema.list_table_columns(table)\n indexes = schema.list_table_indexes(table)\n foreign_keys = schema.list_table_foreign_keys(table)\n\n diff = self._get_changed_diff(blueprint, schema)\n blueprint_changed_columns = blueprint.get_changed_columns()\n changed_columns = diff.changed_columns\n\n temp_table = '__temp__' + self.get_table_prefix() + blueprint.get_table()\n sql.append('CREATE TEMPORARY TABLE %s AS SELECT %s FROM %s'\n % (temp_table, self.columnize(list(map(lambda x: x.get_name(), columns))), table))\n sql += Blueprint(table).drop().to_sql(None, self)\n\n new_columns = []\n for column in columns:\n for column_name, changed_column in changed_columns.items():\n if column_name == column.get_name():\n for blueprint_column in blueprint_changed_columns:\n if blueprint_column.name == column_name:\n new_columns.append(blueprint_column)\n break\n\n break\n\n new_blueprint = 
Blueprint(table)\n new_blueprint.create()\n primary = []\n new_column_names = []\n for column in columns:\n type = schema.get_database_platform().TYPE_MAPPING[column.get_type().lower()]\n\n if column.get_platform_option('pk'):\n primary.append(column.get_name())\n\n if column.get_name() not in changed_columns:\n col = getattr(new_blueprint, type)(column.get_name())\n if not column.get_notnull():\n col.nullable()\n\n new_column_names.append(column.get_name())\n\n for column in new_columns:\n column.change = False\n new_blueprint._add_column(**column.get_attributes())\n new_column_names.append(column.name)\n\n if primary:\n new_blueprint.primary(primary)\n\n for index in indexes:\n if index['unique']:\n new_blueprint.unique(index['columns'], index['name'])\n else:\n new_blueprint.index(index['columns'], index['name'])\n\n for foreign_key in foreign_keys:\n new_blueprint.foreign(foreign_key['from'])\\\n .references(foreign_key['to'])\\\n .on(foreign_key['table'])\\\n .on_delete(foreign_key['on_delete'])\\\n .on_update(foreign_key['on_update'])\n\n sql += new_blueprint.to_sql(None, self)\n sql.append('INSERT INTO %s (%s) SELECT %s FROM %s'\n % (self.wrap_table(table),\n ', '.join(sorted(new_column_names)),\n self.columnize(sorted(list(map(lambda x: x.get_name(), columns)))),\n self.wrap_table(temp_table)\n ))\n sql += Blueprint(temp_table).drop().to_sql(None, self)\n\n return sql"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncompile a create table command.", "response": "def compile_create(self, blueprint, command, _):\n \"\"\"\n Compile a create table command.\n \"\"\"\n columns = ', '.join(self._get_columns(blueprint))\n\n sql = 'CREATE TABLE %s (%s' % (self.wrap_table(blueprint), columns)\n\n sql += self._add_foreign_keys(blueprint)\n\n sql += self._add_primary_keys(blueprint)\n\n return sql + ')'"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nmatching the eargerly loaded resuls to their 
single parents.", "response": "def match_one(self, models, results, relation):\n \"\"\"\n Match the eagerly loaded results to their single parents.\n\n :param models: The parents\n :type models: list\n\n :param results: The results collection\n :type results: Collection\n\n :param relation: The relation\n :type relation: str\n\n :rtype: list\n \"\"\"\n return self._match_one_or_many(models, results, relation, 'one')"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef match_many(self, models, results, relation):\n return self._match_one_or_many(models, results, relation, 'many')", "response": "Match the eagerly loaded results to their many parents."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nmatch the eagerly loaded results to their parents.", "response": "def _match_one_or_many(self, models, results, relation, type):\n \"\"\"\n Match the eagerly loaded results to their parents.\n\n :param models: The parents\n :type models: list\n\n :param results: The results collection\n :type results: Collection\n\n :param relation: The relation\n :type relation: str\n\n :param type: The match type\n :type type: str\n\n :rtype: list\n \"\"\"\n dictionary = self._build_dictionary(results)\n\n for model in models:\n key = model.get_attribute(self._local_key)\n\n if key in dictionary:\n value = self._get_relation_value(dictionary, key, type)\n\n model.set_relation(relation, value)\n\n return models"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the value of the relationship by one or many type.", "response": "def _get_relation_value(self, dictionary, key, type):\n \"\"\"\n Get the value of the relationship by one or many type.\n\n :type dictionary: dict\n :type key: str\n :type type: str\n \"\"\"\n value = dictionary[key]\n\n if type == 'one':\n return value[0]\n\n return self._related.new_collection(value)"} {"SOURCE": "codesearchnet", 
"instruction": "Write a Python 3 script to\nget the first related model record matching the attributes or instantiate it.", "response": "def first_or_new(self, _attributes=None, **attributes):\n \"\"\"\n Get the first related model record matching the attributes or instantiate it.\n\n :param attributes: The attributes\n :type attributes: dict\n\n :rtype: Model\n \"\"\"\n if _attributes is not None:\n attributes.update(_attributes)\n\n instance = self.where(attributes).first()\n\n if instance is None:\n instance = self._related.new_instance()\n instance.set_attribute(self.get_plain_foreign_key(), self.get_parent_key())\n\n return instance"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting the first related record matching the attributes or create it.", "response": "def first_or_create(self, _attributes=None, **attributes):\n \"\"\"\n Get the first related record matching the attributes or create it.\n\n :param attributes: The attributes\n :type attributes: dict\n\n :rtype: Model\n \"\"\"\n if _attributes is not None:\n attributes.update(_attributes)\n\n instance = self.where(attributes).first()\n\n if instance is None:\n instance = self.create(**attributes)\n\n return instance"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nexecute the reset command.", "response": "def execute(self, i, o):\n \"\"\"\n Executes the command.\n\n :type i: cleo.inputs.input.Input\n :type o: cleo.outputs.output.Output\n \"\"\"\n super(ResetCommand, self).execute(i, o)\n\n dialog = self.get_helper('dialog')\n confirm = dialog.ask_confirmation(\n o,\n 'Are you sure you want to reset all of the migrations? 
',\n False\n )\n if not confirm:\n return\n\n database = i.get_option('database')\n repository = DatabaseMigrationRepository(self._resolver, 'migrations')\n\n migrator = Migrator(repository, self._resolver)\n\n self._prepare_database(migrator, database, i, o)\n\n pretend = bool(i.get_option('pretend'))\n\n path = i.get_option('path')\n\n if path is None:\n path = self._get_migration_path()\n\n while True:\n count = migrator.rollback(path, pretend)\n\n for note in migrator.get_notes():\n o.writeln(note)\n\n if count == 0:\n break"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_relation_count_query(self, query, parent):\n query = super(MorphToMany, self).get_relation_count_query(query, parent)\n\n return query.where('%s.%s' % (self._table, self._morph_type), self._morph_class)", "response": "Add the constraints for a relationship count query."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nadd constraints for an eager load of the relation.", "response": "def add_eager_constraints(self, models):\n \"\"\"\n Set the constraints for an eager load of the relation.\n\n :type models: list\n \"\"\"\n super(MorphToMany, self).add_eager_constraints(models)\n\n self._query.where('%s.%s' % (self._table, self._morph_type), self._morph_class)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _create_attach_record(self, id, timed):\n record = super(MorphToMany, self)._create_attach_record(id, timed)\n\n record[self._morph_type] = self._morph_class\n\n return record", "response": "Create a new pivot attachement record."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ncreating a new query builder for the pivot table.", "response": "def _new_pivot_query(self):\n \"\"\"\n Create a new query builder for the pivot table.\n\n :rtype: eloquent.orm.Builder\n \"\"\"\n query = super(MorphToMany, 
self)._new_pivot_query()\n\n return query.where(self._morph_type, self._morph_class)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate a new pivot model instance.", "response": "def new_pivot(self, attributes=None, exists=False):\n \"\"\"\n Create a new pivot model instance.\n \"\"\"\n from .morph_pivot import MorphPivot\n\n pivot = MorphPivot(self._parent, attributes, self._table, exists)\n\n pivot.set_pivot_keys(self._foreign_key, self._other_key)\\\n .set_morph_type(self._morph_type)\\\n .set_morph_class(self._morph_class)\n\n return pivot"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nexecuting the install command.", "response": "def execute(self, i, o):\n \"\"\"\n Executes the command.\n\n :type i: cleo.inputs.input.Input\n :type o: cleo.outputs.output.Output\n \"\"\"\n super(InstallCommand, self).execute(i, o)\n\n database = i.get_option('database')\n repository = DatabaseMigrationRepository(self._resolver, 'migrations')\n\n repository.set_source(database)\n repository.create_repository()\n\n o.writeln('Migration table created successfully')"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef process_insert_get_id(self, query, sql, values, sequence=None):\n if not query.get_connection().transaction_level():\n with query.get_connection().transaction():\n query.get_connection().insert(sql, values)\n\n cursor = query.get_connection().get_cursor()\n if hasattr(cursor, 'lastrowid'):\n id = cursor.lastrowid\n else:\n id = query.get_connection().statement('SELECT LAST_INSERT_ID()')\n else:\n query.get_connection().insert(sql, values)\n\n cursor = query.get_connection().get_cursor()\n if hasattr(cursor, 'lastrowid'):\n id = cursor.lastrowid\n else:\n id = query.get_connection().statement('SELECT LAST_INSERT_ID()')\n\n if isinstance(id, int):\n return id\n\n if str(id).isdigit():\n return int(id)\n\n return id", "response": "Process an 
insert get ID query."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef chunk(self, count):\n page = 1\n results = self.for_page(page, count).get()\n\n while results:\n yield results\n\n page += 1\n\n results = self.for_page(page, count).get()", "response": "Returns a list of count items from the query\n API."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget a list with the values of a given column and key.", "response": "def lists(self, column, key=None):\n \"\"\"\n Get a list with the values of a given column\n\n :param column: The column to get the values for\n :type column: str\n\n :param key: The key\n :type key: str\n\n :return: The list of values\n :rtype: list or dict\n \"\"\"\n results = self._query.lists(column, key)\n\n if self._model.has_get_mutator(column):\n if isinstance(results, dict):\n for key, value in results.items():\n fill = {column: value}\n\n results[key] = self._model.new_from_builder(fill).column\n else:\n for i, value in enumerate(results):\n fill = {column: value}\n\n results[i] = self._model.new_from_builder(fill).column\n\n return results"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef increment(self, column, amount=1, extras=None):\n extras = self._add_updated_at_column(extras)\n\n return self._query.increment(column, amount, extras)", "response": "Increment a column s value by a given amount"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget the hydrated models without eager loading.", "response": "def get_models(self, columns=None):\n \"\"\"\n Get the hydrated models without eager loading.\n\n :param columns: The columns to get\n :type columns: list\n\n :return: A list of models\n :rtype: list\n \"\"\"\n results = self._query.get(columns)\n\n connection = self._model.get_connection_name()\n\n return self._model.hydrate(results, 
connection).all()"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _load_relation(self, models, name, constraints):\n relation = self.get_relation(name)\n\n relation.add_eager_constraints(models)\n\n if callable(constraints):\n constraints(relation)\n else:\n relation.merge_query(constraints)\n\n models = relation.init_relation(models, name)\n\n results = relation.get_eager()\n\n return relation.match(models, results, name)", "response": "Load the relationship on a set of models."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding nested relationship count conditions to the query.", "response": "def _has_nested(self, relations, operator='>=', count=1, boolean='and', extra=None):\n \"\"\"\n Add nested relationship count conditions to the query.\n\n :param relations: nested relations\n :type relations: str\n\n :param operator: The operator\n :type operator: str\n\n :param count: The count\n :type count: int\n\n :param boolean: The boolean value\n :type boolean: str\n\n :param extra: The extra query\n :type extra: Builder or callable\n\n :rtype: Builder\n \"\"\"\n relations = relations.split('.')\n\n def closure(q):\n if len(relations) > 1:\n q.where_has(relations.pop(0), closure)\n else:\n q.has(relations.pop(0), operator, count, boolean, extra)\n\n return self.where_has(relations.pop(0), closure)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nadd a relationship count to the query.", "response": "def doesnt_have(self, relation, boolean='and', extra=None):\n \"\"\"\n Add a relationship count to the query.\n\n :param relation: The relation to count\n :type relation: str\n\n :param boolean: The boolean value\n :type boolean: str\n\n :param extra: The extra query\n :type extra: Builder or callable\n\n :rtype: Builder\n \"\"\"\n return self.has(relation, '<', 1, boolean, extra)"} {"SOURCE": "codesearchnet", "instruction": "Write a 
Python 3 script for\nadding a relationship count condition to the query with where clauses.", "response": "def where_has(self, relation, extra, operator='>=', count=1):\n \"\"\"\n Add a relationship count condition to the query with where clauses.\n\n :param relation: The relation to count\n :type relation: str\n\n :param extra: The extra query\n :type extra: Builder or callable\n\n :param operator: The operator\n :type operator: str\n\n :param count: The count\n :type count: int\n\n :rtype: Builder\n \"\"\"\n return self.has(relation, operator, count, 'and', extra)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef or_has(self, relation, operator='>=', count=1):\n return self.has(relation, operator, count, 'or')", "response": "Add a relationship count condition to the query with an or."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nadding a relationship count condition to the query with where clauses and an or.", "response": "def or_where_has(self, relation, extra, operator='>=', count=1):\n \"\"\"\n Add a relationship count condition to the query with where clauses and an \"or\".\n\n :param relation: The relation to count\n :type relation: str\n\n :param extra: The extra query\n :type extra: Builder or callable\n\n :param operator: The operator\n :type operator: str\n\n :param count: The count\n :type count: int\n\n :rtype: Builder\n \"\"\"\n return self.has(relation, operator, count, 'or', extra)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _merge_wheres_to_has(self, has_query, relation):\n relation_query = relation.get_base_query()\n\n has_query.merge_wheres(relation_query.wheres, relation_query.get_bindings())\n\n self._query.merge_bindings(has_query.get_query())", "response": "Merge the wheres from the relation query to a has query."} {"SOURCE": "codesearchnet", "instruction": "How would you 
implement a function in Python 3 that\ngets the has relation base query", "response": "def _get_has_relation_query(self, relation):\n \"\"\"\n Get the \"has\" relation base query\n\n :type relation: str\n\n :rtype: Builder\n \"\"\"\n from .relations import Relation\n\n return Relation.no_constraints(\n lambda: getattr(self.get_model(), relation)()\n )"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef with_(self, *relations):\n if not relations:\n return self\n\n eagers = self._parse_relations(list(relations))\n\n self._eager_load.update(eagers)\n\n return self", "response": "Sets the relationships that should be eager loaded."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _parse_relations(self, relations):\n results = {}\n\n for relation in relations:\n if isinstance(relation, dict):\n name = list(relation.keys())[0]\n constraints = relation[name]\n else:\n name = relation\n constraints = self.__class__(self.get_query().new_query())\n\n results = self._parse_nested(name, results)\n\n results[name] = constraints\n\n return results", "response": "Parse a list of relations into individual related objects."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _parse_nested(self, name, results):\n progress = []\n\n for segment in name.split('.'):\n progress.append(segment)\n\n last = '.'.join(progress)\n if last not in results:\n results[last] = self.__class__(self.get_query().new_query())\n\n return results", "response": "Parse the nested relationship in a relation."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncall the given model scope.", "response": "def _call_scope(self, scope, *args, **kwargs):\n \"\"\"\n Call the given model scope.\n\n :param scope: The scope to call\n :type scope: str\n \"\"\"\n result = getattr(self._model, scope)(self, *args, 
**kwargs)\n\n return result or self"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nruns an up migration instance.", "response": "def _run_up(self, path, migration_file, batch, pretend=False):\n \"\"\"\n Run \"up\" a migration instance.\n\n :type migration_file: str\n\n :type batch: int\n\n :type pretend: bool\n \"\"\"\n migration = self._resolve(path, migration_file)\n\n if pretend:\n return self._pretend_to_run(migration, 'up')\n\n migration.up()\n\n self._repository.log(migration_file, batch)\n\n self._note('\u2713 Migrated %s' % migration_file)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _run_down(self, path, migration, pretend=False):\n migration_file = migration['migration']\n\n instance = self._resolve(path, migration_file)\n\n if pretend:\n return self._pretend_to_run(instance, 'down')\n\n instance.down()\n\n self._repository.delete(migration)\n\n self._note('\u2713 Rolled back %s' % migration_file)", "response": "Run down a migration instance."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\npretending to run the migration.", "response": "def _pretend_to_run(self, migration, method):\n \"\"\"\n Pretend to run the migration.\n\n :param migration: The migration\n :type migration: eloquent.migrations.migration.Migration\n\n :param method: The method to execute\n :type method: str\n \"\"\"\n for query in self._get_queries(migration, method):\n name = migration.__class__.__name__\n\n self._note('%s: %s' % (name, query))"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting all of the queries that would be run for a migration.", "response": "def _get_queries(self, migration, method):\n \"\"\"\n Get all of the queries that would be run for a migration.\n\n :param migration: The migration\n :type migration: eloquent.migrations.migration.Migration\n\n :param method: The method to execute\n 
:type method: str\n\n :rtype: list\n \"\"\"\n connection = migration.get_connection()\n\n db = self.resolve_connection(connection)\n\n logger = logging.getLogger('eloquent.connection.queries')\n level = logger.level\n logger.setLevel(logging.DEBUG)\n handler = MigratorHandler()\n handler.setLevel(logging.DEBUG)\n logger.addHandler(handler)\n\n db.pretend(lambda _: getattr(migration, method)())\n\n logger.removeHandler(handler)\n logger.setLevel(level)\n\n return handler.queries"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _resolve(self, path, migration_file):\n variables = {}\n\n name = '_'.join(migration_file.split('_')[4:])\n migration_file = os.path.join(path, '%s.py' % migration_file)\n\n with open(migration_file) as fh:\n exec(fh.read(), {}, variables)\n\n klass = variables[inflection.camelize(name)]\n\n instance = klass()\n instance.set_schema_builder(self.get_repository().get_connection().get_schema_builder())\n\n return instance", "response": "Resolve a migration instance from a file."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets the ALTER TABLE SQL statement for the given table diff.", "response": "def get_alter_table_sql(self, diff):\n \"\"\"\n Get the ALTER TABLE SQL statement\n\n :param diff: The table diff\n :type diff: eloquent.dbal.table_diff.TableDiff\n\n :rtype: list\n \"\"\"\n column_sql = []\n query_parts = []\n\n if diff.new_name is not False:\n query_parts.append('RENAME TO %s' % diff.new_name)\n\n # Added columns?\n\n # Removed columns?\n\n for column_diff in diff.changed_columns.values():\n column = column_diff.column\n column_dict = column.to_dict()\n\n # Don't propagate default value changes for unsupported column types.\n if column_diff.has_changed('default') \\\n and len(column_diff.changed_properties) == 1 \\\n and (column_dict['type'] == 'text' or column_dict['type'] == 'blob'):\n continue\n\n query_parts.append('CHANGE %s %s'\n % 
(column_diff.get_old_column_name(),\n self.get_column_declaration_sql(column.get_name(), column_dict)))\n\n for old_column_name, column in diff.renamed_columns.items():\n column_dict = column.to_dict()\n query_parts.append('CHANGE %s %s'\n % (self.quote(old_column_name),\n self.get_column_declaration_sql(self.quote(column.get_name()), column_dict)))\n\n sql = []\n\n if len(query_parts) > 0:\n sql.append('ALTER TABLE %s %s' % (diff.name, ', '.join(query_parts)))\n\n return sql"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the difference between the tables table1 and table2.", "response": "def diff_table(self, table1, table2):\n \"\"\"\n Returns the difference between the tables table1 and table2.\n\n :type table1: Table\n :type table2: Table\n\n :rtype: TableDiff\n \"\"\"\n changes = 0\n table_differences = TableDiff(table1.get_name())\n table_differences.from_table = table1\n\n table1_columns = table1.get_columns()\n table2_columns = table2.get_columns()\n\n # See if all the fields in table1 exist in table2\n for column_name, column in table2_columns.items():\n if not table1.has_column(column_name):\n table_differences.added_columns[column_name] = column\n changes += 1\n\n # See if there are any removed fields in table2\n for column_name, column in table1_columns.items():\n if not table2.has_column(column_name):\n table_differences.removed_columns[column_name] = column\n changes += 1\n continue\n\n # See if column has changed properties in table2\n changed_properties = self.diff_column(column, table2.get_column(column_name))\n\n if changed_properties:\n column_diff = ColumnDiff(column.get_name(),\n table2.get_column(column_name),\n changed_properties)\n column_diff.from_column = column\n table_differences.changed_columns[column.get_name()] = column_diff\n changes += 1\n\n self.detect_column_renamings(table_differences)\n\n # table1_indexes = table1.get_indexes()\n # table2_indexes = table2.get_indexes()\n #\n # # See 
if all the fields in table1 exist in table2\n # for index_name, index in table2_indexes.items():\n # if (index.is_primary() and table1.has_primary_key()) or table1.has_index(index_name):\n # continue\n #\n # table_differences.added_indexes[index_name] = index\n # changes += 1\n #\n # # See if there are any removed fields in table2\n # for index_name, index in table1_indexes.items():\n # if (index.is_primary() and not table2.has_primary_key())\\\n # or (not index.is_primary() and not table2.has_index(index_name)):\n # table_differences.removed_indexes[index_name] = index\n # changes += 1\n # continue\n #\n # if index.is_primary():\n # table2_index = table2.get_primary_key()\n # else:\n # table2_index = table2.get_index(index_name)\n #\n # if self.diff_index(index, table2_index):\n # table_differences.changed_indexes[index_name] = index\n # changes += 1\n #\n # self.detect_index_renamings(table_differences)\n #\n # from_fkeys = table1.get_foreign_keys()\n # to_fkeys = table2.get_foreign_keys()\n #\n # for key1, constraint1 in from_fkeys.items():\n # for key2, constraint2 in to_fkeys.items():\n # if self.diff_foreign_key(constraint1, constraint2) is False:\n # del from_fkeys[key1]\n # del to_fkeys[key2]\n # else:\n # if constraint1.get_name().lower() == constraint2.get_name().lower():\n # table_differences.changed_foreign_keys.append(constraint2)\n # changes += 1\n # del from_fkeys[key1]\n # del to_fkeys[key2]\n #\n # for constraint1 in from_fkeys.values():\n # table_differences.removed_foreign_keys.append(constraint1)\n # changes += 1\n #\n # for constraint2 in to_fkeys.values():\n # table_differences.added_foreign_keys.append(constraint2)\n # changes += 1\n\n if changes:\n return table_differences\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ndetects if any of the columns that only changed their names.", "response": "def detect_column_renamings(self, table_differences):\n \"\"\"\n Try to find columns 
that only changed their names.\n\n :type table_differences: TableDiff\n \"\"\"\n rename_candidates = {}\n\n for added_column_name, added_column in table_differences.added_columns.items():\n for removed_column in table_differences.removed_columns.values():\n if len(self.diff_column(added_column, removed_column)) == 0:\n if added_column.get_name() not in rename_candidates:\n rename_candidates[added_column.get_name()] = []\n\n rename_candidates[added_column.get_name()].append((removed_column, added_column, added_column_name))\n\n for candidate_columns in rename_candidates.values():\n if len(candidate_columns) == 1:\n removed_column, added_column, _ = candidate_columns[0]\n removed_column_name = removed_column.get_name().lower()\n added_column_name = added_column.get_name().lower()\n\n if removed_column_name not in table_differences.renamed_columns:\n table_differences.renamed_columns[removed_column_name] = added_column\n del table_differences.added_columns[added_column_name]\n del table_differences.removed_columns[removed_column_name]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the difference between column1 and column2.", "response": "def diff_column(self, column1, column2):\n \"\"\"\n Returns the difference between column1 and column2\n\n :type column1: eloquent.dbal.column.Column\n :type column2: eloquent.dbal.column.Column\n\n :rtype: list\n \"\"\"\n properties1 = column1.to_dict()\n properties2 = column2.to_dict()\n\n changed_properties = []\n\n for prop in ['type', 'notnull', 'unsigned', 'autoincrement']:\n if properties1[prop] != properties2[prop]:\n changed_properties.append(prop)\n\n if properties1['default'] != properties2['default']\\\n or (properties1['default'] is None and properties2['default'] is not None)\\\n or (properties2['default'] is None and properties1['default'] is not None):\n changed_properties.append('default')\n\n if properties1['type'] == 'string' and properties1['type'] != 'guid'\\\n or properties1['type'] 
in ['binary', 'blob']:\n length1 = properties1['length'] or 255\n length2 = properties2['length'] or 255\n\n if length1 != length2:\n changed_properties.append('length')\n\n if properties1['fixed'] != properties2['fixed']:\n changed_properties.append('fixed')\n elif properties1['type'] in ['decimal', 'float', 'double precision']:\n precision1 = properties1['precision'] or 10\n precision2 = properties2['precision'] or 10\n\n if precision1 != precision2:\n changed_properties.append('precision')\n\n if properties1['scale'] != properties2['scale']:\n changed_properties.append('scale')\n\n return list(set(changed_properties))"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nexecutes the command. :type i: cleo.inputs.input.Input :type o: cleo.outputs.output.Output", "response": "def execute(self, i, o):\n \"\"\"\n Executes the command.\n\n :type i: cleo.inputs.input.Input\n :type o: cleo.outputs.output.Output\n \"\"\"\n config = self._get_config(i)\n self._resolver = DatabaseManager(config)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef call(self, name, options=None, o=None):\n if options is None:\n options = []\n\n command = self.get_application().find(name)\n\n options = [('command', command.get_name())] + options\n\n return command.run(ListInput(options), o)", "response": "Call another command.\n\n :param name: The command name\n :type name: str\n\n :param options: The options\n :type options: list or None\n\n :param o: The output\n :type o: cleo.outputs.output.Output"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_config(self, i):\n variables = {}\n if not i.get_option('config'):\n raise Exception('The --config|-c option is missing.')\n\n with open(i.get_option('config')) as fh:\n exec(fh.read(), {}, variables)\n\n return variables['DATABASES']", "response": "Get the config.\n\n 
:type i: cleo.inputs.input.Input\n\n :rtype: dict"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef associate(self, model):\n self._parent.set_attribute(self._foreign_key, model.get_key())\n self._parent.set_attribute(self._morph_type, model.get_morph_class())\n\n return self._parent.set_relation(self._relation, model)", "response": "Associate the given model instance with the given parent."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _create_model_by_type(self, type):\n klass = None\n for cls in eloquent.orm.model.Model.__subclasses__():\n morph_class = cls.__morph_class__ or cls.__name__\n if morph_class == type:\n klass = cls\n break\n\n return klass()", "response": "Create a new model instance by type."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the column listing for a given table.", "response": "def get_column_listing(self, table):\n \"\"\"\n Get the column listing for a given table.\n\n :param table: The table\n :type table: str\n\n :rtype: list\n \"\"\"\n sql = self._grammar.compile_column_exists()\n database = self._connection.get_database_name()\n table = self._connection.get_table_prefix() + table\n\n results = self._connection.select(sql, [database, table])\n\n return self._connection.get_post_processor().process_column_listing(results)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _populate_stub(self, name, stub, table):\n stub = stub.replace('DummyClass', self._get_class_name(name))\n\n if table is not None:\n stub = stub.replace('dummy_table', table)\n\n return stub", "response": "Populate the placeholders in the migration stub."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nset the keys for a save update query.", "response": "def _set_keys_for_save_query(self, query):\n \"\"\"\n Set the keys 
for a save update query.\n\n :param query: A Builder instance\n :type query: eloquent.orm.Builder\n\n :return: The Builder instance\n :rtype: eloquent.orm.Builder\n \"\"\"\n query.where(self._morph_type, self._morph_class)\n\n return super(MorphPivot, self)._set_keys_for_save_query(query)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef delete(self):\n query = self._get_delete_query()\n\n query.where(self._morph_type, self._morph_class)\n\n return query.delete()", "response": "Delete the pivot model record from the database."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding the constraints for a relationship count query.", "response": "def get_relation_count_query(self, query, parent):\n \"\"\"\n Add the constraints for a relationship count query.\n\n :type query: Builder\n :type parent: Builder\n\n :rtype: Builder\n \"\"\"\n query = super(MorphOneOrMany, self).get_relation_count_query(query, parent)\n\n return query.where(self._morph_type, self._morph_class)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding constraints for an eager load of the relation.", "response": "def add_eager_constraints(self, models):\n \"\"\"\n Set the constraints for an eager load of the relation.\n\n :type models: list\n \"\"\"\n super(MorphOneOrMany, self).add_eager_constraints(models)\n\n self._query.where(self._morph_type, self._morph_class)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsave a new instance of the current record.", "response": "def save(self, model):\n \"\"\"\n Attach a model instance to the parent models.\n\n :param model: The model instance to attach\n :type model: Model\n\n :rtype: Model\n \"\"\"\n model.set_attribute(self.get_plain_morph_type(), self._morph_class)\n\n return super(MorphOneOrMany, self).save(model)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nfinds a 
model by its primary key or return a new instance of the related model.", "response": "def find_or_new(self, id, columns=None):\n \"\"\"\n Find a model by its primary key or return new instance of the related model.\n\n :param id: The primary key\n :type id: mixed\n\n :param columns: The columns to retrieve\n :type columns: list\n\n :rtype: Collection or Model\n \"\"\"\n if columns is None:\n columns = ['*']\n\n instance = self.find(id, columns)\n\n if instance is None:\n instance = self._related.new_instance()\n self._set_foreign_attributes_for_create(instance)\n\n return instance"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _set_foreign_attributes_for_create(self, model):\n model.set_attribute(self.get_plain_foreign_key(), self.get_parent_key())\n\n model.set_attribute(self.get_plain_morph_type(), self._morph_class)", "response": "Set the foreign ID and type for creation a related model."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _parse_connection_name(self, name):\n if name is None:\n name = self.get_default_connection()\n\n if name.endswith(('::read', '::write')):\n return name.split('::', 1)\n\n return name, None", "response": "Parse the connection name into a tuple of the name read and write types."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef purge(self, name=None):\n self.disconnect(name)\n\n if name in self._connections:\n del self._connections[name]", "response": "Disconnect from the given database and remove from local cache."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef no_constraints(cls, callback):\n cls._constraints = False\n\n results = callback()\n\n cls._constraints = True\n\n return results", "response": "Runs a callback with constraints disabled on the relation."} 
{"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget all the primary keys for an array of models.", "response": "def get_keys(self, models, key=None):\n \"\"\"\n Get all the primary keys for an array of models.\n\n :type models: list\n :type key: str\n\n :rtype: list\n \"\"\"\n return list(set(map(lambda value: value.get_attribute(key) if key else value.get_key(), models)))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef add_constraints(self):\n parent_table = self._parent.get_table()\n\n self._set_join()\n\n if self._constraints:\n self._query.where('%s.%s' % (parent_table, self._first_key), '=', self._far_parent.get_key())", "response": "Add constraints on the relation query."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_relation_count_query(self, query, parent):\n parent_table = self._parent.get_table()\n\n self._set_join(query)\n\n query.select(QueryExpression('COUNT(*)'))\n\n key = self.wrap('%s.%s' % (parent_table, self._first_key))\n\n return query.where(self.get_has_compare_key(), '=', QueryExpression(key))", "response": "Add the constraints for a relationship count query."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nsetting the join clause for the query.", "response": "def _set_join(self, query=None):\n \"\"\"\n Set the join clause for the query.\n \"\"\"\n if not query:\n query = self._query\n\n foreign_key = '%s.%s' % (self._related.get_table(), self._second_key)\n\n query.join(self._parent.get_table(), self.get_qualified_parent_key_name(), '=', foreign_key)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nplot the highest and lowest scoring of each modality.", "response": "def plot_best_worst_fits(assignments_df, data, modality_col='Modality',\n score='$\\log_2 K$'):\n \"\"\"Violinplots of the highest and lowest scoring of each modality\"\"\"\n ncols = 2\n 
nrows = len(assignments_df.groupby(modality_col).groups.keys())\n\n fig, axes = plt.subplots(nrows=nrows, ncols=ncols,\n figsize=(nrows*4, ncols*6))\n\n axes_iter = axes.flat\n\n fits = 'Highest', 'Lowest'\n\n for modality, df in assignments_df.groupby(modality_col):\n df = df.sort_values(score)\n\n color = MODALITY_TO_COLOR[modality]\n\n for fit in fits:\n if fit == 'Highest':\n ids = df['Feature ID'][-10:]\n else:\n ids = df['Feature ID'][:10]\n fit_psi = data[ids]\n tidy_fit_psi = fit_psi.stack().reset_index()\n tidy_fit_psi = tidy_fit_psi.rename(columns={'level_0': 'Sample ID',\n 'level_1':\n 'Feature ID',\n 0: '$\\Psi$'})\n if tidy_fit_psi.empty:\n continue\n ax = six.next(axes_iter)\n violinplot(x='Feature ID', y='$\\Psi$', data=tidy_fit_psi,\n color=color, ax=ax)\n ax.set(title='{} {} {}'.format(fit, score, modality), xticks=[])\n sns.despine()\n fig.tight_layout()"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwraps around Seaborn s Violinplot specifically for [ 0 1 ) ranged data", "response": "def violinplot(x=None, y=None, data=None, bw=0.2, scale='width',\n inner=None, ax=None, **kwargs):\n \"\"\"Wrapper around Seaborn's Violinplot specifically for [0, 1] ranged data\n\n What's different:\n - bw = 0.2: Sets bandwidth to be small and the same between datasets\n - scale = 'width': Sets the width of all violinplots to be the same\n - inner = None: Don't plot a boxplot or points inside the violinplot\n \"\"\"\n if ax is None:\n ax = plt.gca()\n\n sns.violinplot(x, y, data=data, bw=bw, scale=scale, inner=inner, ax=ax,\n **kwargs)\n ax.set(ylim=(0, 1), yticks=(0, 0.5, 1))\n return ax"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndrawing a barplot grouped by modality of modality percentage per group.", "response": "def bar(self, counts, phenotype_to_color=None, ax=None, percentages=True):\n \"\"\"Draw barplots grouped by modality of modality percentage per group\n\n Parameters\n 
----------\n\n\n Returns\n -------\n\n\n Raises\n ------\n\n \"\"\"\n if percentages:\n counts = 100 * (counts.T / counts.T.sum()).T\n\n # with sns.set(style='whitegrid'):\n if ax is None:\n ax = plt.gca()\n\n full_width = 0.8\n width = full_width / counts.shape[0]\n for i, (group, series) in enumerate(counts.iterrows()):\n left = np.arange(len(self.modality_order)) + i * width\n height = [series[i] if i in series else 0\n for i in self.modality_order]\n color = phenotype_to_color[group]\n ax.bar(left, height, width=width, color=color, label=group,\n linewidth=.5, edgecolor='k')\n ylabel = 'Percentage of events' if percentages else 'Number of events'\n ax.set_ylabel(ylabel)\n ax.set_xticks(np.arange(len(self.modality_order)) + full_width / 2)\n ax.set_xticklabels(self.modality_order)\n ax.set_xlabel('Splicing modality')\n ax.set_xlim(0, len(self.modality_order))\n ax.legend(loc='best')\n ax.grid(axis='y', linestyle='-', linewidth=0.5)\n sns.despine()"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nshows the values underlying bayesian modality estimations of an event", "response": "def event_estimation(self, event, logliks, logsumexps, renamed=''):\n \"\"\"Show the values underlying bayesian modality estimations of an event\n\n Parameters\n ----------\n\n\n Returns\n -------\n\n\n Raises\n ------\n \"\"\"\n plotter = _ModelLoglikPlotter()\n plotter.plot(event, logliks, logsumexps, self.modality_to_color,\n renamed=renamed)\n return plotter"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef predict(self, fitted):\n if fitted.shape[0] != len(self.modalities):\n raise ValueError(\"This data doesn't look like it had the distance \"\n \"between it and the five modalities calculated\")\n return fitted.idxmin()", "response": "Assign the most likely modality given the fitted data."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef 
logliks(self, x):\n x = x.copy()\n\n # Replace exactly 0 and exactly 1 values with a very small number\n # (machine epsilon, the smallest number that this computer is capable\n # of storing) because 0 and 1 are not in the Beta distribution.\n x[x == 0] = VERY_SMALL_NUMBER\n x[x == 1] = 1 - VERY_SMALL_NUMBER\n\n return np.array([np.log(prob) + rv.logpdf(x[np.isfinite(x)]).sum()\n for prob, rv in\n zip(self.prob_parameters, self.rvs)])", "response": "Calculate the log-likelihood of a feature x for each model."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nconvert floats to integers or a nice looking fraction", "response": "def nice_number_string(number, decimal_places=2):\n \"\"\"Convert floats to either integers or a nice looking fraction\"\"\"\n if number == np.round(number):\n return str(int(number))\n elif 0 < number < 1:\n inverse = 1 / number\n if int(inverse) == np.round(inverse):\n return r'\\frac{{1}}{{{}}}'.format(int(inverse))\n\n template = '{{:.{0}}}'.format(decimal_places)\n return template.format(number)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef violinplot(self, n=1000, **kwargs):\n kwargs.setdefault('palette', 'Purples')\n\n dfs = []\n\n for rv in self.rvs:\n psi = rv.rvs(n)\n df = pd.Series(psi, name=self.ylabel).to_frame()\n alpha, beta = rv.args\n alpha = self.nice_number_string(alpha, decimal_places=2)\n beta = self.nice_number_string(beta, decimal_places=2)\n\n df['parameters'] = '$\\\\alpha = {0}$\\n$\\\\beta = {1}$'.format(\n alpha, beta)\n dfs.append(df)\n data = pd.concat(dfs)\n\n if 'ax' not in kwargs:\n fig, ax = plt.subplots(figsize=(len(self.alphas)*0.625, 4))\n else:\n ax = kwargs.pop('ax')\n ax = violinplot(x='parameters', y=self.ylabel, data=data,\n ax=ax, **kwargs)\n sns.despine(ax=ax)\n return ax", "response": "Plot violins of each distribution in the model family."} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate the documentation for the following Python 3 function\ndef _single_feature_logliks_one_step(self, feature, models):\n x_non_na = feature[~feature.isnull()]\n if x_non_na.empty:\n return pd.DataFrame()\n else:\n dfs = []\n for name, model in models.items():\n df = model.single_feature_logliks(feature)\n df['Modality'] = name\n dfs.append(df)\n return pd.concat(dfs, ignore_index=True)", "response": "Get log - likelihood of models at each parameterization for a single feature."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfits the log2 bayes factor for each modality assignment in the data.", "response": "def fit(self, data):\n \"\"\"Get the modality assignments of each splicing event in the data\n\n Parameters\n ----------\n data : pandas.DataFrame\n A (n_samples, n_events) dataframe of splicing events' PSI scores.\n Must be psi scores which range from 0 to 1\n\n Returns\n -------\n log2_bayes_factors : pandas.DataFrame\n A (n_modalities, n_events) dataframe of the estimated log2\n bayes factor for each splicing event, for each modality\n\n Raises\n ------\n AssertionError\n If any value in ``data`` does not fall only between 0 and 1.\n \"\"\"\n self.assert_less_than_or_equal_1(data.values.flat)\n self.assert_non_negative(data.values.flat)\n\n if isinstance(data, pd.DataFrame):\n log2_bayes_factors = data.apply(self.single_feature_fit)\n elif isinstance(data, pd.Series):\n log2_bayes_factors = self.single_feature_fit(data)\n log2_bayes_factors.name = self.score_name\n return log2_bayes_factors"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\npredicts the most likely modality for each event.", "response": "def predict(self, log2_bayes_factors, reset_index=False):\n \"\"\"Guess the most likely modality for each event\n\n For each event that has at least one non-NA value, if no modalilites\n have logsumexp'd logliks greater than the log Bayes 
factor threshold,\n then they are assigned the 'multimodal' modality, because we cannot\n reject the null hypothesis that these did not come from the uniform\n distribution.\n\n Parameters\n ----------\n log2_bayes_factors : pandas.DataFrame\n A (4, n_events) dataframe with bayes factors for the Psi~1, Psi~0,\n bimodal, and middle modalities. If an event has no bayes factors\n for any of those modalities, it is ignored\n reset_index : bool\n If True, remove the first level of the index from the dataframe.\n Useful if you are using this function to apply to a grouped\n dataframe where the first level is something other than the\n modality, e.g. the celltype\n\n Returns\n -------\n modalities : pandas.Series\n A (n_events,) series with the most likely modality for each event\n\n \"\"\"\n if reset_index:\n x = log2_bayes_factors.reset_index(level=0, drop=True)\n else:\n x = log2_bayes_factors\n if isinstance(x, pd.DataFrame):\n not_na = (x.notnull() > 0).any()\n not_na_columns = not_na[not_na].index\n x.ix[NULL_MODEL, not_na_columns] = self.logbf_thresh\n elif isinstance(x, pd.Series):\n x[NULL_MODEL] = self.logbf_thresh\n return x.idxmax()"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncalculates log - likelihoods of each modality s parameterization for each single feature.", "response": "def single_feature_logliks(self, feature):\n \"\"\"Calculate log-likelihoods of each modality's parameterization\n\n Used for plotting the estimates of a single feature\n\n Parameters\n ----------\n featre : pandas.Series\n A single feature's values. 
All values must range from 0 to 1.\n\n Returns\n -------\n logliks : pandas.DataFrame\n The log-likelihood of the data, for each model, for each\n parameterization\n\n Raises\n ------\n AssertionError\n If any value in ``x`` does not fall only between 0 and 1.\n \"\"\"\n self.assert_less_than_or_equal_1(feature.values)\n self.assert_non_negative(feature.values)\n\n logliks = self._single_feature_logliks_one_step(\n feature, self.one_param_models)\n\n logsumexps = self.logliks_to_logsumexp(logliks)\n\n # If none of the one-parameter models passed, try the two-param models\n if (logsumexps <= self.logbf_thresh).all():\n logliks_two_params = self._single_feature_logliks_one_step(\n feature, self.two_param_models)\n logliks = pd.concat([logliks, logliks_two_params])\n return logliks"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nget the log2 bayes factor of the fit for each modality", "response": "def single_feature_fit(self, feature):\n \"\"\"Get the log2 bayes factor of the fit for each modality\"\"\"\n if np.isfinite(feature).sum() == 0:\n series = pd.Series(index=MODALITY_ORDER)\n else:\n logbf_one_param = pd.Series(\n {k: v.logsumexp_logliks(feature) for\n k, v in self.one_param_models.items()})\n\n # Check if none of the previous features fit\n if (logbf_one_param <= self.logbf_thresh).all():\n logbf_two_param = pd.Series(\n {k: v.logsumexp_logliks(feature)\n for k, v in self.two_param_models.items()})\n series = pd.concat([logbf_one_param, logbf_two_param])\n series[NULL_MODEL] = self.logbf_thresh\n else:\n series = logbf_one_param\n series.index.name = 'Modality'\n series.name = self.score_name\n return series"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef bin_range_strings(bins, fmt=':g'):\n return [('{' + fmt + '}-{' + fmt + '}').format(i, j)\n for i, j in zip(bins, bins[1:])]", "response": "Given a list of bins make a list of strings of those bin ranges"} 
{"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nmakes a histogram of each column using the provided bins", "response": "def binify(data, bins):\n \"\"\"Makes a histogram of each column into the provided bins\n\n Parameters\n ----------\n data : pandas.DataFrame\n A samples x features dataframe. Each feature (column) will be binned\n into the provided bins\n bins : iterable\n Bins you would like to use for this data. Must include the final bin\n value, e.g. (0, 0.5, 1) for the two bins (0, 0.5) and (0.5, 1).\n nbins = len(bins) - 1\n\n Returns\n -------\n binned : pandas.DataFrame\n An nbins x features DataFrame of each column binned across rows\n \"\"\"\n if bins is None:\n raise ValueError('Must specify \"bins\"')\n if isinstance(data, pd.DataFrame):\n binned = data.apply(lambda x: pd.Series(np.histogram(x, bins=bins,\n range=(0, 1))[0]))\n elif isinstance(data, pd.Series):\n binned = pd.Series(np.histogram(data, bins=bins, range=(0, 1))[0])\n else:\n raise ValueError('`data` must be either a 1d vector or 2d matrix')\n binned.index = bin_range_strings(bins)\n\n # Normalize so each column sums to 1\n binned = binned / binned.sum().astype(float)\n return binned"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef kld(p, q):\n try:\n _check_prob_dist(p)\n _check_prob_dist(q)\n except ValueError:\n return np.nan\n # If one of them is zero, then the other should be considered to be 0.\n # In this problem formulation, log0 = 0\n p = p.replace(0, np.nan)\n q = q.replace(0, np.nan)\n\n return (np.log2(p / q) * p).sum(axis=0)", "response": "Compute the Kullback-Leibler divergence of two probability distributions pandas\n dataframes p and q."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncalculates the Jensen-Shannon divergence between two probability distributions, pandas DataFrames p and q.", "response": "def jsd(p, q):\n \"\"\"Finds the 
per-column JSD between dataframes p and q\n\n Jensen-Shannon divergence of two probability distributions pandas\n dataframes, p and q. These distributions are usually created by running\n binify() on the dataframe.\n\n Parameters\n ----------\n p : pandas.DataFrame\n An nbins x features DataFrame.\n q : pandas.DataFrame\n An nbins x features DataFrame.\n\n Returns\n -------\n jsd : pandas.Series\n Jensen-Shannon divergence of each column with the same names between\n p and q\n\n Raises\n ------\n ValueError\n If the data provided is not a probability distribution, i.e. it has\n negative values or its columns do not sum to 1, raise ValueError\n \"\"\"\n try:\n _check_prob_dist(p)\n _check_prob_dist(q)\n except ValueError:\n return np.nan\n weight = 0.5\n m = weight * (p + q)\n\n result = weight * kld(p, m) + (1 - weight) * kld(q, m)\n return result"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef entropy(binned, base=2):\n try:\n _check_prob_dist(binned)\n except ValueError:\n return np.nan\n return -((np.log(binned) / np.log(base)) * binned).sum(axis=0)", "response": "Find the entropy of each column of a dataframe."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef binify_and_jsd(df1, df2, bins, pair=None):\n binned1 = binify(df1, bins=bins).dropna(how='all', axis=1)\n binned2 = binify(df2, bins=bins).dropna(how='all', axis=1)\n\n binned1, binned2 = binned1.align(binned2, axis=1, join='inner')\n\n series = np.sqrt(jsd(binned1, binned2))\n series.name = pair\n return series", "response": "Binify and calculate Jensen-Shannon divergence between two dataframes."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ntransform a tall JSD dataframe to a square matrix of mean JSDs", "response": "def jsd_df_to_2d(jsd_df):\n \"\"\"Transform a tall JSD dataframe to a square matrix of mean JSDs\n\n Parameters\n ----------\n jsd_df : pandas.DataFrame\n A (n_features, 
n_phenotypes^2) dataframe of the JSD between each\n feature between and within phenotypes\n\n Returns\n -------\n jsd_2d : pandas.DataFrame\n A (n_phenotypes, n_phenotypes) symmetric dataframe of the mean JSD\n between and within phenotypes\n \"\"\"\n jsd_2d = jsd_df.mean().reset_index()\n jsd_2d = jsd_2d.rename(\n columns={'level_0': 'phenotype1', 'level_1': 'phenotype2', 0: 'jsd'})\n jsd_2d = jsd_2d.pivot(index='phenotype1', columns='phenotype2',\n values='jsd')\n return jsd_2d + np.tril(jsd_2d.T, -1)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nstarting the loop over the interface calling the given callback for each packet intercepted .", "response": "def run(self, callback=None, limit=0):\n \"\"\"\n Start pcap's loop over the interface, calling the given callback for each packet\n :param callback: a function receiving (win_pcap, param, header, pkt_data) for each packet intercepted\n :param limit: how many packets to capture (A value of -1 or 0 is equivalent to infinity)\n \"\"\"\n if self._handle is None:\n raise self.DeviceIsNotOpen()\n # Set new callback\n self._callback = callback\n # Run loop with callback wrapper\n wtypes.pcap_loop(self._handle, limit, self._callback_wrapper, None)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nsend a packet to the network interface.", "response": "def send(self, packet_buffer):\n \"\"\"\n send a buffer as a packet to the network interface\n :param packet_buffer: buffer to send (length shouldn't exceed MAX_INT)\n \"\"\"\n if self._handle is None:\n raise self.DeviceIsNotOpen()\n buffer_length = len(packet_buffer)\n buf_send = ctypes.cast(ctypes.create_string_buffer(packet_buffer, buffer_length),\n ctypes.POINTER(ctypes.c_ubyte))\n wtypes.pcap_sendpacket(self._handle, buf_send, buffer_length)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncaptures packets on a specific network interface.", "response": "def 
capture_on(pattern, callback):\n \"\"\"\n :param pattern: a wildcard pattern to match the description of a network interface to capture packets on\n :param callback: a function to call with each intercepted packet\n \"\"\"\n device_name, desc = WinPcapDevices.get_matching_device(pattern)\n if device_name is not None:\n with WinPcap(device_name) as capture:\n capture.run(callback=callback)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ncaptures on a device name.", "response": "def capture_on_device_name(device_name, callback):\n \"\"\"\n :param device_name: the name (guid) of a device as provided by WinPcapDevices.list_devices()\n :param callback: a function to call with each intercepted packet\n \"\"\"\n with WinPcap(device_name) as capture:\n capture.run(callback=callback)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsending a packet to a network interface and optionally capture a response on the next request.", "response": "def send_packet(self, pattern, packet_buffer, callback=None, limit=10):\n \"\"\"\n Send a buffer as a packet to a network interface and optionally capture a response\n :param pattern: a wildcard pattern to match the description of a network interface to capture packets on\n :param packet_buffer: a buffer to send (length shouldn't exceed MAX_INT)\n :param callback: If not None, a function to call with each intercepted packet\n :param limit: how many packets to capture (A value of -1 or 0 is equivalent to infinity)\n \"\"\"\n device_name, desc = WinPcapDevices.get_matching_device(pattern)\n if device_name is not None:\n with WinPcap(device_name) as capture:\n capture.send(packet_buffer)\n if callback is not None:\n capture.run(callback=callback, limit=limit)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ngets the next value for a given sequence.", "response": "def get_next_value(\n sequence_name='default', initial_value=1, 
reset_value=None,\n *, nowait=False, using=None):\n \"\"\"\n Return the next value for a given sequence.\n\n \"\"\"\n # Inner import because models cannot be imported before their application.\n from .models import Sequence\n\n if reset_value is not None:\n assert initial_value < reset_value\n\n if using is None:\n using = router.db_for_write(Sequence)\n\n connection = connections[using]\n\n if (getattr(connection, 'pg_version', 0) >= 90500\n and reset_value is None and not nowait):\n\n # PostgreSQL \u2265 9.5 supports \"upsert\".\n\n with connection.cursor() as cursor:\n cursor.execute(UPSERT_QUERY, [sequence_name, initial_value])\n last, = cursor.fetchone()\n return last\n\n else:\n\n # Other databases require making more database queries.\n\n with transaction.atomic(using=using, savepoint=False):\n sequence, created = (\n Sequence.objects\n .select_for_update(nowait=nowait)\n .get_or_create(name=sequence_name,\n defaults={'last': initial_value})\n )\n\n if not created:\n sequence.last += 1\n if reset_value is not None and sequence.last >= reset_value:\n sequence.last = initial_value\n sequence.save()\n\n return sequence.last"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef check(self, final_line_count):\n if self._lines_seen[\"version\"]:\n self._process_version_lines()\n self._process_plan_lines(final_line_count)", "response": "Check the status of all provided data and update the suite."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _process_version_lines(self):\n if len(self._lines_seen[\"version\"]) > 1:\n self._add_error(_(\"Multiple version lines appeared.\"))\n elif self._lines_seen[\"version\"][0] != 1:\n self._add_error(_(\"The version must be on the first line.\"))", "response": "Process version line rules."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nprocessing plan line 
rules.", "response": "def _process_plan_lines(self, final_line_count):\n \"\"\"Process plan line rules.\"\"\"\n if not self._lines_seen[\"plan\"]:\n self._add_error(_(\"Missing a plan.\"))\n return\n\n if len(self._lines_seen[\"plan\"]) > 1:\n self._add_error(_(\"Only one plan line is permitted per file.\"))\n return\n\n plan, at_line = self._lines_seen[\"plan\"][0]\n if not self._plan_on_valid_line(at_line, final_line_count):\n self._add_error(\n _(\"A plan must appear at the beginning or end of the file.\")\n )\n return\n\n if plan.expected_tests != self._lines_seen[\"test\"]:\n self._add_error(\n _(\"Expected {expected_count} tests but only {seen_count} ran.\").format(\n expected_count=plan.expected_tests,\n seen_count=self._lines_seen[\"test\"],\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck if a plan is on a valid line.", "response": "def _plan_on_valid_line(self, at_line, final_line_count):\n \"\"\"Check if a plan is on a valid line.\"\"\"\n # Put the common cases first.\n if at_line == 1 or at_line == final_line_count:\n return True\n\n # The plan may only appear on line 2 if the version is at line 1.\n after_version = (\n self._lines_seen[\"version\"]\n and self._lines_seen[\"version\"][0] == 1\n and at_line == 2\n )\n if after_version:\n return True\n\n return False"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nhandle a bail line.", "response": "def handle_bail(self, bail):\n \"\"\"Handle a bail line.\"\"\"\n self._add_error(_(\"Bailed: {reason}\").format(reason=bail.reason))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nhandles a SKIP plan that contains a SKIP directive.", "response": "def handle_skipping_plan(self, skip_plan):\n \"\"\"Handle a plan that contains a SKIP directive.\"\"\"\n skip_line = Result(True, None, skip_plan.directive.text, Directive(\"SKIP\"))\n self._suite.addTest(Adapter(self._filename, skip_line))"} 
{"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nadding an error test to the suite.", "response": "def _add_error(self, message):\n \"\"\"Add an error test to the suite.\"\"\"\n error_line = Result(False, None, message, Directive(\"\"))\n self._suite.addTest(Adapter(self._filename, error_line))"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nformats an exception as diagnostics output.", "response": "def format_exception(exception):\n \"\"\"Format an exception as diagnostics output.\n\n exception is the tuple as expected from sys.exc_info.\n \"\"\"\n exception_lines = traceback.format_exception(*exception)\n # The lines returned from format_exception do not strictly contain\n # one line per element in the list (i.e. some elements have new\n # line characters in the middle). Normalize that oddity.\n lines = \"\".join(exception_lines).splitlines(True)\n return format_as_diagnostics(lines)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse a file - like object fh and return a generator of tap. line. Line objects.", "response": "def parse(self, fh):\n \"\"\"Generate tap.line.Line objects, given a file-like object `fh`.\n\n `fh` may be any object that implements both the iterator and\n context management protocol (i.e. it can be used in both a\n \"with\" statement and a \"for...in\" statement.)\n\n Trailing whitespace and newline characters will be automatically\n stripped from the input lines.\n \"\"\"\n with fh:\n try:\n first_line = next(fh)\n except StopIteration:\n return\n first_parsed = self.parse_line(first_line.rstrip())\n fh_new = itertools.chain([first_line], fh)\n if first_parsed.category == \"version\" and first_parsed.version >= 13:\n if ENABLE_VERSION_13:\n fh_new = peekable(itertools.chain([first_line], fh))\n self._try_peeking = True\n else: # pragma no cover\n print(\n \"\"\"\nWARNING: Optional imports not found, TAP 13 output will be\n ignored. 
To parse yaml, see requirements in docs:\n https://tappy.readthedocs.io/en/latest/consumers.html#tap-version-13\"\"\"\n )\n\n for line in fh_new:\n yield self.parse_line(line.rstrip(), fh_new)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef parse_line(self, text, fh=None):\n match = self.ok.match(text)\n if match:\n return self._parse_result(True, match, fh)\n\n match = self.not_ok.match(text)\n if match:\n return self._parse_result(False, match, fh)\n\n if self.diagnostic.match(text):\n return Diagnostic(text)\n\n match = self.plan.match(text)\n if match:\n return self._parse_plan(match)\n\n match = self.bail.match(text)\n if match:\n return Bail(match.group(\"reason\"))\n\n match = self.version.match(text)\n if match:\n return self._parse_version(match)\n\n return Unknown()", "response": "Parse a line into whatever TAP category it belongs."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _parse_plan(self, match):\n expected_tests = int(match.group(\"expected\"))\n directive = Directive(match.group(\"directive\"))\n\n # Only SKIP directives are allowed in the plan.\n if directive.text and not directive.skip:\n return Unknown()\n\n return Plan(expected_tests, directive)", "response": "Parse a matching plan line."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nparses a matching result line into a result instance.", "response": "def _parse_result(self, ok, match, fh=None):\n \"\"\"Parse a matching result line into a result instance.\"\"\"\n peek_match = None\n try:\n if fh is not None and self._try_peeking:\n peek_match = self.yaml_block_start.match(fh.peek())\n except StopIteration:\n pass\n if peek_match is None:\n return Result(\n ok,\n number=match.group(\"number\"),\n description=match.group(\"description\").strip(),\n directive=Directive(match.group(\"directive\")),\n )\n indent = 
peek_match.group(\"indent\")\n concat_yaml = self._extract_yaml_block(indent, fh)\n return Result(\n ok,\n number=match.group(\"number\"),\n description=match.group(\"description\").strip(),\n directive=Directive(match.group(\"directive\")),\n raw_yaml_block=concat_yaml,\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _extract_yaml_block(self, indent, fh):\n raw_yaml = []\n indent_match = re.compile(r\"^{}\".format(indent))\n try:\n fh.next()\n while indent_match.match(fh.peek()):\n raw_yaml.append(fh.next().replace(indent, \"\", 1))\n # check for the end and stop adding yaml if encountered\n if self.yaml_block_end.match(fh.peek()):\n fh.next()\n break\n except StopIteration:\n pass\n return \"\\n\".join(raw_yaml)", "response": "Extract a raw yaml block from a file handler"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef yaml_block(self):\n if LOAD_YAML and self._yaml_block is not None:\n try:\n yaml_dict = yaml.load(self._yaml_block)\n return yaml_dict\n except yaml.error.YAMLError:\n print(\"Error parsing yaml block. 
Check formatting.\")\n return None", "response": "Lazy load a yaml_block."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nload any files found into a test suite.", "response": "def load(self, files):\n \"\"\"Load any files found into a suite.\n\n Any directories are walked and their files are added as TAP files.\n\n :returns: A ``unittest.TestSuite`` instance\n \"\"\"\n suite = unittest.TestSuite()\n for filepath in files:\n if os.path.isdir(filepath):\n self._find_tests_in_directory(filepath, suite)\n else:\n suite.addTest(self.load_suite_from_file(filepath))\n return suite"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nload a test suite with test lines from the provided TAP file.", "response": "def load_suite_from_file(self, filename):\n \"\"\"Load a test suite with test lines from the provided TAP file.\n\n :returns: A ``unittest.TestSuite`` instance\n \"\"\"\n suite = unittest.TestSuite()\n rules = Rules(filename, suite)\n\n if not os.path.exists(filename):\n rules.handle_file_does_not_exist()\n return suite\n\n line_generator = self._parser.parse_file(filename)\n return self._load_lines(filename, line_generator, suite, rules)"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nload a test suite with test lines from the TAP stream on STDIN.", "response": "def load_suite_from_stdin(self):\n \"\"\"Load a test suite with test lines from the TAP stream on STDIN.\n\n :returns: A ``unittest.TestSuite`` instance\n \"\"\"\n suite = unittest.TestSuite()\n rules = Rules(\"stream\", suite)\n line_generator = self._parser.parse_stdin()\n return self._load_lines(\"stream\", line_generator, suite, rules)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _load_lines(self, filename, line_generator, suite, rules):\n line_counter = 0\n for line in line_generator:\n line_counter += 1\n\n if line.category in 
self.ignored_lines:\n continue\n\n if line.category == \"test\":\n suite.addTest(Adapter(filename, line))\n rules.saw_test()\n elif line.category == \"plan\":\n if line.skip:\n rules.handle_skipping_plan(line)\n return suite\n rules.saw_plan(line, line_counter)\n elif line.category == \"bail\":\n rules.handle_bail(line)\n return suite\n elif line.category == \"version\":\n rules.saw_version_at(line_counter)\n\n rules.check(line_counter)\n return suite", "response": "Load a suite with lines produced by the line generator."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nkeeping track of which test cases have executed.", "response": "def _track(self, class_name):\n \"\"\"Keep track of which test cases have executed.\"\"\"\n if self._test_cases.get(class_name) is None:\n if self.streaming and self.header:\n self._write_test_case_header(class_name, self.stream)\n\n self._test_cases[class_name] = []\n if self.combined:\n self.combined_test_cases_seen.append(class_name)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef set_plan(self, total):\n self.plan = total\n if self.streaming:\n # This will only write the plan if we haven't written it\n # already but we want to check if we already wrote a\n # test out (in which case we can't just write the plan out\n # right here).\n if not self.combined_test_cases_seen:\n self._write_plan(self.stream)\n elif not self.combined:\n raise ValueError(\n \"set_plan can only be used with combined or streaming output\"\n )", "response": "Notify the tracker how many total tests there will be."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef generate_tap_reports(self):\n # We're streaming but set_plan wasn't called, so we can only\n # know the plan now (at the end).\n if self.streaming and not self._plan_written:\n print(\"1..{0}\".format(self.combined_line_number), file=self.stream)\n self._plan_written 
= True\n return\n\n if self.combined:\n combined_file = \"testresults.tap\"\n if self.outdir:\n combined_file = os.path.join(self.outdir, combined_file)\n with open(combined_file, \"w\") as out_file:\n self._write_tap_version(out_file)\n if self.plan is not None:\n print(\"1..{0}\".format(self.plan), file=out_file)\n for test_case in self.combined_test_cases_seen:\n self.generate_tap_report(\n test_case, self._test_cases[test_case], out_file\n )\n if self.plan is None:\n print(\"1..{0}\".format(self.combined_line_number), file=out_file)\n else:\n for test_case, tap_lines in self._test_cases.items():\n with open(self._get_tap_file_path(test_case), \"w\") as out_file:\n self._write_tap_version(out_file)\n self.generate_tap_report(test_case, tap_lines, out_file)", "response": "Generate TAP reports.\n\n The results are either combined into a single output file or\n the output file name is generated from the test case."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwrites the plan line to the given stream.", "response": "def _write_plan(self, stream):\n \"\"\"Write the plan line to the stream.\n\n If we have a plan and have not yet written it out, write it to\n the given stream.\n \"\"\"\n if self.plan is not None:\n if not self._plan_written:\n print(\"1..{0}\".format(self.plan), file=stream)\n self._plan_written = True"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _get_tap_file_path(self, test_case):\n sanitized_test_case = test_case.translate(self._sanitized_table)\n tap_file = sanitized_test_case + \".tap\"\n if self.outdir:\n return os.path.join(self.outdir, tap_file)\n return tap_file", "response": "Get the TAP output file path for the test case."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef main(argv=sys.argv, stream=sys.stderr):\n args = parse_args(argv)\n suite = build_suite(args)\n 
runner = unittest.TextTestRunner(verbosity=args.verbose, stream=stream)\n result = runner.run(suite)\n\n return get_status(result)", "response": "Entry point for tappy command."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef build_suite(args):\n loader = Loader()\n if len(args.files) == 0 or args.files[0] == \"-\":\n suite = loader.load_suite_from_stdin()\n else:\n suite = loader.load(args.files)\n return suite", "response": "Build a test suite by loading TAP files or a TAP stream."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a failure to the result.", "response": "def addFailure(self, result):\n \"\"\"Add a failure to the result.\"\"\"\n result.addFailure(self, (Exception, Exception(), None))\n # Since TAP will not provide assertion data, clean up the assertion\n # section so it is not so spaced out.\n test, err = result.failures[-1]\n result.failures[-1] = (test, \"\")"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nbases on example https://bitbucket.org/zzzeek/sqlalchemy/src/73095b353124/examples/nested_sets/nested_sets.py?at=master", "response": "def mptt_before_insert(mapper, connection, instance):\n \"\"\" Based on example\n https://bitbucket.org/zzzeek/sqlalchemy/src/73095b353124/examples/nested_sets/nested_sets.py?at=master\n \"\"\"\n table = _get_tree_table(mapper)\n db_pk = instance.get_pk_column()\n table_pk = getattr(table.c, db_pk.name)\n\n if instance.parent_id is None:\n instance.left = 1\n instance.right = 2\n instance.level = instance.get_default_level()\n tree_id = connection.scalar(\n select(\n [\n func.max(table.c.tree_id) + 1\n ]\n )\n ) or 1\n instance.tree_id = tree_id\n else:\n (parent_pos_left,\n parent_pos_right,\n parent_tree_id,\n parent_level) = connection.execute(\n select(\n [\n table.c.lft,\n table.c.rgt,\n table.c.tree_id,\n table.c.level\n ]\n ).where(\n table_pk == instance.parent_id\n )\n 
).fetchone()\n\n # Update key of right side\n connection.execute(\n table.update(\n and_(table.c.rgt >= parent_pos_right,\n table.c.tree_id == parent_tree_id)\n ).values(\n lft=case(\n [\n (\n table.c.lft > parent_pos_right,\n table.c.lft + 2\n )\n ],\n else_=table.c.lft\n ),\n rgt=case(\n [\n (\n table.c.rgt >= parent_pos_right,\n table.c.rgt + 2\n )\n ],\n else_=table.c.rgt\n )\n )\n )\n\n instance.level = parent_level + 1\n instance.tree_id = parent_tree_id\n instance.left = parent_pos_right\n instance.right = parent_pos_right + 1"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef mptt_before_update(mapper, connection, instance):\n node_id = getattr(instance, instance.get_pk_name())\n table = _get_tree_table(mapper)\n db_pk = instance.get_pk_column()\n default_level = instance.get_default_level()\n table_pk = getattr(table.c, db_pk.name)\n mptt_move_inside = None\n left_sibling = None\n left_sibling_tree_id = None\n\n if hasattr(instance, 'mptt_move_inside'):\n mptt_move_inside = instance.mptt_move_inside\n\n if hasattr(instance, 'mptt_move_before'):\n (\n right_sibling_left,\n right_sibling_right,\n right_sibling_parent,\n right_sibling_level,\n right_sibling_tree_id\n ) = connection.execute(\n select(\n [\n table.c.lft,\n table.c.rgt,\n table.c.parent_id,\n table.c.level,\n table.c.tree_id\n ]\n ).where(\n table_pk == instance.mptt_move_before\n )\n ).fetchone()\n current_lvl_nodes = connection.execute(\n select(\n [\n table.c.lft,\n table.c.rgt,\n table.c.parent_id,\n table.c.tree_id\n ]\n ).where(\n and_(\n table.c.level == right_sibling_level,\n table.c.tree_id == right_sibling_tree_id,\n table.c.lft < right_sibling_left\n )\n )\n ).fetchall()\n if current_lvl_nodes:\n (\n left_sibling_left,\n left_sibling_right,\n left_sibling_parent,\n left_sibling_tree_id\n ) = current_lvl_nodes[-1]\n instance.parent_id = left_sibling_parent\n left_sibling = {\n 'lft': left_sibling_left,\n 'rgt': 
left_sibling_right,\n 'is_parent': False\n }\n # if move_before to top level\n elif not right_sibling_parent:\n left_sibling_tree_id = right_sibling_tree_id - 1\n\n # if placed after a particular node\n if hasattr(instance, 'mptt_move_after'):\n (\n left_sibling_left,\n left_sibling_right,\n left_sibling_parent,\n left_sibling_tree_id\n ) = connection.execute(\n select(\n [\n table.c.lft,\n table.c.rgt,\n table.c.parent_id,\n table.c.tree_id\n ]\n ).where(\n table_pk == instance.mptt_move_after\n )\n ).fetchone()\n instance.parent_id = left_sibling_parent\n left_sibling = {\n 'lft': left_sibling_left,\n 'rgt': left_sibling_right,\n 'is_parent': False\n }\n\n \"\"\" Get subtree from node\n\n SELECT id, name, level FROM my_tree\n WHERE left_key >= $left_key AND right_key <= $right_key\n ORDER BY left_key\n \"\"\"\n subtree = connection.execute(\n select([table_pk])\n .where(\n and_(\n table.c.lft >= instance.left,\n table.c.rgt <= instance.right,\n table.c.tree_id == instance.tree_id\n )\n ).order_by(\n table.c.lft\n )\n ).fetchall()\n subtree = [x[0] for x in subtree]\n\n \"\"\" step 0: Initialize parameters.\n\n Put there left and right position of moving node\n \"\"\"\n (\n node_pos_left,\n node_pos_right,\n node_tree_id,\n node_parent_id,\n node_level\n ) = connection.execute(\n select(\n [\n table.c.lft,\n table.c.rgt,\n table.c.tree_id,\n table.c.parent_id,\n table.c.level\n ]\n ).where(\n table_pk == node_id\n )\n ).fetchone()\n\n # if instance just update w/o move\n # XXX why this str() around parent_id comparison?\n if not left_sibling \\\n and str(node_parent_id) == str(instance.parent_id) \\\n and not mptt_move_inside:\n if left_sibling_tree_id is None:\n return\n\n # fix tree shorting\n if instance.parent_id is not None:\n (\n parent_id,\n parent_pos_right,\n parent_pos_left,\n parent_tree_id,\n parent_level\n ) = connection.execute(\n select(\n [\n table_pk,\n table.c.rgt,\n table.c.lft,\n table.c.tree_id,\n table.c.level\n ]\n ).where(\n table_pk == 
instance.parent_id\n )\n ).fetchone()\n if node_parent_id is None and node_tree_id == parent_tree_id:\n instance.parent_id = None\n return\n\n # delete from old tree\n mptt_before_delete(mapper, connection, instance, False)\n\n if instance.parent_id is not None:\n \"\"\" Put there right position of new parent node (there moving node\n should be moved)\n \"\"\"\n (\n parent_id,\n parent_pos_right,\n parent_pos_left,\n parent_tree_id,\n parent_level\n ) = connection.execute(\n select(\n [\n table_pk,\n table.c.rgt,\n table.c.lft,\n table.c.tree_id,\n table.c.level\n ]\n ).where(\n table_pk == instance.parent_id\n )\n ).fetchone()\n # 'size' of moving node (including all it's sub nodes)\n node_size = node_pos_right - node_pos_left + 1\n\n # left sibling node\n if not left_sibling:\n left_sibling = {\n 'lft': parent_pos_left,\n 'rgt': parent_pos_right,\n 'is_parent': True\n }\n\n # insert subtree in exist tree\n instance.tree_id = parent_tree_id\n _insert_subtree(\n table,\n connection,\n node_size,\n node_pos_left,\n node_pos_right,\n parent_pos_left,\n parent_pos_right,\n subtree,\n parent_tree_id,\n parent_level,\n node_level,\n left_sibling,\n table_pk\n )\n else:\n # if insert after\n if left_sibling_tree_id or left_sibling_tree_id == 0:\n tree_id = left_sibling_tree_id + 1\n connection.execute(\n table.update(\n table.c.tree_id > left_sibling_tree_id\n ).values(\n tree_id=table.c.tree_id + 1\n )\n )\n # if just insert\n else:\n tree_id = connection.scalar(\n select(\n [\n func.max(table.c.tree_id) + 1\n ]\n )\n )\n\n connection.execute(\n table.update(\n table_pk.in_(\n subtree\n )\n ).values(\n lft=table.c.lft - node_pos_left + 1,\n rgt=table.c.rgt - node_pos_left + 1,\n level=table.c.level - node_level + default_level,\n tree_id=tree_id\n )\n )", "response": "This function is used to move a node in nested set to the new tree."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncalling by the flush event listener to recursively expire 
left and right attributes of all modified instances in the specified session.", "response": "def after_flush_postexec(self, session, context):\n \"\"\"\n Event listener to recursively expire `left` and `right` attributes of the\n parents of all modified instances that are part of this flush.\n \"\"\"\n instances = self.instances[session]\n while instances:\n instance = instances.pop()\n if instance not in session:\n continue\n parent = self.get_parent_value(instance)\n\n while parent != NO_VALUE and parent is not None:\n instances.discard(parent)\n session.expire(parent, ['left', 'right', 'tree_id', 'level'])\n parent = self.get_parent_value(parent)\n else:\n session.expire(instance, ['left', 'right', 'tree_id', 'level'])\n self.expire_session_for_children(session, instance)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef is_ancestor_of(self, other, inclusive=False):\n if inclusive:\n return (self.tree_id == other.tree_id) \\\n & (self.left <= other.left) \\\n & (other.right <= self.right)\n return (self.tree_id == other.tree_id) \\\n & (self.left < other.left) \\\n & (other.right < self.right)", "response": "Class- or instance-level method that returns True if self is an ancestor of other, and False otherwise."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef move_inside(self, parent_id):\n session = Session.object_session(self)\n self.parent_id = parent_id\n self.mptt_move_inside = parent_id\n session.add(self)", "response": "Moves one node of the tree inside another node."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nmove one node of tree after another node", "response": "def move_after(self, node_id):\n \"\"\" Moving one node of tree after another\n\n For example see :mod:`sqlalchemy_mptt.tests.cases.move_node.test_move_after_function`\n \"\"\" # noqa\n session = Session.object_session(self)\n self.parent_id = 
self.parent_id\n self.mptt_move_after = node_id\n session.add(self)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef move_before(self, node_id):\n session = Session.object_session(self)\n table = _get_tree_table(self.__mapper__)\n pk = getattr(table.c, self.get_pk_column().name)\n node = session.query(table).filter(pk == node_id).one()\n self.parent_id = node.parent_id\n self.mptt_move_before = node_id\n session.add(self)", "response": "Moving one node of tree before another node"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef drilldown_tree(self, session=None, json=False, json_fields=None):\n if not session:\n session = object_session(self)\n return self.get_tree(\n session,\n json=json,\n json_fields=json_fields,\n query=self._drilldown_query\n )", "response": "This method generates a branch from a tree."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef path_to_root(self, session=None, order=desc):\n table = self.__class__\n query = self._base_query_obj(session=session)\n query = query.filter(table.is_ancestor_of(self, inclusive=True))\n return self._base_order(query, order=order)", "response": "Generate a path from a leaf or intermediate node to the root."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef rebuild_tree(cls, session, tree_id):\n session.query(cls).filter_by(tree_id=tree_id)\\\n .update({cls.left: 0, cls.right: 0, cls.level: 0})\n top = session.query(cls).filter_by(parent_id=None)\\\n .filter_by(tree_id=tree_id).one()\n top.left = left = 1\n top.right = right = 2\n top.level = level = cls.get_default_level()\n\n def recursive(children, left, right, level):\n level = level + 1\n for i, node in enumerate(children):\n same_level_right = children[i - 1].right\n left = left + 1\n\n if i > 0:\n left = left + 1\n if 
same_level_right:\n left = same_level_right + 1\n\n right = left + 1\n node.left = left\n node.right = right\n parent = node.parent\n\n j = 0\n while parent:\n parent.right = right + 1 + j\n parent = parent.parent\n j += 1\n\n node.level = level\n recursive(node.children, left, right, level)\n\n recursive(top.children, left, right, level)", "response": "This method rebuilds the tree."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nrebuilding the tree of the related objects.", "response": "def rebuild(cls, session, tree_id=None):\n \"\"\" This function rebuilds the tree.\n\n Args:\n session (:mod:`sqlalchemy.orm.session.Session`): SQLAlchemy session\n\n Kwargs:\n tree_id (int or str): id of tree, default None\n\n Example:\n\n * :mod:`sqlalchemy_mptt.tests.TestTree.test_rebuild`\n \"\"\"\n\n trees = session.query(cls).filter_by(parent_id=None)\n if tree_id:\n trees = trees.filter_by(tree_id=tree_id)\n for tree in trees:\n cls.rebuild_tree(session, tree.tree_id)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the probability that a life aged x dies before 1 year. Returns 0 if the age is not in the mortality table qx", "response": "def qx(mt, x):\n \"\"\" qx: Returns the probability that a life aged x dies before 1 year\n With the convention: the true probability is qx/1000\n Args:\n mt: the mortality table\n x: the age as integer number.\n \"\"\"\n if x < len(mt.qx):\n return mt.qx[x]\n else:\n return 0"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef lx(mt, x):\n if x < len(mt.lx):\n return mt.lx[x]\n else:\n return 0", "response": "Returns the number of survivors at the beginning of age x"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef dx(mt, x):\n end_x_val = mt.lx.index(0)\n if x < end_x_val: \n return mt.lx[x] - mt.lx[x + 1]\n else:\n return 0.0", "response": "Returns the number of 
dying at the beginning of age x"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the probability that x will survive within t years", "response": "def tpx(mt, x, t):\n \"\"\" tpx : Returns the probability that x will survive within t years\n npx : Returns n years survival probability at age x\n \"\"\"\n return mt.lx[x + t] / mt.lx[x]"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ex(mt, x):\n sum1 = 0\n for j in mt.lx[x + 1:-1]:\n sum1 += j\n # print(sum1)\n try:\n return sum1 / mt.lx[x] + 0.5\n except ZeroDivisionError:\n return 0", "response": "ex - Returns the curtate expectation of life. Life expectancy"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the Sx of the given MT", "response": "def Sx(mt, x):\n \"\"\" Return the Sx \"\"\" \n n = len(mt.Nx)\n sum1 = 0\n for j in range(x, n):\n k = mt.Nx[j]\n sum1 += k\n return sum1"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nreturns the Cx of the given mt", "response": "def Cx(mt, x):\n \"\"\" Return the Cx \"\"\" \n return ((1 / (1 + mt.i)) ** (x + 1)) * mt.dx[x] * ((1 + mt.i) ** 0.5)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn the Mx of the given MT", "response": "def Mx(mt, x):\n \"\"\" Return the Mx \"\"\"\n n = len(mt.Cx)\n sum1 = 0\n for j in range(x, n):\n k = mt.Cx[j]\n sum1 += k\n return sum1"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef Axn(mt, x, n):\n return (mt.Mx[x] - mt.Mx[x + n]) / mt.Dx[x]", "response": "(A^1)x:n : Returns the EPV (net single premium) of a term insurance."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef AExn(mt, x, n):\n return (mt.Mx[x] - mt.Mx[x + n]) / mt.Dx[x] + mt.Dx[x + n] / mt.Dx[x]", "response": "Returns 
the EPV (net single premium) of an n-year endowment insurance."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef tAx(mt, x, t):\n return mt.Mx[x + t] / mt.Dx[x]", "response": "n/Ax : Returns the EPV (net single premium) of a deferred whole life insurance."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreturns the actuarial present value of a temporal annuity", "response": "def aaxn(mt, x, n, m=1):\n \"\"\" \u00e4xn : Return the actuarial present value of a (immediate) temporal (term certain) annuity: \n n-year temporary life annuity-anticipatory. Payable 'm' per year at the beginning of the period \n \"\"\"\n if m == 1:\n return (mt.Nx[x] - mt.Nx[x + n]) / mt.Dx[x]\n else:\n return (mt.Nx[x] - mt.Nx[x + n]) / mt.Dx[x] - ((float(m - 1) / float(m * 2)) * (1 - nEx(mt, x, n)))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef aax(mt, x, m=1):\n return mt.Nx[x] / mt.Dx[x] - (float(m - 1) / float(m * 2))", "response": "\u00e4x : Returns the actuarial present value of an annuity of 1 per time period \n "} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nreturns the actuarial present value of an annuity of 1 per time period", "response": "def ax(mt, x, m=1):\n \"\"\" ax : Returns the actuarial present value of an (immediate) annuity of 1 per time period \n (whole life annuity-late). 
Payable 'm' per year at the ends of the period \n \"\"\"\n return (mt.Nx[x] / mt.Dx[x] - 1) + (float(m - 1) / float(m * 2))"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef taax(mt, x, t, m=1):\n return mt.Nx[x + t] / mt.Dx[x] - ((float(m - 1) / float(m * 2)) * (1 - nEx(mt, x, t)))", "response": "Return the actuarial present value of a deferred annuity at a given time."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef Iaaxn(mt, x, n, *args):\n return (Sx(mt, x) - Sx(mt, x + n) - n * Nx(mt, x + n)) / Dx(mt, x)", "response": "Returns the EPV of an increasing n-year temporary annuity-due (increasing annuity during a term certain)."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef Iaxn(mt, x, n, *args):\n return (Sx(mt, x + 1) - Sx(mt, x + n + 1) - n * Nx(mt, x + n + 1)) / Dx(mt, x)", "response": "Returns the EPV of an increasing n-year temporary immediate annuity (increasing annuity during a term certain)."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef Itaax(mt, x, t):\n return (Sx(mt, x) - Sx(mt, x + t)) / Dx(mt, x)", "response": "Itaax of a given time."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _meanvalueattr(self,v):\n sug = self.layout\n if not self.prevlayer(): return sug.grx[v].bar\n bars = [sug.grx[x].bar for x in self._neighbors(v)]\n return sug.grx[v].bar if len(bars)==0 else float(sum(bars))/len(bars)", "response": "find new position of vertex v according to adjacency in prevlayer and return the mean value of adjacent positions."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nfinding new position of vertex v according to adjacency in layer l + dir", "response": "def _medianindex(self,v):\n \"\"\"\n find new position of vertex v according to adjacency in layer l+dir.\n position is given by the median value of adjacent positions.\n median heuristic is proven to 
achieve at most 3 times the minimum\n of crossings (while barycenter achieve in theory the order of |V|)\n \"\"\"\n assert self.prevlayer()!=None\n N = self._neighbors(v)\n g=self.layout.grx\n pos = [g[x].pos for x in N]\n lp = len(pos)\n if lp==0: return []\n pos.sort()\n pos = pos[::self.layout.dirh]\n i,j = divmod(lp-1,2)\n return [pos[i]] if j==0 else [pos[i],pos[i+j]]"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\ngetting neighbors of a node in the graph", "response": "def _neighbors(self,v):\n \"\"\"\n neighbors refer to upper/lower adjacent nodes.\n Note that v.N() provides neighbors of v in the graph, while\n this method provides the Vertex and DummyVertex adjacent to v in the\n upper or lower layer (depending on layout.dirv state).\n \"\"\"\n assert self.layout.dag\n dirv = self.layout.dirv\n grxv = self.layout.grx[v]\n try: #(cache)\n return grxv.nvs[dirv]\n except AttributeError:\n grxv.nvs={-1:v.N(-1),+1:v.N(+1)}\n if grxv.dummy: return grxv.nvs[dirv]\n # v is real, v.N are graph neighbors but we need layers neighbors\n for d in (-1,+1):\n tr=grxv.rank+d\n for i,x in enumerate(v.N(d)):\n if self.layout.grx[x].rank==tr:continue\n e=v.e_with(x)\n dum = self.layout.ctrls[e][tr]\n grxv.nvs[d][i]=dum\n return grxv.nvs[dirv]"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncount the number of crossings between two layers.", "response": "def _crossings(self):\n \"\"\"\n counts (inefficiently but at least accurately) the number of\n crossing edges between layer l and l+dirv.\n P[i][j] counts the number of crossings from j-th edge of vertex i.\n The total count of crossings is the sum of flattened P:\n x = sum(sum(P,[]))\n \"\"\"\n g=self.layout.grx\n P=[]\n for v in self:\n P.append([g[x].pos for x in self._neighbors(v)])\n for i,p in enumerate(P):\n candidates = sum(P[i+1:],[])\n for j,e in enumerate(p):\n p[j] = len(filter((lambda nx:nx0.5:\n for (l,mvmt) in self.ordering_step():\n pass\n N = 
N-1\n if N>0:\n for (l,mvmt) in self.ordering_step(oneway=True):\n pass\n self.setxy()\n self.draw_edges()", "response": "draws every node's coordinates after converging to optimal ordering by N-1.5 rounds and performs edge routing."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef rank_all(self,roots,optimize=False):\n self._edge_inverter()\n r = [x for x in self.g.sV if (len(x.e_in())==0 and x not in roots)]\n self._rank_init(roots+r)\n if optimize: self._rank_optimize()\n self._edge_inverter()", "response": "Computes the rank of all vertices in the graph."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ninitialize the rank of the unranked list of vertices and all their children.", "response": "def _rank_init(self,unranked):\n \"\"\"Computes rank of provided unranked list of vertices and all\n their children. A vertex will be assigned a rank when all its \n inward edges have been *scanned*. When a vertex is assigned\n a rank, its outward edges are marked *scanned*.\n \"\"\"\n assert self.dag\n scan = {}\n # set rank of unranked based on its in-edges vertices ranks:\n while len(unranked)>0:\n l = []\n for v in unranked:\n self.setrank(v)\n # mark out-edges as scan-able:\n for e in v.e_out(): scan[e]=True\n # check if out-vertices are rank-able:\n for x in v.N(+1):\n if not (False in [scan.get(e,False) for e in x.e_in()]):\n if x not in l: l.append(x)\n unranked=l"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\noptimize ranking by pushing long edges toward lower layers as much as possible.", "response": "def _rank_optimize(self):\n \"\"\"optimize ranking by pushing long edges toward lower layers as much as possible.\n see other interesting network flow solvers to minimize total edge length\n (http://jgaa.info/accepted/2005/EiglspergerSiebenhallerKaufmann2005.9.3.pdf)\n \"\"\"\n assert self.dag\n for l in reversed(self.layers):\n for v in l:\n gv = 
self.grx[v]\n for x in v.N(-1):\n if all((self.grx[y].rank>=gv.rank for y in x.N(+1))):\n gx = self.grx[x]\n self.layers[gx.rank].remove(x)\n gx.rank = gv.rank-1\n self.layers[gv.rank-1].append(x)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsetting rank value for vertex v and add it to the corresponding layer.", "response": "def setrank(self,v):\n \"\"\"set rank value for vertex v and add it to the corresponding layer.\n The Layer is created if it is the first vertex with this rank.\n \"\"\"\n assert self.dag\n r=max([self.grx[x].rank for x in v.N(-1)]+[-1])+1\n self.grx[v].rank=r\n # add it to its layer:\n try:\n self.layers[r].append(v)\n except IndexError:\n assert r==len(self.layers)\n self.layers.append(Layer([v]))"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef dummyctrl(self,r,ctrl):\n dv = DummyVertex(r)\n dv.view.w,dv.view.h=self.dw,self.dh\n self.grx[dv] = dv\n dv.ctrl = ctrl\n ctrl[r] = dv\n self.layers[r].append(dv)\n return dv", "response": "creates a DummyVertex at rank r inserted in the ctrl dict and adds it to the edge's control vertices and returns it"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef setdummies(self,e):\n v0,v1 = e.v\n r0,r1 = self.grx[v0].rank,self.grx[v1].rank\n if r0>r1:\n assert e in self.alt_e\n v0,v1 = v1,v0\n r0,r1 = r1,r0\n if (r1-r0)>1:\n # \"dummy vertices\" are stored in the edge ctrl dict,\n # keyed by their rank in layers.\n ctrl=self.ctrls[e]={}\n ctrl[r0]=v0\n ctrl[r1]=v1\n for r in range(r0+1,r1):\n self.dummyctrl(r,ctrl)", "response": "creates and defines all needed dummy vertices for edge e.\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef ordering_step(self,oneway=False):\n self.dirv=-1\n crossings = 0\n for l in self.layers:\n mvmt = l.order()\n crossings += mvmt\n yield (l,mvmt)\n if 
oneway or (crossings == 0):\n return\n self.dirv=+1\n while l:\n mvmt = l.order()\n yield (l,mvmt)\n l = l.nextlayer()", "response": "iterator that computes the ordering of all vertices in the layers."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompute all vertex coordinates using Brandes & Kopf.", "response": "def setxy(self):\n \"\"\"computes all vertex coordinates (x,y) using\n an algorithm by Brandes & Kopf.\n \"\"\"\n self._edge_inverter()\n self._detect_alignment_conflicts()\n inf = float('infinity')\n # initialize vertex coordinates attributes:\n for l in self.layers:\n for v in l:\n self.grx[v].root = v\n self.grx[v].align = v\n self.grx[v].sink = v\n self.grx[v].shift = inf\n self.grx[v].X = None\n self.grx[v].x = [0.0]*4\n curvh = self.dirvh # save current dirvh value\n for dirvh in range(4):\n self.dirvh = dirvh\n self._coord_vertical_alignment()\n self._coord_horizontal_compact()\n self.dirvh = curvh # restore it\n # vertical coordinate assignment of all nodes:\n Y = 0\n for l in self.layers:\n dY = max([v.view.h/2. 
for v in l])\n for v in l:\n vx = sorted(self.grx[v].x)\n # mean of the 2 medians out of the 4 x-coord computed above:\n avgm = (vx[1]+vx[2])/2.\n # final xy-coordinates :\n v.view.xy = (avgm,Y+dY)\n Y += 2*dY+self.yspace\n self._edge_inverter()"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _detect_alignment_conflicts(self):\n curvh = self.dirvh # save current dirvh value\n self.dirvh=0\n self.conflicts = []\n for L in self.layers:\n last = len(L)-1\n prev = L.prevlayer()\n if not prev: continue\n k0=0\n k1_init=len(prev)-1\n l=0\n for l1,v in enumerate(L):\n if not self.grx[v].dummy: continue\n if l1==last or v.inner(-1):\n k1=k1_init\n if v.inner(-1):\n k1=self.grx[v.N(-1)[-1]].pos\n for vl in L[l:l1+1]:\n for vk in L._neighbors(vl):\n k = self.grx[vk].pos\n if (k<k0 or k>k1):\n self.conflicts.append((vk,vl))\n l=l1+1\n k0=k1\n self.dirvh = curvh", "response": "detect conflicts between dummy nodes and regular crossings"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nperforms vertical alignment according to current dirvh internal state.", "response": "def _coord_vertical_alignment(self):\n \"\"\"performs vertical alignment according to current dirvh internal state.\n \"\"\"\n dirh,dirv = self.dirh,self.dirv\n g = self.grx\n for l in self.layers[::-dirv]:\n if not l.prevlayer(): continue\n r=None\n for vk in l[::dirh]:\n for m in l._medianindex(vk):\n # take the median node in dirv layer:\n um = l.prevlayer()[m]\n # if vk is \"free\" align it with um's root\n if g[vk].align is vk:\n if dirv==1: vpair = (vk,um)\n else: vpair = (um,vk)\n # if vk<->um link is used for alignment\n if (vpair not in self.conflicts) and \\\n (r==None or dirh*r
from event text file.", "response": "def load_ev_txt(strPthEv):\n \"\"\"Load information from event text file.\n\n Parameters\n ----------\n input1 : str\n Path to event text file\n Returns\n -------\n aryEvTxt : 2d numpy array, shape [n_measurements, 3]\n Array with info about conditions: type, onset, duration\n Notes\n -----\n Part of py_pRF_mapping library.\n \"\"\"\n aryEvTxt = np.loadtxt(strPthEv, dtype='float', comments='#', delimiter=' ',\n skiprows=0, usecols=(0, 1, 2))\n return aryEvTxt"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\napplying status mapping to a raw API result.", "response": "def adjust_status(info: dict) -> dict:\n \"\"\"Apply status mapping to a raw API result.\"\"\"\n modified_info = deepcopy(info)\n modified_info.update({\n 'level':\n get_nearest_by_numeric_key(STATUS_MAP, int(info['level'])),\n 'level2':\n STATUS_MAP[99] if info['level2'] is None else\n get_nearest_by_numeric_key(STATUS_MAP, int(info['level2']))\n })\n\n return modified_info"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning the status of the CDC for the provided latitude and longitude.", "response": "async def status_by_coordinates(\n self, latitude: float, longitude: float) -> dict:\n \"\"\"Return the CDC status for the provided latitude/longitude.\"\"\"\n cdc_data = await self.raw_cdc_data()\n nearest = await self.nearest_by_coordinates(latitude, longitude)\n return adjust_status(cdc_data[nearest['state']['name']])"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning the CDC status for the specified state.", "response": "async def status_by_state(self, state: str) -> dict:\n \"\"\"Return the CDC status for the specified state.\"\"\"\n data = await self.raw_cdc_data()\n\n try:\n info = next((v for k, v in data.items() if state in k))\n except StopIteration:\n return {}\n\n return adjust_status(info)"} {"SOURCE": "codesearchnet", "instruction": "Can you create 
a Python 3 function that\nreturns the exception class and message of the exception as string.", "response": "def brief_exception_text(exception, secret_values):\n \"\"\"\n Returns the Exception class and the message of the exception as string.\n\n :param exception: The exception to format\n :param secret_values: Values to hide in output\n \"\"\"\n exception_text = _hide_secret_values(str(exception), secret_values)\n return '[{}]\\n{}'.format(type(exception).__name__, exception_text)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nprint the exception message and the name of the exception class to stderr.", "response": "def print_exception(exception, secret_values=None):\n \"\"\"\n Prints the exception message and the name of the exception class to stderr.\n\n :param exception: The exception to print\n :param secret_values: Values to hide in output\n \"\"\"\n print(brief_exception_text(exception, secret_values), file=sys.stderr)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef insert(self, **kwargs):\n if self.is_valid:\n before = self.before_insert()\n if before:\n return before\n\n try:\n self._document['_id'] = self.insert_one(self._document)\n\n self.after_insert()\n\n return self._document\n except PyMongoException as exc:\n return PyMongoError(\n error_message=exc.details.get(\n 'errmsg', exc.details.get('err', 'PyMongoError.')\n ),\n operation='insert', collection=type(self).__name__,\n document=self._document,\n )\n\n return self._errors", "response": "Inserts the document into the database if it is valid. 
Returns errors otherwise."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef update(self, **kwargs):\n if self.is_valid:\n if '_id' in self._document:\n to_update = self.find_one({'_id': self._id})\n\n if to_update:\n before = self.before_update(old=to_update)\n if before:\n return before\n\n try:\n self.replace_one({'_id': self._id}, self._document)\n\n self.after_update(old=to_update)\n\n return self._document\n except PyMongoException as exc:\n return PyMongoError(\n error_message=exc.details.get(\n 'errmsg', exc.details.get(\n 'err', 'PyMongoError.'\n )\n ),\n operation='update', collection=type(self).__name__,\n document=self._document,\n )\n else:\n return DocumentNotFoundError(type(self).__name__, self._id)\n else:\n return UnidentifiedDocumentError(\n type(self).__name__, self._document\n )\n\n return self._errors", "response": "Updates the document with the given _id saved in the collection if it is valid. Returns errors otherwise."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef delete(self, **kwargs):\n if self.is_valid:\n if '_id' in self._document:\n to_delete = self.find_one({'_id': self._id})\n\n if to_delete:\n before = self.before_delete()\n if before:\n return before\n\n try:\n self.delete_one({'_id': self._id})\n\n self.after_delete()\n\n return self._document\n except PyMongoException as exc:\n return PyMongoError(\n error_message=exc.details.get(\n 'errmsg', exc.details.get(\n 'err', 'PyMongoError.'\n )\n ),\n operation='delete', collection=type(self).__name__,\n document=self._document,\n )\n else:\n return DocumentNotFoundError(type(self).__name__, self._id)\n else:\n return UnidentifiedDocumentError(\n type(self).__name__, self._document\n )", "response": "Deletes the document if it is saved in the collection."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nreturning one document dict if one passes the 
filter.", "response": "def find_one(cls, filter=None, *args, **kwargs):\n \"\"\"\n Returns one document dict if one passes the filter.\n Returns None otherwise.\n \"\"\"\n return cls.collection.find_one(filter, *args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef find(cls, *args, **kwargs):\n return list(cls.collection.find(*args, **kwargs))", "response": "Returns all document dicts that pass the filter\n "} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef aggregate(cls, pipeline=None, **kwargs):\n return list(cls.collection.aggregate(pipeline or [], **kwargs))", "response": "Returns the document dicts returned from the Aggregation Pipeline\n "} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef insert_many(cls, documents, ordered=True):\n return cls.collection.insert_many(documents, ordered).inserted_ids", "response": "Inserts a list of documents into the Collection and returns their _ids\n "} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nupdating all the documents that pass the filter with the update value. Will upsert a new document if upsert = True", "response": "def update_many(cls, filter, update, upsert=False):\n \"\"\"\n Updates all documents that pass the filter with the update value.\n Will upsert a new document if upsert=True and no document is filtered\n \"\"\"\n return cls.collection.update_many(filter, update, upsert).raw_result"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nreplaces a single document in the collection that passes the filter.", "response": "def replace_one(cls, filter, replacement, upsert=False):\n \"\"\"\n Replaces a document that passes the filter.\n Will upsert a new document if upsert=True and no document is filtered\n \"\"\"\n return cls.collection.replace_one(\n filter, 
replacement, upsert\n ).raw_result"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a Document if any document is filtered returns None otherwise", "response": "def get(cls, filter=None, **kwargs):\n \"\"\"\n Returns a Document if any document is filtered, returns None otherwise\n \"\"\"\n document = cls(cls.find_one(filter, **kwargs))\n return document if document.document else None"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef documents(cls, filter=None, **kwargs):\n documents = [cls(document) for document in cls.find(filter, **kwargs)]\n return [document for document in documents if document.document]", "response": "Returns a list of Documents if any document is filtered by the filter keyword arguments."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nreturn an iterator over all of the statements belonging to a file.", "response": "def in_file(self, fn: str) -> Iterator[Statement]:\n \"\"\"\n Returns an iterator over all of the statements belonging to a file.\n \"\"\"\n yield from self.__file_to_statements.get(fn, [])"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nreturn an iterator over all of the statements located at a given line.", "response": "def at_line(self, line: FileLine) -> Iterator[Statement]:\n \"\"\"\n Returns an iterator over all of the statements located at a given line.\n \"\"\"\n num = line.num\n for stmt in self.in_file(line.filename):\n if stmt.location.start.line == num:\n yield stmt"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef funcFindPrfGpu(idxPrc, vecMdlXpos, vecMdlYpos, vecMdlSd, aryFunc, # noqa\n aryPrfTc, varL2reg, queOut, lgcPrint=True):\n \"\"\"\n Find best pRF model for voxel time course.\n\n Parameters\n ----------\n idxPrc : int\n Process ID of the process calling this 
function (for CPU\n multi-threading). In GPU version, this parameter is 0 (just one thread\n on CPU).\n vecMdlXpos : np.array\n 1D array with pRF model x positions.\n vecMdlYpos : np.array\n 1D array with pRF model y positions.\n vecMdlSd : np.array\n 1D array with pRF model sizes (SD of Gaussian).\n aryFunc : np.array\n 2D array with functional MRI data, with shape aryFunc[voxel, time].\n aryPrfTc : np.array\n Array with pRF model time courses, with shape\n aryPrfTc[x-pos, y-pos, SD, motion-direction, time]\n varL2reg : float\n L2 regularisation factor for ridge regression.\n queOut : multiprocessing.queues.Queue\n Queue to put the results on.\n lgcPrint : boolean\n Whether print statements should be executed.\n\n Returns\n -------\n lstOut : list\n List containing the following objects:\n idxPrc : int\n Process ID of the process calling this function (for CPU\n multi-threading). In GPU version, this parameter is 0.\n vecBstXpos : np.array\n 1D array with best fitting x-position for each voxel, with shape\n vecBstXpos[voxel].\n vecBstYpos : np.array\n 1D array with best fitting y-position for each voxel, with shape\n vecBstYpos[voxel].\n vecBstSd : np.array\n 1D array with best fitting pRF size for each voxel, with shape\n vecBstSd[voxel].\n vecBstR2 : np.array\n 1D array with R2 value of 'winning' pRF model for each voxel, with\n shape vecBstR2[voxel].\n dummy : np.array\n 2D array that is supposed to contain the beta values of 'winning'\n pRF models for each voxel, with shape aryBeta[voxel, beta]. 
AT THE\n MOMENT, CONTAINS EMPTY DUMMY ARRAY (np.zeros).\n\n Notes\n -----\n Uses a queue that runs in a separate thread to put model time courses on\n the computational graph.\n \"\"\"\n # -------------------------------------------------------------------------\n # *** Queue-feeding-function that will run in extra thread\n def funcPlcIn():\n \"\"\"Place data on queue.\"\"\"\n # Iteration counter:\n idxCnt = 0\n\n while True:\n\n # Feed example to Tensorflow placeholder\n aryTmp02 = lstPrfTc[idxCnt]\n dicIn = {objPlcHld01: aryTmp02}\n\n # Push to the queue:\n objSess.run(objEnQ, feed_dict=dicIn)\n\n idxCnt += 1\n\n # Stop if coordinator says stop:\n if objCoord.should_stop():\n break\n\n # Stop if all data has been put on the queue:\n elif idxCnt == varNumMdls:\n break\n\n # -------------------------------------------------------------------------\n # *** Prepare pRF model time courses for graph\n if lgcPrint:\n print('------Prepare pRF model time courses for graph')\n\n # Information about pRF model parameters:\n varNumX = np.shape(vecMdlXpos)[0]\n varNumY = np.shape(vecMdlYpos)[0]\n varNumPrfSizes = np.shape(vecMdlSd)[0]\n\n # Number of predictors (betas):\n varNumBeta = aryPrfTc.shape[3]\n\n # At this point, aryPrfTc has the following dimensions:\n # aryPrfTc[x-pos, y-pos, SD, motion-direction, time]\n\n # Reshape pRF model time courses:\n aryPrfTc = np.reshape(aryPrfTc,\n ((aryPrfTc.shape[0]\n * aryPrfTc.shape[1]\n * aryPrfTc.shape[2]),\n aryPrfTc.shape[3],\n aryPrfTc.shape[4]))\n\n # Now, aryPrfTc has the following dimensions:\n # aryPrfTc[(x-pos * y-pos * SD), motion-direction, time]\n\n # Original total number of pRF time course models (before removing models\n # with zero variance):\n varNumMdlsTtl = aryPrfTc.shape[0]\n\n # Change type to float 32:\n aryPrfTc = aryPrfTc.astype(np.float32)\n\n # The pRF model is fitted only if variance along time dimension is not\n # zero. 
Get variance along time dimension:\n vecVarPrfTc = np.var(aryPrfTc, axis=2)\n\n # Zero with float32 precision for comparison:\n varZero32 = np.array(([0.0])).astype(np.float32)[0]\n\n # Boolean array for models with variance greater than zero for at least\n # one motion direction:\n vecLgcVar = np.max(\n np.greater(vecVarPrfTc,\n varZero32),\n axis=1\n )\n\n # Take models with zero variance out of the array:\n aryPrfTc = aryPrfTc[vecLgcVar, :, :]\n\n # Swap axes, so that\n # aryPrfTc[(x-pos * y-pos * SD), time, motion-direction]\n aryPrfTc = np.swapaxes(aryPrfTc, 1, 2)\n\n # Add constant term (ones):\n # aryPrfTc = np.concatenate((aryPrfTc,\n # np.ones((aryPrfTc.shape[0],\n # aryPrfTc.shape[1],\n # 1)).astype(np.float32)),\n # axis=2)\n\n # Size of pRF time courses in MB:\n varSzePrf = np.divide(float(aryPrfTc.nbytes),\n 1000000.0)\n if lgcPrint:\n print(('---------Size of pRF time courses: '\n + str(np.around(varSzePrf))\n + ' MB'))\n\n # Put pRF model time courses into list:\n lstPrfTc = [None] * aryPrfTc.shape[0]\n for idxMdl in range(int(aryPrfTc.shape[0])):\n lstPrfTc[idxMdl] = aryPrfTc[idxMdl, :, :]\n del(aryPrfTc)\n\n # Total number of pRF models to fit:\n varNumMdls = len(lstPrfTc)\n\n # -------------------------------------------------------------------------\n # *** Prepare functional data for graph\n if lgcPrint:\n print('------Prepare functional data for graph')\n\n # Number of voxels to be fitted:\n varNumVox = aryFunc.shape[0]\n\n # Number of volumes:\n varNumVol = aryFunc.shape[1]\n\n # We reshape the voxel time courses, so that time goes down the column,\n # i.e. from top to bottom.\n aryFunc = aryFunc.T\n\n # Change type to float 32:\n aryFunc = aryFunc.astype(np.float32)\n\n # We cannot commit the entire functional data to GPU memory, so we need to\n # create chunks.
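The zero-variance filtering described above (keep a model only if at least one motion direction has non-zero variance along time) can be sketched in isolation; the two toy models below are hypothetical:

```python
import numpy as np

# Hypothetical models: aryPrfTc[model, motion-direction, time]
aryPrfTc = np.array([
    [[1.0, 2.0, 3.0], [0.0, 0.0, 0.0]],  # varies in at least one direction
    [[5.0, 5.0, 5.0], [4.0, 4.0, 4.0]],  # flat in every direction
], dtype=np.float32)

# Variance along the time axis, per model and motion direction:
vecVarPrfTc = np.var(aryPrfTc, axis=2)

# Keep a model if at least one motion direction has variance > 0:
vecLgcVar = np.max(np.greater(vecVarPrfTc, np.float32(0.0)), axis=1)

aryPrfTc = aryPrfTc[vecLgcVar, :, :]
print(aryPrfTc.shape)  # (1, 2, 3)
```

Only the first model survives; the second is removed because its time course is constant in every motion direction.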
Establish the limit (maximum size) of one chunk (in MB):\n varSzeMax = 50.0 # 20.0\n\n # Size of functional data in MB:\n varSzeFunc = np.divide(float(aryFunc.nbytes),\n 1000000.0)\n if lgcPrint:\n print(('---------Size of functional data: '\n + str(np.around(varSzeFunc))\n + ' MB'))\n\n # Number of chunks to create:\n varNumChnk = int(np.ceil(np.divide(varSzeFunc, varSzeMax)))\n\n if lgcPrint:\n print(('---------Functional data will be split into '\n + str(varNumChnk)\n + ' batches'))\n\n # Vector with the indicies at which the functional data will be separated\n # in order to be chunked up for the parallel processes:\n vecIdxChnks = np.linspace(0,\n varNumVox,\n num=varNumChnk,\n endpoint=False)\n vecIdxChnks = np.hstack((vecIdxChnks, varNumVox))\n\n # List into which the chunks of functional data are put:\n lstFunc = [None] * varNumChnk\n\n # Put functional data into chunks:\n for idxChnk in range(0, varNumChnk):\n # Index of first voxel to be included in current chunk:\n varChnkStr = int(vecIdxChnks[idxChnk])\n # Index of last voxel to be included in current chunk:\n varChnkEnd = int(vecIdxChnks[(idxChnk+1)])\n # Put voxel array into list:\n lstFunc[idxChnk] = aryFunc[:, varChnkStr:varChnkEnd]\n\n # We delete the original array holding the functional data to conserve\n # memory. Therefore, we first need to calculate the mean (will be needed\n # for calculation of R2).\n\n # After finding the best fitting model for each voxel, we still have to\n # calculate the coefficient of determination (R-squared) for each voxel. We\n # start by calculating the total sum of squares (i.e. the deviation of the\n # data from the mean). 
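The chunking arithmetic above (start indices via `np.linspace` with `endpoint=False`, with the total voxel count appended as the final boundary) can be checked with hypothetical sizes:

```python
import numpy as np

varNumVox = 10   # hypothetical number of voxels
varNumChnk = 3   # hypothetical number of chunks

# Chunk start indices (endpoint excluded), plus the overall end:
vecIdxChnks = np.linspace(0, varNumVox, num=varNumChnk, endpoint=False)
vecIdxChnks = np.hstack((vecIdxChnks, varNumVox))

# (start, end) index pair for each chunk, truncated to integers as in the
# loop above:
bounds = [(int(vecIdxChnks[idxChnk]), int(vecIdxChnks[idxChnk + 1]))
          for idxChnk in range(varNumChnk)]
print(bounds)  # [(0, 3), (3, 6), (6, 10)]
```

Note that truncating the float boundaries makes the last chunk slightly larger when the voxel count does not divide evenly.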
The mean of each time course:\n vecFuncMean = np.mean(aryFunc, axis=0)\n # Deviation from the mean for each datapoint:\n vecFuncDev = np.subtract(aryFunc, vecFuncMean[None, :])\n # Sum of squares:\n vecSsTot = np.sum(np.power(vecFuncDev,\n 2.0),\n axis=0)\n\n # We don't need the original array with the functional data anymore (the\n # above seems to have created a hard copy):\n del(vecFuncDev)\n del(aryFunc)\n\n # -------------------------------------------------------------------------\n # *** Miscellaneous preparations\n\n # Vector for minimum squared residuals:\n vecResSsMin = np.zeros((varNumVox), dtype=np.float32)\n\n # Vector for indices of models with minimum residuals:\n vecResSsMinIdx = np.zeros((varNumVox), dtype=np.int32)\n\n # Multiply L2 regularization factor with identity matrix:\n aryL2reg = np.multiply(np.eye(varNumBeta),\n varL2reg).astype(np.float32)\n\n # Reduce logging verbosity:\n os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'\n\n # -------------------------------------------------------------------------\n # *** Prepare status indicator\n\n # We create a status indicator for the time consuming pRF model finding\n # algorithm. 
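The total-sum-of-squares computation, and the R-squared it later feeds into, can be sketched with a hypothetical two-voxel example (the winning-model residual values are made up):

```python
import numpy as np

# Hypothetical functional data, aryFunc[time, voxel] (time down the column):
aryFunc = np.array([[1.0, 1.0],
                    [3.0, 2.0],
                    [5.0, 3.0]], dtype=np.float32)

# Total sum of squares per voxel (deviation of the data from its mean):
vecFuncMean = np.mean(aryFunc, axis=0)
vecSsTot = np.sum(np.power(aryFunc - vecFuncMean[None, :], 2.0), axis=0)

# Hypothetical minimum residual sum of squares of the winning models:
vecResSsMin = np.array([2.0, 0.5], dtype=np.float32)

# Coefficient of determination, as in the post-processing step:
vecBstR2 = np.subtract(1.0, np.divide(vecResSsMin, vecSsTot))
print(vecSsTot, vecBstR2)
```

Computing the total sum of squares before deleting `aryFunc` is what allows R-squared to be derived later from the residuals alone.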
Number of steps of the status indicator:\n varStsStpSze = 20\n\n # Vector with pRF values at which to give status feedback:\n vecStatPrf = np.linspace(0,\n (varNumMdls * varNumChnk),\n num=(varStsStpSze+1),\n endpoint=True)\n vecStatPrf = np.ceil(vecStatPrf)\n vecStatPrf = vecStatPrf.astype(int)\n\n # Vector with corresponding percentage values at which to give status\n # feedback:\n vecStatPrc = np.linspace(0,\n 100,\n num=(varStsStpSze+1),\n endpoint=True)\n vecStatPrc = np.ceil(vecStatPrc)\n vecStatPrc = vecStatPrc.astype(int)\n\n # Counter for status indicator:\n varCntSts01 = 0\n varCntSts02 = 0\n\n # -------------------------------------------------------------------------\n # *** Loop through chunks\n if lgcPrint:\n print('------Run graph')\n\n for idxChnk in range(varNumChnk):\n if lgcPrint:\n print(('---------Chunk: ' + str(idxChnk)))\n print('lstPrfTc[0].shape')\n print(lstPrfTc[0].shape)\n\n # Define session:\n # objSess = tf.Session()\n with tf.Graph().as_default(), tf.Session() as objSess:\n\n # -----------------------------------------------------------------\n # *** Prepare queue\n if lgcPrint:\n print('------Define computational graph, queue & session')\n\n # Queue capacity:\n varCapQ = 10\n\n # Dimensions of placeholder have to be determined outside of the\n # tensor object, otherwise the object on which the size is\n # calculated is loaded into GPU memory.\n varDim01 = lstPrfTc[0].shape[0]\n varDim02 = lstPrfTc[0].shape[1]\n\n # The queue:\n objQ = tf.FIFOQueue(capacity=varCapQ,\n dtypes=[tf.float32],\n shapes=[(varDim01, varDim02)])\n\n # Method for getting queue size:\n objSzeQ = objQ.size()\n\n # Placeholder that is used to put design matrix on computational\n # graph:\n objPlcHld01 = tf.placeholder(tf.float32,\n shape=[varDim01, varDim02])\n\n # The enqueue operation that puts data on the graph.\n objEnQ = objQ.enqueue([objPlcHld01])\n\n # Number of threads that will be created:\n varNumThrd = 1\n\n # The queue runner (places the enqueue 
operation on the queue?).\n objRunQ = tf.train.QueueRunner(objQ, [objEnQ] * varNumThrd)\n tf.train.add_queue_runner(objRunQ)\n\n # The tensor object that is retrieved from the queue. Functions\n # like placeholders for the data in the queue when defining the\n # graph.\n objDsng = objQ.dequeue()\n\n # Coordinator needs to be initialised:\n objCoord = tf.train.Coordinator()\n\n # -----------------------------------------------------------------\n # *** Fill queue\n\n # Buffer size (number of samples to put on queue before starting\n # execution of graph):\n varBuff = 10\n\n # Define & run extra thread with graph that places data on queue:\n objThrd = threading.Thread(target=funcPlcIn)\n objThrd.setDaemon(True)\n objThrd.start()\n\n # Stay in this while loop until the specified number of samples\n # (varBuffer) have been placed on the queue).\n varTmpSzeQ = 0\n while varTmpSzeQ < varBuff:\n varTmpSzeQ = objSess.run(objSzeQ)\n\n # -----------------------------------------------------------------\n # *** Prepare & run the graph\n\n # Chunk of functional data:\n aryTmp01 = np.copy(lstFunc[idxChnk])\n with tf.device('/gpu:0'):\n objFunc = tf.Variable(aryTmp01)\n\n # Regularisation factor matrix:\n with tf.device('/gpu:0'):\n objL2reg = tf.Variable(aryL2reg)\n\n # The computational graph. Operation that solves matrix (in the\n # least squares sense), and calculates residuals along time\n # dimension. There are two versions: (1) The number of measurements\n # (e.g. volumes) is greater than or equal to the number of\n # predictors (betas). 
(2) The number of measurements is less than\n # the number of predictors.\n\n # (1) Number of measurements greater/equal to number of predictors:\n if np.greater_equal(varNumVol, varNumBeta):\n objMatSlve = tf.reduce_sum(\n tf.squared_difference(\n objFunc,\n tf.matmul(\n objDsng,\n tf.matmul(\n tf.matmul(\n tf.matrix_inverse(\n tf.add(\n tf.matmul(\n objDsng,\n objDsng,\n transpose_a=True,\n transpose_b=False\n ),\n objL2reg\n )\n ),\n objDsng,\n transpose_a=False,\n transpose_b=True\n ),\n objFunc\n )\n ),\n ),\n axis=0\n )\n\n # (2) Number of measurements less than number of predictors:\n else:\n objMatSlve = tf.reduce_sum(\n tf.squared_difference(\n objFunc,\n tf.matmul(\n objDsng,\n tf.matmul(\n tf.matmul(\n objDsng,\n tf.matrix_inverse(\n tf.add(\n tf.matmul(\n objDsng,\n objDsng,\n transpose_a=False,\n transpose_b=True\n ),\n objL2reg\n )\n ),\n transpose_a=True,\n transpose_b=False\n ),\n objFunc\n )\n ),\n ),\n axis=0\n )\n\n # Variables need to be (re-)initialised:\n objSess.run(tf.global_variables_initializer())\n\n # Mark graph as read-only (would throw an error in case of memory\n # leak):\n objSess.graph.finalize()\n\n # Index of first voxel in current chunk (needed to assign results):\n varChnkStr = int(vecIdxChnks[idxChnk])\n\n # Index of last voxel in current chunk (needed to assign results):\n varChnkEnd = int(vecIdxChnks[(idxChnk+1)])\n\n # Array for results of current chunk:\n aryTmpRes = np.zeros((varNumMdls,\n lstFunc[idxChnk].shape[1]),\n dtype=np.float32)\n\n # Loop through models:\n for idxMdl in range(varNumMdls):\n\n # Run main computational graph and put results in list:\n # varTme01 = time.time()\n\n aryTmpRes[idxMdl, :] = objSess.run(objMatSlve)\n\n # print(('---------Time for graph call: '\n # + str(time.time() - varTme01)))\n\n # Status indicator:\n if varCntSts02 == vecStatPrf[varCntSts01]:\n # Number of elements on queue:\n varTmpSzeQ = objSess.run(objSzeQ)\n # Prepare status message:\n strStsMsg = ('---------Progress: '\n + 
str(vecStatPrc[varCntSts01])\n + ' % --- Number of elements on queue: '\n + str(varTmpSzeQ))\n if lgcPrint:\n print(strStsMsg)\n # Only increment counter if the last value has not been\n # reached yet:\n if varCntSts01 < varStsStpSze:\n varCntSts01 = varCntSts01 + int(1)\n # Increment status indicator counter:\n varCntSts02 = varCntSts02 + 1\n\n # Stop threads.\n objCoord.request_stop()\n # objSess.close()\n\n # Get indices of models with minimum residuals (minimum along\n # model-space) for current chunk:\n vecResSsMinIdx[varChnkStr:varChnkEnd] = np.argmin(aryTmpRes, axis=0)\n # Get minimum residuals of those models:\n vecResSsMin[varChnkStr:varChnkEnd] = np.min(aryTmpRes, axis=0)\n\n # -------------------------------------------------------------------------\n # *** Post-process results\n if lgcPrint:\n print('------Post-processing results')\n\n # Array for model parameters. At the moment, we have the indices of the\n # best fitting models, so we need an array that tells us what model\n # parameters these indices refer to.\n aryMdl = np.zeros((varNumMdlsTtl, 3), dtype=np.float32)\n\n # Model parameter can be represented as float32 as well:\n vecMdlXpos = vecMdlXpos.astype(np.float32)\n vecMdlYpos = vecMdlYpos.astype(np.float32)\n vecMdlSd = vecMdlSd.astype(np.float32)\n\n # The first column is to contain model x positions:\n aryMdl[:, 0] = np.repeat(vecMdlXpos, int(varNumY * varNumPrfSizes))\n\n # The second column is to contain model y positions:\n aryMdl[:, 1] = np.repeat(\n np.tile(vecMdlYpos,\n varNumX),\n varNumPrfSizes\n )\n\n # The third column is to contain model pRF sizes:\n aryMdl[:, 2] = np.tile(vecMdlSd, int(varNumX * varNumY))\n\n # Earlier, we had removed models with a variance of zero. Thus, those\n # models were ignored and are not present in the results. 
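The repeat/tile construction of the model-parameter lookup array (x varying slowest, then y, then pRF size fastest, matching the flattening order of the model array) can be checked on a hypothetical 2 x 2 x 2 grid:

```python
import numpy as np

# Hypothetical model parameter grids:
vecMdlXpos = np.array([0.0, 1.0], dtype=np.float32)
vecMdlYpos = np.array([10.0, 20.0], dtype=np.float32)
vecMdlSd = np.array([0.5, 1.5], dtype=np.float32)
varNumX, varNumY, varNumPrfSizes = 2, 2, 2

aryMdl = np.zeros((varNumX * varNumY * varNumPrfSizes, 3), dtype=np.float32)

# First column: x positions, each repeated over all (y, SD) combinations:
aryMdl[:, 0] = np.repeat(vecMdlXpos, int(varNumY * varNumPrfSizes))
# Second column: y positions, tiled over x and repeated over SD:
aryMdl[:, 1] = np.repeat(np.tile(vecMdlYpos, varNumX), varNumPrfSizes)
# Third column: pRF sizes, tiled over all (x, y) combinations:
aryMdl[:, 2] = np.tile(vecMdlSd, int(varNumX * varNumY))

print(aryMdl[0], aryMdl[-1])  # [0. 10. 0.5] [1. 20. 1.5]
```

Row index `i` thus maps back to parameters via `i = x * (varNumY * varNumPrfSizes) + y * varNumPrfSizes + sd`, consistent with the earlier reshape.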
We remove them\n # from the model-parameter-array:\n aryMdl = aryMdl[vecLgcVar]\n\n # Retrieve model parameters of 'winning' model for all voxels:\n vecBstXpos = aryMdl[:, 0][vecResSsMinIdx]\n vecBstYpos = aryMdl[:, 1][vecResSsMinIdx]\n vecBstSd = aryMdl[:, 2][vecResSsMinIdx]\n\n # Coefficient of determination (1 - ratio of (residual sum of squares by\n # total sum of squares)):\n vecBstR2 = np.subtract(1.0,\n np.divide(vecResSsMin,\n vecSsTot)\n )\n\n # Output list:\n lstOut = [idxPrc,\n vecBstXpos,\n vecBstYpos,\n vecBstSd,\n vecBstR2,\n np.zeros((varNumVox, (varNumBeta))).astype(np.float32)]\n\n queOut.put(lstOut)", "response": "This function returns a list of objects containing the best pRF model for the given voxel time course."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nwrap multiple paragraphs of text returning a list of wrapped lines.", "response": "def wrap(text, width=70, **kwargs):\n \"\"\"Wrap multiple paragraphs of text, returning a list of wrapped lines.\n\n Reformat the multiple paragraphs 'text' so they fit in lines of no\n more than 'width' columns, and return a list of wrapped lines. By\n default, tabs in 'text' are expanded with string.expandtabs(), and\n all other whitespace characters (including newline) are converted to\n space. 
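The paragraph-aware wrapping that `wrap()` delegates to `ParagraphWrapper` can be sketched with the standard `textwrap` module; the blank-line paragraph separator is an assumption here, since `ParagraphWrapper`'s internals are only partially shown:

```python
import re
import textwrap

def wrap_paragraphs(text, width=70, **kwargs):
    """Minimal sketch: split on blank lines, wrap each paragraph
    separately, and keep an empty line between paragraphs."""
    paragraphs = re.split(r'\n\s*\n', text)
    lines = []
    for para in paragraphs:
        lines.extend(textwrap.wrap(para, width=width, **kwargs))
        lines.append('')   # blank line between paragraphs
    return lines[:-1]      # drop the trailing blank line

out = wrap_paragraphs("one two three four\n\nfive six", width=9)
print(out)  # ['one two', 'three', 'four', '', 'five six']
```

Wrapping each paragraph separately is what preserves the paragraph breaks that a plain `textwrap.wrap` call would collapse.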
See ParagraphWrapper class for available keyword args to customize\n wrapping behaviour.\n \"\"\"\n w = ParagraphWrapper(width=width, **kwargs)\n return w.wrap(text)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef fill(text, width=70, **kwargs):\n w = ParagraphWrapper(width=width, **kwargs)\n return w.fill(text)", "response": "Fill multiple paragraphs of text, returning a new string."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef split(cls, text):\n result = [line.strip('\\n') for line in cls.parasep_re.split(text)]\n if result == ['', '']:\n result = ['']\n return result", "response": "Splits text into multiple paragraphs and returns a list of the\n paragraphs."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nwraps multiple paragraphs in text and returns a list of wrapped lines.", "response": "def wrap(self, text):\n \"\"\"wrap(text : string) -> [string]\n\n Reformat the multiple paragraphs in 'text' so they fit in lines of\n no more than 'self.width' columns, and return a list of wrapped\n lines.
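The `split()` method above relies on a class-level `parasep_re` that is not shown in this excerpt; a standalone sketch with an assumed blank-line separator pattern:

```python
import re

# Assumed paragraph separator: a newline followed by one or more blank
# lines (the actual parasep_re of the class is not shown in the source).
parasep_re = re.compile(r'\n(?:[ \t]*\n)+')

def split(text):
    """Split text into paragraphs, mirroring the split() method above."""
    result = [line.strip('\n') for line in parasep_re.split(text)]
    # Splitting text that is only a separator yields ['', '']; normalise
    # that edge case to a single empty paragraph:
    if result == ['', '']:
        result = ['']
    return result

print(split("a\nb\n\nc"))  # ['a\nb', 'c']
```

Single newlines inside a paragraph survive the split; only runs of blank lines act as separators.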
Tabs in 'text' are expanded with string.expandtabs(),\n and all other whitespace characters (including newline) are\n converted to space.\n \"\"\"\n lines = []\n\n linewrap = partial(textwrap.TextWrapper.wrap, self)\n for para in self.split(text):\n lines.extend(linewrap(para))\n lines.append('') # Add newline between paragraphs\n\n # Remove trailing newline\n lines = lines[:-1]\n\n return lines"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the URL of the sender number MGT.", "response": "def getSenderNumberMgtURL(self, CorpNum, UserID):\n \"\"\" \ud329\uc2a4 \uc804\uc1a1\ub0b4\uc5ed \ud31d\uc5c5 URL\n args\n CorpNum : \ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n UserID : \ud68c\uc6d0 \ud31d\ube4c\uc544\uc774\ub514\n return\n 30\ucd08 \ubcf4\uc548 \ud1a0\ud070\uc744 \ud3ec\ud568\ud55c url\n raise\n PopbillException\n \"\"\"\n result = self._httpget('/FAX/?TG=SENDER', CorpNum, UserID)\n return result.url"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef getUnitCost(self, CorpNum):\n\n result = self._httpget('/FAX/UnitCost', CorpNum)\n return int(result.unitCost)", "response": "Get UnitCost of a user."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getFaxResultRN(self, CorpNum, RequestNum, UserID=None):\n\n if RequestNum == None or RequestNum == '':\n raise PopbillException(-99999999, \"\uc694\uccad\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n\n return self._httpget('/FAX/Get/' + RequestNum, CorpNum, UserID)", "response": "Get the list of Fax Result"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef sendFax(self, CorpNum, SenderNum, ReceiverNum, ReceiverName, FilePath, ReserveDT=None, UserID=None,\n SenderName=None, adsYN=False, title=None, RequestNum=None):\n \"\"\" \ud329\uc2a4 \ub2e8\uac74 
\uc804\uc1a1\n args\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n SenderNum : \ubc1c\uc2e0\uc790 \ubc88\ud638\n ReceiverNum : \uc218\uc2e0\uc790 \ubc88\ud638\n ReceiverName : \uc218\uc2e0\uc790 \uba85\n FilePath : \ubc1c\uc2e0 \ud30c\uc77c\uacbd\ub85c\n ReserveDT : \uc608\uc57d\uc2dc\uac04(\ud615\uc2dd yyyyMMddHHmmss)\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n SenderName : \ubc1c\uc2e0\uc790\uba85 (\ub3d9\ubcf4\uc804\uc1a1\uc6a9)\n adsYN : \uad11\uace0\ud329\uc2a4 \uc5ec\ubd80\n title : \ud329\uc2a4\uc81c\ubaa9\n RequestNum : \uc804\uc1a1\uc694\uccad\uc2dc \ud560\ub2f9\ud55c \uc804\uc1a1\uc694\uccad\ubc88\ud638\n return\n \uc811\uc218\ubc88\ud638 (receiptNum)\n raise\n PopbillException\n \"\"\"\n receivers = []\n receivers.append(FaxReceiver(receiveNum=ReceiverNum,\n receiveName=ReceiverName)\n )\n\n return self.sendFax_multi(CorpNum, SenderNum, receivers, FilePath, ReserveDT, UserID, SenderName, adsYN, title,\n RequestNum)", "response": "\ud329\uc2a4 \ub2e8\uac74 \uc804\uc1a1\n args\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n SenderNum : \ubc1c\uc2e0\uc790 \ubc88\ud638\n ReceiverNum : \uc218\uc2e0\uc790 \ubc88\ud638\n ReceiverName : \uc218\uc2e0\uc790 \uba85\n FilePath : \ubc1c\uc2e0 \ud30c\uc77c\uacbd\ub85c\n ReserveDT : \uc608\uc57d\uc2dc\uac04(\ud615\uc2dd yyyyMMddHHmmss)\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n SenderName : \ubc1c\uc2e0\uc790\uba85 (\ub3d9\ubcf4\uc804\uc1a1\uc6a9)\n adsYN : \uad11\uace0\ud329\uc2a4 \uc5ec\ubd80\n title : \ud329\uc2a4\uc81c\ubaa9\n RequestNum : \uc804\uc1a1\uc694\uccad\uc2dc \ud560\ub2f9\ud55c \uc804\uc1a1\uc694\uccad\ubc88\ud638\n return\n \uc811\uc218\ubc88\ud638 (receiptNum)\n raise\n PopbillException"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nsending anFax message to the AFIP.", "response": "def sendFax_multi(self, CorpNum, SenderNum, Receiver, FilePath, ReserveDT=None, UserID=None, SenderName=None,\n 
adsYN=False, title=None, RequestNum=None):\n \"\"\" \ud329\uc2a4 \uc804\uc1a1\n args\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n SenderNum : \ubc1c\uc2e0\uc790 \ubc88\ud638 (\ub3d9\ubcf4\uc804\uc1a1\uc6a9)\n Receiver : \uc218\uc2e0\uc790 \ubc88\ud638(\ub3d9\ubcf4\uc804\uc1a1\uc6a9)\n FilePath : \ubc1c\uc2e0 \ud30c\uc77c\uacbd\ub85c\n ReserveDT : \uc608\uc57d\uc2dc\uac04(\ud615\uc2dd yyyyMMddHHmmss)\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n SenderName : \ubc1c\uc2e0\uc790\uba85 (\ub3d9\ubcf4\uc804\uc1a1\uc6a9)\n adsYN : \uad11\uace0\ud329\uc2a4 \uc5ec\ubd80\n title : \ud329\uc2a4\uc81c\ubaa9\n RequestNum : \uc804\uc1a1\uc694\uccad\uc2dc \ud560\ub2f9\ud55c \uc804\uc1a1\uc694\uccad\ubc88\ud638\n return\n \uc811\uc218\ubc88\ud638 (receiptNum)\n raise\n PopbillException\n \"\"\"\n\n if SenderNum == None or SenderNum == \"\":\n raise PopbillException(-99999999, \"\ubc1c\uc2e0\uc790 \ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n if Receiver == None:\n raise PopbillException(-99999999, \"\uc218\uc2e0\uc790 \uc815\ubcf4\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n if not (type(Receiver) is str or type(Receiver) is FaxReceiver or type(Receiver) is list):\n raise PopbillException(-99999999, \"'Receiver' argument type error. 
'FaxReceiver' or List of 'FaxReceiver'.\")\n if FilePath == None:\n raise PopbillException(-99999999, \"\ubc1c\uc2e0 \ud30c\uc77c\uacbd\ub85c\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n if not (type(FilePath) is str or type(FilePath) is list):\n raise PopbillException(-99999999, \"\ubc1c\uc2e0 \ud30c\uc77c\uc740 \ud30c\uc77c\uacbd\ub85c \ub610\ub294 \uacbd\ub85c\ubaa9\ub85d\ub9cc \uc785\ub825 \uac00\ub2a5\ud569\ub2c8\ub2e4.\")\n if type(FilePath) is list and (len(FilePath) < 1 or len(FilePath) > 20):\n raise PopbillException(-99999999, \"\ud30c\uc77c\uc740 1\uac1c \uc774\uc0c1, 20\uac1c \uae4c\uc9c0 \uc804\uc1a1 \uac00\ub2a5\ud569\ub2c8\ub2e4.\")\n\n req = {\"snd\": SenderNum, \"sndnm\": SenderName, \"fCnt\": 1 if type(FilePath) is str else len(FilePath), \"rcvs\": [],\n \"sndDT\": None}\n\n if (type(Receiver) is str):\n Receiver = FaxReceiver(receiveNum=Receiver)\n\n if (type(Receiver) is FaxReceiver):\n Receiver = [Receiver]\n\n if adsYN:\n req['adsYN'] = True\n\n for r in Receiver:\n req['rcvs'].append({\"rcv\": r.receiveNum, \"rcvnm\": r.receiveName})\n\n if ReserveDT != None:\n req['sndDT'] = ReserveDT\n\n if title != None:\n req['title'] = title\n\n if RequestNum != None:\n req['requestNum'] = RequestNum\n\n postData = self._stringtify(req)\n\n if (type(FilePath) is str):\n FilePath = [FilePath]\n\n files = []\n\n for filePath in FilePath:\n with open(filePath, \"rb\") as f:\n files.append(File(fieldName='file',\n fileName=f.name,\n fileData=f.read())\n )\n result = self._httppost_files('/FAX', postData, files, CorpNum, UserID)\n\n return result.receiptNum"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef resendFaxRN(self, CorpNum, OrgRequestNum, SenderNum, SenderName, ReceiverNum, ReceiverName, ReserveDT=None,\n UserID=None, title=None, RequestNum=None):\n \"\"\" \ud329\uc2a4 \ub2e8\uac74 \uc804\uc1a1\n args\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n 
OrgRequestNum : \uc6d0\ubcf8 \ud329\uc2a4 \uc804\uc1a1\uc2dc \ud560\ub2f9\ud55c \uc804\uc1a1\uc694\uccad\ubc88\ud638\n ReceiptNum : \ud329\uc2a4 \uc811\uc218\ubc88\ud638\n SenderNum : \ubc1c\uc2e0\uc790 \ubc88\ud638\n SenderName : \ubc1c\uc2e0\uc790\uba85\n ReceiverNum : \uc218\uc2e0\ubc88\ud638\n ReceiverName : \uc218\uc2e0\uc790\uba85\n ReserveDT : \uc608\uc57d\uc2dc\uac04(\ud615\uc2dd yyyyMMddHHmmss)\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n title : \ud329\uc2a4\uc81c\ubaa9\n RequestNum : \uc804\uc1a1\uc694\uccad\uc2dc \ud560\ub2f9\ud55c \uc804\uc1a1\uc694\uccad\ubc88\ud638\n return\n \uc811\uc218\ubc88\ud638 (receiptNum)\n raise\n PopbillException\n \"\"\"\n receivers = None\n\n if ReceiverNum != \"\" or ReceiverName != \"\":\n receivers = []\n receivers.append(FaxReceiver(receiveNum=ReceiverNum,\n receiveName=ReceiverName)\n )\n return self.resendFaxRN_multi(CorpNum, OrgRequestNum, SenderNum, SenderName, receivers, ReserveDT,\n UserID, title, RequestNum)", "response": "Resend anFaxRN to a specific sender."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets URL of a preview of a given receipt.", "response": "def getPreviewURL(self, CorpNum, ReceiptNum, UserID):\n \"\"\" \ud329\uc2a4 \ubc1c\uc2e0\ubc88\ud638 \ubaa9\ub85d \ud655\uc778\n args\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n return\n \ucc98\ub9ac\uacb0\uacfc. 
list of SenderNumber\n raise\n PopbillException\n \"\"\"\n return self._httpget('/FAX/Preview/' + ReceiptNum, CorpNum, UserID).url"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef prepare_outdir(outdir):\n if outdir:\n outdir = os.path.expanduser(outdir)\n if not os.path.isdir(outdir):\n try:\n os.makedirs(outdir)\n except os.error as e:\n raise JobExecutionError('Failed to create outdir \"{}\".\\n{}'.format(outdir, str(e)))", "response": "Creates the output directory if not existing."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef requestJob(self, CorpNum, Type, SDate, EDate, UserID=None):\n\n if Type == None or Type == '':\n raise PopbillException(-99999999, \"\ubb38\uc11c\ud615\ud0dc\uc774 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n\n if SDate == None or SDate == '':\n raise PopbillException(-99999999, \"\uc2dc\uc791\uc77c\uc790\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n\n if EDate == None or EDate == '':\n raise PopbillException(-99999999, \"\uc885\ub8cc\uc77c\uc790\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n\n uri = '/HomeTax/Cashbill/' + Type\n uri += '?SDate=' + SDate\n uri += '&EDate=' + EDate\n\n return self._httppost(uri, \"\", CorpNum, UserID).jobID", "response": "Request a job from the Cashbill."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef summary(self, CorpNum, JobID, TradeType, TradeUsage, UserID=None):\n if JobID == None or len(JobID) != 18:\n raise PopbillException(-99999999, \"\uc791\uc5c5\uc544\uc774\ub514(jobID)\uac00 \uc62c\ubc14\ub974\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\")\n\n uri = '/HomeTax/Cashbill/' + JobID + '/Summary'\n uri += '?TradeType=' + ','.join(TradeType)\n uri += '&TradeUsage=' + ','.join(TradeUsage)\n\n return self._httpget(uri, CorpNum, UserID)", "response": "\uc218\uc9d1 \uacb0\uacfc 
\uc694\uc57d\uc815\ubcf4 \uc870\ud68c\n args\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n JobID : \uc791\uc5c5\uc544\uc774\ub514\n TradeType : \ubb38\uc11c\ud615\ud0dc \ubc30\uc5f4, N-\uc77c\ubc18 \ud604\uae08\uc601\uc218\uc99d, C-\ucde8\uc18c \ud604\uae08\uc601\uc218\uc99d\n TradeUsage : \uac70\ub798\uad6c\ubd84 \ubc30\uc5f4, P-\uc18c\ub4f1\uacf5\uc81c\uc6a9, C-\uc9c0\ucd9c\uc99d\ube59\uc6a9\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n return\n \uc218\uc9d1 \uacb0\uacfc \uc694\uc57d\uc815\ubcf4\n raise\n PopbillException"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef model_node(**kwargs):\n kwargs.setdefault('default', {})\n\n def decorator(model):\n return types.ModelType(model, **kwargs)\n\n return decorator", "response": "A simple decorator for creating a schematics. Model class to add it as a field\n of type schematics. types. ModelType."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef for_each_file(base_dir, func):\n\n for dir_path, _, file_names in os.walk(base_dir):\n for filename in file_names:\n func(os.path.join(dir_path, filename))", "response": "Calls func for every file under base_dir."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef make_file_read_only(file_path):\n old_permissions = os.stat(file_path).st_mode\n os.chmod(file_path, old_permissions & ~WRITE_PERMISSIONS)", "response": "Removes the write permissions for the given file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load_config(strCsvCnfg, lgcTest=False, lgcPrint=True):\n\n # Dictionary with config information:\n dicCnfg = {}\n\n # Open file with parameter configuration:\n # fleConfig = open(strCsvCnfg, 'r')\n with open(strCsvCnfg, 'r') as fleConfig:\n\n # Read file with ROI information:\n csvIn = 
csv.reader(fleConfig,\n delimiter='\\n',\n skipinitialspace=True)\n\n # Loop through csv object to fill list with csv data:\n for lstTmp in csvIn:\n\n # Skip comments (i.e. lines starting with '#') and empty lines.\n # Note: Indexing the list (i.e. lstTmp[0][0]) does not work for\n # empty lines. However, if the first condition is no fullfilled\n # (i.e. line is empty and 'if lstTmp' evaluates to false), the\n # second logical test (after the 'and') is not actually carried\n # out.\n if lstTmp and not (lstTmp[0][0] == '#'):\n\n # Name of current parameter (e.g. 'varTr'):\n strParamKey = lstTmp[0].split(' = ')[0]\n # print(strParamKey)\n\n # Current parameter value (e.g. '2.94'):\n strParamVlu = lstTmp[0].split(' = ')[1]\n # print(strParamVlu)\n\n # Put paramter name (key) and value (item) into dictionary:\n dicCnfg[strParamKey] = strParamVlu\n\n # Are model parameters in cartesian or polar coordinates?\n # set either pol (polar) or crt (cartesian)\n dicCnfg['strKwCrd'] = ast.literal_eval(dicCnfg['strKwCrd'])\n if lgcPrint:\n print('---Model coordinates are in: ' + str(dicCnfg['strKwCrd']))\n\n # Number of x- or radial positions to model:\n dicCnfg['varNum1'] = int(dicCnfg['varNum1'])\n # Number of y- or angular positions to model:\n dicCnfg['varNum2'] = int(dicCnfg['varNum2'])\n\n if lgcPrint:\n if dicCnfg['strKwCrd'] == 'crt':\n print('---Number of x-positions to model: ' +\n str(dicCnfg['varNum1']))\n print('---Number of y-positions to model: ' +\n str(dicCnfg['varNum2']))\n\n elif dicCnfg['strKwCrd'] == 'pol':\n print('---Number of radial positions to model: ' +\n str(dicCnfg['varNum1']))\n print('---Number of angular positions to model: ' +\n str(dicCnfg['varNum2']))\n\n # Number of pRF sizes to model:\n dicCnfg['varNumPrfSizes'] = int(dicCnfg['varNumPrfSizes'])\n if lgcPrint:\n print('---Number of pRF sizes to model: '\n + str(dicCnfg['varNumPrfSizes']))\n\n # Extent of visual space from centre of the screen in negative x-direction\n # (i.e. 
from the fixation point to the left end of the screen) in degrees\n # of visual angle.\n dicCnfg['varExtXmin'] = float(dicCnfg['varExtXmin'])\n if lgcPrint:\n print('---Extent of visual space in negative x-direction: '\n + str(dicCnfg['varExtXmin']))\n\n # Extent of visual space from centre of the screen in positive x-direction\n # (i.e. from the fixation point to the right end of the screen) in degrees\n # of visual angle.\n dicCnfg['varExtXmax'] = float(dicCnfg['varExtXmax'])\n if lgcPrint:\n print('---Extent of visual space in positive x-direction: '\n + str(dicCnfg['varExtXmax']))\n\n # Extent of visual space from centre of the screen in negative y-direction\n # (i.e. from the fixation point to the lower end of the screen) in degrees\n # of visual angle.\n dicCnfg['varExtYmin'] = float(dicCnfg['varExtYmin'])\n if lgcPrint:\n print('---Extent of visual space in negative y-direction: '\n + str(dicCnfg['varExtYmin']))\n\n # Extent of visual space from centre of the screen in positive y-direction\n # (i.e. 
from the fixation point to the upper end of the screen) in degrees\n # of visual angle.\n dicCnfg['varExtYmax'] = float(dicCnfg['varExtYmax'])\n if lgcPrint:\n print('---Extent of visual space in positive y-direction: '\n + str(dicCnfg['varExtYmax']))\n\n # Minimum pRF model size (standard deviation of 2D Gaussian) [degrees of\n # visual angle]:\n dicCnfg['varPrfStdMin'] = float(dicCnfg['varPrfStdMin'])\n if lgcPrint:\n print('---Minimum pRF model size: ' + str(dicCnfg['varPrfStdMin']))\n\n # Maximum pRF model size (standard deviation of 2D Gaussian) [degrees of\n # visual angle]:\n dicCnfg['varPrfStdMax'] = float(dicCnfg['varPrfStdMax'])\n if lgcPrint:\n print('---Maximum pRF model size: ' + str(dicCnfg['varPrfStdMax']))\n\n # Volume TR of input data [s]:\n dicCnfg['varTr'] = float(dicCnfg['varTr'])\n if lgcPrint:\n print('---Volume TR of input data [s]: ' + str(dicCnfg['varTr']))\n\n # Voxel resolution of fMRI data [mm]:\n dicCnfg['varVoxRes'] = float(dicCnfg['varVoxRes'])\n if lgcPrint:\n print('---Voxel resolution of fMRI data [mm]: '\n + str(dicCnfg['varVoxRes']))\n\n # Number of fMRI volumes and png files to load:\n dicCnfg['varNumVol'] = int(dicCnfg['varNumVol'])\n if lgcPrint:\n print('---Total number of fMRI volumes and png files: '\n + str(dicCnfg['varNumVol']))\n\n # Extent of temporal smoothing for fMRI data and pRF time course models\n # [standard deviation of the Gaussian kernel, in seconds]:\n # same temporal smoothing will be applied to pRF model time courses\n dicCnfg['varSdSmthTmp'] = float(dicCnfg['varSdSmthTmp'])\n if lgcPrint:\n print('---Extent of temporal smoothing (Gaussian SD in [s]): '\n + str(dicCnfg['varSdSmthTmp']))\n\n # Number of processes to run in parallel:\n dicCnfg['varPar'] = int(dicCnfg['varPar'])\n if lgcPrint:\n print('---Number of processes to run in parallel: '\n + str(dicCnfg['varPar']))\n\n # Size of space model in which the pRF models are\n # created (x- and y-dimension).\n dicCnfg['tplVslSpcSze'] = 
tuple([int(dicCnfg['varVslSpcSzeX']),\n int(dicCnfg['varVslSpcSzeY'])])\n if lgcPrint:\n print('---Size of visual space model (x & y): '\n + str(dicCnfg['tplVslSpcSze']))\n\n # Path(s) of functional data:\n dicCnfg['lstPathNiiFunc'] = ast.literal_eval(dicCnfg['lstPathNiiFunc'])\n if lgcPrint:\n print('---Path(s) of functional data:')\n for strTmp in dicCnfg['lstPathNiiFunc']:\n print(' ' + str(strTmp))\n\n # Path of mask (to restrict pRF model finding):\n dicCnfg['strPathNiiMask'] = ast.literal_eval(dicCnfg['strPathNiiMask'])\n if lgcPrint:\n print('---Path of mask (to restrict pRF model finding):')\n print(' ' + str(dicCnfg['strPathNiiMask']))\n\n # Output basename:\n dicCnfg['strPathOut'] = ast.literal_eval(dicCnfg['strPathOut'])\n if lgcPrint:\n print('---Output basename:')\n print(' ' + str(dicCnfg['strPathOut']))\n\n # Which version to use for pRF finding. 'numpy' or 'cython' for pRF finding\n # on CPU, 'gpu' for using GPU.\n dicCnfg['strVersion'] = ast.literal_eval(dicCnfg['strVersion'])\n if lgcPrint:\n print('---Version (numpy, cython, or gpu): '\n + str(dicCnfg['strVersion']))\n\n # Create pRF time course models?\n dicCnfg['lgcCrteMdl'] = (dicCnfg['lgcCrteMdl'] == 'True')\n if lgcPrint:\n print('---Create pRF time course models: '\n + str(dicCnfg['lgcCrteMdl']))\n\n # Path to npy file with pRF time course models (to save or laod). 
Without\n # file extension.\n dicCnfg['strPathMdl'] = ast.literal_eval(dicCnfg['strPathMdl'])\n if lgcPrint:\n print('---Path to npy file with pRF time course models (to save '\n + 'or load):')\n print(' ' + str(dicCnfg['strPathMdl']))\n\n # switch to determine which hrf functions should be used\n # 1: canonical, 2: can and temp derivative, 3: can, temp and spat deriv\n dicCnfg['switchHrfSet'] = ast.literal_eval(dicCnfg['switchHrfSet'])\n if lgcPrint:\n print('---Switch to determine which hrf functions should be used: '\n + str(dicCnfg['switchHrfSet']))\n\n # should model fitting be based on k-fold cross-validation?\n # if not, set to 1\n dicCnfg['varNumXval'] = ast.literal_eval(dicCnfg['varNumXval'])\n if lgcPrint:\n print('---Model fitting will have this number of folds for xval: '\n + str(dicCnfg['varNumXval']))\n\n # If we create new pRF time course models, the following parameters have to\n # be provided:\n if dicCnfg['lgcCrteMdl']:\n\n # Name of the npy that holds spatial info about conditions\n dicCnfg['strSptExpInf'] = ast.literal_eval(dicCnfg['strSptExpInf'])\n if lgcPrint:\n print('---Path to npy file with spatial condition info: ')\n print(' ' + str(dicCnfg['strSptExpInf']))\n\n # Name of the npy that holds temporal info about conditions\n dicCnfg['strTmpExpInf'] = ast.literal_eval(dicCnfg['strTmpExpInf'])\n if lgcPrint:\n print('---Path to npy file with temporal condition info: ')\n print(' ' + str(dicCnfg['strTmpExpInf']))\n\n # Factor by which time courses and HRF will be upsampled for the\n # convolutions\n dicCnfg['varTmpOvsmpl'] = ast.literal_eval(dicCnfg['varTmpOvsmpl'])\n if lgcPrint:\n print('---Factor by which time courses and HRF will be upsampled: '\n + str(dicCnfg['varTmpOvsmpl']))\n\n # Is this a test?\n if lgcTest:\n\n # Prepend absolute path of this file to config file paths:\n dicCnfg['strPathNiiMask'] = (strDir + dicCnfg['strPathNiiMask'])\n dicCnfg['strPathOut'] = (strDir + dicCnfg['strPathOut'])\n dicCnfg['strPathMdl'] = (strDir + 
dicCnfg['strPathMdl'])\n dicCnfg['strSptExpInf'] = (strDir + dicCnfg['strSptExpInf'])\n dicCnfg['strTmpExpInf'] = (strDir + dicCnfg['strTmpExpInf'])\n\n # Loop through functional runs:\n varNumRun = len(dicCnfg['lstPathNiiFunc'])\n for idxRun in range(varNumRun):\n dicCnfg['lstPathNiiFunc'][idxRun] = (\n strDir\n + dicCnfg['lstPathNiiFunc'][idxRun]\n )\n\n return dicCnfg", "response": "Load the config file and return a dictionary with the ROI information."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\nasync def status_by_coordinates(\n self, latitude: float, longitude: float) -> dict:\n \"\"\"Get symptom data for the location nearest to the user's lat/lon.\"\"\"\n return await self.nearest_by_coordinates(latitude, longitude)", "response": "Get the status of the user from the given latitude and longitude."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ngets symptom data for the provided ZIP code.", "response": "async def status_by_zip(self, zip_code: str) -> dict:\n \"\"\"Get symptom data for the provided ZIP code.\"\"\"\n try:\n location = next((\n d for d in await self.user_reports()\n if d['zip'] == zip_code))\n except StopIteration:\n return {}\n\n return await self.status_by_coordinates(\n float(location['latitude']), float(location['longitude']))"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef print_request(request):\n print('{}\\n{}\\n{}\\n\\n{}'.format(\n '-----------START-----------',\n request.method + ' ' + request.url,\n '\\n'.join('{}: {}'.format(k, v) for k, v in request.headers.items()),\n request.body,\n ))", "response": "Prints a request to give the user info as to what they re sending\n "} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_EPUB(parsed_article,\n output_directory,\n input_path,\n image_directory,\n 
config_module=None,\n epub_version=None,\n batch=False):\n \"\"\"\n Standard workflow for creating an EPUB document.\n\n make_EPUB is used to produce an EPUB file from a parsed article. In addition\n to the article it also requires a path to the appropriate image directory\n which it will insert into the EPUB file, as well the output directory\n location for the EPUB file.\n\n Parameters\n ----------\n article : openaccess_epub.article.Article instance\n `article` is an Article instance for the XML document to be converted to\n EPUB.\n output_directory : str\n `output_directory` is a string path to the directory in which the EPUB\n will be produced. The name of the directory will be used as the EPUB's\n filename.\n input_path : str\n `input_path` is a string absolute path to the input XML file, used to\n locate input-relative images.\n image_directory : str\n `image_directory` is a string path indicating an explicit image\n directory. If supplied, other image input methods will not be used.\n config_module : config module, optional\n `config_module` is a pre-loaded config module for OpenAccess_EPUB; if\n not used then this function will load the global config file. Might be\n useful in certain cases to dynamically alter configuration.\n epub_version : {None, 2, 3}\n `epub_version` dictates which version of EPUB to be created. An error\n will be raised if the specified version is not supported for the\n publisher. If left to the default, the created version will defer to the\n publisher default version.\n batch : bool, optional\n `batch` indicates that batch creation is being used (such as with the\n `oaepub batch` command). 
In this case, directory conflicts will be\n automatically resolved (in favor of keeping previous data, skipping\n creation of EPUB).\n\n Returns False in the case of a fatal error, True if successful.\n \"\"\"\n #command_log.info('Creating {0}.epub'.format(output_directory))\n if config_module is None:\n config_module = openaccess_epub.utils.load_config_module()\n\n if epub_version not in (None, 2, 3):\n log.error('Invalid EPUB version: {0}'.format(epub_version))\n raise ValueError('Invalid EPUB version. Should be 2 or 3')\n\n if epub_version is None:\n epub_version = parsed_article.publisher.epub_default\n\n #Handle directory output conflicts\n if os.path.isdir(output_directory):\n if batch: # No user prompt, default to protect previous data\n log.error('Directory conflict during batch conversion, skipping.')\n return False\n else: # User prompting\n openaccess_epub.utils.dir_exists(output_directory)\n else:\n try:\n os.makedirs(output_directory)\n except OSError as err:\n if err.errno != 17:\n log.exception('Unable to recursively create output directories')\n\n #Copy over the basic epub directory\n make_epub_base(output_directory)\n\n #Get the images, if possible, fail gracefully if not\n success = openaccess_epub.utils.images.get_images(output_directory,\n image_directory,\n input_path,\n config_module,\n parsed_article)\n if not success:\n log.critical('Images for the article were not located! 
Aborting!')\n return False\n\n #Instantiate Navigation and Package\n epub_nav = Navigation()\n epub_package = Package()\n\n #Process the article for navigation and package info\n epub_nav.process(parsed_article)\n epub_package.process(parsed_article)\n\n #Render the content using publisher-specific methods\n parsed_article.publisher.render_content(output_directory, epub_version)\n if epub_version == 2:\n epub_nav.render_EPUB2(output_directory)\n epub_package.render_EPUB2(output_directory)\n elif epub_version == 3:\n epub_nav.render_EPUB3(output_directory)\n epub_package.render_EPUB3(output_directory)\n\n #Zip the directory into EPUB\n epub_zip(output_directory)\n\n return True", "response": "This function creates an EPUB file from an OpenAccess_EPUB article."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate the base structure for an EPUB file in a specified location.", "response": "def make_epub_base(location):\n \"\"\"\n Creates the base structure for an EPUB file in a specified location.\n\n This function creates constant components for the structure of the EPUB in\n a specified directory location.\n\n Parameters\n ----------\n location : str\n A path string to a local directory in which the EPUB is to be built\n \"\"\"\n log.info('Making EPUB base files in {0}'.format(location))\n with open(os.path.join(location, 'mimetype'), 'w') as out: # mimetype file\n out.write('application/epub+zip')\n\n #Create EPUB and META-INF directories\n os.mkdir(os.path.join(location, 'META-INF'))\n os.mkdir(os.path.join(location, 'EPUB'))\n os.mkdir(os.path.join(location, 'EPUB', 'css'))\n\n with open(os.path.join(location, 'META-INF', 'container.xml'), 'w') as out:\n out.write('''\\\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<container version=\"1.0\" xmlns=\"urn:oasis:names:tc:opendocument:xmlns:container\">\n <rootfiles>\n <rootfile full-path=\"EPUB/package.opf\" media-type=\"application/oebps-package+xml\"/>\n </rootfiles>\n</container>\n''')\n\n with open(os.path.join(location, 'EPUB', 'css', 'default.css'), 'wb') as out:\n out.write(bytes(DEFAULT_CSS, 'UTF-8'))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef 
epub_zip(outdirect):\n\n def recursive_zip(zipf, directory, folder=None):\n if folder is None:\n folder = ''\n for item in os.listdir(directory):\n if os.path.isfile(os.path.join(directory, item)):\n zipf.write(os.path.join(directory, item),\n os.path.join(directory, item))\n elif os.path.isdir(os.path.join(directory, item)):\n recursive_zip(zipf, os.path.join(directory, item),\n os.path.join(folder, item))\n\n log.info('Zipping up the directory {0}'.format(outdirect))\n epub_filename = outdirect + '.epub'\n epub = zipfile.ZipFile(epub_filename, 'w')\n current_dir = os.getcwd()\n os.chdir(outdirect)\n epub.write('mimetype')\n log.info('Recursively zipping META-INF and EPUB')\n for item in os.listdir('.'):\n if item == 'mimetype':\n continue\n recursive_zip(epub, item)\n os.chdir(current_dir)\n epub.close()", "response": "Zips the input file directory into an EPUB file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nwrite data to CSV file with validation.", "response": "def _write_int(fname, data, append=True):\n \"\"\"Write data to CSV file with validation.\"\"\"\n # pylint: disable=W0705\n data_ex = pexdoc.exh.addex(ValueError, \"There is no data to save to file\")\n fos_ex = pexdoc.exh.addex(\n OSError, \"File *[fname]* could not be created: *[reason]*\"\n )\n data_ex((len(data) == 0) or ((len(data) == 1) and (len(data[0]) == 0)))\n try:\n pmisc.make_dir(fname)\n mode = \"w\" if append is False else \"a\"\n if sys.hexversion < 0x03000000: # pragma: no cover, no branch\n with open(fname, mode) as file_handle:\n csv.writer(file_handle, delimiter=\",\").writerows(data)\n else: # pragma: no cover\n with open(fname, mode, newline=\"\") as file_handle:\n csv.writer(file_handle, delimiter=\",\").writerows(data)\n except (IOError, OSError) as eobj:\n fos_ex(True, _MF(\"fname\", fname, \"reason\", eobj.strerror))"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngenerates a directory description 
from a given input directory.", "response": "def _input_directory_description(input_identifier, arg_item, input_dir):\n \"\"\"\n Produces a directory description. A directory description is a dictionary containing the following information.\n\n - 'path': An array containing the paths to the specified directories.\n - 'debugInfo': A field to possibly provide debug information.\n - 'found': A boolean that indicates, if the directory exists in the local filesystem.\n - 'listing': A listing that shows which files are in the given directory. This could be None.\n\n :param input_identifier: The input identifier in the cwl description file\n :param arg_item: The corresponding job information\n :param input_dir: TODO\n :return: A directory description\n :raise DirectoryError: If the given directory does not exist or is not a directory.\n \"\"\"\n description = {\n 'path': None,\n 'found': False,\n 'debugInfo': None,\n 'listing': None,\n 'basename': None\n }\n\n try:\n path = location(input_identifier, arg_item)\n\n if input_dir and not os.path.isabs(path):\n path = os.path.join(os.path.expanduser(input_dir), path)\n\n description['path'] = path\n if not os.path.exists(path):\n raise DirectoryError('path does not exist')\n if not os.path.isdir(path):\n raise DirectoryError('path is not a directory')\n\n description['listing'] = arg_item.get('listing')\n description['basename'] = os.path.basename(path)\n\n description['found'] = True\n except:\n description['debugInfo'] = exception_format()\n\n return description"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncheck that the given directory listing is well - formed and if not raises an exception.", "response": "def _check_input_directory_listing(base_directory, listing):\n \"\"\"\n Raises an DirectoryError if files or directories, given in the listing, could not be found in the local filesystem.\n\n :param base_directory: The path to the directory to check\n :param listing: A listing given as 
dictionary\n :raise DirectoryError: If the given base directory does not contain all of the subdirectories and subfiles given in\n the listing.\n \"\"\"\n\n for sub in listing:\n path = os.path.join(base_directory, sub['basename'])\n if sub['class'] == 'File':\n if not os.path.isfile(path):\n raise DirectoryError('File \\'{}\\' not found but specified in listing.'.format(path))\n if sub['class'] == 'Directory':\n if not os.path.isdir(path):\n raise DirectoryError('Directory \\'{}\\' not found but specified in listing'.format(path))\n sub_listing = sub.get('listing')\n if sub_listing:\n _check_input_directory_listing(path, sub_listing)"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nparses a cwl type string into a dictionary containing information about the cwl type string.", "response": "def parse_cwl_type(cwl_type_string):\n \"\"\"\n Parses cwl type information from a cwl type string.\n\n Examples:\n\n - \"File[]\" -> {'type': 'File', 'isArray': True, 'isOptional': False}\n - \"int?\" -> {'type': 'int', 'isArray': False, 'isOptional': True}\n\n :param cwl_type_string: The cwl type string to extract information from\n :return: A dictionary containing information about the parsed cwl type string\n \"\"\"\n\n is_optional = cwl_type_string.endswith('?')\n if is_optional:\n cwl_type_string = cwl_type_string[:-1]\n\n is_array = cwl_type_string.endswith('[]')\n if is_array:\n cwl_type_string = cwl_type_string[:-2]\n\n return {'type': cwl_type_string, 'isArray': is_array, 'isOptional': is_optional}"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nsearching for Directories in the cwl data and producing a dictionary containing the input directory information.", "response": "def cwl_input_directories(cwl_data, job_data, input_dir=None):\n \"\"\"\n Searches for Directories in the cwl data and produces a dictionary containing input directory information.\n\n :param cwl_data: The cwl data as dictionary\n :param 
job_data: The job data as dictionary\n :param input_dir: TODO\n :return: Returns a dictionary containing information about input directories.\n The keys of this dictionary are the input/output identifiers of the directories specified in the cwl description.\n The corresponding value is a dictionary again with the following keys and values:\n - 'isOptional': A bool indicating whether this input directory is optional\n - 'isArray': A bool indicating whether this could be a list of directories\n - 'directories': A list of input directory descriptions\n\n An input directory description is a dictionary containing the following information\n - 'path': The path to the specified directory\n - 'debugInfo': A field to possibly provide debug information\n \"\"\"\n\n results = {}\n\n for input_identifier, input_data in cwl_data['inputs'].items():\n cwl_type = parse_cwl_type(input_data['type'])\n\n (is_optional, is_array, cwl_type) = itemgetter('isOptional', 'isArray', 'type')(cwl_type)\n\n if cwl_type == 'Directory':\n result = {\n 'isOptional': is_optional,\n 'isArray': is_array,\n 'directories': None\n }\n\n if input_identifier in job_data:\n arg = job_data[input_identifier]\n\n if is_array:\n result['directories'] = [_input_directory_description(input_identifier, i, input_dir) for i in arg]\n else:\n result['directories'] = [_input_directory_description(input_identifier, arg, input_dir)]\n\n results[input_identifier] = result\n\n return results"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nreturn a dictionary containing information about the output files given in cwl_data.", "response": "def cwl_output_files(cwl_data, inputs_to_reference, output_dir=None):\n \"\"\"\n Returns a dictionary containing information about the output files given in cwl_data.\n\n :param cwl_data: The cwl data from where to extract the output file information.\n :param inputs_to_reference: Inputs which are used to resolve input references.\n :param output_dir: Path to the directory where 
output files are expected.\n :return: A dictionary containing information about every output file.\n \"\"\"\n results = {}\n\n for key, val in cwl_data['outputs'].items():\n cwl_type = parse_cwl_type(val['type'])\n (is_optional, is_array, cwl_type) = itemgetter('isOptional', 'isArray', 'type')(cwl_type)\n\n if not cwl_type == 'File':\n continue\n\n result = {\n 'isOptional': is_optional,\n 'path': None,\n 'size': None,\n 'debugInfo': None\n }\n\n glob_path = os.path.expanduser(val['outputBinding']['glob'])\n if output_dir and not os.path.isabs(glob_path):\n glob_path = os.path.join(os.path.expanduser(output_dir), glob_path)\n\n glob_path = resolve_input_references(glob_path, inputs_to_reference)\n matches = glob(glob_path)\n try:\n if len(matches) != 1:\n raise FileError('glob path \"{}\" does not match exactly one file'.format(glob_path))\n\n file_path = matches[0]\n result['path'] = file_path\n\n if not os.path.isfile(file_path):\n raise FileError('path is not a file')\n\n result['size'] = os.path.getsize(file_path) / (1024 * 1024)\n except:\n result['debugInfo'] = exception_format()\n\n results[key] = result\n\n return results"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nread from the FIFO and returns the data.", "response": "def read(self, length=-1):\n \"\"\"\n Reads from the FIFO.\n\n Reads as much data as possible from the FIFO up to the specified\n length. If the length argument is negative or ommited all data\n currently available in the FIFO will be read. If there is no data\n available in the FIFO an empty string is returned.\n\n Args:\n length: The amount of data to read from the FIFO. 
Defaults to -1.\n \"\"\"\n if 0 <= length < len(self):\n newpos = self.pos + length\n data = self.buf[self.pos:newpos]\n self.pos = newpos\n self.__discard()\n return data\n\n data = self.buf[self.pos:]\n self.clear()\n return data", "response": "Reads from the FIFO and returns the data."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef readuntil(self, token, size=0):\n self.__append()\n\n i = self.buf.find(token, self.pos)\n if i < 0:\n index = max(len(token) - 1, size)\n newpos = max(len(self.buf) - index, self.pos)\n data = self.buf[self.pos:newpos]\n self.pos = newpos\n self.__discard()\n return False, data\n\n newpos = i + len(token)\n data = self.buf[self.pos:newpos]\n self.pos = newpos\n self.__discard()\n return True, data", "response": "Reads data from the FIFO until a token is encountered."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\npeeking a line into the FIFO.", "response": "def peekline(self):\n \"\"\"\n Peeks a line into the FIFO.\n\n Performs the same function as readline() without removing data from the\n FIFO. 
See readline() for further information.\n \"\"\"\n self.__append()\n\n i = self.buf.find(self.eol, self.pos)\n if i < 0:\n return ''\n\n newpos = i + len(self.eol)\n return self.buf[self.pos:newpos]"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef peekuntil(self, token, size=0):\n self.__append()\n\n i = self.buf.find(token, self.pos)\n if i < 0:\n index = max(len(token) - 1, size)\n newpos = max(len(self.buf) - index, self.pos)\n return False, self.buf[self.pos:newpos]\n\n newpos = i + len(token)\n return True, self.buf[self.pos:newpos]", "response": "Reads the next size bytes from the FIFO."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nget URL from Akamai.", "response": "def getURL(self, CorpNum, UserID, ToGo):\n \"\"\"\n :param CorpNum: \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n :param UserID: \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n :param ToGo: [PLUSFRIEND-\ud50c\ub7ec\uc2a4\uce5c\uad6c\uacc4\uc815\uad00\ub9ac, SENDER-\ubc1c\uc2e0\ubc88\ud638\uad00\ub9ac, TEMPLATE-\uc54c\ub9bc\ud1a1\ud15c\ud50c\ub9bf\uad00\ub9ac, BOX-\uce74\uce74\uc624\ud1a1\uc804\uc1a1\ub0b4\uc6a9]\n :return: \ud31d\ube4c URL\n \"\"\"\n if ToGo == None or ToGo == '':\n raise PopbillException(-99999999, \"TOGO\uac12\uc774 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n\n if ToGo == 'SENDER':\n result = self._httpget('/Message/?TG=' + ToGo, CorpNum, UserID)\n else:\n result = self._httpget('/KakaoTalk/?TG=' + ToGo, CorpNum, UserID)\n return result.url"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nget Plus Friend Mgt URL.", "response": "def getPlusFriendMgtURL(self, CorpNum, UserID):\n \"\"\"\n \ud50c\ub7ec\uc2a4\uce5c\uad6c \uacc4\uc815\uad00\ub9ac \ud31d\uc5c5 URL\n :param CorpNum: \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n :param UserID: \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n :return: \ud31d\ube4c URL\n \"\"\"\n result = 
self._httpget('/KakaoTalk/?TG=PLUSFRIEND', CorpNum, UserID)\n return result.url"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getATSTemplateMgtURL(self, CorpNum, UserID):\n result = self._httpget('/KakaoTalk/?TG=TEMPLATE', CorpNum, UserID)\n return result.url", "response": "Get A TG URL"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ngets the URL of the list of items.", "response": "def getSentListURL(self, CorpNum, UserID):\n \"\"\"\n \uce74\uce74\uc624\ud1a1 \uc804\uc1a1\ub0b4\uc5ed \ud31d\uc5c5 URL\n :param CorpNum: \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n :param UserID: \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n :return: \ud31d\ube4c URL\n \"\"\"\n result = self._httpget('/KakaoTalk/?TG=BOX', CorpNum, UserID)\n return result.url"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef sendATS_same(self, CorpNum, TemplateCode, Sender, Content, AltContent, AltSendType, SndDT, KakaoMessages,\n UserID=None, RequestNum=None, ButtonList=None):\n \"\"\"\n \uc54c\ub9bc\ud1a1 \ub300\ub7c9 \uc804\uc1a1\n :param CorpNum: \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n :param TemplateCode: \ud15c\ud50c\ub9bf\ucf54\ub4dc\n :param Sender: \ubc1c\uc2e0\ubc88\ud638\n :param Content: [\ub3d9\ubcf4] \uc54c\ub9bc\ud1a1 \ub0b4\uc6a9\n :param AltContent: [\ub3d9\ubcf4] \ub300\uccb4\ubb38\uc790 \ub0b4\uc6a9\n :param AltSendType: \ub300\uccb4\ubb38\uc790 \uc720\ud615 [\uacf5\ubc31-\ubbf8\uc804\uc1a1, C-\uc54c\ub9bc\ud1a1\ub0b4\uc6a9, A-\ub300\uccb4\ubb38\uc790\ub0b4\uc6a9]\n :param SndDT: \uc608\uc57d\uc77c\uc2dc [\uc791\uc131\ud615\uc2dd : yyyyMMddHHmmss]\n :param KakaoMessages: \uc54c\ub9bc\ud1a1 \ub0b4\uc6a9 (\ubc30\uc5f4)\n :param UserID: \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n :param RequestNum : \uc694\uccad\ubc88\ud638\n :return: receiptNum (\uc811\uc218\ubc88\ud638)\n 
\"\"\"\n if TemplateCode is None or TemplateCode == '':\n raise PopbillException(-99999999, \"\uc54c\ub9bc\ud1a1 \ud15c\ud50c\ub9bf\ucf54\ub4dc\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n if Sender is None or Sender == '':\n raise PopbillException(-99999999, \"\ubc1c\uc2e0\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n\n req = {}\n\n if TemplateCode is not None or TemplateCode != '':\n req['templateCode'] = TemplateCode\n if Sender is not None or Sender != '':\n req['snd'] = Sender\n if Content is not None or Content != '':\n req['content'] = Content\n if AltContent is not None or AltContent != '':\n req['altContent'] = AltContent\n if AltSendType is not None or AltSendType != '':\n req['altSendType'] = AltSendType\n if SndDT is not None or SndDT != '':\n req['sndDT'] = SndDT\n if KakaoMessages is not None or KakaoMessages != '':\n req['msgs'] = KakaoMessages\n if ButtonList is not None:\n req['btns'] = ButtonList\n if RequestNum is not None or RequestNum != '':\n req['requestnum'] = RequestNum\n\n postData = self._stringtify(req)\n\n result = self._httppost('/ATS', postData, CorpNum, UserID)\n\n return result.receiptNum", "response": "\uc804\uc1a1 \ubc1c\uc2e0 \ubc88\ud638 \uac00 \uc791\uc131 \ubc1c\uc2e0 \ubc88\ud638"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef sendFTS_same(self, CorpNum, PlusFriendID, Sender, Content, AltContent, AltSendType, SndDT,\n KakaoMessages, KakaoButtons, AdsYN=False, UserID=None, RequestNum=None):\n \"\"\"\n \uce5c\uad6c\ud1a1 \ud14d\uc2a4\ud2b8 \ub300\ub7c9 \uc804\uc1a1\n :param CorpNum: \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n :param PlusFriendID: \ud50c\ub7ec\uc2a4\uce5c\uad6c \uc544\uc774\ub514\n :param Sender: \ubc1c\uc2e0\ubc88\ud638\n :param Content: [\ub3d9\ubcf4] \uce5c\uad6c\ud1a1 \ub0b4\uc6a9\n :param AltContent: [\ub3d9\ubcf4] \ub300\uccb4\ubb38\uc790 \ub0b4\uc6a9\n :param AltSendType: 
\ub300\uccb4\ubb38\uc790 \uc720\ud615 [\uacf5\ubc31-\ubbf8\uc804\uc1a1, C-\uc54c\ub9bc\ud1a1\ub0b4\uc6a9, A-\ub300\uccb4\ubb38\uc790\ub0b4\uc6a9]\n :param SndDT: \uc608\uc57d\uc77c\uc2dc [\uc791\uc131\ud615\uc2dd : yyyyMMddHHmmss]\n :param KakaoMessages: \uce5c\uad6c\ud1a1 \ub0b4\uc6a9 (\ubc30\uc5f4)\n :param KakaoButtons: \ubc84\ud2bc \ubaa9\ub85d (\ucd5c\ub300 5\uac1c)\n :param AdsYN: \uad11\uace0 \uc804\uc1a1\uc5ec\ubd80\n :param UserID: \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n :param RequestNum : \uc694\uccad\ubc88\ud638\n :return: receiptNum (\uc811\uc218\ubc88\ud638)\n \"\"\"\n if PlusFriendID is None or PlusFriendID == '':\n raise PopbillException(-99999999, \"\ud50c\ub7ec\uc2a4\uce5c\uad6c \uc544\uc774\ub514\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n if Sender is None or Sender == '':\n raise PopbillException(-99999999, \"\ubc1c\uc2e0\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n\n req = {}\n if PlusFriendID is not None or PlusFriendID != '':\n req['plusFriendID'] = PlusFriendID\n if Sender is not None or Sender != '':\n req['snd'] = Sender\n if AltSendType is not None or AltSendType != '':\n req['altSendType'] = AltSendType\n if Content is not None or Content != '':\n req['content'] = Content\n if AltContent is not None or AltContent != '':\n req['altContent'] = AltContent\n if SndDT is not None or SndDT != '':\n req['sndDT'] = SndDT\n if KakaoMessages:\n req['msgs'] = KakaoMessages\n if KakaoButtons:\n req['btns'] = KakaoButtons\n if AdsYN:\n req['adsYN'] = True\n if RequestNum is not None or RequestNum != '':\n req['requestNum'] = RequestNum\n\n postData = self._stringtify(req)\n\n result = self._httppost('/FTS', postData, CorpNum, UserID)\n\n return result.receiptNum", "response": "\uce5c\uad6c\ud1a1 \ud14d\uc2a4\ud2b8 \ub300\ub7c9 \uc804\uc1a1\n :param CorpNum: \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n :param PlusFriendID: \ud50c\ub7ec\uc2a4\uce5c\uad6c \uc544\uc774\ub514\n 
:param Sender: \ubc1c\uc2e0\ubc88\ud638\n :param Content: [\ub3d9\ubcf4] \uce5c\uad6c\ud1a1 \ub0b4\uc6a9\n :param AltContent: [\ub3d9\ubcf4] \ub300\uccb4\ubb38\uc790 \ub0b4\uc6a9\n :param AltSendType: \ub300\uccb4\ubb38\uc790 \uc720\ud615 [\uacf5\ubc31-\ubbf8\uc804\uc1a1, C-\uc54c\ub9bc\ud1a1\ub0b4\uc6a9, A-\ub300\uccb4\ubb38\uc790\ub0b4\uc6a9]\n :param SndDT: \uc608\uc57d\uc77c\uc2dc [\uc791\uc131\ud615\uc2dd : yyyyMMddHHmmss]\n :param KakaoMessages: \uce5c\uad6c\ud1a1 \ub0b4\uc6a9 (\ubc30\uc5f4)\n :param KakaoButtons: \ubc84\ud2bc \ubaa9\ub85d (\ucd5c\ub300 5\uac1c)\n :param AdsYN: \uad11\uace0 \uc804\uc1a1\uc5ec\ubd80\n :param UserID: \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n :param RequestNum : \uc694\uccad\ubc88\ud638\n :return: receiptNum (\uc811\uc218\ubc88\ud638)"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\ngets UnitCost of a user.", "response": "def getUnitCost(self, CorpNum, MsgType, UserID=None):\n \"\"\"\n \uc804\uc1a1\ub2e8\uac00 \ud655\uc778\n :param CorpNum: \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n :param MsgType: \uce74\uce74\uc624\ud1a1 \uc720\ud615\n :param UserID: \ud31d\ube4c \ud68c\uc6d0\uc544\uc774\ub514\n :return: unitCost\n \"\"\"\n if MsgType is None or MsgType == \"\":\n raise PopbillException(-99999999, \"\uc804\uc1a1\uc720\ud615\uc774 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n\n result = self._httpget(\"/KakaoTalk/UnitCost?Type=\" + MsgType, CorpNum)\n return float(result.unitCost)"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef revdocs2reverts(rev_docs, radius=defaults.RADIUS, use_sha1=False,\n resort=False, verbose=False):\n \"\"\"\n Converts a sequence of page-partitioned revision documents into a sequence\n of reverts.\n\n :Params:\n rev_docs : `iterable` ( `dict` )\n a page-partitioned sequence of revision documents\n radius : `int`\n The maximum number of revisions that a revert can 
reference.\n use_sha1 : `bool`\n Use the sha1 field as the checksum for comparison.\n resort : `bool`\n If True, re-sort the revisions of each page.\n verbose : `bool`\n Print dots and stuff\n \"\"\"\n\n page_rev_docs = groupby(rev_docs, lambda rd: rd.get('page'))\n\n for page_doc, rev_docs in page_rev_docs:\n if verbose:\n sys.stderr.write(page_doc.get('title') + \": \")\n sys.stderr.flush()\n\n if resort:\n if verbose:\n sys.stderr.write(\"(sorting) \")\n sys.stderr.flush()\n rev_docs = sorted(\n rev_docs, key=lambda r: (r.get('timestamp'), r.get('id')))\n\n detector = Detector(radius=radius)\n for rev_doc in rev_docs:\n if not use_sha1 and 'text' not in rev_doc:\n logger.warn(\"Skipping {0}: 'text' field not found in {0}\"\n .format(rev_doc['id'], rev_doc))\n continue\n\n if use_sha1:\n checksum = rev_doc.get('sha1') or DummyChecksum()\n elif 'text' in rev_doc:\n text_bytes = bytes(rev_doc['text'], 'utf8', 'replace')\n checksum = hashlib.sha1(text_bytes).digest()\n\n revert = detector.process(checksum, rev_doc)\n\n if revert:\n yield revert.to_json()\n if verbose:\n sys.stderr.write(\"r\")\n sys.stderr.flush()\n else:\n if verbose:\n sys.stderr.write(\".\")\n sys.stderr.flush()\n\n if verbose:\n sys.stderr.write(\"\\n\")\n sys.stderr.flush()", "response": "Takes a sequence of page - partitioned revision documents and returns a sequence of reverts."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef spm_hrf_compat(t,\n peak_delay=6,\n under_delay=16,\n peak_disp=1,\n under_disp=1,\n p_u_ratio=6,\n normalize=True,\n ):\n \"\"\" SPM HRF function from sum of two gamma PDFs\n\n This function is designed to be partially compatible with SPMs `spm_hrf.m`\n function.\n\n The SPN HRF is a *peak* gamma PDF (with location `peak_delay` and\n dispersion `peak_disp`), minus an *undershoot* gamma PDF (with location\n `under_delay` and dispersion `under_disp`, and divided by the `p_u_ratio`).\n\n Parameters\n ----------\n 
t : array-like\n        vector of times at which to sample HRF\n    peak_delay : float, optional\n        delay of peak\n    peak_disp : float, optional\n        width (dispersion) of peak\n    under_delay : float, optional\n        delay of undershoot\n    under_disp : float, optional\n        width (dispersion) of undershoot\n    p_u_ratio : float, optional\n        peak to undershoot ratio. Undershoot divided by this value before\n        subtracting from peak.\n    normalize : {True, False}, optional\n        If True, divide HRF values by their maximum before returning. SPM does\n        this by default.\n\n    Returns\n    -------\n    hrf : array\n        vector length ``len(t)`` of samples from HRF at times `t`\n\n    Notes\n    -----\n    See ``spm_hrf.m`` in the SPM distribution.\n    \"\"\"\n    if len([v for v in [peak_delay, peak_disp, under_delay, under_disp]\n            if v <= 0]):\n        raise ValueError(\"delays and dispersions must be > 0\")\n    # gamma.pdf only defined for t > 0\n    hrf = np.zeros(t.shape, dtype=float)\n    pos_t = t[t > 0]\n    peak = sps.gamma.pdf(pos_t,\n                         peak_delay / peak_disp,\n                         loc=0,\n                         scale=peak_disp)\n    undershoot = sps.gamma.pdf(pos_t,\n                               under_delay / under_disp,\n                               loc=0,\n                               scale=under_disp)\n    hrf[t > 0] = peak - undershoot / p_u_ratio\n    if not normalize:\n        return hrf\n    return hrf / np.max(hrf)", "response": "Samples an SPM-style HRF (a peak gamma PDF minus a scaled undershoot gamma PDF) at times t; designed to be partially compatible with SPM's spm_hrf.m."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef dspmt(t):\n    t = np.asarray(t)\n    return spmt(t) - spmt(t - 1)", "response": "SPM canonical HRF derivative values for time values t"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef cnvlTc(idxPrc,\n           aryPrfTcChunk,\n           lstHrf,\n           varTr,\n           varNumVol,\n           queOut,\n           varOvsmpl=10,\n           varHrfLen=32,\n           ):\n    \"\"\"\n    Convolution of time courses with HRF model.\n    \"\"\"\n\n    # *** prepare hrf time courses for convolution\n    print(\"---------Process \" + str(idxPrc) +\n          \": Prepare 
hrf time courses for convolution\")\n # get frame times, i.e. start point of every volume in seconds\n vecFrms = np.arange(0, varTr * varNumVol, varTr)\n # get supersampled frames times, i.e. start point of every volume in\n # seconds, since convolution takes place in temp. upsampled space\n vecFrmTms = np.arange(0, varTr * varNumVol, varTr / varOvsmpl)\n # get resolution of supersampled frame times\n varRes = varTr / float(varOvsmpl)\n\n # prepare empty list that will contain the arrays with hrf time courses\n lstBse = []\n for hrfFn in lstHrf:\n # needs to be a multiple of oversample\n vecTmpBse = hrfFn(np.linspace(0, varHrfLen,\n (varHrfLen // varTr) * varOvsmpl))\n lstBse.append(vecTmpBse)\n\n # *** prepare pixel time courses for convolution\n print(\"---------Process \" + str(idxPrc) +\n \": Prepare pixel time courses for convolution\")\n\n # adjust the input, if necessary, such that input is 2D, with last dim time\n tplInpShp = aryPrfTcChunk.shape\n aryPrfTcChunk = aryPrfTcChunk.reshape((-1, aryPrfTcChunk.shape[-1]))\n\n # Prepare an empty array for ouput\n aryConv = np.zeros((aryPrfTcChunk.shape[0], len(lstHrf),\n aryPrfTcChunk.shape[1]))\n print(\"---------Process \" + str(idxPrc) +\n \": Convolve\")\n # Each time course is convolved with the HRF separately, because the\n # numpy convolution function can only be used on one-dimensional data.\n # Thus, we have to loop through time courses:\n for idxTc in range(0, aryConv.shape[0]):\n\n # Extract the current time course:\n vecTc = aryPrfTcChunk[idxTc, :]\n\n # upsample the pixel time course, so that it matches the hrf time crs\n vecTcUps = np.zeros(int(varNumVol * varTr/varRes))\n vecOns = vecFrms[vecTc.astype(bool)]\n vecInd = np.round(vecOns / varRes).astype(np.int)\n vecTcUps[vecInd] = 1.\n\n # *** convolve\n for indBase, base in enumerate(lstBse):\n # perform the convolution\n col = np.convolve(base, vecTcUps, mode='full')[:vecTcUps.size]\n # get function for downsampling\n f = interp1d(vecFrmTms, col)\n 
# downsample to original space and assign to ary\n            aryConv[idxTc, indBase, :] = f(vecFrms)\n\n    # determine output shape\n    tplOutShp = tplInpShp[:-1] + (len(lstHrf), ) + (tplInpShp[-1], )\n\n    # Create list containing the convolved timecourses, and the process ID:\n    lstOut = [idxPrc,\n              aryConv.reshape(tplOutShp)]\n\n    # Put output to queue:\n    queOut.put(lstOut)", "response": "Convolves pixel-wise time courses with a set of HRF basis functions, returning one convolved predictor per HRF model."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates double gamma function.", "response": "def funcHrf(varNumVol, varTr):\n    \"\"\"Create double gamma function.\n\n    Source:\n    http://www.jarrodmillman.com/rcsds/lectures/convolution_background.html\n    \"\"\"\n    vecX = np.arange(0, varNumVol, 1)\n\n    # Expected time of peak of HRF [volumes]:\n    varHrfPeak = 6.0 / varTr\n    # Expected time of undershoot of HRF [volumes]:\n    varHrfUndr = 12.0 / varTr\n    # Scaling factor undershoot (relative to peak):\n    varSclUndr = 0.35\n\n    # Gamma pdf for the peak\n    vecHrfPeak = gamma.pdf(vecX, varHrfPeak)\n    # Gamma pdf for the undershoot\n    vecHrfUndr = gamma.pdf(vecX, varHrfUndr)\n    # Combine them\n    vecHrf = vecHrfPeak - varSclUndr * vecHrfUndr\n\n    # Scale maximum of HRF to 1.0:\n    vecHrf = np.divide(vecHrf, np.max(vecHrf))\n\n    return vecHrf"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef cnvlTcOld(idxPrc,\n              aryPrfTcChunk,\n              varTr,\n              varNumVol,\n              queOut):\n    \"\"\"\n    Old version:\n    Convolution of time courses with one canonical HRF model.\n    \"\"\"\n    # Create 'canonical' HRF time course model:\n    vecHrf = funcHrf(varNumVol, varTr)\n\n    # adjust the input, if necessary, such that input is 2D, with last dim time\n    tplInpShp = aryPrfTcChunk.shape\n    aryPrfTcChunk = aryPrfTcChunk.reshape((-1, aryPrfTcChunk.shape[-1]))\n\n    # Prepare an empty array for output\n    aryConv = np.zeros(np.shape(aryPrfTcChunk))\n\n    # Each time course is 
convolved with the HRF separately, because the\n    # numpy convolution function can only be used on one-dimensional data.\n    # Thus, we have to loop through time courses.\n\n    # In order to avoid an artefact at the end of the time series, we have\n    # to concatenate an empty array to both the design matrix and the HRF\n    # model before convolution. The HRF model is padded once, outside the\n    # loop, so that it does not grow with every iteration:\n    vecHrf = np.append(vecHrf, np.zeros(100))\n\n    for idxTc, vecTc in enumerate(aryPrfTcChunk):\n\n        vecTc = np.append(vecTc, np.zeros(100))\n\n        # Convolve design matrix with HRF model:\n        aryConv[idxTc, :] = np.convolve(vecTc, vecHrf,\n                                        mode='full')[:varNumVol]\n\n    # determine output shape\n    tplOutShp = tplInpShp[:-1] + (1, ) + (tplInpShp[-1], )\n\n    # Create list containing the convolved timecourses, and the process ID:\n    lstOut = [idxPrc,\n              aryConv.reshape(tplOutShp)]\n\n    # Put output to queue:\n    queOut.put(lstOut)", "response": "Old version: convolves each pixel time course with a single canonical (double-gamma) HRF model."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\nasync def main() -> None:\n    logging.basicConfig(level=logging.INFO)\n    async with ClientSession() as websession:\n        try:\n            # Create a client:\n            client = Client(websession)\n\n            # Get user data for the client's latitude/longitude:\n            user_coord_resp = await client.user_reports.status_by_coordinates(\n                LATITUDE, LONGITUDE)\n            _LOGGER.info(\n                'User data by latitude/longitude (%s, %s): %s', LATITUDE,\n                LONGITUDE, user_coord_resp)\n\n            # Get user data for a specific ZIP code:\n            user_zip_resp = await client.user_reports.status_by_zip(ZIP_CODE)\n            _LOGGER.info(\n                'User data by ZIP code (%s): %s', ZIP_CODE, user_zip_resp)\n\n            # Get CDC data for the client's latitude/longitude:\n            cdc_coord_resp = await client.cdc_reports.status_by_coordinates(\n                LATITUDE, LONGITUDE)\n            _LOGGER.info(\n                'CDC data by latitude/longitude (%s, %s): %s', LATITUDE,\n                LONGITUDE, cdc_coord_resp)\n\n            # Get CDC data for North Dakota\n            cdc_state_resp = 
await client.cdc_reports.status_by_state(STATE)\n _LOGGER.info(\n 'CDC data by state name (%s): %s', STATE, cdc_state_resp)\n except FluNearYouError as err:\n print(err)", "response": "Create the aiohttp session and run the example."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef call(cls, method, params=None, timeout=600):\n\n params = [] if params is None else params\n\n return cls.make_request(SERVER_ADDRESS, method, params, timeout=timeout)", "response": "Makes a call to the LBRY API"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nget the state of a job.", "response": "def getJobState(self, CorpNum, JobID, UserID=None):\n \"\"\" \uc218\uc9d1 \uc0c1\ud0dc \ud655\uc778\n args\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n JobID : \uc791\uc5c5\uc544\uc774\ub514\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n return\n \uc218\uc9d1 \uc0c1\ud0dc \uc815\ubcf4\n raise\n PopbillException\n \"\"\"\n if JobID == None or len(JobID) != 18:\n raise PopbillException(-99999999, \"\uc791\uc5c5\uc544\uc774\ub514(jobID)\uac00 \uc62c\ubc14\ub974\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\")\n\n return self._httpget('/HomeTax/Taxinvoice/' + JobID + '/State', CorpNum, UserID)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nsearches Tax in Aleph", "response": "def search(self, CorpNum, JobID, Type, TaxType, PurposeType, TaxRegIDType, TaxRegIDYN, TaxRegID, Page, PerPage,\n Order, UserID=None):\n \"\"\" \uc218\uc9d1 \uacb0\uacfc \uc870\ud68c\n args\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n JobID : \uc791\uc5c5\uc544\uc774\ub514\n Type : \ubb38\uc11c\ud615\ud0dc \ubc30\uc5f4, N-\uc77c\ubc18\uc804\uc790\uc138\uae08\uacc4\uc0b0\uc11c, M-\uc218\uc815\uc804\uc790\uc138\uae08\uacc4\uc0b0\uc11c\n TaxType : \uacfc\uc138\ud615\ud0dc \ubc30\uc5f4, T-\uacfc\uc138, N-\uba74\uc138, 
Z-\uc601\uc138\n PurposeType : \uc601\uc218/\uccad\uad6c, R-\uc601\uc218, C-\uccad\uad6c, N-\uc5c6\uc74c\n TaxRegIDType : \uc885\uc0ac\uc5c5\uc7a5\ubc88\ud638 \uc0ac\uc5c5\uc790\uc720\ud615, S-\uacf5\uae09\uc790, B-\uacf5\uae09\ubc1b\ub294\uc790, T-\uc218\ud0c1\uc790\n TaxRegIDYN : \uc885\uc0ac\uc5c5\uc7a5\ubc88\ud638 \uc720\ubb34, \uacf5\ubc31-\uc804\uccb4\uc870\ud68c, 0-\uc885\uc0ac\uc5c5\uc7a5\ubc88\ud638 \uc5c6\uc74c, 1-\uc885\uc0ac\uc5c5\uc7a5\ubc88\ud638 \uc788\uc74c\n TaxRegID : \uc885\uc0ac\uc5c5\uc7a5\ubc88\ud638, \ucf64\ub9c8(\",\")\ub85c \uad6c\ubd84 \ud558\uc5ec \uad6c\uc131 ex) '0001,0002'\n Page : \ud398\uc774\uc9c0 \ubc88\ud638\n PerPage : \ud398\uc774\uc9c0\ub2f9 \ubaa9\ub85d \uac1c\uc218, \ucd5c\ub300 1000\uac1c\n Order : \uc815\ub82c \ubc29\ud5a5, D-\ub0b4\ub9bc\ucc28\uc21c, A-\uc624\ub984\ucc28\uc21c\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n return\n \uc218\uc9d1 \uacb0\uacfc \uc815\ubcf4\n raise\n PopbillException\n \"\"\"\n if JobID == None or len(JobID) != 18:\n raise PopbillException(-99999999, \"\uc791\uc5c5\uc544\uc774\ub514(jobID)\uac00 \uc62c\ubc14\ub974\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\")\n\n uri = '/HomeTax/Taxinvoice/' + JobID\n uri += '?Type=' + ','.join(Type)\n uri += '&TaxType=' + ','.join(TaxType)\n uri += '&PurposeType=' + ','.join(PurposeType)\n uri += '&TaxRegIDType=' + TaxRegIDType\n uri += '&TaxRegID=' + TaxRegID\n uri += '&Page=' + str(Page)\n uri += '&PerPage=' + str(PerPage)\n uri += '&Order=' + Order\n\n if TaxRegIDYN != '':\n uri += '&TaxRegIDYN=' + TaxRegIDYN\n\n return self._httpget(uri, CorpNum, UserID)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef getTaxinvoice(self, CorpNum, NTSConfirmNum, UserID=None):\n if NTSConfirmNum == None or len(NTSConfirmNum) != 24:\n raise PopbillException(-99999999, \"\uad6d\uc138\uccad\uc2b9\uc778\ubc88\ud638(NTSConfirmNum)\uac00 \uc62c\ubc14\ub974\uc9c0 \uc54a\uc2b5\ub2c8\ub2e4.\")\n\n return 
self._httpget('/HomeTax/Taxinvoice/' + NTSConfirmNum, CorpNum, UserID)", "response": "\uc804\uc790\uc138\uae08\uacc4\uc0b0\uc11c \uc0c1\uc138\uc815\ubcf4 \ud655\uc778\n args\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n NTSConfirmNum : \uad6d\uc138\uccad \uc2b9\uc778\ubc88\ud638\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\n return\n \uc804\uc790\uc138\uae08\uacc4\uc0b0\uc11c \uc815\ubcf4\uac1d\uccb4\n raise\n PopbillException"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_document(self, titlestring):\n #root = etree.XML('''\\\n#\\\n#\\\n#''')\n\n root = etree.XML('''\\\n\\\n\\\n''')\n\n document = etree.ElementTree(root)\n html = document.getroot()\n head = etree.SubElement(html, 'head')\n etree.SubElement(html, 'body')\n title = etree.SubElement(head, 'title')\n title.text = titlestring\n #The href for the css stylesheet is a standin, can be overwritten\n etree.SubElement(head,\n 'link',\n {'href': 'css/default.css',\n 'rel': 'stylesheet',\n 'type': 'text/css'})\n\n return document", "response": "This method creates a new document for writing as xml\n to the OPS subdirectory of the ePub structure."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef write_document(self, name, document):\n with open(name, 'wb') as out:\n out.write(etree.tostring(document,\n encoding='utf-8',\n pretty_print=True))", "response": "This function will write a document to an XML file."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef format_date_string(self, date_tuple):\n months = ['', 'January', 'February', 'March', 'April', 'May', 'June',\n 'July', 'August', 'September', 'October', 'November', 'December']\n date_string = ''\n if date_tuple.season:\n return '{0}, {1}'.format(date_tuple.season, date_tuple.year)\n else:\n if not date_tuple.month and not date_tuple.day:\n return 
'{0}'.format(date_tuple.year)\n        if date_tuple.month:\n            date_string += months[int(date_tuple.month)]\n        if date_tuple.day:\n            date_string += ' ' + date_tuple.day\n        return ', '.join([date_string, date_tuple.year])", "response": "Formats a date_tuple into a human-readable date string (e.g. 'January 1, 2015') for display in the article content."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef has_out_of_flow_tables(self):\n    if self.article.body is None:\n        return False\n    for table_wrap in self.article.body.findall('.//table-wrap'):\n        graphic = table_wrap.xpath('./graphic | ./alternatives/graphic')\n        table = table_wrap.xpath('./table | ./alternatives/table')\n        if graphic and table:\n            return True\n    return False", "response": "Returns True if the article has out-of-flow HTML tables, False otherwise."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef process(self, article):\n    if self.article is not None and not self.collection:\n        log.warning('Could not process additional article. 
Navigation only \\\nhandles one article unless collection mode is set.')\n return False\n\n if article.publisher is None:\n log.error('''Navigation cannot be generated for an Article \\\nwithout a publisher!''')\n return\n self.article = article\n self.article_doi = self.article.doi.split('/')[1]\n self.all_dois.append(self.article.doi)\n if self.collection:\n pass\n else:\n self.title = self.article.publisher.nav_title()\n for author in self.article.publisher.nav_contributors():\n self.contributors.add(author)\n\n #Analyze the structure of the article to create internal mapping\n self.map_navigation()", "response": "Processes an article and creates the internal mapping."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef recursive_article_navmap(self, src_element, depth=0, first=True):\n if depth > self.nav_depth:\n self.nav_depth = depth\n navpoints = []\n tagnames = ['sec', 'fig', 'table-wrap']\n for child in src_element:\n try:\n tagname = child.tag\n except AttributeError:\n continue\n else:\n if tagname not in tagnames:\n continue\n\n #Safely handle missing id attributes\n if 'id' not in child.attrib:\n child.attrib['id'] = self.auto_id\n\n #If in collection mode, we'll prepend the article DOI to avoid\n #collisions\n if self.collection:\n child_id = '-'.join([self.article_doi,\n child.attrib['id']])\n else:\n child_id = child.attrib['id']\n\n #Attempt to infer the correct text as a label\n #Skip the element if we cannot\n child_title = child.find('title')\n if child_title is None:\n continue # If there is no immediate title, skip this element\n label = element_methods.all_text(child_title)\n if not label:\n continue # If no text in the title, skip this element\n source = 'main.{0}.xhtml#{1}'.format(self.article_doi,\n child.attrib['id'])\n if tagname == 'sec':\n children = self.recursive_article_navmap(child, depth=depth + 1)\n navpoints.append(navpoint(child_id,\n label,\n self.play_order,\n 
source,\n children))\n #figs and table-wraps do not have children\n elif tagname == 'fig': # Add navpoints to list_of_figures\n self.figures_list.append(navpoint(child.attrib['id'],\n label,\n None,\n source,\n []))\n elif tagname == 'table-wrap': # Add navpoints to list_of_tables\n self.tables_list.append(navpoint(child.attrib['id'],\n label,\n None,\n source,\n []))\n return navpoints", "response": "This function recursively traverses the content of an article and adds the correct elements to the NCX file s navMap and Lists."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\ncreates the NCX element for EPUB2.", "response": "def render_EPUB2(self, location):\n \"\"\"\n Creates the NCX specified file for EPUB2\n \"\"\"\n\n def make_navlabel(text):\n \"\"\"\n Creates and returns a navLabel element with the supplied text.\n \"\"\"\n navlabel = etree.Element('navLabel')\n navlabel_text = etree.SubElement(navlabel, 'text')\n navlabel_text.text = text\n return navlabel\n\n def make_navMap(nav=None):\n if nav is None:\n nav_element = etree.Element('navMap')\n for nav_point in self.nav:\n nav_element.append(make_navMap(nav=nav_point))\n else:\n nav_element = etree.Element('navPoint')\n nav_element.attrib['id'] = nav.id\n nav_element.attrib['playOrder'] = nav.playOrder\n nav_element.append(make_navlabel(nav.label))\n content_element = etree.SubElement(nav_element, 'content')\n content_element.attrib['src'] = nav.source\n for child in nav.children:\n nav_element.append(make_navMap(nav=child))\n return nav_element\n root = etree.XML('''\\\n\\\n\\\n\\\n\\\n\\\n\\\n\\\n\\\n\\\n'''.format(**{'uid': ','.join(self.all_dois),\n 'depth': self.nav_depth,\n 'version': __version__}))\n document = etree.ElementTree(root)\n ncx = document.getroot()\n\n #Create the docTitle element\n doctitle = etree.SubElement(ncx, 'docTitle')\n doctitle_text = etree.SubElement(doctitle, 'text')\n doctitle_text.text = self.title\n\n #Create the docAuthor 
elements\n for contributor in self.contributors:\n if contributor.role == 'author':\n docauthor = etree.SubElement(ncx, 'docAuthor')\n docauthor_text = etree.SubElement(docauthor, 'text')\n docauthor_text.text = contributor.name\n\n #Create the navMap element\n ncx.append(make_navMap())\n\n if self.figures_list:\n navlist = etree.SubElement(ncx, 'navList')\n navlist.append(make_navlabel('List of Figures'))\n for nav_pt in self.figures_list:\n navtarget = etree.SubElement(navlist, 'navTarget')\n navtarget.attrib['id'] = nav_pt.id\n navtarget.append(self.make_navlabel(nav_pt.label))\n content = etree.SubElement(navtarget, 'content')\n content.attrib['src'] = nav_pt.source\n\n if self.tables_list:\n navlist = etree.SubElement(ncx, 'navList')\n navlist.append(make_navlabel('List of Tables'))\n for nav_pt in self.tables_list:\n navtarget = etree.SubElement(navlist, 'navTarget')\n navtarget.attrib['id'] = nav_pt.id\n navtarget.append(self.make_navlabel(nav_pt.label))\n content = etree.SubElement(navtarget, 'content')\n content.attrib['src'] = nav_pt.source\n\n with open(os.path.join(location, 'EPUB', 'toc.ncx'), 'wb') as output:\n output.write(etree.tostring(document, encoding='utf-8', pretty_print=True))"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate 1D Gaussian. Source : http://mathworld. wolfram. com / GaussianFunction. html", "response": "def funcGauss1D(x, mu, sig):\n \"\"\" Create 1D Gaussian. Source:\n http://mathworld.wolfram.com/GaussianFunction.html\n \"\"\"\n arrOut = np.exp(-np.power((x - mu)/sig, 2.)/2)\n # normalize\n# arrOut = arrOut/(np.sqrt(2.*np.pi)*sig)\n # normalize (laternative)\n arrOut = arrOut/np.sum(arrOut)\n return arrOut"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef funcGauss2D(varSizeX, varSizeY, varPosX, varPosY, varSd):\n\n \"\"\" Create 2D Gaussian kernel. 
Source:\n    http://mathworld.wolfram.com/GaussianFunction.html\n    \"\"\"\n\n    varSizeX = int(varSizeX)\n    varSizeY = int(varSizeY)\n\n    # aryX and aryY are in reversed order, this seems to be necessary:\n    aryY, aryX = sp.mgrid[0:varSizeX,\n                          0:varSizeY]\n\n    # The actual creation of the Gaussian array:\n    aryGauss = (\n        (\n            np.power((aryX - varPosX), 2.0) +\n            np.power((aryY - varPosY), 2.0)\n        ) /\n        (2.0 * np.power(varSd, 2.0))\n    )\n    aryGauss = np.exp(-aryGauss)\n    # normalize\n    # aryGauss = aryGauss/(2*np.pi*np.power(varSd, 2))\n\n    return aryGauss", "response": "Create 2D Gaussian kernel. Source: http://mathworld.wolfram.com/GaussianFunction.html"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef funcConvPar(aryDm,\n                vecHrf,\n                varNumVol):\n\n    \"\"\"\n    Function for convolution of pixel-wise 'design matrix' with HRF model.\n    \"\"\"\n    # In order to avoid an artefact at the end of the time series, we have to\n    # concatenate an empty array to both the design matrix and the HRF model\n    # before convolution.\n    aryDm = np.concatenate((aryDm, np.zeros((aryDm.shape[0], 100))), axis=1)\n    vecHrf = np.concatenate((vecHrf, np.zeros((100,))))\n\n    aryDmConv = np.empty((aryDm.shape[0], varNumVol))\n    for idx in range(0, aryDm.shape[0]):\n        vecDm = aryDm[idx, :]\n        # Convolve design matrix with HRF model:\n        aryDmConv[idx, :] = np.convolve(vecDm, vecHrf,\n                                        mode='full')[:varNumVol]\n    return aryDmConv", "response": "Function for convolution of pixel-wise design matrix with HRF model."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nsimulating an AR1 model using the parameters beta, c, and sigma.", "response": "def simulateAR1(n,\n                beta,\n                sigma,\n                c,\n                burnin,\n                varNumCNR,\n                varNumTP,\n                ):\n    \"\"\"\n    Simulates an AR(1) model using the parameters beta, c, and sigma.\n    Returns an array with length n\n\n    n := number of time points\n    beta := correlation weight\n    sigma := standard deviation of the noise, can be a vector\n    c := 
constant added to the noise, default 0\n\n based on:\n source1: https://github.com/ndronen/misc/blob/master/python/ar1.py\n source2: http://stats.stackexchange.com/questions/22742/\n problem-simulating-ar2-process\n source3: https://kurtverstegen.wordpress.com/2013/12/07/simulation/\n \"\"\"\n # Output array with noise time courses\n noise = np.empty((varNumCNR, varNumTP))\n if burnin == 1:\n burnin = 100\n n = n + burnin\n\n noiseTemp = c + sp.random.normal(0, 1, n)\n sims = np.zeros(n)\n sims[0] = noiseTemp[0]\n for i in range(1, n):\n sims[i] = beta*sims[i-1] + noiseTemp[i]\n sims = sims[burnin:]\n noise = sigma[:, np.newaxis]*sp.stats.mstats.zscore(sims)\n return noise"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef funcNrlTcMotPred(idxPrc,\n varPixX,\n varPixY,\n NrlMdlChunk,\n varNumTP,\n aryBoxCar, # aryCond\n path,\n varNumNrlMdls,\n varNumMtDrctn,\n varPar,\n queOut):\n \"\"\"\n Function for creating neural time course models.\n This function should be used to create neural models if different\n predictors for every motion direction are included.\n \"\"\"\n\n# # if hd5 method is used: open file for reading\n# filename = 'aryBoxCar' + str(idxPrc) + '.hdf5'\n# hdf5_path = os.path.join(path, filename)\n# fileH = tables.openFile(hdf5_path, mode='r')\n\n # Output array with pRF model time courses at all modelled standard\n # deviations for current pixel position:\n aryOut = np.empty((len(NrlMdlChunk), varNumTP, varNumMtDrctn),\n dtype='float32')\n\n # Prepare status indicator if this is the first of the parallel processes:\n if idxPrc == 1:\n\n # We create a status indicator for the time consuming pRF model finding\n # algorithm. 
Number of steps of the status indicator:\n varStsStpSze = 20\n\n # Number of pRF models to fit:\n varNumLoops = varNumNrlMdls/varPar\n\n # Vector with pRF values at which to give status feedback:\n vecStatus = np.linspace(0,\n varNumLoops,\n num=(varStsStpSze+1),\n endpoint=True)\n vecStatus = np.ceil(vecStatus)\n vecStatus = vecStatus.astype(int)\n\n # Vector with corresponding percentage values at which to give status\n # feedback:\n vecStatusPrc = np.linspace(0,\n 100,\n num=(varStsStpSze+1),\n endpoint=True)\n vecStatusPrc = np.ceil(vecStatusPrc)\n vecStatusPrc = vecStatusPrc.astype(int)\n\n # Counter for status indicator:\n varCntSts01 = 0\n varCntSts02 = 0\n\n # Loop through all Gauss parameters that are in this chunk\n for idx, NrlMdlTrpl in enumerate(NrlMdlChunk):\n\n # Status indicator (only used in the first of the parallel\n # processes):\n if idxPrc == 1:\n\n # Status indicator:\n if varCntSts02 == vecStatus[varCntSts01]:\n\n # Prepare status message:\n strStsMsg = ('---------Progress: ' +\n str(vecStatusPrc[varCntSts01]) +\n ' % --- ' +\n str(vecStatus[varCntSts01]) +\n ' loops out of ' +\n str(varNumLoops))\n\n print(strStsMsg)\n\n # Only increment counter if the last value has not been\n # reached yet:\n if varCntSts01 < varStsStpSze:\n varCntSts01 = varCntSts01 + int(1)\n\n # x pos of Gauss model: NrlMdlTrpl[0]\n # y pos of Gauss model: NrlMdlTrpl[1]\n # std of Gauss model: NrlMdlTrpl[2]\n # index of tng crv model: NrlMdlTrpl[3]\n varTmpX = int(np.around(NrlMdlTrpl[0], 0))\n varTmpY = int(np.around(NrlMdlTrpl[1], 0))\n\n # Create pRF model (2D):\n aryGauss = funcGauss2D(varPixX,\n varPixY,\n varTmpX,\n varTmpY,\n NrlMdlTrpl[2])\n\n # Multiply pixel-wise box car model with Gaussian pRF models:\n aryNrlTcTmp = np.multiply(aryBoxCar, aryGauss[:, :, None, None])\n\n # Calculate sum across x- and y-dimensions - the 'area under the\n # Gaussian surface'. This is essentially an unscaled version of the\n # neural time course model (i.e. 
not yet scaled for the size of\n # the pRF).\n aryNrlTcTmp = np.sum(aryNrlTcTmp, axis=(0, 1))\n\n # Normalise the nrl time course model to the size of the pRF. This\n # gives us the ratio of 'activation' of the pRF at each time point,\n # or, in other words, the neural time course model.\n aryNrlTcTmp = np.divide(aryNrlTcTmp,\n np.sum(aryGauss, axis=(0, 1)))\n\n # Put model time courses into the function's output array:\n aryOut[idx, :, :] = aryNrlTcTmp\n\n # Status indicator (only used in the first of the parallel\n # processes):\n if idxPrc == 1:\n # Increment status indicator counter:\n varCntSts02 = varCntSts02 + 1\n\n # Output list:\n lstOut = [idxPrc,\n aryOut,\n ]\n\n queOut.put(lstOut)", "response": "This function is used to create neural time courses for every motion direction."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nfunctions for finding best pRF model for voxel time course. This function should be used if there is only one predictor.", "response": "def funcFindPrf(idxPrc,\n aryFuncChnk,\n aryPrfTc,\n aryMdls,\n queOut):\n\n \"\"\"\n Function for finding best pRF model for voxel time course.\n This function should be used if there is only one predictor.\n \"\"\"\n\n # Number of voxels to be fitted in this chunk:\n varNumVoxChnk = aryFuncChnk.shape[0]\n\n # Number of volumes:\n varNumVol = aryFuncChnk.shape[1]\n\n # Vectors for pRF finding results [number-of-voxels times one]:\n vecBstXpos = np.zeros(varNumVoxChnk)\n vecBstYpos = np.zeros(varNumVoxChnk)\n vecBstSd = np.zeros(varNumVoxChnk)\n # vecBstR2 = np.zeros(varNumVoxChnk)\n\n # Vector for best R-square value. For each model fit, the R-square value is\n # compared to this, and updated if it is lower than the best-fitting\n # solution so far. We initialise with an arbitrary, high value\n vecBstRes = np.add(np.zeros(varNumVoxChnk),\n 100000.0)\n\n # We reshape the voxel time courses, so that time goes down the column,\n # i.e. 
from top to bottom.\n aryFuncChnk = aryFuncChnk.T\n\n # Constant term for the model:\n vecConst = np.ones((varNumVol), dtype=np.float32)\n\n # Change type to float 32:\n aryFuncChnk = aryFuncChnk.astype(np.float32)\n aryPrfTc = aryPrfTc.astype(np.float32)\n\n # Number of pRF models to fit:\n varNumMdls = len(aryMdls)\n\n # Prepare status indicator if this is the first of the parallel processes:\n if idxPrc == 0:\n\n # We create a status indicator for the time consuming pRF model finding\n # algorithm. Number of steps of the status indicator:\n varStsStpSze = 20\n\n # Vector with pRF values at which to give status feedback:\n vecStatPrf = np.linspace(0,\n varNumMdls,\n num=(varStsStpSze+1),\n endpoint=True)\n vecStatPrf = np.ceil(vecStatPrf)\n vecStatPrf = vecStatPrf.astype(int)\n\n # Vector with corresponding percentage values at which to give status\n # feedback:\n vecStatPrc = np.linspace(0,\n 100,\n num=(varStsStpSze+1),\n endpoint=True)\n vecStatPrc = np.ceil(vecStatPrc)\n vecStatPrc = vecStatPrc.astype(int)\n\n # Counter for status indicator:\n varCntSts01 = 0\n varCntSts02 = 0\n\n # Loop through pRF models:\n for idxMdls in range(0, varNumMdls):\n\n # Status indicator (only used in the first of the parallel\n # processes):\n if idxPrc == 0:\n\n # Status indicator:\n if varCntSts02 == vecStatPrf[varCntSts01]:\n\n # Prepare status message:\n strStsMsg = ('---------Progress: ' +\n str(vecStatPrc[varCntSts01]) +\n ' % --- ' +\n str(vecStatPrf[varCntSts01]) +\n ' pRF models out of ' +\n str(varNumMdls))\n\n print(strStsMsg)\n\n # Only increment counter if the last value has not been\n # reached yet:\n if varCntSts01 < varStsStpSze:\n varCntSts01 = varCntSts01 + int(1)\n\n # Current pRF time course model:\n vecMdlTc = aryPrfTc[idxMdls, :].flatten()\n\n # We create a design matrix including the current pRF time\n # course model, and a constant term:\n aryDsgn = np.vstack([vecMdlTc,\n vecConst]).T\n\n # Calculation of the ratio of the explained variance (R square)\n 
# for the current model for all voxel time courses.\n\n# print('------------np.linalg.lstsq on pRF: ' +\n# str(idxX) +\n# 'x ' +\n# str(idxY) +\n# 'y ' +\n# str(idxSd) +\n# 'z --- START')\n# varTmeTmp01 = time.time()\n\n # Change type to float32:\n # aryDsgn = aryDsgn.astype(np.float32)\n\n # Calculate the least-squares solution for all voxels:\n vecTmpRes = np.linalg.lstsq(aryDsgn, aryFuncChnk)[1]\n\n# varTmeTmp02 = time.time()\n# varTmeTmp03 = np.around((varTmeTmp02 - varTmeTmp01),\n# decimals=2)\n# print('------------np.linalg.lstsq on pRF: ' +\n# str(idxX) +\n# 'x ' +\n# str(idxY) +\n# 'y ' +\n# str(idxSd) +\n# 'z --- DONE elapsed time: ' +\n# str(varTmeTmp03) +\n# 's')\n\n # Check whether current residuals are lower than previously\n # calculated ones:\n vecLgcTmpRes = np.less(vecTmpRes, vecBstRes)\n\n # Replace best x and y position values, and SD values.\n vecBstXpos[vecLgcTmpRes] = aryMdls[idxMdls][0]\n vecBstYpos[vecLgcTmpRes] = aryMdls[idxMdls][1]\n vecBstSd[vecLgcTmpRes] = aryMdls[idxMdls][2]\n\n # Replace best residual values:\n vecBstRes[vecLgcTmpRes] = vecTmpRes[vecLgcTmpRes]\n\n# varTmeTmp04 = time.time()\n# varTmeTmp05 = np.around((varTmeTmp04 - varTmeTmp02),\n# decimals=2)\n# print('------------selection of best-fitting pRF model: ' +\n# str(idxX) +\n# 'x ' +\n# str(idxY) +\n# 'y ' +\n# str(idxSd) +\n# 'z --- elapsed time: ' +\n# str(varTmeTmp05) +\n# 's')\n\n # Status indicator (only used in the first of the parallel\n # processes):\n if idxPrc == 0:\n\n # Increment status indicator counter:\n varCntSts02 = varCntSts02 + 1\n\n # After finding the best fitting model for each voxel, we still have to\n # calculate the coefficient of determination (R-squared) for each voxel. We\n # start by calculating the total sum of squares (i.e. the deviation of the\n # data from the mean). 
The mean of each time course:\n vecFuncMean = np.mean(aryFuncChnk, axis=0)\n # Deviation from the mean for each datapoint:\n vecFuncDev = np.subtract(aryFuncChnk, vecFuncMean[None, :])\n # Sum of squares:\n vecSsTot = np.sum(np.power(vecFuncDev,\n 2.0),\n axis=0)\n # Coefficient of determination:\n vecBstR2 = np.subtract(1.0,\n np.divide(vecBstRes,\n vecSsTot))\n\n # Output list:\n lstOut = [idxPrc,\n vecBstXpos,\n vecBstYpos,\n vecBstSd,\n vecBstR2]\n\n queOut.put(lstOut)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef funcFindPrfMltpPrdXVal(idxPrc,\n aryFuncChnkTrn,\n aryFuncChnkTst,\n aryPrfMdlsTrnConv,\n aryPrfMdlsTstConv,\n aryMdls,\n queOut):\n \"\"\"\n Function for finding best pRF model for voxel time course.\n This function should be used if there are several predictors.\n \"\"\"\n\n # Number of voxels to be fitted in this chunk:\n varNumVoxChnk = aryFuncChnkTrn.shape[0]\n\n # Number of volumes:\n varNumVolTrn = aryFuncChnkTrn.shape[2]\n varNumVolTst = aryFuncChnkTst.shape[2]\n\n # get number of cross validations\n varNumXval = aryPrfMdlsTrnConv.shape[2]\n\n # Vectors for pRF finding results [number-of-voxels times one]:\n vecBstXpos = np.zeros(varNumVoxChnk)\n vecBstYpos = np.zeros(varNumVoxChnk)\n vecBstSd = np.zeros(varNumVoxChnk)\n # vecBstR2 = np.zeros(varNumVoxChnk)\n\n # Vector for temporary residuals values that are obtained during\n # the different loops of cross validation\n vecTmpResXVal = np.empty((varNumVoxChnk, varNumXval), dtype='float32')\n\n # Vector for best residual values.\n vecBstRes = np.add(np.zeros(varNumVoxChnk),\n 100000.0)\n\n # Constant term for the model:\n vecConstTrn = np.ones((varNumVolTrn), dtype=np.float32)\n vecConstTst = np.ones((varNumVolTst), dtype=np.float32)\n\n # Change type to float 32:\n aryPrfMdlsTrnConv = aryPrfMdlsTrnConv.astype(np.float32)\n aryPrfMdlsTstConv = aryPrfMdlsTstConv.astype(np.float32)\n\n # Number of pRF models to fit:\n varNumMdls 
= len(aryMdls)\n\n # Prepare status indicator if this is the first of the parallel processes:\n if idxPrc == 0:\n\n # We create a status indicator for the time consuming pRF model finding\n # algorithm. Number of steps of the status indicator:\n varStsStpSze = 20\n\n # Vector with pRF values at which to give status feedback:\n vecStatPrf = np.linspace(0,\n varNumMdls,\n num=(varStsStpSze+1),\n endpoint=True)\n vecStatPrf = np.ceil(vecStatPrf)\n vecStatPrf = vecStatPrf.astype(int)\n\n # Vector with corresponding percentage values at which to give status\n # feedback:\n vecStatPrc = np.linspace(0,\n 100,\n num=(varStsStpSze+1),\n endpoint=True)\n vecStatPrc = np.ceil(vecStatPrc)\n vecStatPrc = vecStatPrc.astype(int)\n\n # Counter for status indicator:\n varCntSts01 = 0\n varCntSts02 = 0\n\n # Loop through pRF models:\n for idxMdls in range(0, varNumMdls):\n\n # Status indicator (only used in the first of the parallel\n # processes):\n if idxPrc == 0:\n\n # Status indicator:\n if varCntSts02 == vecStatPrf[varCntSts01]:\n\n # Prepare status message:\n strStsMsg = ('---------Progress: ' +\n str(vecStatPrc[varCntSts01]) +\n ' % --- ' +\n str(vecStatPrf[varCntSts01]) +\n ' pRF models out of ' +\n str(varNumMdls))\n\n print(strStsMsg)\n\n # Only increment counter if the last value has not been\n # reached yet:\n if varCntSts01 < varStsStpSze:\n varCntSts01 = varCntSts01 + int(1)\n\n # Loop through different cross validations\n for idxXval in range(0, varNumXval):\n # Current pRF time course model:\n vecMdlTrn = aryPrfMdlsTrnConv[idxMdls, :, idxXval, :]\n vecMdlTst = aryPrfMdlsTstConv[idxMdls, :, idxXval, :]\n\n # We create a design matrix including the current pRF time\n # course model, and a constant term:\n aryDsgnTrn = np.vstack([vecMdlTrn,\n vecConstTrn]).T\n\n aryDsgnTst = np.vstack([vecMdlTst,\n vecConstTst]).T\n\n # Calculate the least-squares solution for all voxels\n # and get parameter estimates from the training fit\n aryTmpPrmEst = np.linalg.lstsq(aryDsgnTrn,\n 
aryFuncChnkTrn[:, idxXval, :].T)[0]\n # calculate predicted model fit based on training data\n aryTmpMdlTc = np.dot(aryDsgnTst, aryTmpPrmEst)\n # calculate residual sum of squares between test data and\n # predicted model fit based on training data\n vecTmpResXVal[:, idxXval] = np.sum(\n (np.subtract(aryFuncChnkTst[:, idxXval, :].T,\n aryTmpMdlTc))**2, axis=0)\n\n vecTmpRes = np.mean(vecTmpResXVal, axis=1)\n # Check whether current residuals are lower than previously\n # calculated ones:\n vecLgcTmpRes = np.less(vecTmpRes, vecBstRes)\n\n # Replace best x and y position values, and SD values.\n vecBstXpos[vecLgcTmpRes] = aryMdls[idxMdls][0]\n vecBstYpos[vecLgcTmpRes] = aryMdls[idxMdls][1]\n vecBstSd[vecLgcTmpRes] = aryMdls[idxMdls][2]\n\n # Replace best residual values:\n vecBstRes[vecLgcTmpRes] = vecTmpRes[vecLgcTmpRes]\n\n # Status indicator (only used in the first of the parallel\n # processes):\n if idxPrc == 0:\n\n # Increment status indicator counter:\n varCntSts02 = varCntSts02 + 1\n\n # Output list:\n lstOut = [idxPrc,\n vecBstXpos,\n vecBstYpos,\n vecBstSd,\n ]\n\n queOut.put(lstOut)", "response": "This function finds the best pRF model for each voxel time course."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef move_images_to_cache(source, destination):\n if os.path.isdir(destination):\n log.debug('Cached images for this article already exist')\n return\n else:\n log.debug('Cache location: {0}'.format(destination))\n try:\n shutil.copytree(source, destination)\n except:\n log.exception('Images could not be moved to cache')\n else:\n log.info('Moved images to cache: {0}'.format(destination))", "response": "Handles the movement of images to the cache."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef image_cache(article_cache, img_dir):\n log.debug('Looking for image directory in the cache')\n if os.path.isdir(article_cache):\n log.info('Cached image directory 
found: {0}'.format(article_cache))\n shutil.copytree(article_cache, img_dir)\n return True\n return False", "response": "This method is used by get_images to copy images out of the cache."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_image_cache(img_cache):\n log.info('Initiating the image cache at {0}'.format(img_cache))\n if not os.path.isdir(img_cache):\n utils.mkdir_p(img_cache)\n utils.mkdir_p(os.path.join(img_cache, '10.1371'))\n utils.mkdir_p(os.path.join(img_cache, '10.3389'))", "response": "Create the image cache if it does not exist"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nfetches the images from Frontiers website and store them in output_dir.", "response": "def fetch_frontiers_images(doi, output_dir):\n \"\"\"\n Fetch the images from Frontiers' website. This method may fail to properly\n locate all the images and should be avoided if the files can be accessed\n locally. Downloading the images to an appropriate directory in the cache,\n or to a directory specified by passed argument are the preferred means to\n access images.\n \"\"\"\n log.info('Fetching Frontiers images')\n log.warning('This method may fail to locate all images.')\n\n def download_image(fetch, img_file):\n try:\n image = urllib.request.urlopen(fetch)\n except urllib.error.HTTPError as e:\n if e.code == 503: # Server overloaded\n time.sleep(1) # Wait one second\n try:\n image = urllib.request.urlopen(fetch)\n except:\n return None\n elif e.code == 500:\n print('urllib.error.HTTPError {0}'.format(e.code))\n return None\n else:\n with open(img_file, 'wb') as outimage:\n outimage.write(image.read())\n return True\n\n def check_equation_completion(images):\n \"\"\"\n In some cases, equations images are not exposed in the fulltext (hidden\n behind a rasterized table). 
This attempts to look for gaps and fix them\n \"\"\"\n log.info('Checking for complete equations')\n files = os.listdir(output_dir)\n inline_equations = []\n for e in files:\n if e[0] == 'i':\n inline_equations.append(e)\n missing = []\n highest = 0\n if inline_equations:\n inline_equations.sort()\n highest = int(inline_equations[-1][1:4])\n i = 1\n while i < highest:\n name = 'i{0}.gif'.format(str(i).zfill(3))\n if name not in inline_equations:\n missing.append(name)\n i += 1\n get = images[0][:-8]\n for m in missing:\n loc = os.path.join(output_dir, m)\n download_image(get + m, loc)\n print('Downloaded image {0}'.format(loc))\n #It is possible that we need to go further than the highest\n highest += 1\n name = 'i{0}.gif'.format(str(highest).zfill(3))\n loc = os.path.join(output_dir, name)\n while download_image(get + name, loc):\n print('Downloaded image {0}'.format(loc))\n highest += 1\n name = 'i{0}.gif'.format(str(highest).zfill(3))\n\n print('Processing images for {0}...'.format(doi))\n #We use the DOI of the article to locate the page.\n doistr = 'http://dx.doi.org/{0}'.format(doi)\n logging.debug('Accessing DOI address-{0}'.format(doistr))\n page = urllib.request.urlopen(doistr)\n if page.geturl()[-8:] == 'abstract':\n full = page.geturl()[:-8] + 'full'\n elif page.geturl()[-4:] == 'full':\n full = page.geturl()\n print(full)\n page = urllib.request.urlopen(full)\n with open('temp', 'w') as temp:\n temp.write(page.read().decode('utf-8'))\n images = []\n with open('temp', 'r') as temp:\n for l in temp.readlines():\n images += re.findall('http://\\w{7}.\\w{3}.\\w{3}.rackcdn.com/\\d{5}/f\\w{4}-\\d{2}-\\d{5}-HTML/image_m/f\\w{4}-\\d{2}-\\d{5}-\\D{1,2}\\d{3}.\\D{3}', l)\n images += re.findall('http://\\w{7}.\\w{3}.\\w{3}.rackcdn.com/\\d{5}/f\\w{4}-\\d{2}-\\d{5}-r2/image_m/f\\w{4}-\\d{2}-\\d{5}-\\D{1,2}\\d{3}.\\D{3}', l)\n images += re.findall('http://\\w{7}.\\w{3}.\\w{3}.rackcdn.com/\\d{5}/f\\w{4}-\\d{2}-\\d{5}-HTML/image_n/f\\w{4}-\\d{2}-\\d{5}-\\D{1,2}\\d{3}.\\D{3}', l)\n 
os.remove('temp')\n for i in images:\n loc = os.path.join(output_dir, i.split('-')[-1])\n download_image(i, loc)\n print('Downloaded image {0}'.format(loc))\n if images:\n check_equation_completion(images)\n print(\"Done downloading images\")"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef fetch_plos_images(article_doi, output_dir, document):\n log.info('Processing images for {0}...'.format(article_doi))\n\n #A dict of URLs for PLoS subjournals\n journal_urls = {'pgen': 'http://www.plosgenetics.org/article/{0}',\n 'pcbi': 'http://www.ploscompbiol.org/article/{0}',\n 'ppat': 'http://www.plospathogens.org/article/{0}',\n 'pntd': 'http://www.plosntds.org/article/{0}',\n 'pmed': 'http://www.plosmedicine.org/article/{0}',\n 'pbio': 'http://www.plosbiology.org/article/{0}',\n 'pone': 'http://www.plosone.org/article/{0}',\n 'pctr': 'http://clinicaltrials.ploshubs.org/article/{0}'}\n\n #Identify subjournal name for base URL\n subjournal_name = article_doi.split('.')[1]\n base_url = journal_urls[subjournal_name]\n\n #Acquire and xml elements\n graphics = document.document.getroot().findall('.//graphic')\n graphics += document.document.getroot().findall('.//inline-graphic')\n\n #Begin to download\n log.info('Downloading images, this may take some time...')\n for graphic in graphics:\n nsmap = document.document.getroot().nsmap\n xlink_href = graphic.attrib['{' + nsmap['xlink'] + '}' + 'href']\n\n #Equations are handled a bit differently than the others\n #Here we decide that an image name starting with \"e\" is an equation\n if xlink_href.split('.')[-1].startswith('e'):\n resource = 'fetchObject.action?uri=' + xlink_href + '&representation=PNG'\n else:\n resource = xlink_href + '/largerimage'\n full_url = base_url.format(resource)\n try:\n image = urllib.request.urlopen(full_url)\n except urllib.error.HTTPError as e:\n if e.code == 503: # Server overload error\n time.sleep(1) # Wait a second\n try:\n image = 
urllib.request.urlopen(full_url)\n except:\n return False # Happened twice, give up\n else:\n log.error('urllib.error.HTTPError {0}'.format(e.code))\n return False\n else:\n img_name = xlink_href.split('.')[-1] + '.png'\n img_path = os.path.join(output_dir, img_name)\n with open(img_path, 'wb') as output:\n output.write(image.read())\n log.info('Downloaded image {0}'.format(img_name))\n log.info('Done downloading images')\n return True", "response": "Fetch the images for a PLoS article from the internet."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef save_tc_to_nii(strCsvCnfg, lgcTest=False, lstRat=None, lgcMdlRsp=False,\n strPathHrf=None, lgcSaveRam=False):\n \"\"\"\n Save empirical and fitted time courses to nii file format.\n\n Parameters\n ----------\n strCsvCnfg : str\n Absolute file path of config file used for pRF fitting.\n lgcTest : boolean\n Whether this is a test (pytest). If yes, absolute path of pyprf libary\n will be prepended to config file paths.\n lstRat : None or list\n Ratio of size of center to size of suppressive surround.\n lgcMdlRsp : boolean\n Should the aperture responses for the winner model also be saved?\n strPathHrf : str or None:\n Path to npy file with custom hrf parameters. 
If None, default\n parameters are used.\n lgcSaveRam : boolean\n Whether to also save a nii file that uses little RAM.\n\n Notes\n -----\n This function does not return any arguments but, instead, saves nii files\n to disk.\n\n \"\"\"\n\n # %% Load configuration settings that were used for fitting\n\n # Load config parameters from csv file into dictionary:\n dicCnfg = load_config(strCsvCnfg, lgcTest=lgcTest)\n\n # Load config parameters from dictionary into namespace:\n cfg = cls_set_config(dicCnfg)\n\n # if fitting was done with custom hrf, make sure to retrieve results with\n # '_hrf' appendix\n if strPathHrf is not None:\n cfg.strPathOut = cfg.strPathOut + '_hrf'\n\n # If suppressive surround flag is on, make sure to retrieve results with\n # '_supsur' appendix\n if lstRat is not None:\n cfg.strPathOut = cfg.strPathOut + '_supsur'\n cfg.strPathMdl = cfg.strPathMdl + '_supsur'\n # Append 1.0 as the first entry, which is the key for fitting without\n # surround (only centre)\n lstRat.insert(0, 1.0)\n\n # %% Load previous pRF fitting results\n\n # Derive paths to the x, y, sigma winner parameters from pyprf_feature\n lstWnrPrm = [cfg.strPathOut + '_x_pos.nii.gz',\n cfg.strPathOut + '_y_pos.nii.gz',\n cfg.strPathOut + '_SD.nii.gz']\n\n # Check if fitting has been performed, i.e. whether parameter files exist\n # Throw error message if they do not exist.\n errorMsg = 'Files that should have resulted from fitting do not exist. 
\\\n \\nPlease perform pRF fitting first, calling e.g.: \\\n \\npyprf_feature -config /path/to/my_config_file.csv'\n assert os.path.isfile(lstWnrPrm[0]), errorMsg\n assert os.path.isfile(lstWnrPrm[1]), errorMsg\n assert os.path.isfile(lstWnrPrm[2]), errorMsg\n\n # Load the x, y, sigma winner parameters from pyprf_feature\n aryIntGssPrm = load_res_prm(lstWnrPrm,\n lstFlsMsk=[cfg.strPathNiiMask])[0][0]\n\n # Load beta parameters estimates, aka weights for time courses\n lstPathBeta = [cfg.strPathOut + '_Betas.nii.gz']\n aryBetas = load_res_prm(lstPathBeta, lstFlsMsk=[cfg.strPathNiiMask])[0][0]\n assert os.path.isfile(lstPathBeta[0]), errorMsg\n\n # Load ratio image, if fitting was obtained with suppressive surround\n if lstRat is not None:\n lstPathRatio = [cfg.strPathOut + '_Ratios.nii.gz']\n aryRatio = load_res_prm(lstPathRatio,\n lstFlsMsk=[cfg.strPathNiiMask])[0][0]\n assert os.path.isfile(lstPathRatio[0]), errorMsg\n\n # Some voxels were excluded because they did not have sufficient mean\n # and/or variance - exclude their initial parameters, too\n # Get inclusion mask and nii header\n aryLgcMsk, aryLgcVar, hdrMsk, aryAff, aryFunc, tplNiiShp = prep_func(\n cfg.strPathNiiMask, cfg.lstPathNiiFunc, varAvgThr=-100)\n # Apply inclusion mask\n aryIntGssPrm = aryIntGssPrm[aryLgcVar, :]\n aryBetas = aryBetas[aryLgcVar, :]\n if lstRat is not None:\n aryRatio = aryRatio[aryLgcVar, :]\n\n # Get array with model parameters that were fitted on a grid\n # [x positions, y positions, sigmas]\n aryMdlParams = crt_mdl_prms((int(cfg.varVslSpcSzeX),\n int(cfg.varVslSpcSzeY)), cfg.varNum1,\n cfg.varExtXmin, cfg.varExtXmax, cfg.varNum2,\n cfg.varExtYmin, cfg.varExtYmax,\n cfg.varNumPrfSizes, cfg.varPrfStdMin,\n cfg.varPrfStdMax, kwUnt='deg',\n kwCrd=cfg.strKwCrd)\n # Load logical for parameter exclusion in unstimulated area\n lgcMdlInc = np.load(cfg.strPathMdl + '_lgcMdlInc.npy')\n # Apply logical\n aryMdlParams = aryMdlParams[lgcMdlInc, :]\n\n # Get corresponding pRF model time 
courses\n aryPrfTc = np.load(cfg.strPathMdl + '.npy')\n\n # The model time courses will be preprocessed such that they are smoothed\n # (temporally) with same factor as the data and that they will be z-scored:\n aryPrfTc = prep_models(aryPrfTc, varSdSmthTmp=cfg.varSdSmthTmp)\n\n if lgcMdlRsp:\n aryMdlRsp = np.load(cfg.strPathMdl + '_mdlRsp.npy')\n\n # %% Derive fitted time course models for all voxels\n\n # Initialize array that will collect the fitted time courses\n aryFitTc = np.zeros((aryFunc.shape), dtype=np.float32)\n # If desired, initiliaze array that will collect model responses underlying\n # the fitted time course\n if lgcMdlRsp:\n if lstRat is not None:\n aryFitMdlRsp = np.zeros((aryIntGssPrm.shape[0], aryMdlRsp.shape[1],\n aryMdlRsp.shape[3]),\n dtype=np.float32)\n else:\n aryFitMdlRsp = np.zeros((aryIntGssPrm.shape[0],\n aryMdlRsp.shape[1]), dtype=np.float32)\n\n # create vector that allows to check whether every voxel is visited\n # exactly once\n vecVxlTst = np.zeros(aryIntGssPrm.shape[0])\n\n # Find unique rows of fitted model parameters\n aryUnqRows, aryUnqInd = fnd_unq_rws(aryIntGssPrm, return_index=False,\n return_inverse=True)\n\n # Loop over all best-fitting model parameter combinations found\n print('---Assign models to voxels')\n for indRow, vecPrm in enumerate(aryUnqRows):\n # Get logical for voxels for which this prm combi was the best\n lgcVxl = [aryUnqInd == indRow][0]\n if np.all(np.invert(lgcVxl)):\n print('---No voxel found')\n # Mark those voxels that were visited\n vecVxlTst[lgcVxl] += 1\n\n # Get logical index for the model number\n # This can only be 1 index, so we directly get 1st entry of array\n lgcMdl = np.where(np.isclose(aryMdlParams, vecPrm,\n atol=0.01).all(axis=1))[0][0]\n # Tell user if no model was found\n if lgcMdl is None:\n print('---No model found')\n\n # Get model time courses\n aryMdlTc = aryPrfTc[lgcMdl, ...]\n # Get beta parameter estimates\n aryWeights = aryBetas[lgcVxl, :]\n\n # If fitting was done with 
surround suppression, find ratios for voxels\n # and the indices of these ratios in lstRat\n if lstRat is not None:\n aryVxlRatio = aryRatio[lgcVxl, :]\n indRat = [ind for ind, rat1 in enumerate(lstRat) for rat2 in\n aryVxlRatio[:, 0] if np.isclose(rat1, rat2)]\n indVxl = range(len(indRat))\n\n # Combine model time courses and weights to yield fitted time course\n if lstRat is not None:\n aryFitTcTmp = np.tensordot(aryWeights, aryMdlTc, axes=([1], [0]))\n aryFitTc[lgcVxl, :] = aryFitTcTmp[indVxl, indRat, :]\n else:\n aryFitTc[lgcVxl, :] = np.dot(aryWeights, aryMdlTc)\n\n # If desired by user, also save the model responses per voxels\n if lgcMdlRsp:\n # If desired also save the model responses that won\n if lstRat is not None:\n aryFitMdlRsp[lgcVxl, :] = aryMdlRsp[lgcMdl, :, indRat, :]\n else:\n aryFitMdlRsp[lgcVxl, :] = aryMdlRsp[lgcMdl, :]\n\n # check that every voxel was visited exactly once\n errMsg = 'At least one voxel visited more than once for tc recreation'\n assert len(vecVxlTst) == np.sum(vecVxlTst), errMsg\n\n # %% Export preprocessed voxel time courses as nii\n\n # List with name suffices of output images:\n lstNiiNames = ['_EmpTc']\n\n # Create full path names from nii file names and output path\n lstNiiNames = [cfg.strPathOut + strNii + '.nii.gz' for strNii in\n lstNiiNames]\n\n # export aryFunc as a single 4D nii file\n print('---Save empirical time courses')\n export_nii(aryFunc, lstNiiNames, aryLgcMsk, aryLgcVar, tplNiiShp,\n aryAff, hdrMsk, outFormat='4D')\n print('------Done.')\n\n # If desired by user, also save RAM-saving version of nii\n if lgcSaveRam:\n strPthRamOut = cfg.strPathOut + '_EmpTc_saveRAM' + '.nii.gz'\n imgNii = nb.Nifti1Image(np.expand_dims(np.expand_dims(aryFunc, axis=1),\n axis=1),\n affine=np.eye(4))\n nb.save(imgNii, strPthRamOut)\n\n # %% Export fitted time courses and, if desired, model responses as nii\n\n # List with name suffices of output images:\n lstNiiNames = ['_FitTc']\n\n # Create full path names from nii file 
names and output path\n lstNiiNames = [cfg.strPathOut + strNii + '.nii.gz' for strNii in\n lstNiiNames]\n\n # export aryFitTc as a single 4D nii file\n print('---Save fitted time courses')\n export_nii(aryFitTc, lstNiiNames, aryLgcMsk, aryLgcVar, tplNiiShp,\n aryAff, hdrMsk, outFormat='4D')\n print('------Done.')\n\n if lgcMdlRsp:\n\n # Create full path name\n strNpyName = cfg.strPathOut + '_FitMdlRsp' + '.npy'\n\n # Save aryFitMdlRsp as npy file\n print('---Save fitted model responses')\n np.save(strNpyName, aryFitMdlRsp)\n print('------Done.')\n\n # Save the mask so we know which voxels these parameters belonged to\n strNpyMskName = cfg.strPathOut + '_FitMdlRsp_Mask' + '.npy'\n aryLgcMsk[aryLgcMsk] = aryLgcVar\n print('---Save mask for fitted model responses')\n np.save(strNpyMskName, aryLgcMsk)\n print('------Done.')\n\n # If desired by user, also save RAM-saving version of nii\n if lgcSaveRam:\n strPthRamOut = cfg.strPathOut + '_FitTc_saveRAM' + '.nii.gz'\n imgNii = nb.Nifti1Image(np.expand_dims(np.expand_dims(aryFitTc,\n axis=1),\n axis=1),\n affine=np.eye(4))\n nb.save(imgNii, strPthRamOut)", "response": "Save empirical and fitted time courses to nii file format."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nsimulates pRF response given pRF parameters and stimulus apertures.", "response": "def pyprf_sim(strPrior, strStmApr, lgcNoise=False, lgcRtnNrl=True,\n lstRat=None, lgcTest=False):\n \"\"\"\n Simulate pRF response given pRF parameters and stimulus apertures.\n\n Parameters\n ----------\n strPrior : str\n Absolute file path of config file used for pRF fitting.\n strStmApr : str\n Absolute file path to stimulus aperture used in in-silico experiment.\n lgcNoise : boolean\n Should noise be added to the simulated pRF time course. 
By default, no\n noise is added.\n lgcRtnNrl : boolean\n Should neural time course, unconvolved with hrf, be returned as well?\n lstRat : None or list\n Ratio of size of center to size of suppressive surround.\n lgcTest : boolean\n Whether this is a test (pytest). If yes, absolute path of pyprf library\n will be prepended to config file paths.\n\n Notes\n -----\n [1] This function does not return any arguments but, instead, saves nii\n files to disk.\n [2] strStmApr should be a path to a npy file that contains a 3D numpy\n array. This array consists of binary images in boolean array form that\n represent the stimulus aperture. Images are stacked along last axis.\n\n \"\"\"\n\n # %% Load configuration settings that were used for fitting\n\n # Load config parameters from csv file into dictionary:\n dicCnfg = load_config(strPrior, lgcTest=lgcTest)\n\n # Load config parameters from dictionary into namespace:\n cfg = cls_set_config(dicCnfg)\n\n # If suppressive surround flag is on, make sure to retrieve results from\n # that fitting\n if lstRat is not None:\n cfg.strPathOut = cfg.strPathOut + '_supsur'\n\n # %% Load previous pRF fitting results\n\n # Derive paths to the x, y, sigma winner parameters from pyprf_feature\n lstWnrPrm = [cfg.strPathOut + '_x_pos.nii.gz',\n cfg.strPathOut + '_y_pos.nii.gz',\n cfg.strPathOut + '_SD.nii.gz']\n\n # Check if fitting has been performed, i.e. whether parameter files exist\n # Throw error message if they do not exist.\n errorMsg = 'Files that should have resulted from fitting do not exist. 
\\\n \\nPlease perform pRF fitting first, calling e.g.: \\\n \\npyprf_feature -config /path/to/my_config_file.csv'\n assert os.path.isfile(lstWnrPrm[0]), errorMsg\n assert os.path.isfile(lstWnrPrm[1]), errorMsg\n assert os.path.isfile(lstWnrPrm[2]), errorMsg\n\n # Load the x, y, sigma winner parameters from pyprf_feature\n aryIntGssPrm = load_res_prm(lstWnrPrm,\n lstFlsMsk=[cfg.strPathNiiMask])[0][0]\n\n # Also load suppresive surround params if suppressive surround flag was on\n if lstRat is not None:\n # Load beta parameters estimates, aka weights, this is later needed to\n # scale responses of the center wrt to the surround\n lstPathBeta = [cfg.strPathOut + '_Betas.nii.gz']\n aryBetas = load_res_prm(lstPathBeta,\n lstFlsMsk=[cfg.strPathNiiMask])[0][0]\n # Load ratio of prf sizes\n lstPathRat = [cfg.strPathOut + '_Ratios.nii.gz']\n aryRat = load_res_prm(lstPathRat, lstFlsMsk=[cfg.strPathNiiMask])[0][0]\n\n # Some voxels were excluded because they did not have sufficient mean\n # and/or variance - exclude their initial parameters, too\n # Get inclusion mask and nii header\n aryLgcMsk, aryLgcVar, hdrMsk, aryAff, _, tplNiiShp = prep_func(\n cfg.strPathNiiMask, cfg.lstPathNiiFunc, varAvgThr=100.)\n # Apply inclusion mask\n aryIntGssPrm = aryIntGssPrm[aryLgcVar, :]\n if lstRat is not None:\n aryBetas = aryBetas[aryLgcVar, :]\n aryRat = aryRat[aryLgcVar]\n\n # %% Load stimulus aperture and create model responses to stimuli\n\n # Load stimulus aperture\n aryStmApr = np.load(strStmApr)\n # Which dimensions does the representation have in pixel space?\n tplStmApr = aryStmApr.shape[:2]\n\n # Convert winner parameters from degrees of visual angle to pixel\n vecIntX, vecIntY, vecIntSd = rmp_deg_pixel_xys(aryIntGssPrm[:, 0],\n aryIntGssPrm[:, 1],\n aryIntGssPrm[:, 2],\n tplStmApr,\n cfg.varExtXmin,\n cfg.varExtXmax,\n cfg.varExtYmin,\n cfg.varExtYmax)\n\n aryIntGssPrmPxl = np.column_stack((vecIntX, vecIntY, vecIntSd))\n\n # Create 2D Gauss model responses to spatial 
conditions.\n print('---Create 2D Gauss model responses to spatial conditions')\n aryMdlRsp = crt_mdl_rsp(aryStmApr, tplStmApr, aryIntGssPrmPxl, cfg.varPar)\n\n # If supsur flag was provided, also create responses with supsur params\n # and combine positive center response with negative surround response\n if lstRat is not None:\n aryIntGssPrmPxlSur = np.copy(aryIntGssPrmPxl)\n # Adjust pRF sizes using the ratio of pRF sizes\n aryIntGssPrmPxlSur[:, 2] = np.multiply(aryIntGssPrmPxlSur[:, 2],\n aryRat)\n aryMdlRspSur = crt_mdl_rsp(aryStmApr, tplStmApr, aryIntGssPrmPxlSur,\n cfg.varPar)\n # Now the responses of the center and the surround need to be combined\n # in a meaningful way. One way this could be done is to take the ratio\n # of gain parameters that were found when fitting (i.e. betas)\n varGainRat = np.divide(aryBetas[:, 0], aryBetas[:, 1])\n aryMdlRsp = np.subtract(aryMdlRsp,\n np.multiply(varGainRat, aryMdlRspSur))\n\n # %% Convolve time courses with hrf function\n\n # First temporally upsamle the model response\n aryMdlRspUps = np.repeat(aryMdlRsp, cfg.varTmpOvsmpl, axis=-1)\n # Convolve with hrf function\n arySimRsp = crt_prf_tc(aryMdlRspUps, aryMdlRsp.shape[-1], cfg.varTr,\n cfg.varTmpOvsmpl, 1, tplStmApr, cfg.varPar)\n # Squeeze simulated reponse. 
This step is necessary because crt_prf_tc is,\n # in principle, capable of convolving with deriavtes of canonical function\n if arySimRsp.shape[1] > 1:\n print('***WARNING: pyprf_sim expects 1 hrf function, currently***')\n arySimRsp = np.squeeze(arySimRsp)\n # Save memory by deleting upsampled time course\n del(aryMdlRspUps)\n\n # %% Add auto-correlated noise\n\n if lgcNoise:\n print('***Adding noise feature not yet implemented***')\n\n # %% Export simulated prf, and if desired neural, time courses as nii\n\n # List with name suffices of output images:\n lstNiiNames = ['_SimPrfTc']\n\n # Create full path names from nii file names and output path\n lstNiiNames = [cfg.strPathOut + strNii + '.nii.gz' for strNii in\n lstNiiNames]\n\n # export beta parameter as a single 4D nii file\n print('---Save simulated pRF time courses')\n export_nii(arySimRsp, lstNiiNames, aryLgcMsk, aryLgcVar, tplNiiShp,\n aryAff, hdrMsk, outFormat='4D')\n print('------Done.')\n\n if lgcRtnNrl:\n\n # List with name suffices of output images:\n lstNiiNames = ['_SimNrlTc']\n\n # Create full path names from nii file names and output path\n lstNiiNames = [cfg.strPathOut + strNii + '.nii.gz' for strNii in\n lstNiiNames]\n\n # export beta parameter as a single 4D nii file\n print('---Save simulated neural time courses')\n export_nii(aryMdlRsp, lstNiiNames, aryLgcMsk, aryLgcVar, tplNiiShp,\n aryAff, hdrMsk, outFormat='4D')\n print('------Done.')"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef get_nearest_by_numeric_key(data: dict, key: int) -> Any:\n return data.get(key, data[min(data.keys(), key=lambda k: abs(k - key))])", "response": "Return the dict element whose numeric key is closest to a target."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef resource(self, uri, methods=frozenset({'GET'}), host=None,\n strict_slashes=None, stream=False, version=None, name=None,\n **kwargs):\n \"\"\"\n Create a 
blueprint resource route from a decorated function.\n\n :param uri: endpoint at which the route will be accessible.\n :param methods: list of acceptable HTTP methods.\n :param host:\n :param strict_slashes:\n :param version:\n :param name: user defined route name for url_for\n :return: function or class instance\n\n Accepts any keyword argument that will be passed to the app resource.\n \"\"\"\n if strict_slashes is None:\n strict_slashes = self.strict_slashes\n\n def decorator(handler):\n self.resources.append((\n FutureRoute(handler, uri, methods, host, strict_slashes,\n stream, version, name),\n kwargs))\n\n return handler\n return decorator", "response": "Create a blueprint resource route from a function."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nadding a blueprint resource route to the app.", "response": "def add_resource(self, handler, uri, methods=frozenset({'GET'}),\n host=None, strict_slashes=None, version=None, name=None,\n **kwargs):\n \"\"\"\n Create a blueprint resource route from a function.\n\n :param uri: endpoint at which the route will be accessible.\n :param methods: list of acceptable HTTP methods.\n :param host:\n :param strict_slashes:\n :param version:\n :param name: user defined route name for url_for\n :return: function or class instance\n\n Accepts any keyword argument that will be passed to the app resource.\n \"\"\"\n self.resource(uri=uri, methods=methods, host=host,\n strict_slashes=strict_slashes, version=version,\n name=name, **kwargs)(handler)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _get_db():\n from .settings import settings\n mongo = settings.MONGODB\n\n if 'URI' in mongo and mongo['URI']:\n uri = mongo['URI']\n else:\n uri = 'mongodb://'\n\n if all(mongo.get(key) for key in ('USERNAME', 'PASSWORD')):\n uri += '{0}:{1}@'.format(mongo['USERNAME'], mongo['PASSWORD'])\n\n if 'HOSTS' in mongo and mongo['HOSTS']:\n uri 
+= ','.join(\n '{0}:{1}'.format(host, port)\n for (host, port) in zip(mongo['HOSTS'], mongo['PORTS'])\n )\n else:\n uri += '{0}:{1}'.format(mongo['HOST'], mongo.get('PORT', 27017))\n\n uri += '/' + mongo['DATABASE']\n\n if 'OPTIONS' in mongo and mongo['OPTIONS']:\n uri += '?{0}'.format('&'.join(mongo['OPTIONS']))\n\n client = ConnectionFailureProxy(MongoClient(uri, connect=False))\n database = client[parse_uri(uri)['database']]\n\n return database", "response": "Returns the connection to the database using the settings.MONGODB configuration."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncalculates voxel betas and R^2 for the best model.", "response": "def getBetas(idxPrc,\n aryPrfTc,\n lstAllMdlInd,\n aryFuncChnk,\n aryBstIndChnk,\n betaSw,\n queOut):\n \"\"\"Calculate voxel betas and R^2 for the best model.\n\n Parameters\n ----------\n idxPrc : TODO\n (?)\n aryPrfTc : np.array, shape (?)\n Population receptive field time courses.\n lstAllMdlInd : list\n List of the indices of all models.\n aryFuncChnk : TODO\n Chunk of something(?)\n aryBstIndChnk : np.array, shape (?)\n Points for every voxel to the index of the best model\n betaSw : str, iterator, or np.array, shape (?)\n Best beta correlation coefficients found in training.\n queOut : TODO\n Queue output (?)\n\n Notes\n -----\n This is done after fitting with cross validation, since during the\n fitting process, we never fit the model to the entire data.\n \"\"\"\n\n # get number of motion directions\n varNumMtnDrctns = aryPrfTc.shape[3]\n varNumVoxChnk = aryBstIndChnk.shape[0]\n # prepare array for best beta weights\n if type(betaSw) is sklearn.model_selection._split.KFold:\n aryEstimMtnCrvTrn = np.zeros((varNumVoxChnk, varNumMtnDrctns,\n betaSw.get_n_splits()), dtype='float32')\n aryEstimMtnCrvTst = np.zeros((varNumVoxChnk, varNumMtnDrctns,\n betaSw.get_n_splits()), dtype='float32')\n resTrn = np.zeros((varNumVoxChnk, betaSw.get_n_splits()),\n dtype='float32')\n resTst = 
np.zeros((varNumVoxChnk, betaSw.get_n_splits()),\n dtype='float32')\n aryErrorTrn = np.zeros((varNumVoxChnk), dtype='float32')\n aryErrorTst = np.zeros((varNumVoxChnk), dtype='float32')\n contrast = np.array([\n [1, 0, 0, 0, 0],\n [0, 1, 0, 0, 0],\n [0, 0, 1, 0, 0],\n [0, 0, 0, 1, 0],\n ])\n denomTrn = np.zeros((varNumVoxChnk, betaSw.get_n_splits(),\n len(contrast)), dtype='float32')\n denomTst = np.zeros((varNumVoxChnk, betaSw.get_n_splits(),\n len(contrast)), dtype='float32')\n\n elif type(betaSw) is np.ndarray and betaSw.dtype == 'bool':\n aryEstimMtnCrvTrn = np.zeros((varNumVoxChnk, varNumMtnDrctns,\n ), dtype='float32')\n aryEstimMtnCrvTst = np.zeros((varNumVoxChnk, varNumMtnDrctns,\n ), dtype='float32')\n resTrn = np.zeros((varNumVoxChnk),\n dtype='float32')\n resTst = np.zeros((varNumVoxChnk),\n dtype='float32')\n aryErrorTrn = np.zeros((varNumVoxChnk), dtype='float32')\n aryErrorTst = np.zeros((varNumVoxChnk), dtype='float32')\n contrast = np.array([\n [1, 0, 0, 0, 0],\n [0, 1, 0, 0, 0],\n [0, 0, 1, 0, 0],\n [0, 0, 0, 1, 0],\n ])\n denomTrn = np.zeros((varNumVoxChnk,\n len(contrast)), dtype='float32')\n denomTst = np.zeros((varNumVoxChnk,\n len(contrast)), dtype='float32')\n\n else:\n aryEstimMtnCrv = np.zeros((varNumVoxChnk, varNumMtnDrctns),\n dtype='float32')\n # prepare array for best residuals\n vecBstRes = np.zeros(varNumVoxChnk, dtype='float32')\n vecBstRes[:] = np.inf\n # prepare counter to check that every voxel is matched to one winner mdl\n vecLgcCounter = np.zeros(varNumVoxChnk, dtype='float32')\n\n # We reshape the voxel time courses, so that time goes down the column\n aryFuncChnk = aryFuncChnk.T\n\n # Change type to float 32:\n aryFuncChnk = aryFuncChnk.astype(np.float32)\n aryPrfTc = aryPrfTc.astype(np.float32)\n\n # Prepare status indicator if this is the first of the parallel processes:\n if idxPrc == 0:\n\n # We create a status indicator for the time consuming pRF model finding\n # algorithm. 
Number of steps of the status indicator:\n varStsStpSze = 20\n\n # Number of pRF models to fit:\n varNumMdls = len(lstAllMdlInd)\n\n # Vector with pRF values at which to give status feedback:\n vecStatPrf = np.linspace(0,\n varNumMdls,\n num=(varStsStpSze+1),\n endpoint=True)\n vecStatPrf = np.ceil(vecStatPrf)\n vecStatPrf = vecStatPrf.astype(int)\n\n # Vector with corresponding percentage values at which to give status\n # feedback:\n vecStatPrc = np.linspace(0,\n 100,\n num=(varStsStpSze+1),\n endpoint=True)\n vecStatPrc = np.ceil(vecStatPrc)\n vecStatPrc = vecStatPrc.astype(int)\n\n # Counter for status indicator:\n varCntSts01 = 0\n varCntSts02 = 0\n\n # Loop through pRF models:\n for idx, mdlInd in enumerate(lstAllMdlInd):\n\n # Status indicator (only used in the first of the parallel\n # processes):\n if idxPrc == 0:\n\n # Status indicator:\n if varCntSts02 == vecStatPrf[varCntSts01]:\n\n # Prepare status message:\n strStsMsg = ('---------Progress: ' +\n str(vecStatPrc[varCntSts01]) +\n ' % --- ' +\n str(vecStatPrf[varCntSts01]) +\n ' pRF models out of ' +\n str(varNumMdls))\n\n print(strStsMsg)\n\n # Only increment counter if the last value has not been\n # reached yet:\n if varCntSts01 < varStsStpSze:\n varCntSts01 = varCntSts01 + int(1)\n\n # check whether any model had this particular x, y, sigma combination\n # as its best model\n lgcTemp = [aryBstIndChnk == idx][0]\n\n if np.greater(np.sum(lgcTemp), 0):\n # get current design matrix\n aryDsgnTmp = aryPrfTc[mdlInd].T\n\n if betaSw == 'train': # training\n\n aryTmpPrmEst, aryTmpRes = np.linalg.lstsq(\n aryDsgnTmp, aryFuncChnk[:, lgcTemp])[0:2]\n aryEstimMtnCrv[lgcTemp, :] = aryTmpPrmEst.T\n vecBstRes[lgcTemp] = aryTmpRes\n\n elif type(betaSw) is np.ndarray and betaSw.dtype == 'float':\n # get beta weights for axis of motion tuning curves\n aryEstimMtnCrv[lgcTemp, :] = np.linalg.lstsq(\n aryDsgnTmp, aryFuncChnk[:, lgcTemp])[0].T\n # calculate prediction\n aryPredTc = np.dot(aryDsgnTmp,\n betaSw[lgcTemp, 
:].T)\n # Sum of squares:\n vecBstRes[lgcTemp] = np.sum((aryFuncChnk[:, lgcTemp] -\n aryPredTc) ** 2, axis=0)\n\n elif type(betaSw) is np.ndarray and betaSw.dtype == 'bool':\n # get beta weights for training\n betas, resTrn[lgcTemp] = np.linalg.lstsq(\n aryDsgnTmp[betaSw, :], aryFuncChnk[betaSw][:, lgcTemp])[0:2]\n aryEstimMtnCrvTrn[lgcTemp, :] = betas.T\n # get beta weights for validation\n betas, resTst[lgcTemp] = np.linalg.lstsq(\n aryDsgnTmp[~betaSw, :], aryFuncChnk[~betaSw][:, lgcTemp])[0:2]\n aryEstimMtnCrvTst[lgcTemp, :] = betas.T\n # calculate CC for training\n aryCcTrn = np.linalg.pinv(\n np.dot(aryDsgnTmp[betaSw, :].T,\n aryDsgnTmp[betaSw, :]))\n aryCcTst = np.linalg.pinv(\n np.dot(aryDsgnTmp[~betaSw, :].T,\n aryDsgnTmp[~betaSw, :]))\n # calculate Error for training\n aryErrorTrn[lgcTemp] = np.var(\n np.subtract(aryFuncChnk[betaSw][:, lgcTemp],\n np.dot(aryDsgnTmp[betaSw, :],\n aryEstimMtnCrvTrn[lgcTemp, :].T)), axis=0)\n # calculate Error for test\n aryErrorTst[lgcTemp] = np.var(\n np.subtract(aryFuncChnk[~betaSw][:, lgcTemp],\n np.dot(aryDsgnTmp[~betaSw, :],\n aryEstimMtnCrvTst[lgcTemp, :].T)), axis=0)\n # calculate denominator for training\n for indContr, contr in enumerate(contrast):\n denomTrn[lgcTemp, indContr] = np.sqrt(\n aryErrorTrn[lgcTemp] * np.dot(\n np.dot(contr, aryCcTrn), contr.T))\n denomTst[lgcTemp, indContr] = np.sqrt(\n aryErrorTst[lgcTemp] * np.dot(\n np.dot(contr, aryCcTst), contr.T))\n\n elif type(betaSw) is sklearn.model_selection._split.KFold:\n for idxCV, (idxTrn, idxVal) in enumerate(betaSw.split(aryDsgnTmp)):\n # get beta weights for training\n betas, resTrn[lgcTemp, idxCV] = np.linalg.lstsq(\n aryDsgnTmp[idxTrn], aryFuncChnk[idxTrn][:, lgcTemp])[0:2]\n aryEstimMtnCrvTrn[lgcTemp, :, idxCV] = betas.T\n # get beta weights for validation\n betas, resTst[lgcTemp, idxCV] = np.linalg.lstsq(\n aryDsgnTmp[idxVal], aryFuncChnk[idxVal][:, lgcTemp])[0:2]\n aryEstimMtnCrvTst[lgcTemp, :, idxCV] = betas.T\n # calculate CC for training\n 
aryCcTrn = np.linalg.pinv(\n np.dot(aryDsgnTmp[idxTrn].T,\n aryDsgnTmp[idxTrn]))\n aryCcTst = np.linalg.pinv(\n np.dot(aryDsgnTmp[idxVal].T,\n aryDsgnTmp[idxVal]))\n # calculate Error for training\n aryErrorTrn[lgcTemp] = np.var(\n np.subtract(aryFuncChnk[idxTrn][:, lgcTemp],\n np.dot(aryDsgnTmp[idxTrn],\n aryEstimMtnCrvTrn[lgcTemp, :, idxCV].T)), axis=0)\n # calculate Error for test\n aryErrorTst[lgcTemp] = np.var(\n np.subtract(aryFuncChnk[idxVal][:, lgcTemp],\n np.dot(aryDsgnTmp[idxVal],\n aryEstimMtnCrvTst[lgcTemp, :, idxCV].T)), axis=0)\n # calculate denominator for training\n for indContr, contr in enumerate(contrast):\n denomTrn[lgcTemp, idxCV, indContr] = np.sqrt(\n aryErrorTrn[lgcTemp] * np.dot(\n np.dot(contr, aryCcTrn), contr.T))\n denomTst[lgcTemp, idxCV, indContr] = np.sqrt(\n aryErrorTst[lgcTemp] * np.dot(\n np.dot(contr, aryCcTst), contr.T))\n\n # increase logical counter to verify later that every voxel\n # was visited only once\n vecLgcCounter[lgcTemp] += 1\n\n # Status indicator (only used in the first of the parallel\n # processes):\n if idxPrc == 0:\n # Increment status indicator counter:\n varCntSts02 = varCntSts02 + 1\n\n # check that every voxel was visited only once\n strErrMsg = ('It looks like at least one voxel was revisited more than once. 
' +\n 'Check whether the R2 was calculated correctly')\n assert np.sum(vecLgcCounter) == len(vecLgcCounter), strErrMsg\n\n if type(betaSw) is sklearn.model_selection._split.KFold:\n\n # calculate t-values\n aryTvalsTrn = np.empty((varNumVoxChnk, contrast.shape[0],\n betaSw.get_n_splits()))\n aryTvalsTst = np.empty((varNumVoxChnk, contrast.shape[0],\n betaSw.get_n_splits()))\n\n for ind1, contr in enumerate(contrast):\n for ind2 in range(betaSw.get_n_splits()):\n aryTvalsTrn[:, ind1, ind2] = np.divide(\n np.dot(contr, aryEstimMtnCrvTrn[:, :, ind2].T),\n denomTrn[:, ind2, ind1])\n aryTvalsTst[:, ind1, ind2] = np.divide(\n np.dot(contr, aryEstimMtnCrvTst[:, :, ind2].T),\n denomTst[:, ind2, ind1])\n\n # Output list:\n lstOut = [idxPrc,\n aryEstimMtnCrvTrn,\n aryEstimMtnCrvTst,\n aryTvalsTrn,\n aryTvalsTst,\n ]\n queOut.put(lstOut)\n\n elif type(betaSw) is np.ndarray and betaSw.dtype == 'bool':\n # calculate t-values\n aryTvalsTrn = np.empty((varNumVoxChnk, contrast.shape[0],\n ))\n aryTvalsTst = np.empty((varNumVoxChnk, contrast.shape[0],\n ))\n\n for ind1, contr in enumerate(contrast):\n aryTvalsTrn[:, ind1] = np.divide(\n np.dot(contr, aryEstimMtnCrvTrn.T),\n denomTrn[:, ind1])\n aryTvalsTst[:, ind1] = np.divide(\n np.dot(contr, aryEstimMtnCrvTst.T),\n denomTst[:, ind1])\n\n # Output list:\n lstOut = [idxPrc,\n aryEstimMtnCrvTrn,\n aryEstimMtnCrvTst,\n aryTvalsTrn,\n aryTvalsTst,\n ]\n queOut.put(lstOut)\n\n else:\n # After finding the best fitting model for each voxel, we still have to\n # calculate the coefficient of determination (R-squared) for each voxel. We\n # start by calculating the total sum of squares (i.e. the deviation of the\n # data from the mean). 
The mean of each time course:\n vecFuncMean = np.mean(aryFuncChnk, axis=0)\n # Deviation from the mean for each datapoint:\n vecFuncDev = np.subtract(aryFuncChnk, vecFuncMean[None, :])\n # Sum of squares:\n vecSsTot = np.sum(np.power(vecFuncDev,\n 2.0),\n axis=0)\n # Coefficient of determination:\n vecBstR2 = np.subtract(1.0,\n np.divide(vecBstRes,\n vecSsTot))\n # Output list:\n lstOut = [idxPrc,\n vecBstR2,\n aryEstimMtnCrv]\n queOut.put(lstOut)"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef nav_creators(self, article):\n creator_list = []\n for contrib_group in article.metadata.front.article_meta.contrib_group:\n for contrib in contrib_group.contrib:\n if not contrib.attrs['contrib-type'] == 'author':\n continue\n if contrib.collab:\n auth = etree.tostring(contrib.collab[0].node, method='text', encoding='utf-8')\n file_as = auth\n elif contrib.anonymous:\n auth = 'Anonymous'\n file_as = auth\n else:\n name = contrib.name[0] # Work with only first name listed\n surname = name.surname.text\n given = name.given_names\n if given: # Given is optional\n if given.text: # Odd instances of empty tags\n auth = ' '.join([surname, given.text])\n given_initial = given.text[0]\n file_as = ', '.join([surname, given_initial])\n else:\n auth = surname\n file_as = auth\n else:\n auth = surname\n file_as = auth\n new_creator = creator(auth, 'aut', file_as)\n creator_list.append(new_creator)\n return creator_list", "response": "This method returns the list of creator objects that are available for the article."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef nav_title(self, article):\n article_title = article.metadata.front.article_meta.title_group.article_title.node\n return str(etree.tostring(article_title, method='text', encoding='utf-8'), encoding='utf-8')", "response": "This method returns the string representation of the article s title."} {"SOURCE": 
"codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef format_citation(citation, citation_type=None):\n cite_types = {'book': self.format_book_citation,\n 'commun': self.format_commun_citation,\n 'confproc': self.format_confproc_citation,\n 'discussion': self.format_discussion_citation,\n 'gov': self.format_gov_citation,\n 'journal': self.format_journal_citation,\n 'list': self.format_list_citation,\n 'other': self.format_other_citation,\n 'patent': self.format_patent_citation,\n 'thesis': self.format_thesis_citation,\n 'web': self.format_web_citation,\n '': self.format_other_citation, # Empty becomes 'other'\n None: self.format_other_citation} # None becomes 'other'\n\n #Only check if no citation_type value is passed\n if citation_type is None:\n #Get the citation-type attribute value\n if 'citation-type' in citation.attrib:\n citation_type = citation.attrib['citation-type']\n\n #Pass the citation to the appropriate function and return result\n return cite_types[citation_type](citation)\n\n @staticmethod\n def format_book_citation(self, citation):\n \"\"\"\n citation-type=\\\"book\\\"\n \"\"\"\n\n #Get the count of authors\n author_group_count = int(citation.xpath('count(person-group) + count(collab)'))\n #Detect if there are non-authors\n if citation.xpath('person-group[@person-group-type!=\\'author\\']'):\n non_authors = True\n else:\n non_authors = False\n #Detect article-title\n if citation.xpath('article-title'):\n article_title = True\n else:\n article_title = False\n #Find out if there is at least one author or compiler\n auth_or_comp = False\n for person_group in citation.findall('person-group'):\n if 'person-group-type' in person_group.attrib:\n if person_group.attrib['person-group-type'] in ['author', 'compiler']:\n auth_or_comp = True\n break\n\n #These pieces of information allow us to provide two special use cases\n #and one general use case.\n #First special case:\n if author_group_count > 0 and 
non_authors and article_title:\n pass\n\n #Second special case\n elif auth_or_comp:\n pass\n\n #General case\n else:\n pass\n\n @staticmethod\n def format_commun_citation(self, citation):\n \"\"\"\n citation-type=\\\"commun\\\"\n \"\"\"\n\n @staticmethod\n def format_confproc_citation(self, citation):\n \"\"\"\n citation-type=\\\"confproc\\\"\n \"\"\"\n\n @staticmethod\n def format_discussion_citation(self, citation):\n \"\"\"\n citation-type=\\\"discussion\\\"\n \"\"\"\n\n @staticmethod\n def format_gov_citation(self, citation):\n \"\"\"\n citation-type=\\\"gov\\\"\n \"\"\"\n\n @staticmethod\n def format_journal_citation(self, citation):\n \"\"\"\n citation-type=\\\"journal\\\"\n \"\"\"\n\n @staticmethod\n def format_list_citation(self, citation):\n \"\"\"\n citation-type=\\\"list\\\"\n \"\"\"\n @staticmethod\n def format_other_citation(self, citation):\n \"\"\"\n citation-type=\\\"other\\\"\n \"\"\"\n\n @staticmethod\n def format_patent_citation(self, citation):\n \"\"\"\n citation-type=\\\"patent\\\"\n \"\"\"\n\n @staticmethod\n def format_thesis_citation(self, citation):\n \"\"\"\n citation-type=\\\"thesis\\\"\n \"\"\"\n #Treat the same as \"book\"\n return format_book_citation(citation)\n\n @staticmethod\n def format_web_citation(self, citation):\n \"\"\"\n citation-type=\\\"web\\\"\n \"\"\"", "response": "This method formats the given citation into a single - level citation list."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef plos_doi_to_xmlurl(doi_string):\n #Create URL to request DOI resolution from http://dx.doi.org\n doi_url = 'http://dx.doi.org/{0}'.format(doi_string)\n log.debug('DOI URL: {0}'.format(doi_url))\n #Open the page, follow the redirect\n try:\n resolved_page = urllib.request.urlopen(doi_url)\n except urllib.error.URLError as err:\n print('Unable to resolve DOI URL, or could not connect')\n raise err\n else:\n #Given the redirection, attempt to shape new request for 
PLoS servers\n resolved_address = resolved_page.geturl()\n log.debug('DOI resolved to {0}'.format(resolved_address))\n parsed = urllib.parse.urlparse(resolved_address)\n xml_url = '{0}://{1}'.format(parsed.scheme, parsed.netloc)\n xml_url += '/article/fetchObjectAttachment.action?uri='\n xml_path = parsed.path.replace(':', '%3A').replace('/', '%2F')\n xml_path = xml_path.split('article%2F')[1]\n xml_url += '{0}{1}'.format(xml_path, '&representation=XML')\n log.debug('Shaped PLoS request for XML {0}'.format(xml_url))\n #Return this url to the calling function\n return xml_url", "response": "Given a PLoS DOI into a URL path to the XML file."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef url_input(url_string, download=True):\n log.debug('URL Input - {0}'.format(url_string))\n try:\n open_xml = urllib.request.urlopen(url_string)\n except urllib.error.URLError as err:\n print('utils.input.url_input received a bad URL, or could not connect')\n raise err\n else:\n #Employ a quick check on the mimetype of the link\n if not open_xml.headers['Content-Type'] == 'text/xml':\n sys.exit('URL request does not appear to be XML')\n filename = open_xml.headers['Content-Disposition'].split('\\\"')[1]\n if download:\n with open(filename, 'wb') as xml_file:\n xml_file.write(open_xml.read())\n return openaccess_epub.utils.file_root_name(filename)", "response": "This method expects a URL string and returns the root name of the xml file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nvalidate a given listing against the CWL job listing schema.", "response": "def _red_listing_validation(key, listing):\n \"\"\"\n Raises an RedValidationError, if the given listing does not comply with cwl_job_listing_schema.\n If listing is None or an empty list, no exception is thrown.\n\n :param key: The input key to build an error message if needed.\n :param listing: The listing to validate\n :raise 
RedValidationError: If the given listing does not comply with cwl_job_listing_schema\n \"\"\"\n\n if listing:\n try:\n jsonschema.validate(listing, cwl_job_listing_schema)\n except ValidationError as e:\n raise RedValidationError('REDFILE listing of input \"{}\" does not comply with jsonschema: {}'\n .format(key, e.context))"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns a list of mounting connectors from the red data.", "response": "def red_get_mount_connectors(red_data, ignore_outputs):\n \"\"\"\n Returns a list of mounting connectors\n\n :param red_data: The red data to be searched\n :param ignore_outputs: If outputs should be ignored\n :return: A list of connectors with active mount option.\n \"\"\"\n\n keys = []\n\n batches = red_data.get('batches')\n inputs = red_data.get('inputs')\n if batches:\n for batch in batches:\n keys.extend(red_get_mount_connectors_from_inputs(batch['inputs']))\n elif inputs:\n keys.extend(red_get_mount_connectors_from_inputs(inputs))\n\n if not ignore_outputs:\n outputs = red_data.get('outputs')\n if batches:\n for batch in batches:\n batch_outputs = batch.get('outputs')\n if batch_outputs:\n keys.extend(red_get_mount_connectors_from_outputs(batch_outputs))\n\n elif outputs:\n keys.extend(red_get_mount_connectors_from_outputs(outputs))\n\n return keys"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ninvoking the cleanup functions for all inputs.", "response": "def cleanup(connector_manager, red_data, tmp_dir):\n \"\"\"\n Invokes the cleanup functions for all inputs.\n \"\"\"\n for key, arg in red_data['inputs'].items():\n val = arg\n\n if isinstance(arg, list):\n for index, i in enumerate(arg):\n if not isinstance(i, dict):\n continue\n\n # connector_class should be one of 'File' or 'Directory'\n connector_class = i['class']\n input_key = '{}_{}'.format(key, index)\n path = os.path.join(tmp_dir, input_key)\n connector_data = i['connector']\n internal = 
{URL_SCHEME_IDENTIFIER: path}\n\n if connector_class == 'File':\n connector_manager.receive_cleanup(connector_data, input_key, internal)\n elif connector_class == 'Directory':\n connector_manager.receive_directory_cleanup(connector_data, input_key, internal)\n\n elif isinstance(arg, dict):\n # connector_class should be one of 'File' or 'Directory'\n connector_class = arg['class']\n input_key = key\n path = os.path.join(tmp_dir, input_key)\n connector_data = val['connector']\n internal = {URL_SCHEME_IDENTIFIER: path}\n\n if connector_class == 'File':\n connector_manager.receive_cleanup(connector_data, input_key, internal)\n elif connector_class == 'Directory':\n connector_manager.receive_directory_cleanup(connector_data, input_key, internal)\n\n try:\n os.rmdir(tmp_dir)\n except (OSError, FileNotFoundError):\n # Maybe, raise a warning here, because not all connectors have cleaned up their contents correctly.\n pass"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef _execute_connector(connector_command, top_level_argument, *file_contents, listing=None):\n # create temp_files for every file_content\n temp_files = []\n for file_content in file_contents:\n if file_content is None:\n continue\n tmp_file = tempfile.NamedTemporaryFile('w')\n json.dump(file_content, tmp_file)\n tmp_file.flush()\n temp_files.append(tmp_file)\n\n tmp_listing_file = None\n if listing:\n tmp_listing_file = tempfile.NamedTemporaryFile('w')\n json.dump(listing, tmp_listing_file)\n tmp_listing_file.flush()\n\n command = [connector_command, top_level_argument]\n command.extend([t.name for t in temp_files])\n\n if tmp_listing_file:\n command.append('--listing {}'.format(tmp_listing_file.name))\n\n result = execute(' '.join(command))\n\n # close temp_files\n for temp_file in temp_files:\n temp_file.close()\n\n if tmp_listing_file:\n tmp_listing_file.close()\n\n return result['returnCode'], result['stdErr']", "response": "Executes a 
connector command and returns the return code and stderr of the connector command."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef directory_listing_content_check(directory_path, listing):\n if listing:\n for sub in listing:\n path = os.path.join(directory_path, sub['basename'])\n if sub['class'] == 'File':\n if not os.path.isfile(path):\n return 'listing contains \"{}\" but this file could not be found on disk.'.format(path)\n elif sub['class'] == 'Directory':\n if not os.path.isdir(path):\n return 'listing contains \"{}\" but this directory could not be found on disk'.format(path)\n listing = sub.get('listing')\n if listing:\n return ConnectorManager.directory_listing_content_check(path, listing)\n return None", "response": "Checks if a given listing is present under the given directory path."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_cuda_devices():\n\n devices = []\n\n try:\n import pycuda.autoinit\n import pycuda.driver as cuda\n\n for device_id in range(cuda.Device.count()):\n vram = cuda.Device(device_id).total_memory()\n\n devices.append(GPUDevice(device_id, vram))\n except ImportError:\n raise InsufficientGPUError('No Nvidia-GPUs could be found, because \"pycuda\" could not be imported.')\n\n return devices", "response": "Imports pycuda at runtime and reads GPU information."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ndetermining sufficient GPUs for the given requirements and returns a list of GPUDevices.", "response": "def match_gpus(available_devices, requirements):\n \"\"\"\n Determines sufficient GPUs for the given requirements and returns a list of GPUDevices.\n If there aren't sufficient GPUs a InsufficientGPUException is thrown.\n\n :param available_devices: A list of GPUDevices\n :param requirements: A list of GPURequirements\n\n :return: A list of sufficient devices\n 
\"\"\"\n\n if not requirements:\n return []\n\n if not available_devices:\n raise InsufficientGPUError(\"No GPU devices available, but {} devices required.\".format(len(requirements)))\n\n available_devices = available_devices.copy()\n\n used_devices = []\n\n for req in requirements:\n dev = search_device(req, available_devices)\n if dev:\n used_devices.append(dev)\n available_devices.remove(dev)\n else:\n raise InsufficientGPUError(\"Not all GPU requirements could be fulfilled.\")\n\n return used_devices"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_gpu_requirements(gpus_reqs):\n requirements = []\n\n if gpus_reqs:\n if type(gpus_reqs) is dict:\n count = gpus_reqs.get('count')\n if count:\n for i in range(count):\n requirements.append(GPURequirement())\n elif type(gpus_reqs) is list:\n for gpu_req in gpus_reqs:\n requirements.append(GPURequirement(min_vram=gpu_req['minVram']))\n return requirements\n else:\n # If no requirements are supplied\n return []", "response": "Extracts the GPU from a dictionary requirements as list of GPURequirements."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef set_nvidia_environment_variables(environment, gpu_ids):\n\n if gpu_ids:\n nvidia_visible_devices = \"\"\n for gpu_id in gpu_ids:\n nvidia_visible_devices += \"{},\".format(gpu_id)\n environment[\"NVIDIA_VISIBLE_DEVICES\"] = nvidia_visible_devices", "response": "Sets the environment variables that are set in the Nvidia - GPUs environment variable."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef is_sufficient(self, device):\n\n sufficient = True\n if (self.min_vram is not None) and (device.vram < self.min_vram):\n sufficient = False\n\n return sufficient", "response": "Returns whether the device is sufficient for this requirement."} {"SOURCE": "codesearchnet", "instruction": "Can you tell 
what is the following Python 3 function doing\ndef cache_location():\n '''Cross-platform placement of cached files'''\n plat = platform.platform()\n log.debug('Platform read as: {0}'.format(plat))\n if plat.startswith('Windows'):\n log.debug('Windows platform detected')\n return os.path.join(os.environ['APPDATA'], 'OpenAccess_EPUB')\n elif plat.startswith('Darwin'):\n log.debug('Mac platform detected')\n elif plat.startswith('Linux'):\n log.debug('Linux platform detected')\n else:\n log.warning('Unhandled platform for cache_location')\n\n #This code is written for Linux and Mac, don't expect success for others\n path = os.path.expanduser('~')\n if path == '~':\n path = os.path.expanduser('~user')\n if path == '~user':\n log.critical('Could not resolve the correct cache location')\n sys.exit('Could not resolve the correct cache location')\n cache_loc = os.path.join(path, '.OpenAccess_EPUB')\n log.debug('Cache located: {0}'.format(cache_loc))\n return cache_loc", "response": "Cross - platform placement of cached files"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef load_config_module():\n import imp\n config_path = config_location()\n try:\n config = imp.load_source('config', config_path)\n except IOError:\n log.critical('Config file not found. oaepub exiting...')\n sys.exit('Config file not found. Please run \\'oaepub configure\\'')\n else:\n log.debug('Config file loaded from {0}'.format(config_path))\n return config", "response": "Load the config. 
py file and return the module object."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef get_absolute_path(some_path):\n if os.path.isabs(some_path):\n return some_path\n else:\n return evaluate_relative_path(os.getcwd(), some_path)", "response": "This function returns an absolute path for the input path."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef file_root_name(name):\n base = os.path.basename(name)\n root = os.path.splitext(base)[0]\n if not root:\n warning = 'file_root_name returned an empty root name from \\\"{0}\\\"'\n log.warning(warning.format(name))\n return root", "response": "Returns the root name of a file from a full file path."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef files_with_ext(extension, directory='.', recursive=False):\n if recursive:\n log.info('Recursively searching {0} for files with extension \"{1}\"'.format(directory, extension))\n for dirname, subdirnames, filenames in os.walk(directory):\n for filename in filenames:\n filepath = os.path.join(dirname, filename)\n _root, ext = os.path.splitext(filepath)\n if extension.lower() == ext.lower():\n yield filepath\n\n else:\n log.info('Looking in {0} for files with extension: \"{1}\"'.format(directory, extension))\n for name in os.listdir(directory):\n filepath = os.path.join(directory, name)\n if not os.path.isfile(filepath): # Skip non-files\n continue\n _root, ext = os.path.splitext(filepath)\n if extension.lower() == ext.lower():\n yield filepath", "response": "Generator function that will iterate over all files in the specified directory and return a path to the files which possess a matching extension."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nchecking if a directory already exists and if yes deletes it if it does not.", "response": "def dir_exists(directory):\n 
\"\"\"\n If a directory already exists that will be overwritten by some action, this\n will ask the user whether or not to continue with the deletion.\n\n If the user responds affirmatively, then the directory will be removed. If\n the user responds negatively, then the process will abort.\n \"\"\"\n log.info('Directory exists! Asking the user')\n reply = input('''The directory {0} already exists.\nIt will be overwritten if the operation continues.\nReplace? [Y/n]'''.format(directory))\n if reply.lower() in ['y', 'yes', '']:\n shutil.rmtree(directory)\n os.makedirs(directory)\n else:\n log.critical('Aborting process, user declined overwriting {0}'.format(directory))\n sys.exit('Aborting process!')"} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\ngetting the nearest dict entry based on latitude and longitude.", "response": "def get_nearest_by_coordinates(\n data: list, latitude_key: str, longitude_key: str,\n target_latitude: float, target_longitude: float) -> Any:\n \"\"\"Get the closest dict entry based on latitude/longitude.\"\"\"\n return min(\n data,\n key=lambda p: haversine(\n target_latitude,\n target_longitude,\n float(p[latitude_key]),\n float(p[longitude_key])\n ))"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ndetermine the distance between two latitude and longitude pairs.", "response": "def haversine(lat1: float, lon1: float, lat2: float, lon2: float) -> float:\n \"\"\"Determine the distance between two latitude/longitude pairs.\"\"\"\n lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2])\n\n # haversine formula\n dlon = lon2 - lon1\n dlat = lat2 - lat1\n calc_a = sin(dlat / 2)**2 + cos(lat1) * cos(lat2) * sin(dlon / 2)**2\n calc_c = 2 * asin(sqrt(calc_a))\n\n return 6371 * calc_c"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef in_file(self, fn: str) -> Iterator[InsertionPoint]:\n 
logger.debug(\"finding insertion points in file: %s\", fn)\n yield from self.__file_insertions.get(fn, [])", "response": "Returns an iterator over all of the insertion points in a given file."} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nreturn an iterator over all of the insertion points located at a specific line.", "response": "def at_line(self, line: FileLine) -> Iterator[InsertionPoint]:\n \"\"\"\n Returns an iterator over all of the insertion points located at a\n given line.\n \"\"\"\n logger.debug(\"finding insertion points at line: %s\", str(line))\n filename = line.filename # type: str\n line_num = line.num # type: int\n for ins in self.in_file(filename):\n if line_num == ins.location.line:\n logger.debug(\"found insertion point at line [%s]: %s\",\n str(line), ins)\n yield ins"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getUnitCost(self, CorpNum):\n\n result = self._httpget('/CloseDown/UnitCost', CorpNum)\n\n return float(result.unitCost)", "response": "Get UnitCost of a particular item."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef checkCorpNums(self, MemberCorpNum, CorpNumList):\n if CorpNumList == None or len(CorpNumList) < 1:\n raise PopbillException(-99999999,\"\uc870\uc8c4\ud560 \uc0ac\uc5c5\uc790\ubc88\ud638 \ubaa9\ub85d\uc774 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\n\n postData = self._stringtify(CorpNumList)\n\n return self._httppost('/CloseDown',postData,MemberCorpNum)", "response": "\ud734\ud3d0\uc5c5\uc870\ud68c \ub300\ub7c9 \ud655\uc778, \ucd5c\ub300 1000\uac74\n args\n MemberCorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\n CorpNumList : \uc870\ud68c\ud560 \uc0ac\uc5c5\uc790\ubc88\ud638 \ubc30\uc5f4\n return\n \ud734\ud3d0\uc5c5\uc815\ubcf4 Object as List\n raise\n PopbillException"} {"SOURCE": "codesearchnet", "instruction": "How would you explain 
what the following Python 3 function does\ndef loadPng(varNumVol, tplPngSize, strPathPng):\n    print('------Load PNGs')\n    # Create list of png files to load:\n    lstPngPaths = [None] * varNumVol\n    for idx01 in range(0, varNumVol):\n        lstPngPaths[idx01] = (strPathPng + str(idx01) + '.png')\n\n    # Load png files. The png data will be saved in a numpy array of the\n    # following order: aryPngData[x-pixel, y-pixel, PngNumber]. The\n    # sp.misc.imread function actually contains three values per pixel (RGB),\n    # but since the stimuli are black-and-white, any one of these is sufficient\n    # and we discard the others.\n    aryPngData = np.zeros((tplPngSize[0],\n                           tplPngSize[1],\n                           varNumVol))\n    for idx01 in range(0, varNumVol):\n        aryPngData[:, :, idx01] = np.array(Image.open(lstPngPaths[idx01]))\n\n    # Convert RGB values (0 to 255) to integer ones and zeros:\n    aryPngData = (aryPngData > 0).astype(int)\n\n    return aryPngData", "response": "Load PNG files and return a binarized 3d numpy array of shape [x-pixels, y-pixels, n_volumes]."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nloading presentation order of motion directions.", "response": "def loadPrsOrd(vecRunLngth, strPathPresOrd, vecVslStim):\n    \"\"\"Load presentation order of motion directions.\n\n    Parameters\n    ----------\n    vecRunLngth : list\n        Number of volumes in every run\n    strPathPresOrd : str\n        Path to the npy vector containing order of presented motion directions.\n    vecVslStim: list\n        Key of (stimulus) condition presented in every run\n    Returns\n    -------\n    aryPresOrdAprt : 1d numpy array, shape [n_vols]\n        Presentation order of aperture position.\n    aryPresOrdMtn : 1d numpy array, shape [n_vols]\n        Presentation order of motion direction.\n    \"\"\"\n    print('------Load presentation order of motion directions')\n    aryPresOrd = np.empty((0, 2))\n    for idx01 in range(0, len(vecRunLngth)):\n        # reconstruct file name\n        # ---> consider: some runs were shorter than others (replace next row)\n        filename1 = (strPathPresOrd + 
str(vecVslStim[idx01]) +\n '.pickle')\n # filename1 = (strPathPresOrd + str(idx01+1) + '.pickle')\n # load array\n with open(filename1, 'rb') as handle:\n array1 = pickle.load(handle)\n tempCond = array1[\"Conditions\"]\n tempCond = tempCond[:vecRunLngth[idx01], :]\n # add temp array to aryPresOrd\n aryPresOrd = np.concatenate((aryPresOrd, tempCond), axis=0)\n aryPresOrdAprt = aryPresOrd[:, 0].astype(int)\n aryPresOrdMtn = aryPresOrd[:, 1].astype(int)\n\n return aryPresOrdAprt, aryPresOrdMtn"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\ncreate pixel - wise boxcar functions.", "response": "def crtPwBoxCarFn(varNumVol, aryPngData, aryPresOrd, vecMtDrctn):\n \"\"\"Create pixel-wise boxcar functions.\n\n Parameters\n ----------\n input1 : 2d numpy array, shape [n_samples, n_measurements]\n Description of input 1.\n input2 : float, positive\n Description of input 2.\n Returns\n -------\n data : 2d numpy array, shape [n_samples, n_measurements]\n Closed data.\n Reference\n ---------\n [1]\n \"\"\"\n print('------Create pixel-wise boxcar functions')\n aryBoxCar = np.empty(aryPngData.shape[0:2] + (len(vecMtDrctn),) +\n (varNumVol,), dtype='int64')\n for ind, num in enumerate(vecMtDrctn):\n aryCondTemp = np.zeros((aryPngData.shape), dtype='int64')\n lgcTempMtDrctn = [aryPresOrd == num][0]\n aryCondTemp[:, :, lgcTempMtDrctn] = np.copy(\n aryPngData[:, :, lgcTempMtDrctn])\n aryBoxCar[:, :, ind, :] = aryCondTemp\n\n return aryBoxCar"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef crtGauss2D(varSizeX, varSizeY, varPosX, varPosY, varSd):\n varSizeX = int(varSizeX)\n varSizeY = int(varSizeY)\n\n # aryX and aryY are in reversed order, this seems to be necessary:\n aryY, aryX = sp.mgrid[0:varSizeX,\n 0:varSizeY]\n\n # The actual creation of the Gaussian array:\n aryGauss = (\n (np.square((aryX - varPosX)) + np.square((aryY - varPosY))) /\n (2.0 * np.square(varSd))\n )\n aryGauss = 
np.exp(-aryGauss) / (2 * np.pi * np.square(varSd))\n\n return aryGauss", "response": "Create 2D Gaussian kernel."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef cnvlGauss2D(idxPrc, aryBoxCar, aryMdlParamsChnk, tplPngSize, varNumVol,\n queOut):\n \"\"\"Spatially convolve boxcar functions with 2D Gaussian.\n\n Parameters\n ----------\n idxPrc : 2d numpy array, shape [n_samples, n_measurements]\n Description of input 1.\n aryBoxCar : float, positive\n Description of input 2.\n aryMdlParamsChnk : 2d numpy array, shape [n_samples, n_measurements]\n Description of input 1.\n tplPngSize : float, positive\n Description of input 2.\n varNumVol : 2d numpy array, shape [n_samples, n_measurements]\n Description of input 1.\n queOut : float, positive\n Description of input 2.\n Returns\n -------\n data : 2d numpy array, shape [n_samples, n_measurements]\n Closed data.\n Reference\n ---------\n [1]\n \"\"\"\n # Number of combinations of model parameters in the current chunk:\n varChnkSze = np.size(aryMdlParamsChnk, axis=0)\n\n # Determine number of motion directions\n varNumMtnDrtn = aryBoxCar.shape[2]\n\n # Output array with pRF model time courses:\n aryOut = np.zeros([varChnkSze, varNumMtnDrtn, varNumVol])\n\n # Loop through different motion directions:\n for idxMtn in range(0, varNumMtnDrtn):\n # Loop through combinations of model parameters:\n for idxMdl in range(0, varChnkSze):\n\n # Spatial parameters of current model:\n varTmpX = aryMdlParamsChnk[idxMdl, 1]\n varTmpY = aryMdlParamsChnk[idxMdl, 2]\n varTmpSd = aryMdlParamsChnk[idxMdl, 3]\n\n # Create pRF model (2D):\n aryGauss = crtGauss2D(tplPngSize[0],\n tplPngSize[1],\n varTmpX,\n varTmpY,\n varTmpSd)\n\n # Multiply pixel-time courses with Gaussian pRF models:\n aryPrfTcTmp = np.multiply(aryBoxCar[:, :, idxMtn, :],\n aryGauss[:, :, None])\n\n # Calculate sum across x- and y-dimensions - the 'area under the\n # Gaussian surface'. 
This is essentially an unscaled version of the\n            # pRF time course model (i.e. not yet scaled for size of the pRF).\n            aryPrfTcTmp = np.sum(aryPrfTcTmp, axis=(0, 1))\n\n            # Put model time courses into the function's output array:\n            aryOut[idxMdl, idxMtn, :] = aryPrfTcTmp\n\n    # Put column with the indices of model-parameter-combinations into the\n    # output array (in order to be able to put the pRF model time courses into\n    # the correct order after the parallelised function):\n    lstOut = [idxPrc,\n              aryOut]\n\n    # Put output to queue:\n    queOut.put(lstOut)", "response": "Spatially convolve boxcar functions with 2D Gaussian."} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ncreate neural time courses from pixel-wise boxcar functions.", "response": "def crtPrfNrlTc(aryBoxCar, varNumMtDrctn, varNumVol, tplPngSize, varNumX,\n                varExtXmin, varExtXmax, varNumY, varExtYmin, varExtYmax,\n                varNumPrfSizes, varPrfStdMin, varPrfStdMax, varPar):\n    \"\"\"Create neural model time courses from pixel-wise boxcar functions.\n\n    Parameters\n    ----------\n    aryBoxCar : 4d numpy array, shape [n_x_pix, n_y_pix, n_mtn_dir, n_vol]\n        Description of input 1.\n    varNumMtDrctn : float, positive\n        Description of input 2.\n    varNumVol : float, positive\n        Description of input 2.\n    tplPngSize : tuple\n        Description of input 2.\n    varNumX : float, positive\n        Description of input 2.\n    varExtXmin : float, positive\n        Description of input 2.\n    varExtXmax : float, positive\n        Description of input 2.\n    varNumY : float, positive\n        Description of input 2.\n    varExtYmin : float, positive\n        Description of input 2.\n    varExtYmax : float, positive\n        Description of input 2.\n    varNumPrfSizes : float, positive\n        Description of input 2.\n    varPrfStdMin : float, positive\n        Description of input 2.\n    varPrfStdMax : float, positive\n        Description of input 2.\n    varPar : float, positive\n        Description of input 2.\n    Returns\n    -------\n    aryNrlTc : 5d numpy array, shape [n_x_pos, 
n_y_pos, n_sd, n_mtn_dir, n_vol]\n Closed data.\n Reference\n ---------\n [1]\n \"\"\"\n print('------Create neural time course models')\n\n # Vector with the x-indicies of the positions in the super-sampled visual\n # space at which to create pRF models.\n vecX = np.linspace(0, (tplPngSize[0] - 1), varNumX, endpoint=True)\n\n # Vector with the y-indicies of the positions in the super-sampled visual\n # space at which to create pRF models.\n vecY = np.linspace(0, (tplPngSize[1] - 1), varNumY, endpoint=True)\n\n # We calculate the scaling factor from degrees of visual angle to pixels\n # separately for the x- and the y-directions (the two should be the same).\n varDgr2PixX = tplPngSize[0] / (varExtXmax - varExtXmin)\n varDgr2PixY = tplPngSize[1] / (varExtYmax - varExtYmin)\n\n # Check whether varDgr2PixX and varDgr2PixY are similar:\n strErrMsg = 'ERROR. The ratio of X and Y dimensions in stimulus ' + \\\n 'space (in degrees of visual angle) and the ratio of X and Y ' + \\\n 'dimensions in the upsampled visual space do not agree'\n assert 0.5 > np.absolute((varDgr2PixX - varDgr2PixY)), strErrMsg\n\n # Vector with pRF sizes to be modelled (still in degree of visual angle):\n vecPrfSd = np.linspace(varPrfStdMin, varPrfStdMax, varNumPrfSizes,\n endpoint=True)\n\n # We multiply the vector containing pRF sizes with the scaling factors.\n # Now the vector with the pRF sizes can be used directly for creation of\n # Gaussian pRF models in visual space.\n vecPrfSd = np.multiply(vecPrfSd, varDgr2PixX)\n\n # Number of pRF models to be created (i.e. number of possible combinations\n # of x-position, y-position, and standard deviation):\n varNumMdls = varNumX * varNumY * varNumPrfSizes\n\n # Array for the x-position, y-position, and standard deviations for which\n # pRF model time courses are going to be created, where the columns\n # correspond to: (0) an index starting from zero, (1) the x-position, (2)\n # the y-position, and (3) the standard deviation. 
The parameters are in\n # units of the upsampled visual space.\n aryMdlParams = np.zeros((varNumMdls, 4))\n\n # Counter for parameter array:\n varCntMdlPrms = 0\n\n # Put all combinations of x-position, y-position, and standard deviations\n # into the array:\n\n # Loop through x-positions:\n for idxX in range(0, varNumX):\n\n # Loop through y-positions:\n for idxY in range(0, varNumY):\n\n # Loop through standard deviations (of Gaussian pRF models):\n for idxSd in range(0, varNumPrfSizes):\n\n # Place index and parameters in array:\n aryMdlParams[varCntMdlPrms, 0] = varCntMdlPrms\n aryMdlParams[varCntMdlPrms, 1] = vecX[idxX]\n aryMdlParams[varCntMdlPrms, 2] = vecY[idxY]\n aryMdlParams[varCntMdlPrms, 3] = vecPrfSd[idxSd]\n\n # Increment parameter index:\n varCntMdlPrms = varCntMdlPrms + 1\n\n # The long array with all the combinations of model parameters is put into\n # separate chunks for parallelisation, using a list of arrays.\n lstMdlParams = np.array_split(aryMdlParams, varPar)\n\n # Create a queue to put the results in:\n queOut = mp.Queue()\n\n # Empty list for results from parallel processes (for pRF model time course\n # results):\n lstPrfTc = [None] * varPar\n\n # Empty list for processes:\n lstPrcs = [None] * varPar\n\n print('---------Creating parallel processes')\n\n # Create processes:\n for idxPrc in range(0, varPar):\n lstPrcs[idxPrc] = mp.Process(target=cnvlGauss2D,\n args=(idxPrc, aryBoxCar,\n lstMdlParams[idxPrc], tplPngSize,\n varNumVol, queOut)\n )\n # Daemon (kills processes when exiting):\n lstPrcs[idxPrc].Daemon = True\n\n # Start processes:\n for idxPrc in range(0, varPar):\n lstPrcs[idxPrc].start()\n\n # Collect results from queue:\n for idxPrc in range(0, varPar):\n lstPrfTc[idxPrc] = queOut.get(True)\n\n # Join processes:\n for idxPrc in range(0, varPar):\n lstPrcs[idxPrc].join()\n\n print('---------Collecting results from parallel processes')\n # Put output arrays from parallel process into one big array\n lstPrfTc = sorted(lstPrfTc)\n 
aryPrfTc = np.empty((0, varNumMtDrctn, varNumVol))\n for idx in range(0, varPar):\n aryPrfTc = np.concatenate((aryPrfTc, lstPrfTc[idx][1]), axis=0)\n\n # check that all the models were collected correctly\n assert aryPrfTc.shape[0] == varNumMdls\n\n # Clean up:\n del(aryMdlParams)\n del(lstMdlParams)\n del(lstPrfTc)\n\n # Array representing the low-resolution visual space, of the form\n # aryPrfTc[x-position, y-position, pRF-size, varNum Vol], which will hold\n # the pRF model time courses.\n aryNrlTc = np.zeros([varNumX, varNumY, varNumPrfSizes, varNumMtDrctn,\n varNumVol])\n\n # We use the same loop structure for organising the pRF model time courses\n # that we used for creating the parameter array. Counter:\n varCntMdlPrms = 0\n\n # Put all combinations of x-position, y-position, and standard deviations\n # into the array:\n\n # Loop through x-positions:\n for idxX in range(0, varNumX):\n\n # Loop through y-positions:\n for idxY in range(0, varNumY):\n\n # Loop through standard deviations (of Gaussian pRF models):\n for idxSd in range(0, varNumPrfSizes):\n\n # Put the pRF model time course into its correct position in\n # the 4D array, leaving out the first column (which contains\n # the index):\n aryNrlTc[idxX, idxY, idxSd, :, :] = aryPrfTc[\n varCntMdlPrms, :, :]\n\n # Increment parameter index:\n varCntMdlPrms = varCntMdlPrms + 1\n\n return aryNrlTc"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ncreate 2D Gaussian kernel. Parameters ---------- aryNrlTc : 5d numpy array, shape [n_x_pos, n_y_pos, n_sd, n_mtn_dir, n_vol] Description of input 1. varNumVol : float, positive Description of input 2. varTr : float, positive Description of input 1. tplPngSize : tuple Description of input 1. varNumMtDrctn : int, positive Description of input 1. switchHrfSet : Description of input 1. lgcOldSchoolHrf : int, positive Description of input 1. varPar : int, positive Description of input 1. 
Returns ------- data : 2d numpy array, shape [n_samples, n_measurements] Closed data. Reference --------- [1]", "response": "def cnvlPwBoxCarFn(aryNrlTc, varNumVol, varTr, tplPngSize, varNumMtDrctn,\n switchHrfSet, lgcOldSchoolHrf, varPar,):\n \"\"\"Create 2D Gaussian kernel.\n\n Parameters\n ----------\n aryNrlTc : 5d numpy array, shape [n_x_pos, n_y_pos, n_sd, n_mtn_dir, n_vol]\n Description of input 1.\n varNumVol : float, positive\n Description of input 2.\n varTr : float, positive\n Description of input 1.\n tplPngSize : tuple\n Description of input 1.\n varNumMtDrctn : int, positive\n Description of input 1.\n switchHrfSet :\n Description of input 1.\n lgcOldSchoolHrf : int, positive\n Description of input 1.\n varPar : int, positive\n Description of input 1.\n Returns\n -------\n data : 2d numpy array, shape [n_samples, n_measurements]\n Closed data.\n Reference\n ---------\n [1]\n \"\"\"\n print('------Convolve every pixel box car function with hrf function(s)')\n\n # Create hrf time course function:\n if switchHrfSet == 3:\n lstHrf = [spmt, dspmt, ddspmt]\n elif switchHrfSet == 2:\n lstHrf = [spmt, dspmt]\n elif switchHrfSet == 1:\n lstHrf = [spmt]\n\n # adjust the input, if necessary, such that input is 2D, with last dim time\n tplInpShp = aryNrlTc.shape\n aryNrlTc = np.reshape(aryNrlTc, (-1, aryNrlTc.shape[-1]))\n\n # Put input data into chunks:\n lstNrlTc = np.array_split(aryNrlTc, varPar)\n\n # Create a queue to put the results in:\n queOut = mp.Queue()\n\n # Empty list for processes:\n lstPrcs = [None] * varPar\n\n # Empty list for results of parallel processes:\n lstConv = [None] * varPar\n\n print('---------Creating parallel processes')\n\n if lgcOldSchoolHrf:\n for idxPrc in range(0, varPar):\n lstPrcs[idxPrc] = mp.Process(target=cnvlTcOld,\n args=(idxPrc,\n lstNrlTc[idxPrc],\n varTr,\n varNumVol,\n queOut)\n )\n # Daemon (kills processes when exiting):\n lstPrcs[idxPrc].Daemon = True\n\n else:\n # Create processes:\n for idxPrc in range(0, 
varPar):\n lstPrcs[idxPrc] = mp.Process(target=cnvlTc,\n args=(idxPrc,\n lstNrlTc[idxPrc],\n lstHrf,\n varTr,\n varNumVol,\n queOut)\n )\n\n # Daemon (kills processes when exiting):\n lstPrcs[idxPrc].Daemon = True\n\n # Start processes:\n for idxPrc in range(0, varPar):\n lstPrcs[idxPrc].start()\n\n # Collect results from queue:\n for idxPrc in range(0, varPar):\n lstConv[idxPrc] = queOut.get(True)\n\n # Join processes:\n for idxPrc in range(0, varPar):\n lstPrcs[idxPrc].join()\n\n print('---------Collecting results from parallel processes')\n # Put output into correct order:\n lstConv = sorted(lstConv)\n # Concatenate convolved pixel time courses (into the same order\n aryNrlTcConv = np.zeros((0, switchHrfSet, varNumVol))\n for idxRes in range(0, varPar):\n aryNrlTcConv = np.concatenate((aryNrlTcConv, lstConv[idxRes][1]),\n axis=0)\n # clean up\n del(aryNrlTc)\n del(lstConv)\n\n # Reshape results:\n tplOutShp = tplInpShp[:-2] + (varNumMtDrctn * len(lstHrf), ) + \\\n (tplInpShp[-1], )\n\n # Return:\n return np.reshape(aryNrlTcConv, tplOutShp)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef rsmplInHighRes(aryBoxCarConv,\n tplPngSize,\n tplVslSpcHighSze,\n varNumMtDrctn,\n varNumVol):\n \"\"\"Resample pixel-time courses in high-res visual space.\n\n Parameters\n ----------\n input1 : 2d numpy array, shape [n_samples, n_measurements]\n Description of input 1.\n input2 : float, positive\n Description of input 2.\n Returns\n -------\n data : 2d numpy array, shape [n_samples, n_measurements]\n Closed data.\n Reference\n ---------\n [1]\n \"\"\"\n # Array for super-sampled pixel-time courses:\n aryBoxCarConvHigh = np.zeros((tplVslSpcHighSze[0],\n tplVslSpcHighSze[1],\n varNumMtDrctn,\n varNumVol))\n\n # Loop through volumes:\n for idxMtn in range(0, varNumMtDrctn):\n\n for idxVol in range(0, varNumVol):\n\n # Range for the coordinates:\n vecRange = np.arange(0, tplPngSize[0])\n\n # The following array describes 
the coordinates of the pixels in\n            # the flattened array (i.e. \"vecOrigPixVal\"). In other words, these\n            # are the row and column coordinates of the original pixel values.\n            crd2, crd1 = np.meshgrid(vecRange, vecRange)\n            aryOrixPixCoo = np.column_stack((crd1.flatten(), crd2.flatten()))\n\n            # The following vector will contain the actual original pixel\n            # values:\n\n            vecOrigPixVal = aryBoxCarConv[:, :, idxMtn, idxVol]\n            vecOrigPixVal = vecOrigPixVal.flatten()\n\n            # The sampling interval for the creation of the super-sampled pixel\n            # data (complex numbers are used as a convention for inclusive\n            # intervals in \"np.mgrid()\"):\n\n            varStpSzeX = complex(tplVslSpcHighSze[0])\n            varStpSzeY = complex(tplVslSpcHighSze[1])\n\n            # The following grid has the coordinates of the points at which we\n            # would like to re-sample the pixel data:\n            aryPixGridX, aryPixGridY = np.mgrid[0:tplPngSize[0]:varStpSzeX,\n                                                0:tplPngSize[1]:varStpSzeY]\n\n            # The actual resampling:\n            aryResampled = griddata(aryOrixPixCoo,\n                                    vecOrigPixVal,\n                                    (aryPixGridX, aryPixGridY),\n                                    method='nearest')\n\n            # Put super-sampled pixel time courses into array:\n            aryBoxCarConvHigh[:, :, idxMtn, idxVol] = aryResampled\n\n    return aryBoxCarConvHigh", "response": "Resample pixel-time courses in high-res visual space."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nparse the Command Line Arguments using argparse.", "response": "def get_arg_parse():\n    \"\"\"Parses the Command Line Arguments using argparse.\"\"\"\n    # Create parser object:\n    objParser = argparse.ArgumentParser()\n\n    # Add argument to namespace -strCsvPrf results file path:\n    objParser.add_argument('-strCsvPrf', required=True,\n                           metavar='/path/to/my_prior_res',\n                           help='Absolute file path of prior pRF results. 
\\\n                           Ignored if in testing mode.'\n                           )\n\n    # Add argument to namespace -strStmApr results file path:\n    objParser.add_argument('-strStmApr', required=True,\n                           metavar='/path/to/my_prior_res',\n                           help='Absolute file path to npy file with \\\n                           stimulus apertures. Ignored if in testing \\\n                           mode.'\n                           )\n\n    # Add argument to namespace -lgcNoise flag:\n    objParser.add_argument('-lgcNoise', dest='lgcNoise',\n                           action='store_true', default=False,\n                           help='Should noise be added to the simulated pRF\\\n                           time course?')\n\n    # Add argument to namespace -lgcRtnNrl flag:\n    objParser.add_argument('-lgcRtnNrl', dest='lgcRtnNrl',\n                           action='store_true', default=False,\n                           help='Should neural time course, unconvolved with \\\n                           hrf, be returned as well?')\n\n    objParser.add_argument('-supsur', nargs='+',\n                           help='List of floats that represent the ratio of \\\n                           size neg surround to size pos center.',\n                           type=float, default=None)\n\n    # Namespace object containing arguments and values:\n    objNspc = objParser.parse_args()\n\n    return objNspc"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef login_required(wrapped):\n    @wraps(wrapped)\n    def wrapper(*args, **kwargs):\n        request = args[1]\n\n        auth_collection = settings.AUTH_COLLECTION[\n            settings.AUTH_COLLECTION.rfind('.') + 1:\n        ].lower()\n        auth_document = request.environ.get(auth_collection)\n\n        if auth_document and auth_document.is_authorized(request):\n            setattr(request, auth_collection, auth_document)\n            return wrapped(*args, **kwargs)\n\n        return Response(response=serialize(UnauthorizedError()), status=401)\n\n    if hasattr(wrapped, 'decorators'):\n        wrapper.decorators = wrapped.decorators\n        wrapper.decorators.append('login_required')\n    else:\n        wrapper.decorators = ['login_required']\n\n    return wrapper", "response": "Decorator that ensures the user is logged in and authorized to execute the request,\n    except if the method is in authorized_methods of the auth_collection."} {"SOURCE": "codesearchnet", 
"instruction": "Can you generate a brief explanation for the following Python 3 code\ndef _doAtomicFileCreation(filePath):\n try:\n _os.close(_os.open(filePath, _os.O_CREAT | _os.O_EXCL))\n return True\n except OSError as e:\n if e.errno == _errno.EEXIST:\n return False\n else:\n raise e", "response": "Tries to atomically create the requested file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nfind the next available file-name in a sequence. This function will create a file of zero size and will return the path to it to the caller. No files which exist will be altered in this operation and concurrent executions of this function will return separate files. In case of conflict, the function will attempt to generate a new file name up to maxattempts number of times before failing. The sequence will start from the base argument (default: 0). If used with the prefix/suffix, it will look for the next file in the sequence ignoring any gaps. Hence, if the files \"a.0.txt\" and \"a.3.txt\" exist, then the next file returned will be \"a.4.txt\" when called with prefix=\"a.\" and suffix=\".txt\". In case fnameGen is provided, the first generated filename which does not exist will be created and its path will be returned. Hence, if the files \"a.0.txt\" and \"a.3.txt\" exist, then the next file returned will be \"a.1.txt\" when called with fnameGen = lambda x : \"a.\" + str(x) + \".txt\" Args: folder - string which has path to the folder where the file should be created (default: '.') prefix - prefix of the file to be generated (default: '') suffix - suffix of the file to be generated (default: '') fnameGen - function which generates the filenames given a number as input (default: None) base - the first index to count (default: 0) maxattempts - number of attempts to create the file before failing with OSError (default: 10) Returns: Path of the file which follows the provided pattern and can be opened for writing. 
Raises: RuntimeError - If an incorrect combination of arguments is provided. OSError - If is unable to create a file (wrong path, drive full, illegal character in filename, etc.).", "response": "def findNextFile(folder='.',\n prefix=None,\n suffix=None,\n fnameGen=None,\n base=0,\n maxattempts=10):\n \"\"\"Finds the next available file-name in a sequence.\n\n This function will create a file of zero size and will return the path to\n it to the caller. No files which exist will be altered in this operation\n and concurrent executions of this function will return separate files. In\n case of conflict, the function will attempt to generate a new file name up\n to maxattempts number of times before failing.\n\n The sequence will start from the base argument (default: 0).\n\n If used with the prefix/suffix, it will look for the next file in the\n sequence ignoring any gaps. Hence, if the files \"a.0.txt\" and \"a.3.txt\"\n exist, then the next file returned will be \"a.4.txt\" when called with\n prefix=\"a.\" and suffix=\".txt\".\n\n In case fnameGen is provided, the first generated filename which does not\n exist will be created and its path will be returned. 
Hence, if the files\n \"a.0.txt\" and \"a.3.txt\" exist, then the next file returned will be\n \"a.1.txt\" when called with fnameGen = lambda x : \"a.\" + str(x) + \".txt\"\n\n Args:\n folder - string which has path to the folder where the file should\n be created (default: '.')\n prefix - prefix of the file to be generated (default: '')\n suffix - suffix of the file to be generated (default: '')\n fnameGen - function which generates the filenames given a number as\n input (default: None)\n base - the first index to count (default: 0)\n maxattempts - number of attempts to create the file before failing with\n OSError (default: 10)\n\n Returns:\n Path of the file which follows the provided pattern and can be opened\n for writing.\n\n Raises:\n RuntimeError - If an incorrect combination of arguments is provided.\n OSError - If is unable to create a file (wrong path, drive full,\n illegal character in filename, etc.).\n \"\"\"\n expFolder = _os.path.expanduser(_os.path.expandvars(folder))\n return _findNextFile(expFolder, prefix, suffix, fnameGen,\n base, maxattempts, 0)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nexecutes the command line script and returns the path of the next available file in the sequence.", "response": "def _run(passedArgs=None, stderr=None, stdout=None, exitFn=None):\n \"\"\"Executes the script, gets prefix/suffix from the command prompt and\n produces output on STDOUT. For help with command line options, invoke\n script with '--help'.\n \"\"\"\n\n description = \"\"\"Finds the next available file-name in a sequence.\n\n This program will create a file of zero size and will output the path to it\n on STDOUT. No files which exist will be altered in this operation and\n concurrent invocations of this program will return separate files. In case\n of conflict, this program will attempt to generate a new file name up to\n 'maxattempts' number of times before failing. 
(See --max-attempts)\n\n The sequence will start from the base argument (See --base, default: 0).\n\n This program will look for the next file in the sequence ignoring any gaps.\n Hence, if the files \"a.0.txt\" and \"a.3.txt\" exist, then the next file\n returned will be \"a.4.txt\" when called with prefix=\"a.\" and suffix=\".txt\".\n\n Returns:\n Path of the file which follows the provided pattern and can be opened\n for writing.\n\n Otherwise, it prints an error (wrong path, drive full, illegal\n character in filename, etc.) to stderr and exits with a non-zero error\n code.\n \"\"\"\n\n argParser = _argparse.ArgumentParser(\n description=description,\n formatter_class=_argparse.RawTextHelpFormatter)\n\n argParser.add_argument('prefix',\n help='Prefix for the sequence of files.')\n argParser.add_argument('suffix',\n help='Suffix for the sequence of files.',\n nargs='?',\n default='')\n argParser.add_argument('folder',\n help='The folder where the file will be created.',\n nargs='?',\n default=_os.getcwd())\n argParser.add_argument('-m', '--max-attempts',\n help='Number of attempts to make before giving up.',\n default=10)\n argParser.add_argument('-b', '--base',\n help='From where to start counting (default: 0).',\n default=0)\n\n passedArgs = passedArgs if passedArgs is not None else _sys.argv[1:]\n args = argParser.parse_args(passedArgs)\n\n stdout = _sys.stdout if stdout is None else stdout\n stderr = _sys.stderr if stderr is None else stderr\n\n try:\n nextFile = findNextFile(args.folder, prefix=args.prefix,\n suffix=args.suffix,\n maxattempts=args.max_attempts,\n base=args.base)\n # The newline will be converted to the correct platform specific line\n # ending as `sys.stdout` is opened in non-binary mode.\n # Hence, we do not need to explicitly print out `\\r\\n` for Windows.\n stdout.write(nextFile + u'\\n')\n except OSError as e:\n stderr.write(_os.strerror(e.errno) + u'\\n')\n _sys.exit(e.errno)"} {"SOURCE": "codesearchnet", "instruction": "Can you 
create a Python 3 function that\nreturns the value as a str, truncated to MAX_ERROR_STR_LEN characters. If the value is truncated, '...' will be appended to the end.", "response": "def _errstr(value):\n    \"\"\"Returns the value str, truncated to MAX_ERROR_STR_LEN characters. If\n    it's truncated, the returned value will have '...' on the end.\n    \"\"\"\n\n    value = str(value)  # We won't make the caller convert value to a string each time.\n    if len(value) > MAX_ERROR_STR_LEN:\n        return value[:MAX_ERROR_STR_LEN] + '...'\n    else:\n        return value"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _getStrippedValue(value, strip):\n    if strip is None:\n        value = value.strip()  # Call strip() with no arguments to strip whitespace.\n    elif isinstance(strip, str):\n        value = value.strip(strip)  # Call strip(), passing the strip argument.\n    elif strip is False:\n        pass  # Don't strip anything.\n    return value", "response": "Like the strip method except the strip argument describes\n    different behavior."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _raiseValidationException(standardExcMsg, customExcMsg=None):\n    if customExcMsg is None:\n        raise ValidationException(str(standardExcMsg))\n    else:\n        raise ValidationException(str(customExcMsg))", "response": "Raise ValidationException with standardExcMsg unless customExcMsg is specified."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg=None):\n\n    # TODO - add a allowlistFirst and blocklistFirst to determine which is checked first. 
(Right now it's allowlist)\n\n    value = str(value)\n\n    # Optionally strip whitespace or other characters from value.\n    value = _getStrippedValue(value, strip)\n\n    # Validate for blank values.\n    if not blank and value == '':\n        # value is blank but blanks aren't allowed.\n        _raiseValidationException(_('Blank values are not allowed.'), excMsg)\n    elif blank and value == '':\n        return True, value  # The value is blank and blanks are allowed, so return True to indicate that the caller should return value immediately.\n\n    # NOTE: We check if something matches the allow-list first, then we check the block-list second.\n\n    # Check the allowlistRegexes.\n    if allowlistRegexes is not None:\n        for regex in allowlistRegexes:\n            if isinstance(regex, re.Pattern):\n                # A compiled pattern carries its own flags; passing re.IGNORECASE\n                # here would be misread as the pos argument of Pattern.search().\n                if regex.search(value) is not None:\n                    return True, value  # The value is in the allowlist, so return True to indicate that the caller should return value immediately.\n            else:\n                if re.search(regex, value, re.IGNORECASE) is not None:\n                    return True, value  # The value is in the allowlist, so return True to indicate that the caller should return value immediately.\n\n    # Check the blocklistRegexes.\n    if blocklistRegexes is not None:\n        for blocklistRegexItem in blocklistRegexes:\n            if isinstance(blocklistRegexItem, str):\n                regex, response = blocklistRegexItem, DEFAULT_BLOCKLIST_RESPONSE\n            else:\n                regex, response = blocklistRegexItem\n\n            if isinstance(regex, re.Pattern):\n                if regex.search(value) is not None:\n                    _raiseValidationException(response, excMsg)  # value is on a blocklist\n            elif re.search(regex, value, re.IGNORECASE) is not None:\n                _raiseValidationException(response, excMsg)  # value is on a blocklist\n\n    return False, value", "response": "Internal function to perform the prevalidation check of the value passed for the parameter."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nchecks that the generic input parameters shared by all of the validation functions are valid. 
Returns None if the parameters are valid. Raises a PySimpleValidateException if any of the arguments are invalid.", "response": "def _validateGenericParameters(blank, strip, allowlistRegexes, blocklistRegexes):\n \"\"\"Returns None if the blank, strip, allowlistRegexes, and blocklistRegexes parameters\n that PySimpleValidate's validation functions have are valid. Raises a PySimpleValidateException\n if any of the arguments are invalid.\"\"\"\n\n # Check blank parameter.\n if not isinstance(blank, bool):\n raise PySimpleValidateException('blank argument must be a bool')\n\n # Check strip parameter.\n if not isinstance(strip, (bool, str, type(None))):\n raise PySimpleValidateException('strip argument must be a bool, None, or str')\n\n # Check allowlistRegexes parameter (including each regex in it).\n if allowlistRegexes is None:\n allowlistRegexes = [] # allowlistRegexes defaults to a blank list.\n\n try:\n len(allowlistRegexes) # Make sure allowlistRegexes is a sequence.\n except:\n raise PySimpleValidateException('allowlistRegexes must be a sequence of regex_strs')\n for regex in allowlistRegexes:\n if not isinstance(regex, (str, re.Pattern)):\n raise PySimpleValidateException('allowlistRegexes must be a sequence of regex_strs')\n\n # Check blocklistRegexes parameter (including each regex in it).\n # NOTE: blocklistRegexes is NOT the same format as allowlistRegexes, it can\n # include an \"invalid input reason\" string to display if the input matches\n # the blocklist regex.\n if blocklistRegexes is None:\n blocklistRegexes = [] # blocklistRegexes defaults to a blank list.\n\n try:\n len(blocklistRegexes) # Make sure blocklistRegexes is a sequence of (regex_str, str) or strs.\n except:\n raise PySimpleValidateException('blocklistRegexes must be a sequence of (regex_str, str) tuples or regex_strs')\n for response in blocklistRegexes:\n if isinstance(response, str):\n continue\n if len(response) != 2:\n raise PySimpleValidateException('blocklistRegexes must be a sequence of (regex_str,
str) tuples or regex_strs')\n if not isinstance(response[0], str) or not isinstance(response[1], str):\n raise PySimpleValidateException('blocklistRegexes must be a sequence of (regex_str, str) tuples or regex_strs')"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nraises an exception if the arguments are invalid.", "response": "def _validateParamsFor_validateNum(min=None, max=None, lessThan=None, greaterThan=None):\n \"\"\"Raises an exception if the arguments are invalid. This is called by\n the validateNum(), validateInt(), and validateFloat() functions to\n check its arguments. This code was refactored out to a separate function\n so that the PyInputPlus module (or other modules) could check their\n parameters' arguments for inputNum() etc.\n \"\"\"\n\n if (min is not None) and (greaterThan is not None):\n raise PySimpleValidateException('only one argument for min or greaterThan can be passed, not both')\n if (max is not None) and (lessThan is not None):\n raise PySimpleValidateException('only one argument for max or lessThan can be passed, not both')\n\n if (min is not None) and (max is not None) and (min > max):\n raise PySimpleValidateException('the min argument must be less than or equal to the max argument')\n if (min is not None) and (lessThan is not None) and (min >= lessThan):\n raise PySimpleValidateException('the min argument must be less than the lessThan argument')\n if (max is not None) and (greaterThan is not None) and (max <= greaterThan):\n raise PySimpleValidateException('the max argument must be greater than the greaterThan argument')\n\n for name, val in (('min', min), ('max', max),\n ('lessThan', lessThan), ('greaterThan', greaterThan)):\n if not isinstance(val, (int, float, type(None))):\n raise PySimpleValidateException(name + ' argument must be int, float, or NoneType')"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nraises ValidationException if 
value is not a string.", "response": "def validateStr(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, excMsg=None):\n \"\"\"Raises ValidationException if value is not a string. This function\n offers the PySimpleValidate features of not allowing blank values by\n default, automatically stripping whitespace, and having allowlist/blocklist\n regular expressions.\n\n Returns value, so it can be used inline in an expression:\n\n print('Hello, ' + validateStr(your_name))\n\n * value (str): The value being validated as a string.\n * blank (bool): If True, a blank string will be accepted. Defaults to False.\n * strip (bool, str, None): If None, whitespace is stripped from value. If a str, the characters in it are stripped from value. If False, nothing is stripped.\n * allowlistRegexes (Sequence, None): A sequence of regex str that will explicitly pass validation.\n * blocklistRegexes (Sequence, None): A sequence of regex str or (regex_str, response_str) tuples that, if matched, will explicitly fail validation.\n * excMsg (str): A custom message to use in the raised ValidationException.\n\n >>> import pysimplevalidate as pysv\n >>> pysv.validateStr('hello')\n 'hello'\n >>> pysv.validateStr('')\n Traceback (most recent call last):\n ...\n pysimplevalidate.ValidationException: Blank values are not allowed.\n >>> pysv.validateStr('', blank=True)\n ''\n >>> pysv.validateStr(' hello ')\n 'hello'\n >>> pysv.validateStr('hello', blocklistRegexes=['hello'])\n Traceback (most recent call last):\n ...\n pysimplevalidate.ValidationException: This response is invalid.\n >>> pysv.validateStr('hello', blocklistRegexes=[('hello', 'Hello is not allowed')])\n Traceback (most recent call last):\n ...\n pysimplevalidate.ValidationException: Hello is not allowed\n >>> pysv.validateStr('hello', allowlistRegexes=['hello'], blocklistRegexes=['llo'])\n 'hello'\n
\"\"\"\n\n # Validate parameters.\n _validateGenericParameters(blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n returnNow, value = _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg)\n\n return value"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\nraise ValidationException if value is not a number (float or int).", "response": "def validateNum(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, _numType='num',\n min=None, max=None, lessThan=None, greaterThan=None, excMsg=None):\n \"\"\"Raises ValidationException if value is not a float or int.\n\n Returns value, so it can be used inline in an expression:\n\n print(2 + validateNum(your_number))\n\n Note that since int() and float() ignore leading or trailing whitespace\n when converting a string to a number, so does this validateNum().\n\n * value (str): The value being validated as an int or float.\n * blank (bool): If True, a blank string will be accepted. Defaults to False.\n * strip (bool, str, None): If None, whitespace is stripped from value. If a str, the characters in it are stripped from value.
If False, nothing is stripped.\n * allowlistRegexes (Sequence, None): A sequence of regex str that will explicitly pass validation, even if they aren't numbers.\n * blocklistRegexes (Sequence, None): A sequence of regex str or (regex_str, response_str) tuples that, if matched, will explicitly fail validation.\n * _numType (str): One of 'num', 'int', or 'float' for the kind of number to validate against, where 'num' means int or float.\n * min (int, float): The (inclusive) minimum value for the value to pass validation.\n * max (int, float): The (inclusive) maximum value for the value to pass validation.\n * lessThan (int, float): The (exclusive) maximum value for the value to pass validation.\n * greaterThan (int, float): The (exclusive) minimum value for the value to pass validation.\n * excMsg (str): A custom message to use in the raised ValidationException.\n\n If you specify min or max, you cannot also respectively specify greaterThan\n or lessThan. Doing so will raise PySimpleValidateException.\n\n >>> import pysimplevalidate as pysv\n >>> pysv.validateNum('3')\n 3\n >>> pysv.validateNum('3.0')\n 3.0\n >>> pysv.validateNum(' 3.0 ')\n 3.0\n >>> pysv.validateNum('549873259847598437598435798435793.589985743957435794357')\n 5.498732598475984e+32\n >>> pysv.validateNum('4', lessThan=4)\n Traceback (most recent call last):\n ...\n pysimplevalidate.ValidationException: Number must be less than 4.\n >>> pysv.validateNum('4', max=4)\n 4\n >>> pysv.validateNum('4', min=2, max=5)\n 4\n \"\"\"\n\n # Validate parameters.\n _validateGenericParameters(blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n _validateParamsFor_validateNum(min=min, max=max, lessThan=lessThan, greaterThan=greaterThan)\n\n returnNow, value = _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg)\n if returnNow:\n # If we can convert value to an int/float, then do so.
For example,\n # if an allowlist regex allows '42', then we should return 42/42.0.\n if (_numType == 'num' and '.' in value) or (_numType == 'float'):\n try:\n return float(value)\n except ValueError:\n return value # Return the value as is.\n if (_numType == 'num' and '.' not in value) or (_numType == 'int'):\n try:\n return int(value)\n except ValueError:\n return value # Return the value as is.\n\n # Validate the value's type (and convert value back to a number type).\n if (_numType == 'num' and '.' in value):\n # We are expecting a \"num\" (float or int) type and the user entered a float.\n try:\n value = float(value)\n except:\n _raiseValidationException(_('%r is not a number.') % (_errstr(value)), excMsg)\n elif (_numType == 'num' and '.' not in value):\n # We are expecting a \"num\" (float or int) type and the user entered an int.\n try:\n value = int(value)\n except:\n _raiseValidationException(_('%r is not a number.') % (_errstr(value)), excMsg)\n elif _numType == 'float':\n try:\n value = float(value)\n except:\n _raiseValidationException(_('%r is not a float.') % (_errstr(value)), excMsg)\n elif _numType == 'int':\n try:\n if float(value) % 1 != 0:\n # The number is a float that doesn't end with \".0\"\n _raiseValidationException(_('%r is not an integer.') % (_errstr(value)), excMsg)\n value = int(float(value))\n except:\n _raiseValidationException(_('%r is not an integer.') % (_errstr(value)), excMsg)\n\n # Validate against min argument.\n if min is not None and value < min:\n _raiseValidationException(_('Number must be at minimum %s.') % (min), excMsg)\n\n # Validate against max argument.\n if max is not None and value > max:\n _raiseValidationException(_('Number must be at maximum %s.') % (max), excMsg)\n\n # Validate against lessThan argument.\n if lessThan is not None and value >= lessThan:\n _raiseValidationException(_('Number must be less than %s.') % (lessThan), excMsg)\n\n # Validate against greaterThan argument.\n if greaterThan is not None and value <=
greaterThan:\n _raiseValidationException(_('Number must be greater than %s.') % (greaterThan), excMsg)\n\n return value"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nraising ValidationException if value is not an int.", "response": "def validateInt(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None,\n min=None, max=None, lessThan=None, greaterThan=None, excMsg=None):\n \"\"\"Raises ValidationException if value is not an int.\n\n Returns value, so it can be used inline in an expression:\n\n print(2 + validateInt(your_number))\n\n Note that since int() ignores leading or trailing whitespace\n when converting a string to a number, so does this validateInt().\n\n * value (str): The value being validated as an int.\n * blank (bool): If True, a blank string will be accepted. Defaults to False.\n * strip (bool, str, None): If None, whitespace is stripped from value. If a str, the characters in it are stripped from value. If False, nothing is stripped.\n * allowlistRegexes (Sequence, None): A sequence of regex str that will explicitly pass validation, even if they aren't numbers.\n * blocklistRegexes (Sequence, None): A sequence of regex str or (regex_str, response_str) tuples that, if matched, will explicitly fail validation.\n * min (int, float): The (inclusive) minimum value for the value to pass validation.\n * max (int, float): The (inclusive) maximum value for the value to pass validation.\n * lessThan (int, float): The (exclusive) maximum value for the value to pass validation.\n * greaterThan (int, float): The (exclusive) minimum value for the value to pass validation.\n * excMsg (str): A custom message to use in the raised ValidationException.\n\n If you specify min or max, you cannot also respectively specify greaterThan\n or lessThan.
Doing so will raise PySimpleValidateException.\n\n >>> import pysimplevalidate as pysv\n >>> pysv.validateInt('42')\n 42\n >>> pysv.validateInt('forty two')\n Traceback (most recent call last):\n ...\n pysimplevalidate.ValidationException: 'forty two' is not an integer.\n \"\"\"\n return validateNum(value=value, blank=blank, strip=strip, allowlistRegexes=allowlistRegexes,\n blocklistRegexes=blocklistRegexes, _numType='int', min=min, max=max,\n lessThan=lessThan, greaterThan=greaterThan, excMsg=excMsg)"} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nraises PySimpleValidateException if the arguments are invalid.", "response": "def _validateParamsFor_validateChoice(choices, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None,\n numbered=False, lettered=False, caseSensitive=False, excMsg=None):\n \"\"\"Raises PySimpleValidateException if the arguments are invalid. This is called by\n the validateChoice() function to check its arguments. This code was\n refactored out to a separate function so that the PyInputPlus module (or\n other modules) could check their parameters' arguments for inputChoice().\n \"\"\"\n\n if not isinstance(caseSensitive, bool):\n raise PySimpleValidateException('caseSensitive argument must be a bool')\n\n try:\n len(choices)\n except:\n raise PySimpleValidateException('choices arg must be a sequence')\n if blank == False and len(choices) < 2:\n raise PySimpleValidateException('choices must have at least two items if blank is False')\n elif blank == True and len(choices) < 1:\n raise PySimpleValidateException('choices must have at least one item')\n\n\n _validateGenericParameters(blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n\n for choice in choices:\n if not isinstance(choice, str):\n raise PySimpleValidateException('choice %r must be a string' % (_errstr(choice)))\n if lettered and len(choices) > 26:\n raise PySimpleValidateException('lettered argument cannot be True if
there are more than 26 choices')\n if numbered and lettered:\n raise PySimpleValidateException('numbered and lettered arguments cannot both be True')\n\n if len(choices) != len(set(choices)):\n raise PySimpleValidateException('duplicate entries in choices argument')\n\n if not caseSensitive and len(choices) != len(set([choice.upper() for choice in choices])):\n raise PySimpleValidateException('duplicate case-insensitive entries in choices argument')"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nraising ValidationException if value is not one of the values in choices. Returns the selected choice.", "response": "def validateChoice(value, choices, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None,\n numbered=False, lettered=False, caseSensitive=False, excMsg=None):\n \"\"\"Raises ValidationException if value is not one of the values in\n choices. Returns the selected choice.\n\n Returns the value in choices that was selected, so it can be used inline\n in an expression:\n\n print('You chose ' + validateChoice(your_choice, ['cat', 'dog']))\n\n Note that value itself is not returned: validateChoice('CAT', ['cat', 'dog'])\n will return 'cat', not 'CAT'.\n\n If lettered is True, lower or uppercase letters will be accepted regardless\n of what caseSensitive is set to. The caseSensitive argument only matters\n for matching with the text of the strings in choices.\n\n * value (str): The value being validated.\n * blank (bool): If True, a blank string will be accepted. Defaults to False.\n * strip (bool, str, None): If None, whitespace is stripped from value. If a str, the characters in it are stripped from value. 
If False, nothing is stripped.\n * allowlistRegexes (Sequence, None): A sequence of regex str that will explicitly pass validation, even if they aren't numbers.\n * blocklistRegexes (Sequence, None): A sequence of regex str or (regex_str, response_str) tuples that, if matched, will explicitly fail validation.\n * numbered (bool): If True, this function will also accept a string of the choice's number, i.e. '1' or '2'.\n * lettered (bool): If True, this function will also accept a string of the choice's letter, i.e. 'A' or 'B' or 'a' or 'b'.\n * caseSensitive (bool): If True, then the exact case of the option must be entered.\n * excMsg (str): A custom message to use in the raised ValidationException.\n\n Returns the choice selected as it appeared in choices. That is, if 'cat'\n was a choice and the user entered 'CAT' while caseSensitive is False,\n this function will return 'cat'.\n\n\n >>> import pysimplevalidate as pysv\n >>> pysv.validateChoice('dog', ['dog', 'cat', 'moose'])\n 'dog'\n\n >>> pysv.validateChoice('DOG', ['dog', 'cat', 'moose'])\n 'dog'\n\n >>> pysv.validateChoice('2', ['dog', 'cat', 'moose'], numbered=True)\n 'cat'\n\n >>> pysv.validateChoice('a', ['dog', 'cat', 'moose'], lettered=True)\n 'dog'\n\n >>> pysv.validateChoice('C', ['dog', 'cat', 'moose'], lettered=True)\n 'moose'\n\n >>> pysv.validateChoice('dog', ['dog', 'cat', 'moose'], lettered=True)\n 'dog'\n\n >>> pysv.validateChoice('spider', ['dog', 'cat', 'moose'])\n Traceback (most recent call last):\n ...\n pysimplevalidate.ValidationException: 'spider' is not a valid choice.\n \"\"\"\n\n # Validate parameters.\n _validateParamsFor_validateChoice(choices=choices, blank=blank, strip=strip, allowlistRegexes=None,\n blocklistRegexes=blocklistRegexes, numbered=numbered, lettered=lettered, caseSensitive=caseSensitive)\n\n if '' in choices:\n # blank needs to be set to True here, otherwise '' won't be accepted as a choice.\n blank = True\n\n returnNow, value = _prevalidationCheck(value, blank, 
strip, allowlistRegexes, blocklistRegexes, excMsg)\n if returnNow:\n return value\n\n # Validate against choices.\n if value in choices:\n return value\n if numbered and value.isdigit() and 0 < int(value) <= len(choices): # value must be 1 to len(choices)\n # Numbered options begins at 1, not 0.\n return choices[int(value) - 1] # -1 because the numbers are 1 to len(choices) but the index are 0 to len(choices) - 1\n if lettered and len(value) == 1 and value.isalpha() and 0 < ord(value.upper()) - 64 <= len(choices):\n # Lettered options are always case-insensitive.\n return choices[ord(value.upper()) - 65]\n if not caseSensitive and value.upper() in [choice.upper() for choice in choices]:\n # Return the original item in choices that value has a case-insensitive match with.\n return choices[[choice.upper() for choice in choices].index(value.upper())]\n\n _raiseValidationException(_('%r is not a valid choice.') % (_errstr(value)), excMsg)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef _validateParamsFor__validateToDateTimeFormat(formats, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, excMsg=None):\n _validateGenericParameters(blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n if formats is None:\n raise PySimpleValidateException('formats parameter must be specified')\n\n if isinstance(formats, str):\n raise PySimpleValidateException('formats argument must be a non-str sequence of strftime format strings')\n\n try:\n len(formats)\n except:\n raise PySimpleValidateException('formats argument must be a non-str sequence of strftime format strings')\n\n for format in formats:\n try:\n time.strftime(format) # This will raise an exception if the format is invalid.\n except:\n raise PySimpleValidateException('formats argument contains invalid strftime format strings')", "response": "Raises PySimpleValidateException if the arguments 
are invalid."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nraising ValidationException if value is not a time formatted in one of the formats formats. Returns a datetime. time object of value.", "response": "def validateTime(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None,\n formats=('%H:%M:%S', '%H:%M', '%X'), excMsg=None):\n \"\"\"Raises ValidationException if value is not a time formatted in one\n of the formats formats. Returns a datetime.time object of value.\n\n * value (str): The value being validated as a time.\n * blank (bool): If True, a blank string will be accepted. Defaults to False.\n * strip (bool, str, None): If None, whitespace is stripped from value. If a str, the characters in it are stripped from value. If False, nothing is stripped.\n * allowlistRegexes (Sequence, None): A sequence of regex str that will explicitly pass validation, even if they aren't numbers.\n * blocklistRegexes (Sequence, None): A sequence of regex str or (regex_str, response_str) tuples that, if matched, will explicitly fail validation.\n * formats: A tuple of strings that can be passed to time.strftime, dictating the possible formats for a valid time.\n * excMsg (str): A custom message to use in the raised ValidationException.\n\n >>> import pysimplevalidate as pysv\n >>> pysv.validateTime('12:00:01')\n datetime.time(12, 0, 1)\n >>> pysv.validateTime('13:00:01')\n datetime.time(13, 0, 1)\n >>> pysv.validateTime('25:00:01')\n Traceback (most recent call last):\n ...\n pysimplevalidate.ValidationException: '25:00:01' is not a valid time.\n >>> pysv.validateTime('hour 12 minute 01', formats=['hour %H minute %M'])\n datetime.time(12, 1)\n \"\"\"\n\n # TODO - handle this\n\n # Reuse the logic in _validateToDateTimeFormat() for this function.\n try:\n dt = _validateToDateTimeFormat(value, formats, blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n return 
datetime.time(dt.hour, dt.minute, dt.second, dt.microsecond)\n except ValidationException:\n _raiseValidationException(_('%r is not a valid time.') % (_errstr(value)), excMsg)"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nraise ValidationException if value is not a time formatted in one of the formats formats. Returns a datetime. date object of value.", "response": "def validateDate(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None,\n formats=('%Y/%m/%d', '%y/%m/%d', '%m/%d/%Y', '%m/%d/%y', '%x'), excMsg=None):\n \"\"\"Raises ValidationException if value is not a time formatted in one\n of the formats formats. Returns a datetime.date object of value.\n\n * value (str): The value being validated as a time.\n * blank (bool): If True, a blank string for value will be accepted.\n * strip (bool, str, None): If None, whitespace is stripped from value. If a str, the characters in it are stripped from value. If False, nothing is stripped.\n * allowlistRegexes (Sequence, None): A sequence of regex str that will explicitly pass validation, even if they aren't numbers.\n * blocklistRegexes (Sequence, None): A sequence of regex str or (regex_str, response_str) tuples that, if matched, will explicitly fail validation.\n * formats: A tuple of strings that can be passed to time.strftime, dictating the possible formats for a valid date.\n * excMsg (str): A custom message to use in the raised ValidationException.\n\n >>> import pysimplevalidate as pysv\n >>> pysv.validateDate('2/29/2004')\n datetime.date(2004, 2, 29)\n >>> pysv.validateDate('2/29/2005')\n Traceback (most recent call last):\n ...\n pysimplevalidate.ValidationException: '2/29/2005' is not a valid date.\n >>> pysv.validateDate('September 2019', formats=['%B %Y'])\n datetime.date(2019, 9, 1)\n \"\"\"\n # Reuse the logic in _validateToDateTimeFormat() for this function.\n try:\n dt = _validateToDateTimeFormat(value, formats, blank=blank, strip=strip, 
allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n return datetime.date(dt.year, dt.month, dt.day)\n except ValidationException:\n _raiseValidationException(_('%r is not a valid date.') % (_errstr(value)), excMsg)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef validateDatetime(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None,\n formats=('%Y/%m/%d %H:%M:%S', '%y/%m/%d %H:%M:%S', '%m/%d/%Y %H:%M:%S', '%m/%d/%y %H:%M:%S', '%x %H:%M:%S',\n '%Y/%m/%d %H:%M', '%y/%m/%d %H:%M', '%m/%d/%Y %H:%M', '%m/%d/%y %H:%M', '%x %H:%M'), excMsg=None):\n \"\"\"Raises ValidationException if value is not a datetime formatted in one\n of the formats formats. Returns a datetime.datetime object of value.\n\n * value (str): The value being validated as a datetime.\n * blank (bool): If True, a blank string will be accepted. Defaults to False.\n * strip (bool, str, None): If None, whitespace is stripped from value. If a str, the characters in it are stripped from value.
If False, nothing is stripped.\n * allowlistRegexes (Sequence, None): A sequence of regex str that will explicitly pass validation, even if they aren't numbers.\n * blocklistRegexes (Sequence, None): A sequence of regex str or (regex_str, response_str) tuples that, if matched, will explicitly fail validation.\n * formats: A tuple of strings that can be passed to time.strftime, dictating the possible formats for a valid datetime.\n * excMsg (str): A custom message to use in the raised ValidationException.\n\n >>> import pysimplevalidate as pysv\n >>> pysv.validateDatetime('2018/10/31 12:00:01')\n datetime.datetime(2018, 10, 31, 12, 0, 1)\n >>> pysv.validateDatetime('10/31/2018 12:00:01')\n datetime.datetime(2018, 10, 31, 12, 0, 1)\n >>> pysv.validateDatetime('10/31/2018')\n Traceback (most recent call last):\n ...\n pysimplevalidate.ValidationException: '10/31/2018' is not a valid date and time.\n \"\"\"\n\n # Reuse the logic in _validateToDateTimeFormat() for this function.\n try:\n return _validateToDateTimeFormat(value, formats, blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n except ValidationException:\n _raiseValidationException(_('%r is not a valid date and time.') % (_errstr(value)), excMsg)", "response": "Raises ValidationException if value is not a datetime formatted in one\n of the formats formats. Returns a datetime. 
datetime object of value."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validateFilename(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, excMsg=None):\n\n returnNow, value = _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg)\n if returnNow:\n return value\n\n if (value != value.strip()) or (any(c in value for c in '\\\\/:*?\"<>|')):\n _raiseValidationException(_('%r is not a valid filename.') % (_errstr(value)), excMsg)\n return value", "response": "Raises ValidationException if value is not a valid filename."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef validateFilepath(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, excMsg=None, mustExist=False):\n returnNow, value = _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg)\n if returnNow:\n return value\n\n if (value != value.strip()) or (any(c in value for c in '*?\"<>|')): # Same as validateFilename, except we allow \\ and / and :\n if ':' in value:\n if value.find(':', 2) != -1 or not value[0].isalpha():\n # For Windows: a colon may only appear as part of a drive letter, e.g. 'C:\\', so any other colon placement or a non-letter drive is invalid.\n _raiseValidationException(_('%r is not a valid file path.') % (_errstr(value)), excMsg)\n _raiseValidationException(_('%r is not a valid file path.') % (_errstr(value)), excMsg)\n return value", "response": "Raises ValidationException if value is not a valid file path.
Returns the value argument."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validateIP(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, excMsg=None):\n # Validate parameters.\n _validateGenericParameters(blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n\n returnNow, value = _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg)\n if returnNow:\n return value\n\n # Reuse the logic in validateRegex()\n try:\n try:\n # Check if value is an IPv4 address.\n if validateRegex(value=value, regex=IPV4_REGEX, blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes):\n return value\n except:\n pass # Go on to check if it's an IPv6 address.\n\n # Check if value is an IPv6 address.\n if validateRegex(value=value, regex=IPV6_REGEX, blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes):\n return value\n except ValidationException:\n _raiseValidationException(_('%r is not a valid IP address.') % (_errstr(value)), excMsg)", "response": "Raises ValidationException if value is not an IPv4 or IPv6 address."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\nraises ValidationException if value does not match the regular expression in regex.", "response": "def validateRegex(value, regex, flags=0, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, excMsg=None):\n \"\"\"Raises ValidationException if value does not match the regular expression in regex.\n Returns the value argument.\n\n This is similar to calling inputStr() and using the allowlistRegexes\n keyword argument, however, validateRegex() allows you to pass regex\n flags such as re.IGNORECASE or re.VERBOSE. 
You can also pass a regex\n object directly.\n\n If you want to check if a string is a regular expression string, call\n validateRegexStr().\n\n * value (str): The value being validated as a regular expression string.\n * regex (str, regex): The regular expression to match the value against.\n * flags (int): Identical to the flags argument in re.compile(). Pass re.VERBOSE et al here.\n * blank (bool): If True, a blank string will be accepted. Defaults to False.\n * strip (bool, str, None): If None, whitespace is stripped from value. If a str, the characters in it are stripped from value. If False, nothing is stripped.\n * allowlistRegexes (Sequence, None): A sequence of regex str that will explicitly pass validation, even if they aren't numbers.\n * blocklistRegexes (Sequence, None): A sequence of regex str or (regex_str, response_str) tuples that, if matched, will explicitly fail validation.\n * excMsg (str): A custom message to use in the raised ValidationException.\n\n >>> pysv.validateRegex('cat bat rat', r'(cat)|(dog)|(moose)', re.IGNORECASE)\n 'cat'\n >>> pysv.validateRegex('He said \"Hello\".', r'\"(.*?)\"', re.IGNORECASE)\n '\"Hello\"'\n \"\"\"\n\n # Validate parameters.\n _validateGenericParameters(blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n\n returnNow, value = _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg)\n if returnNow:\n return value\n\n # Search value with regex, whether regex is a str or regex object.\n if isinstance(regex, str):\n # TODO - check flags to see they're valid regex flags.\n mo = re.compile(regex, flags).search(value)\n elif isinstance(regex, REGEX_TYPE):\n mo = regex.search(value)\n else:\n raise PySimpleValidateException('regex must be a str or regex object')\n\n if mo is not None:\n return mo.group()\n else:\n _raiseValidationException(_('%r does not match the specified pattern.') % (_errstr(value)), excMsg)"} {"SOURCE": "codesearchnet", 
"instruction": "Here you have a function in Python 3, explain what it does\ndef validateRegexStr(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, excMsg=None):\n\n # TODO - It'd be nice to check regexes in other languages, i.e. JS and Perl.\n _validateGenericParameters(blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n\n returnNow, value = _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg)\n if returnNow:\n return value\n\n try:\n return re.compile(value)\n except Exception as ex:\n _raiseValidationException(_('%r is not a valid regular expression: %s') % (_errstr(value), ex), excMsg)", "response": "Raises ValidationException if value can't be used as a regular expression string. Returns the value argument as a regex object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validateURL(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, excMsg=None):\n\n # Reuse the logic in validateRegex()\n try:\n result = validateRegex(value=value, regex=URL_REGEX, blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n if result is not None:\n return result\n except ValidationException:\n # 'localhost' is also an acceptable URL:\n if value == 'localhost':\n return value\n\n _raiseValidationException(_('%r is not a valid URL.') % (value), excMsg)", "response": "Raises ValidationException if value is not a URL."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validateEmail(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, excMsg=None):\n\n # Reuse the logic in validateRegex()\n try:\n result = validateRegex(value=value, regex=EMAIL_REGEX, blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n if 
result is not None:\n return result\n except ValidationException:\n _raiseValidationException(_('%r is not a valid email address.') % (value), excMsg)", "response": "Raises ValidationException if value is not an email address. Returns the value argument."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nraising ValidationException if value is not a yes or no response.", "response": "def validateYesNo(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, yesVal='yes', noVal='no', caseSensitive=False, excMsg=None):\n \"\"\"Raises ValidationException if value is not a yes or no response.\n Returns the yesVal or noVal argument, not value.\n\n Note that value can be any case (by default) and can also just match the first letter of yesVal or noVal.\n\n * value (str): The value being validated as a yes/no response.\n * blank (bool): If True, a blank string will be accepted. Defaults to False.\n * strip (bool, str, None): If None, whitespace is stripped from value. If a str, the characters in it are stripped from value. If False, nothing is stripped.\n * allowlistRegexes (Sequence, None): A sequence of regex str that will explicitly pass validation, even if they aren't numbers.\n * blocklistRegexes (Sequence, None): A sequence of regex str or (regex_str, response_str) tuples that, if matched, will explicitly fail validation.\n * caseSensitive (bool): Determines if value must match the case of yesVal and noVal. Defaults to False.\n * excMsg (str): A custom message to use in the raised ValidationException.\n\n >>> import pysimplevalidate as pysv\n >>> pysv.validateYesNo('y')\n 'yes'\n >>> pysv.validateYesNo('YES')\n 'yes'\n >>> pysv.validateYesNo('No')\n 'no'\n >>> pysv.validateYesNo('OUI', yesVal='oui', noVal='no')\n 'oui'\n \"\"\"\n\n # Validate parameters. 
TODO - can probably improve this to remove the duplication.\n _validateGenericParameters(blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n\n returnNow, value = _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg)\n if returnNow:\n return value\n\n yesVal = str(yesVal)\n noVal = str(noVal)\n if len(yesVal) == 0:\n raise PySimpleValidateException('yesVal argument must be a non-empty string.')\n if len(noVal) == 0:\n raise PySimpleValidateException('noVal argument must be a non-empty string.')\n if (yesVal == noVal) or (not caseSensitive and yesVal.upper() == noVal.upper()):\n raise PySimpleValidateException('yesVal and noVal arguments must be different.')\n if (yesVal[0] == noVal[0]) or (not caseSensitive and yesVal[0].upper() == noVal[0].upper()):\n raise PySimpleValidateException('first character of yesVal and noVal arguments must be different')\n\n returnNow, value = _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg)\n if returnNow:\n return value\n\n if caseSensitive:\n if value in (yesVal, yesVal[0]):\n return yesVal\n elif value in (noVal, noVal[0]):\n return noVal\n else:\n if value.upper() in (yesVal.upper(), yesVal[0].upper()):\n return yesVal\n elif value.upper() in (noVal.upper(), noVal[0].upper()):\n return noVal\n\n _raiseValidationException(_('%r is not a valid %s/%s response.') % (_errstr(value), yesVal, noVal), excMsg)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nraise ValidationException if value is not a true or false response. 
Returns True or False.", "response": "def validateBool(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, trueVal='True', falseVal='False', caseSensitive=False, excMsg=None):\n \"\"\"Raises ValidationException if value is not a true or false response.\n Returns True or False, not value.\n\n * value (str): The value being validated as a true/false response.\n * blank (bool): If True, a blank string will be accepted. Defaults to False.\n * strip (bool, str, None): If None, whitespace is stripped from value. If a str, the characters in it are stripped from value. If False, nothing is stripped.\n * allowlistRegexes (Sequence, None): A sequence of regex str that will explicitly pass validation, even if they aren't numbers.\n * blocklistRegexes (Sequence, None): A sequence of regex str or (regex_str, response_str) tuples that, if matched, will explicitly fail validation.\n * excMsg (str): A custom message to use in the raised ValidationException.\n\n >>> import pysimplevalidate as pysv\n >>> pysv.validateBool('true')\n True\n >>> pysv.validateBool('F')\n False\n \"\"\"\n\n # Validate parameters. 
TODO - can probably improve this to remove the duplication.\n _validateGenericParameters(blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n\n returnNow, value = _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg)\n if returnNow:\n return value\n\n # Replace the exception messages used in validateYesNo():\n trueVal = str(trueVal)\n falseVal = str(falseVal)\n if len(trueVal) == 0:\n raise PySimpleValidateException('trueVal argument must be a non-empty string.')\n if len(falseVal) == 0:\n raise PySimpleValidateException('falseVal argument must be a non-empty string.')\n if (trueVal == falseVal) or (not caseSensitive and trueVal.upper() == falseVal.upper()):\n raise PySimpleValidateException('trueVal and falseVal arguments must be different.')\n if (trueVal[0] == falseVal[0]) or (not caseSensitive and trueVal[0].upper() == falseVal[0].upper()):\n raise PySimpleValidateException('first character of trueVal and falseVal arguments must be different')\n\n result = validateYesNo(value, blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes, yesVal=trueVal, noVal=falseVal, caseSensitive=caseSensitive, excMsg=None)\n\n # Return a bool value instead of a string.\n if result == trueVal:\n return True\n elif result == falseVal:\n return False\n else:\n assert False, 'inner validateYesNo() call returned something that was not yesVal or noVal. This should never happen.'"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef validateState(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, excMsg=None, returnStateName=False):\n\n # TODO - note that this is USA-centric. 
I should work on trying to make this more international.\n\n # Validate parameters.\n _validateGenericParameters(blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n\n returnNow, value = _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg)\n if returnNow:\n return value\n\n if value.upper() in USA_STATES_UPPER.keys(): # check if value is a state abbreviation\n if returnStateName:\n return USA_STATES[value.upper()] # Return full state name.\n else:\n return value.upper() # Return abbreviation.\n elif value.title() in USA_STATES.values(): # check if value is a state name\n if returnStateName:\n return value.title() # Return full state name.\n else:\n return USA_STATES_REVERSED[value.title()] # Return abbreviation.\n\n _raiseValidationException(_('%r is not a state.') % (_errstr(value)), excMsg)", "response": "Raises ValidationException if value is not a USA state. Returns the capitalized state abbreviation, or the titlecased state name if returnStateName is True."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nraise ValidationException if value is not a month. Returns the titlecased month name.", "response": "def validateMonth(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, monthNames=ENGLISH_MONTHS, excMsg=None):\n \"\"\"Raises ValidationException if value is not a month, like 'Jan' or 'March'.\n Returns the titlecased month.\n\n * value (str): The value being validated as a month.\n * blank (bool): If True, a blank string will be accepted. Defaults to False.\n * strip (bool, str, None): If None, whitespace is stripped from value. If a str, the characters in it are stripped from value. 
If False, nothing is stripped.\n * allowlistRegexes (Sequence, None): A sequence of regex str that will explicitly pass validation, even if they aren't numbers.\n * blocklistRegexes (Sequence, None): A sequence of regex str or (regex_str, response_str) tuples that, if matched, will explicitly fail validation.\n * monthNames (Mapping): A mapping of uppercase month abbreviations to month names, i.e. {'JAN': 'January', ... }. The default provides English month names.\n * excMsg (str): A custom message to use in the raised ValidationException.\n\n >>> import pysimplevalidate as pysv\n >>> pysv.validateMonth('Jan')\n 'January'\n >>> pysv.validateMonth('MARCH')\n 'March'\n \"\"\"\n\n # returns full month name, e.g. 'January'\n\n # Validate parameters.\n _validateGenericParameters(blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes)\n\n returnNow, value = _prevalidationCheck(value, blank, strip, allowlistRegexes, blocklistRegexes, excMsg)\n if returnNow:\n return value\n\n\n try:\n if (monthNames == ENGLISH_MONTHS) and (1 <= int(value) <= 12): # This check here only applies to months, not when validateDayOfWeek() calls this function.\n return ENGLISH_MONTH_NAMES[int(value) - 1]\n except:\n pass # continue if the user didn't enter a number 1 to 12.\n\n # Both month names and month abbreviations will be at least 3 characters.\n if len(value) < 3:\n _raiseValidationException(_('%r is not a month.') % (_errstr(value)), excMsg)\n\n if value[:3].upper() in monthNames.keys(): # check if value is a month abbreviation\n return monthNames[value[:3].upper()] # It turns out that titlecase is good for all the months.\n elif value.upper() in monthNames.values(): # check if value is a month name\n return value.title()\n\n _raiseValidationException(_('%r is not a month.') % (_errstr(value)), excMsg)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nraise ValidationException if value is not a day of the week such as 
Mon or Friday. Returns the titlecased day of the week.", "response": "def validateDayOfWeek(value, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, dayNames=ENGLISH_DAYS_OF_WEEK, excMsg=None):\n \"\"\"Raises ValidationException if value is not a day of the week, such as 'Mon' or 'Friday'.\n Returns the titlecased day of the week.\n\n * value (str): The value being validated as a day of the week.\n * blank (bool): If True, a blank string will be accepted. Defaults to False.\n * strip (bool, str, None): If None, whitespace is stripped from value. If a str, the characters in it are stripped from value. If False, nothing is stripped.\n * allowlistRegexes (Sequence, None): A sequence of regex str that will explicitly pass validation, even if they aren't numbers.\n * blocklistRegexes (Sequence, None): A sequence of regex str or (regex_str, response_str) tuples that, if matched, will explicitly fail validation.\n * dayNames (Mapping): A mapping of uppercase day abbreviations to day names, i.e. {'SUN': 'Sunday', ...} The default provides English day names.\n * excMsg (str): A custom message to use in the raised ValidationException.\n\n >>> import pysimplevalidate as pysv\n >>> pysv.validateDayOfWeek('mon')\n 'Monday'\n >>> pysv.validateDayOfWeek('THURSday')\n 'Thursday'\n \"\"\"\n\n # TODO - reuse validateChoice for this function\n\n # returns full day of the week str, e.g. 
'Sunday'\n\n # Reuses validateMonth.\n try:\n return validateMonth(value, blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes, monthNames=ENGLISH_DAYS_OF_WEEK)\n except:\n # Replace the exception message.\n _raiseValidationException(_('%r is not a day of the week') % (_errstr(value)), excMsg)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef validateDayOfMonth(value, year, month, blank=False, strip=None, allowlistRegexes=None, blocklistRegexes=None, excMsg=None):\n try:\n daysInMonth = calendar.monthrange(year, month)[1]\n except:\n raise PySimpleValidateException('invalid arguments for year and/or month')\n\n try:\n return validateInt(value, blank=blank, strip=strip, allowlistRegexes=allowlistRegexes, blocklistRegexes=blocklistRegexes, min=1, max=daysInMonth)\n except:\n # Replace the exception message.\n _raiseValidationException(_('%r is not a day in the month of %s %s') % (_errstr(value), ENGLISH_MONTH_NAMES[month - 1], year), excMsg)", "response": "Raises ValidationException if value is not a day of the month for the given year and month."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_level(level_string):\n levels = {'debug': logging.DEBUG, 'info': logging.INFO,\n 'warning': logging.WARNING, 'error': logging.ERROR,\n 'critical': logging.CRITICAL}\n try:\n level = levels[level_string.lower()]\n except KeyError:\n sys.exit('{0} is not a recognized logging level'.format(level_string))\n else:\n return level", "response": "Returns an appropriate logging level integer from a string name"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nconfiguring the logging system.", "response": "def config_logging(no_log_file, log_to, log_level, silent, verbosity):\n \"\"\"\n Configures and generates a Logger object, 'openaccess_epub' based on common\n parameters used for console interface 
script execution in OpenAccess_EPUB.\n\n These parameters are:\n no_log_file\n Boolean. Disables logging to file. If set to True, log_to and\n log_level become irrelevant.\n log_to\n A string name indicating a file path for logging.\n log_level\n Logging level, one of: 'debug', 'info', 'warning', 'error', 'critical'\n silent\n Boolean\n verbosity\n Console logging level, one of: 'debug', 'info', 'warning', 'error',\n 'critical'\n\n This method currently only configures a console StreamHandler with a\n message-only Formatter.\n \"\"\"\n\n log_level = get_level(log_level)\n console_level = get_level(verbosity)\n\n #We want to configure our openaccess_epub as the parent log\n log = logging.getLogger('openaccess_epub')\n log.setLevel(logging.DEBUG) # Don't filter at the log level\n standard = logging.Formatter(STANDARD_FORMAT)\n message_only = logging.Formatter(MESSAGE_ONLY_FORMAT)\n\n #Only add FileHandler IF it's allowed AND we have a name for it\n if not no_log_file and log_to is not None:\n fh = logging.FileHandler(filename=log_to)\n fh.setLevel(log_level)\n fh.setFormatter(standard)\n log.addHandler(fh)\n\n #Add on the console StreamHandler at verbosity level if silent not set\n if not silent:\n sh_echo = logging.StreamHandler(sys.stdout)\n sh_echo.setLevel(console_level)\n sh_echo.setFormatter(message_only)\n log.addHandler(sh_echo)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef rmp_pixel_deg_xys(vecX, vecY, vecPrfSd, tplPngSize,\n varExtXmin, varExtXmax, varExtYmin, varExtYmax):\n \"\"\"Remap x, y, sigma parameters from pixel to degree.\n\n Parameters\n ----------\n vecX : 1D numpy array\n Array with possible x parameters in pixels\n vecY : 1D numpy array\n Array with possible y parameters in pixels\n vecPrfSd : 1D numpy array\n Array with possible sd parameters in pixels\n tplPngSize : tuple, 2\n Pixel dimensions of the visual space in pixel (width, height).\n varExtXmin : float\n Extent of visual space 
from centre in negative x-direction (width)\n varExtXmax : float\n Extent of visual space from centre in positive x-direction (width)\n varExtYmin : int\n Extent of visual space from centre in negative y-direction (height)\n varExtYmax : float\n Extent of visual space from centre in positive y-direction (height)\n\n Returns\n -------\n vecX : 1D numpy array\n Array with possible x parameters in degree\n vecY : 1D numpy array\n Array with possible y parameters in degree\n vecPrfSd : 1D numpy array\n Array with possible sd parameters in degree\n\n \"\"\"\n\n # Remap modelled x-positions of the pRFs:\n vecXdgr = rmp_rng(vecX, varExtXmin, varExtXmax, varOldThrMin=0.0,\n varOldAbsMax=(tplPngSize[0] - 1))\n\n # Remap modelled y-positions of the pRFs:\n vecYdgr = rmp_rng(vecY, varExtYmin, varExtYmax, varOldThrMin=0.0,\n varOldAbsMax=(tplPngSize[1] - 1))\n\n # We calculate the scaling factor from pixels to degrees of visual angle\n # separately for the x- and the y-directions (the two should be the same).\n varPix2DgrX = np.divide((varExtXmax - varExtXmin), tplPngSize[0])\n varPix2DgrY = np.divide((varExtYmax - varExtYmin), tplPngSize[1])\n\n # Check whether varDgr2PixX and varDgr2PixY are similar:\n strErrMsg = 'ERROR. 
The ratio of X and Y dimensions in ' + \\\n 'stimulus space (in pixels) do not agree'\n assert 0.5 > np.absolute((varPix2DgrX - varPix2DgrY)), strErrMsg\n\n # Convert prf sizes from pixels to degrees of visual angle\n vecPrfSdDgr = np.multiply(vecPrfSd, varPix2DgrX)\n\n # Return new values.\n return vecXdgr, vecYdgr, vecPrfSdDgr", "response": "Remap x y sigma parameters from pixel to degree."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate an array with all possible model parameter combinations for a given set of pRF models.", "response": "def crt_mdl_prms(tplPngSize, varNum1, varExtXmin, varExtXmax, varNum2,\n varExtYmin, varExtYmax, varNumPrfSizes, varPrfStdMin,\n varPrfStdMax, kwUnt='pix', kwCrd='crt'):\n \"\"\"Create an array with all possible model parameter combinations\n\n Parameters\n ----------\n tplPngSize : tuple, 2\n Pixel dimensions of the visual space (width, height).\n varNum1 : int, positive\n Number of x-positions to model\n varExtXmin : float\n Extent of visual space from centre in negative x-direction (width)\n varExtXmax : float\n Extent of visual space from centre in positive x-direction (width)\n varNum2 : float, positive\n Number of y-positions to model.\n varExtYmin : int\n Extent of visual space from centre in negative y-direction (height)\n varExtYmax : float\n Extent of visual space from centre in positive y-direction (height)\n varNumPrfSizes : int, positive\n Number of pRF sizes to model.\n varPrfStdMin : float, positive\n Minimum pRF model size (standard deviation of 2D Gaussian)\n varPrfStdMax : float, positive\n Maximum pRF model size (standard deviation of 2D Gaussian)\n kwUnt: str\n Keyword to set the unit for model parameter combinations; model\n parameters can be in pixels [\"pix\"] or degrees of visual angles [\"deg\"]\n kwCrd: str\n Keyword to set the coordinate system for model parameter combinations;\n parameters can be in cartesian [\"crt\"] or polar [\"pol\"] coordinates\n\n Returns\n 
-------\n aryMdlParams : 2d numpy array, shape [n_x_pos*n_y_pos*n_sd, 3]\n Model parameters (x, y, sigma) for all models.\n\n \"\"\"\n\n # Number of pRF models to be created (i.e. number of possible\n # combinations of x-position, y-position, and standard deviation):\n varNumMdls = varNum1 * varNum2 * varNumPrfSizes\n\n # Array for the x-position, y-position, and standard deviations for\n # which pRF model time courses are going to be created, where the\n # columns correspond to: (1) the x-position, (2) the y-position, and\n # (3) the standard deviation. The parameters are in units of the\n # upsampled visual space.\n aryMdlParams = np.zeros((varNumMdls, 3), dtype=np.float32)\n\n # Counter for parameter array:\n varCntMdlPrms = 0\n\n if kwCrd == 'crt':\n\n # Vector with the modelled x-positions of the pRFs:\n vecX = np.linspace(varExtXmin, varExtXmax, varNum1, endpoint=True)\n\n # Vector with the modelled y-positions of the pRFs:\n vecY = np.linspace(varExtYmin, varExtYmax, varNum2, endpoint=True)\n\n # Vector with standard deviations of pRF models (in degree of vis angle):\n vecPrfSd = np.linspace(varPrfStdMin, varPrfStdMax, varNumPrfSizes,\n endpoint=True)\n\n if kwUnt == 'deg':\n # since parameters are already in degrees of visual angle,\n # we do nothing\n pass\n\n elif kwUnt == 'pix':\n # convert parameters to pixels\n vecX, vecY, vecPrfSd = rmp_deg_pixel_xys(vecX, vecY, vecPrfSd,\n tplPngSize, varExtXmin,\n varExtXmax, varExtYmin,\n varExtYmax)\n\n else:\n print('Unknown keyword provided for possible model parameter ' +\n 'combinations: should be either pix or deg')\n\n # Put all combinations of x-position, y-position, and standard\n # deviations into the array:\n\n # Loop through x-positions:\n for idxX in range(0, varNum1):\n\n # Loop through y-positions:\n for idxY in range(0, varNum2):\n\n # Loop through standard deviations (of Gaussian pRF models):\n for idxSd in range(0, varNumPrfSizes):\n\n # Place index and parameters in array:\n 
aryMdlParams[varCntMdlPrms, 0] = vecX[idxX]\n aryMdlParams[varCntMdlPrms, 1] = vecY[idxY]\n aryMdlParams[varCntMdlPrms, 2] = vecPrfSd[idxSd]\n\n # Increment parameter index:\n varCntMdlPrms += 1\n\n elif kwCrd == 'pol':\n\n # Vector with the radial position:\n vecRad = np.linspace(0.0, varExtXmax, varNum1, endpoint=True)\n\n # Vector with the angular position:\n vecTht = np.linspace(0.0, 2*np.pi, varNum2, endpoint=False)\n\n # Get all possible combinations on the grid, using matrix indexing ij\n # of output\n aryRad, aryTht = np.meshgrid(vecRad, vecTht, indexing='ij')\n\n # Flatten arrays to be able to combine them with meshgrid\n vecRad = aryRad.flatten()\n vecTht = aryTht.flatten()\n\n # Convert from polar to cartesian\n vecX, vecY = map_pol_to_crt(vecTht, vecRad)\n\n # Vector with standard deviations of pRF models (in degree of vis angle):\n vecPrfSd = np.linspace(varPrfStdMin, varPrfStdMax, varNumPrfSizes,\n endpoint=True)\n\n if kwUnt == 'deg':\n # since parameters are already in degrees of visual angle,\n # we do nothing\n pass\n\n elif kwUnt == 'pix':\n # convert parameters to pixels\n vecX, vecY, vecPrfSd = rmp_deg_pixel_xys(vecX, vecY, vecPrfSd,\n tplPngSize, varExtXmin,\n varExtXmax, varExtYmin,\n varExtYmax)\n # Put all combinations of x-position, y-position, and standard\n # deviations into the array:\n\n # Loop through x-positions:\n for idxXY in range(0, varNum1*varNum2):\n\n # Loop through standard deviations (of Gaussian pRF models):\n for idxSd in range(0, varNumPrfSizes):\n\n # Place index and parameters in array:\n aryMdlParams[varCntMdlPrms, 0] = vecX[idxXY]\n aryMdlParams[varCntMdlPrms, 1] = vecY[idxXY]\n aryMdlParams[varCntMdlPrms, 2] = vecPrfSd[idxSd]\n\n # Increment parameter index:\n varCntMdlPrms += 1\n\n else:\n print('Unknown keyword provided for coordinate system for model ' +\n 'parameter combinations: should be either crt or pol')\n\n return aryMdlParams"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncreating 
2D Gauss models to spatial conditions.", "response": "def crt_mdl_rsp(arySptExpInf, tplPngSize, aryMdlParams, varPar, strCrd='crt',\n lgcPrint=True):\n \"\"\"Create responses of 2D Gauss models to spatial conditions.\n\n Parameters\n ----------\n arySptExpInf : 3d numpy array, shape [n_x_pix, n_y_pix, n_conditions]\n All spatial conditions stacked along second axis.\n tplPngSize : tuple, 2\n Pixel dimensions of the visual space (width, height).\n aryMdlParams : 2d numpy array, shape [n_x_pos*n_y_pos*n_sd, 3]\n Model parameters (x, y, sigma) for all models.\n varPar : int, positive\n Number of cores to parallelize over.\n strCrd, string, either 'crt' or 'pol'\n Whether model parameters are provided in cartesian or polar coordinates\n lgcPrint : boolean\n Whether print statements should be executed.\n\n Returns\n -------\n aryMdlCndRsp : 2d numpy array, shape [n_x_pos*n_y_pos*n_sd, n_cond]\n Responses of 2D Gauss models to spatial conditions.\n\n \"\"\"\n\n if varPar == 1:\n # if the number of cores requested by the user is equal to 1,\n # we save the overhead of multiprocessing by calling aryMdlCndRsp\n # directly\n aryMdlCndRsp = cnvl_2D_gauss(0, aryMdlParams, arySptExpInf,\n tplPngSize, None, strCrd=strCrd)\n\n else:\n\n # The long array with all the combinations of model parameters is put\n # into separate chunks for parallelisation, using a list of arrays.\n lstMdlParams = np.array_split(aryMdlParams, varPar)\n\n # Create a queue to put the results in:\n queOut = mp.Queue()\n\n # Empty list for results from parallel processes (for pRF model\n # responses):\n lstMdlTc = [None] * varPar\n\n # Empty list for processes:\n lstPrcs = [None] * varPar\n \n if lgcPrint:\n print('---------Running parallel processes')\n\n # Create processes:\n for idxPrc in range(0, varPar):\n lstPrcs[idxPrc] = mp.Process(target=cnvl_2D_gauss,\n args=(idxPrc, lstMdlParams[idxPrc],\n arySptExpInf, tplPngSize, queOut\n ),\n kwargs={'strCrd': strCrd},\n )\n # Daemon (kills processes when 
exiting):\n lstPrcs[idxPrc].Daemon = True\n\n # Start processes:\n for idxPrc in range(0, varPar):\n lstPrcs[idxPrc].start()\n\n # Collect results from queue:\n for idxPrc in range(0, varPar):\n lstMdlTc[idxPrc] = queOut.get(True)\n\n # Join processes:\n for idxPrc in range(0, varPar):\n lstPrcs[idxPrc].join()\n\n if lgcPrint:\n print('---------Collecting results from parallel processes')\n # Put output arrays from parallel process into one big array\n lstMdlTc = sorted(lstMdlTc)\n aryMdlCndRsp = np.empty((0, arySptExpInf.shape[-1]))\n for idx in range(0, varPar):\n aryMdlCndRsp = np.concatenate((aryMdlCndRsp, lstMdlTc[idx][1]),\n axis=0)\n\n # Clean up:\n del(lstMdlParams)\n del(lstMdlTc)\n\n return aryMdlCndRsp.astype('float16')"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef crt_nrl_tc(aryMdlRsp, aryCnd, aryOns, aryDrt, varTr, varNumVol,\n varTmpOvsmpl, lgcPrint=True):\n \"\"\"Create temporally upsampled neural time courses.\n\n Parameters\n ----------\n aryMdlRsp : 2d numpy array, shape [n_x_pos * n_y_pos * n_sd, n_cond]\n Responses of 2D Gauss models to spatial conditions.\n aryCnd : np.array\n 1D array with condition identifiers (every condition has its own int)\n aryOns : np.array, same len as aryCnd\n 1D array with condition onset times in seconds.\n aryDrt : np.array, same len as aryCnd\n 1D array with condition durations of different conditions in seconds.\n varTr : float, positive\n Time to repeat (TR) of the (fMRI) experiment\n varNumVol : float, positive\n Number of data points (volumes) in the (fMRI) data\n varTmpOvsmpl : float, positive\n Factor by which the time courses should be temporally upsampled.\n lgcPrint: boolean, default True\n Should print messages be sent to user?\n\n Returns\n -------\n aryNrlTc : 2d numpy array,\n shape [n_x_pos * n_y_pos * n_sd, varNumVol*varTmpOvsmpl]\n Neural time course models in temporally upsampled space\n\n Notes\n ---------\n [1] This function 
first creates boxcar functions based on the conditions\n as they are specified in the temporal experiment information, provided\n by the user in the csv file. Second, it then replaces the 1s in the\n boxcar function by predicted condition values that were previously\n calculated based on the overlap between the assumed 2D Gaussian for the\n current model and the presented stimulus aperture for that condition.\n Since the 2D Gaussian is normalized, the overlap value will be between\n 0 and 1.\n\n \"\"\"\n\n # adjust the input, if necessary, such that input is 2D\n tplInpShp = aryMdlRsp.shape\n aryMdlRsp = aryMdlRsp.reshape((-1, aryMdlRsp.shape[-1]))\n\n # the first spatial condition might code the baseline (blank periods) with\n # all zeros. If this is the case, remove the first spatial condition, since\n # for temporal conditions this is removed automatically below and we need\n # temporal and spatial conditions to match\n if np.all(aryMdlRsp[:, 0] == 0):\n if lgcPrint:\n print('------------Removed first spatial condition (all zeros)')\n aryMdlRsp = aryMdlRsp[:, 1:]\n\n # create boxcar functions in temporally upsampled space\n aryBxCarTmp = create_boxcar(aryCnd, aryOns, aryDrt, varTr, varNumVol,\n aryExclCnd=np.array([0.]),\n varTmpOvsmpl=varTmpOvsmpl).T\n\n # Make sure that aryMdlRsp and aryBxCarTmp have the same number of\n # conditions\n assert aryMdlRsp.shape[-1] == aryBxCarTmp.shape[0]\n\n # pre-allocate pixelwise boxcar array\n aryNrlTc = np.zeros((aryMdlRsp.shape[0], aryBxCarTmp.shape[-1]),\n dtype='float16')\n # loop through boxcar functions of conditions\n for ind, vecCndOcc in enumerate(aryBxCarTmp):\n # get response predicted by models for this specific spatial condition\n rspValPrdByMdl = aryMdlRsp[:, ind]\n # insert predicted response value several times using broad-casting\n aryNrlTc[..., vecCndOcc.astype('bool')] = rspValPrdByMdl[:, None]\n\n # determine output shape\n tplOutShp = tplInpShp[:-1] + (int(varNumVol*varTmpOvsmpl), )\n\n return 
aryNrlTc.reshape(tplOutShp).astype('float16')", "response": "This function creates neural time course models in a temporally upsampled space."} {"SOURCE": "codesearchnet", "instruction": "Can you write a function in Python 3 where it\ncreates all spatial x feature prf time courses for a given set of features.", "response": "def crt_prf_ftr_tc(aryMdlRsp, aryTmpExpInf, varNumVol, varTr, varTmpOvsmpl,\n switchHrfSet, tplPngSize, varPar, dctPrm=None,\n lgcPrint=True):\n \"\"\"Create all spatial x feature prf time courses.\n\n Parameters\n ----------\n aryMdlRsp : 2d numpy array, shape [n_x_pos * n_y_pos * n_sd, n_cond]\n Responses of 2D Gauss models to spatial conditions\n aryTmpExpInf: 2d numpy array, shape [unknown, 4]\n Temporal information about conditions\n varNumVol : float, positive\n Number of volumes of the (fMRI) data.\n varTr : float, positive\n Time to repeat (TR) of the (fMRI) experiment.\n varTmpOvsmpl : int, positive\n Factor by which the data has been temporally upsampled.\n switchHrfSet : int, (1, 2, 3)\n Switch to determine which hrf basis functions are used\n tplPngSize : tuple\n Pixel dimensions of the visual space (width, height).\n varPar : int, positive\n Number of cores to parallelize over.\n dctPrm : dictionary, default None\n Dictionary with customized hrf parameters. 
If this is None, default\n        hrf parameters will be used.\n    lgcPrint: boolean, default True\n        Should print messages be sent to user?\n\n    Returns\n    -------\n    aryNrlTcConv : 3d numpy array,\n        shape [nr of models, nr of unique features, varNumVol]\n        Prf time course models\n\n    \"\"\"\n\n    # Identify number of unique features\n    vecFeat = np.unique(aryTmpExpInf[:, 3])\n    vecFeat = vecFeat[np.nonzero(vecFeat)[0]]\n\n    # Preallocate the output array\n    aryPrfTc = np.zeros((aryMdlRsp.shape[0], 0, varNumVol),\n                        dtype=np.float32)\n\n    # Loop over unique features\n    for indFtr, ftr in enumerate(vecFeat):\n\n        if lgcPrint:\n            print('---------Create prf time course model for feature ' +\n                  str(ftr))\n        # Derive spatial conditions, onsets and durations for this specific\n        # feature\n        aryTmpCnd = aryTmpExpInf[aryTmpExpInf[:, 3] == ftr, 0]\n        aryTmpOns = aryTmpExpInf[aryTmpExpInf[:, 3] == ftr, 1]\n        aryTmpDrt = aryTmpExpInf[aryTmpExpInf[:, 3] == ftr, 2]\n\n        # Create temporally upsampled neural time courses.\n        aryNrlTcTmp = crt_nrl_tc(aryMdlRsp, aryTmpCnd, aryTmpOns, aryTmpDrt,\n                                 varTr, varNumVol, varTmpOvsmpl,\n                                 lgcPrint=lgcPrint)\n        # Convolve with hrf to create model pRF time courses.\n        aryPrfTcTmp = crt_prf_tc(aryNrlTcTmp, varNumVol, varTr, varTmpOvsmpl,\n                                 switchHrfSet, tplPngSize, varPar,\n                                 dctPrm=dctPrm, lgcPrint=lgcPrint)\n        # Add temporal time course to time course that will be returned\n        aryPrfTc = np.concatenate((aryPrfTc, aryPrfTcTmp), axis=1)\n\n    return aryPrfTc"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef fnd_unq_rws(A, return_index=False, return_inverse=False):\n\n    A = np.require(A, requirements='C')\n    assert A.ndim == 2, \"array must be 2-dim'l\"\n\n    B = np.unique(A.view([('', A.dtype)]*A.shape[1]),\n                  return_index=return_index,\n                  return_inverse=return_inverse)\n\n    if return_index or return_inverse:\n        return (B[0].view(A.dtype).reshape((-1, A.shape[1]), order='C'),) \\\n            + B[1:]\n    else:\n        return 
B.view(A.dtype).reshape((-1, A.shape[1]), order='C')", "response": "Find unique rows in 2D array."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\nasync def _request(\n self, method: str, endpoint: str, *, headers: dict = None) -> dict:\n \"\"\"Make a request against air-matters.com.\"\"\"\n url = '{0}/{1}'.format(API_URL_SCAFFOLD, endpoint)\n\n if not headers:\n headers = {}\n headers.update({\n 'Host': DEFAULT_HOST,\n 'Origin': DEFAULT_ORIGIN,\n 'Referer': DEFAULT_ORIGIN,\n 'User-Agent': DEFAULT_USER_AGENT,\n })\n\n async with self._websession.request(\n method,\n url,\n headers=headers,\n ) as resp:\n try:\n resp.raise_for_status()\n return await resp.json(content_type=None)\n except client_exceptions.ClientError as err:\n raise RequestError(\n 'Error requesting data from {0}: {1}'.format(\n endpoint, err)) from None", "response": "Make a request against air - matters. com."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef filter_transform_response(get_response, params):\n\n async def decorated_filter(request, *args, **kwargs):\n response = await get_response(request, *args, **kwargs)\n\n if isinstance(response, HTTPResponse) and \\\n not isinstance(response, Response):\n return response\n\n if not isinstance(response, Response):\n response = Response(response)\n\n return response\n\n return decorated_filter", "response": "This filter handles the response and transforms it to a Response instance."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nreturning user report data.", "response": "async def _raw_user_report_data(self) -> list:\n \"\"\"Return user report data (if accompanied by latitude/longitude).\"\"\"\n data = await self._request('get', 'map/markers')\n return [\n location for location in data\n if location['latitude'] and location['longitude']\n ]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that 
can\nreturn a list of states.", "response": "async def _raw_state_data(self) -> list:\n \"\"\"Return a list of states.\"\"\"\n data = await self._request('get', 'states')\n return [\n location for location in data\n if location['name'] != 'United States'\n ]"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the nearest report to a lat / lon.", "response": "async def nearest_by_coordinates(\n self, latitude: float, longitude: float) -> dict:\n \"\"\"Get the nearest report (with local and state info) to a lat/lon.\"\"\"\n # Since user data is more granular than state or CDC data, find the\n # user report whose coordinates are closest to the provided\n # coordinates:\n nearest_user_report = get_nearest_by_coordinates(\n await self.user_reports(), 'latitude', 'longitude', latitude,\n longitude)\n\n try:\n # If the user report corresponds to a known state in\n # flunearyou.org's database, we can safely assume that's the\n # correct state:\n nearest_state = next((\n state for state in await self.state_data()\n if state['place_id'] == nearest_user_report['contained_by']))\n except StopIteration:\n # If a place ID doesn't exist (e.g., ZIP Code 98012 doesn't have a\n # place ID), calculate the nearest state by measuring the distance\n # from the provided latitude/longitude to flunearyou.org's\n # latitude/longitude that defines each state:\n nearest_state = get_nearest_by_coordinates(\n await self.state_data(), 'lat', 'lon', latitude, longitude)\n\n return {'local': nearest_user_report, 'state': nearest_state}"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef initialize(self, maxsize, history=None):\n '''size specifies the maximum amount of history to keep'''\n super().__init__()\n\n self.maxsize = int(maxsize)\n self.history = deque(maxlen=self.maxsize) # Preserves order history\n\n # If `items` are specified, then initialize with them\n if history is not None:\n for 
key, value in history:\n self.insert(key, value)", "response": "initialize the internal state of the object"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef insert(self, key, value):\n '''Adds a new key-value pair. Returns any discarded values.'''\n\n # Add to history and catch expectorate\n if len(self.history) == self.maxsize:\n expectorate = self.history[0]\n else:\n expectorate = None\n\n self.history.append((key, value))\n\n # Add to the appropriate list of values\n if key in self:\n super().__getitem__(key).append(value)\n else:\n super().__setitem__(key, [value])\n\n # Clean up old values\n if expectorate is not None:\n old_key, old_value = expectorate\n super().__getitem__(old_key).pop(0)\n if len(super().__getitem__(old_key)) == 0:\n super().__delitem__(old_key)\n\n return (old_key, old_value)", "response": "Adds a new key - value pair. Returns any discarded values."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ngetting the recently inserted values up to a key", "response": "def up_to(self, key):\n '''Gets the recently inserted values up to a key'''\n for okey, ovalue in reversed(self.history):\n if okey == key:\n break\n else:\n yield ovalue"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef resource(self, uri, methods=frozenset({'GET'}), **kwargs):\n def decorator(f):\n if kwargs.get('stream'):\n f.is_stream = kwargs['stream']\n self.add_resource(f, uri=uri, methods=methods, **kwargs)\n\n return decorator", "response": "Decorator to add a resource route to the routing list."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nregisters a resource route.", "response": "def add_resource(self, handler, uri, methods=frozenset({'GET'}),\n **kwargs):\n \"\"\"\n Register a resource route.\n\n :param handler: function or class instance\n :param uri: path of the URL\n 
:param methods: list or tuple of methods allowed\n :param host:\n :param strict_slashes:\n :param version:\n :param name: user defined route name for url_for\n :param filters: List of callable that will filter request and\n response data\n :param validators: List of callable added to the filter list.\n\n :return: function or class instance\n \"\"\"\n\n sanic_args = ('host', 'strict_slashes', 'version', 'name')\n view_kwargs = dict((k, v) for k, v in kwargs.items()\n if k in sanic_args)\n\n filters = kwargs.get('filters', self.default_filters)\n validators = kwargs.get('validators', [])\n\n filter_list = list(filters) + list(validators)\n filter_options = {\n 'filter_list': filter_list,\n 'handler': handler,\n 'uri': uri,\n 'methods': methods\n }\n filter_options.update(kwargs)\n\n handler = self.init_filters(filter_list, filter_options)(handler)\n return self.add_route(handler=handler, uri=uri, methods=methods,\n **view_kwargs)"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef check(session, rev_id, page_id=None, radius=defaults.RADIUS,\n before=None, window=None, rvprop=None):\n \"\"\"\n Checks the revert status of a revision. 
With this method, you can\n determine whether an edit is a 'reverting' edit, was 'reverted' by another\n edit and/or was 'reverted_to' by another edit.\n\n :Parameters:\n session : :class:`mwapi.Session`\n An API session to make use of\n rev_id : int\n the ID of the revision to check\n page_id : int\n the ID of the page the revision occupies (slower if not provided)\n radius : int\n a positive integer indicating the maximum number of revisions\n that can be reverted\n before : :class:`mwtypes.Timestamp`\n if set, limits the search for *reverting* revisions to those which\n were saved before this timestamp\n window : int\n if set, limits the search for *reverting* revisions to those which\n were saved within `window` seconds after the reverted edit\n rvprop : set( str )\n a set of properties to include in revisions\n\n :Returns:\n A triple :class:`mwreverts.Revert` | `None`\n\n * reverting -- If this edit reverted other edit(s)\n * reverted -- If this edit was reverted by another edit\n * reverted_to -- If this edit was reverted to by another edit\n\n :Example:\n\n >>> import mwapi\n >>> import mwreverts.api\n >>>\n >>> session = mwapi.Session(\"https://en.wikipedia.org\")\n >>>\n >>> def print_revert(revert):\n ... if revert is None:\n ... print(None)\n ... else:\n ... print(revert.reverting['revid'],\n ... [r['revid'] for r in revert.reverteds],\n ... revert.reverted_to['revid'])\n ...\n >>> reverting, reverted, reverted_to = \\\n ... mwreverts.api.check(session, 679778587)\n >>> print_revert(reverting)\n None\n >>> print_revert(reverted)\n 679778743 [679778587] 679742862\n >>> print_revert(reverted_to)\n None\n\n \"\"\"\n\n rev_id = int(rev_id)\n radius = int(radius)\n if radius < 1:\n raise TypeError(\"invalid radius. 
Expected a positive integer.\")\n\n page_id = int(page_id) if page_id is not None else None\n before = Timestamp(before) if before is not None else None\n\n rvprop = set(rvprop) if rvprop is not None else set()\n\n # If we don't have the page_id, we're going to need to look them up\n if page_id is None:\n page_id = get_page_id(session, rev_id)\n\n # Load history and current rev\n current_and_past_revs = list(n_edits_before(\n session,\n rev_id,\n page_id,\n n=radius + 1,\n rvprop={'ids', 'timestamp', 'sha1'} | rvprop\n ))\n\n if len(current_and_past_revs) < 1:\n raise KeyError(\"Revision {0} not found in page {1}.\"\n .format(rev_id, page_id))\n\n current_rev, past_revs = (\n current_and_past_revs[-1], # Current\n current_and_past_revs[:-1] # Past revisions\n )\n\n if window is not None and before is None:\n before = Timestamp(current_rev['timestamp']) + window\n\n # Load future revisions\n future_revs = list(n_edits_after(\n session,\n rev_id + 1,\n page_id,\n n=radius,\n timestamp=before,\n rvprop={'ids', 'timestamp', 'sha1'} | rvprop\n ))\n\n return build_revert_tuple(\n rev_id, past_revs, current_rev, future_revs, radius)", "response": "This method checks the revert status of a revision."} {"SOURCE": "codesearchnet", "instruction": "Implement a Python 3 function for\nchecking the revert status of a deleted revision.", "response": "def check_deleted(session, rev_id, title=None, timestamp=None,\n radius=defaults.RADIUS, before=None, window=None,\n rvprop=None):\n \"\"\"\n Checks the revert status of a deleted revision. 
With this method, you can\n determine whether an edit is a 'reverting' edit, was 'reverted' by another\n edit and/or was 'reverted_to' by another edit.\n\n :Parameters:\n session : :class:`mwapi.Session`\n An API session to make use of\n rev_id : int\n the ID of the revision to check\n title : str\n the title of the page the revision occupies (slower if not\n provided) Note that the MediaWiki API expects the title to\n include the namespace prefix (e.g. \"User_talk:EpochFail\")\n radius : int\n a positive integer indicating the maximum number of revisions\n that can be reverted\n before : :class:`mwtypes.Timestamp`\n if set, limits the search for *reverting* revisions to those which\n were saved before this timestamp\n window : int\n if set, limits the search for *reverting* revisions to those which\n were saved within `window` seconds after the reverted edit\n rvprop : set( str )\n a set of properties to include in revisions\n\n :Returns:\n A triple :class:`mwreverts.Revert` | `None`\n\n * reverting -- If this edit reverted other edit(s)\n * reverted -- If this edit was reverted by another edit\n * reverted_to -- If this edit was reverted to by another edit\n \"\"\"\n\n rev_id = int(rev_id)\n radius = int(radius)\n if radius < 1:\n raise TypeError(\"invalid radius. 
Expected a positive integer.\")\n\n title = str(title) if title is not None else None\n before = Timestamp(before) if before is not None else None\n\n rvprop = set(rvprop) if rvprop is not None else set()\n\n # If we don't have the title, we're going to need to look it up\n if title is None or timestamp is None:\n title, timestamp = get_deleted_title_and_timestamp(session, rev_id)\n\n # Load history and current rev\n current_and_past_revs = list(n_deleted_edits_before(\n session, rev_id, title, timestamp, n=radius + 1,\n rvprop={'ids', 'timestamp', 'sha1'} | rvprop\n ))\n\n if len(current_and_past_revs) < 1:\n raise KeyError(\"Revision {0} not found in page {1}.\"\n .format(rev_id, title))\n\n current_rev, past_revs = (\n current_and_past_revs[-1], # Current\n current_and_past_revs[:-1] # Past revisions\n )\n\n if window is not None and before is None:\n before = Timestamp(current_rev['timestamp']) + window\n\n # Load future revisions\n future_revs = list(n_deleted_edits_after(\n session, rev_id + 1, title, timestamp, n=radius, before=before,\n rvprop={'ids', 'timestamp', 'sha1'} | rvprop\n ))\n\n return build_revert_tuple(\n rev_id, past_revs, current_rev, future_revs, radius)"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef get_commits(repo_dir, old_commit, new_commit, hide_merges=True):\n repo = Repo(repo_dir)\n commits = repo.iter_commits(rev=\"{0}..{1}\".format(old_commit, new_commit))\n if hide_merges:\n return [x for x in commits if not x.summary.startswith(\"Merge \")]\n else:\n return list(commits)", "response": "Find all commits between two commit SHAs."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_commit_url(repo_url):\n if \"github.com\" in repo_url:\n return repo_url[:-4] if repo_url.endswith(\".git\") else repo_url\n if \"git.openstack.org\" in repo_url:\n uri = '/'.join(repo_url.split('/')[-2:])\n return 
\"https://github.com/{0}\".format(uri)\n\n # If it didn't match these conditions, just return it.\n return repo_url", "response": "Determine URL to view commits for repo."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef get_projects(osa_repo_dir, commit):\n # Check out the correct commit SHA from the repository\n repo = Repo(osa_repo_dir)\n checkout(repo, commit)\n\n yaml_files = glob.glob(\n '{0}/playbooks/defaults/repo_packages/*.yml'.format(osa_repo_dir)\n )\n yaml_parsed = []\n for yaml_file in yaml_files:\n with open(yaml_file, 'r') as f:\n yaml_parsed.append(yaml.load(f))\n\n merged_dicts = {k: v for d in yaml_parsed for k, v in d.items()}\n\n return normalize_yaml(merged_dicts)", "response": "Get all projects from multiple YAML files."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreads OSA role information at a particular commit.", "response": "def get_roles(osa_repo_dir, commit, role_requirements):\n \"\"\"Read OSA role information at a particular commit.\"\"\"\n repo = Repo(osa_repo_dir)\n\n checkout(repo, commit)\n\n log.info(\"Looking for file {f} in repo {r}\".format(r=osa_repo_dir,\n f=role_requirements))\n filename = \"{0}/{1}\".format(osa_repo_dir, role_requirements)\n with open(filename, 'r') as f:\n roles_yaml = yaml.load(f)\n\n return normalize_yaml(roles_yaml)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\ncreate an OAI - OSS report header for OpenStack - Ansible.", "response": "def make_osa_report(repo_dir, old_commit, new_commit,\n args):\n \"\"\"Create initial RST report header for OpenStack-Ansible.\"\"\"\n update_repo(repo_dir, args.osa_repo_url, args.update)\n\n # Are these commits valid?\n validate_commits(repo_dir, [old_commit, new_commit])\n\n # Do we have a valid commit range?\n validate_commit_range(repo_dir, old_commit, new_commit)\n\n # Get the commits in the range\n commits = 
get_commits(repo_dir, old_commit, new_commit)\n\n # Start off our report with a header and our OpenStack-Ansible commits.\n template_vars = {\n 'args': args,\n 'repo': 'openstack-ansible',\n 'commits': commits,\n 'commit_base_url': get_commit_url(args.osa_repo_url),\n 'old_sha': old_commit,\n 'new_sha': new_commit\n }\n return render_template('offline-header.j2', template_vars)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef make_report(storage_directory, old_pins, new_pins, do_update=False,\n version_mappings=None):\n \"\"\"Create RST report from a list of projects/roles.\"\"\"\n report = \"\"\n version_mappings = version_mappings or {}\n for new_pin in new_pins:\n repo_name, repo_url, commit_sha = new_pin\n commit_sha = version_mappings.get(repo_name, {}\n ).get(commit_sha, commit_sha)\n\n # Prepare our repo directory and clone the repo if needed. Only pull\n # if the user requests it.\n repo_dir = \"{0}/{1}\".format(storage_directory, repo_name)\n update_repo(repo_dir, repo_url, do_update)\n\n # Get the old SHA from the previous pins. If this pin didn't exist\n # in the previous OSA revision, skip it. 
This could happen with newly-\n # added projects and roles.\n try:\n commit_sha_old = next(x[2] for x in old_pins if x[0] == repo_name)\n except Exception:\n continue\n else:\n commit_sha_old = version_mappings.get(repo_name, {}\n ).get(commit_sha_old,\n commit_sha_old)\n\n # Loop through the commits and render our template.\n validate_commits(repo_dir, [commit_sha_old, commit_sha])\n commits = get_commits(repo_dir, commit_sha_old, commit_sha)\n template_vars = {\n 'repo': repo_name,\n 'commits': commits,\n 'commit_base_url': get_commit_url(repo_url),\n 'old_sha': commit_sha_old,\n 'new_sha': commit_sha\n }\n rst = render_template('offline-repo-changes.j2', template_vars)\n report += rst\n\n return report", "response": "Create a RST report from a list of projects and roles."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef normalize_yaml(yaml):\n if isinstance(yaml, list):\n # Normalize the roles YAML data\n normalized_yaml = [(x['name'], x['src'], x.get('version', 'HEAD'))\n for x in yaml]\n else:\n # Extract the project names from the roles YAML and create a list of\n # tuples.\n projects = [x[:-9] for x in yaml.keys() if x.endswith('git_repo')]\n normalized_yaml = []\n for project in projects:\n repo_url = yaml['{0}_git_repo'.format(project)]\n commit_sha = yaml['{0}_git_install_branch'.format(project)]\n normalized_yaml.append((project, repo_url, commit_sha))\n\n return normalized_yaml", "response": "Normalize the YAML from project and role lookups."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nposts the report to a GitHub Gist and return the URL of the gist.", "response": "def post_gist(report_data, old_sha, new_sha):\n \"\"\"Post the report to a GitHub Gist and return the URL of the gist.\"\"\"\n payload = {\n \"description\": (\"Changes in OpenStack-Ansible between \"\n \"{0} and {1}\".format(old_sha, new_sha)),\n \"public\": True,\n 
\"files\": {\n \"osa-diff-{0}-{1}.rst\".format(old_sha, new_sha): {\n \"content\": report_data\n }\n }\n }\n url = \"https://api.github.com/gists\"\n r = requests.post(url, data=json.dumps(payload))\n response = r.json()\n return response['html_url']"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef prepare_storage_dir(storage_directory):\n storage_directory = os.path.expanduser(storage_directory)\n if not os.path.exists(storage_directory):\n os.mkdir(storage_directory)\n\n return storage_directory", "response": "Prepare the storage directory."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nrenders a jinja template.", "response": "def render_template(template_file, template_vars):\n \"\"\"Render a jinja template.\"\"\"\n # Load our Jinja templates\n template_dir = \"{0}/templates\".format(\n os.path.dirname(os.path.abspath(__file__))\n )\n jinja_env = jinja2.Environment(\n loader=jinja2.FileSystemLoader(template_dir),\n trim_blocks=True\n )\n rendered = jinja_env.get_template(template_file).render(template_vars)\n\n return rendered"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nresetting repository and optionally update it.", "response": "def repo_pull(repo_dir, repo_url, fetch=False):\n \"\"\"Reset repository and optionally update it.\"\"\"\n # Make sure the repository is reset to the master branch.\n repo = Repo(repo_dir)\n repo.git.clean(\"-df\")\n repo.git.reset(\"--hard\")\n repo.git.checkout(\"master\")\n repo.head.reset(index=True, working_tree=True)\n\n # Compile the refspec appropriately to ensure\n # that if the repo is from github it includes\n # all the refs needed, including PR's.\n refspec_list = [\n \"+refs/heads/*:refs/remotes/origin/*\",\n \"+refs/heads/*:refs/heads/*\",\n \"+refs/tags/*:refs/tags/*\"\n ]\n if \"github.com\" in repo_url:\n refspec_list.extend([\n \"+refs/pull/*:refs/remotes/origin/pr/*\",\n 
\"+refs/heads/*:refs/remotes/origin/*\"])\n\n # Only get the latest updates if requested.\n if fetch:\n repo.git.fetch([\"-u\", \"-v\", \"-f\",\n repo_url,\n refspec_list])\n return repo"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ncloning the repo if it doesn t exist already otherwise update it.", "response": "def update_repo(repo_dir, repo_url, fetch=False):\n \"\"\"Clone the repo if it doesn't exist already, otherwise update it.\"\"\"\n repo_exists = os.path.exists(repo_dir)\n if not repo_exists:\n log.info(\"Cloning repo {}\".format(repo_url))\n repo = repo_clone(repo_dir, repo_url)\n\n # Make sure the repo is properly prepared\n # and has all the refs required\n log.info(\"Fetching repo {} (fetch: {})\".format(repo_url, fetch))\n repo = repo_pull(repo_dir, repo_url, fetch)\n\n return repo"} {"SOURCE": "codesearchnet", "instruction": "How would you code a function in Python 3 to\ntest if a commit exists in the repository.", "response": "def validate_commits(repo_dir, commits):\n \"\"\"Test if a commit is valid for the repository.\"\"\"\n log.debug(\"Validating {c} exist in {r}\".format(c=commits, r=repo_dir))\n repo = Repo(repo_dir)\n for commit in commits:\n try:\n commit = repo.commit(commit)\n except Exception:\n msg = (\"Commit {commit} could not be found in repo {repo}. \"\n \"You may need to pass --update to fetch the latest \"\n \"updates to the git repositories stored on \"\n \"your local computer.\".format(repo=repo_dir, commit=commit))\n raise exceptions.InvalidCommitException(msg)\n\n return True"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef validate_commit_range(repo_dir, old_commit, new_commit):\n # Are there any commits between the two commits that were provided?\n try:\n commits = get_commits(repo_dir, old_commit, new_commit)\n except Exception:\n commits = []\n if len(commits) == 0:\n # The user might have gotten their commits out of order. 
Let's flip\n # the order of the commits and try again.\n try:\n commits = get_commits(repo_dir, new_commit, old_commit)\n except Exception:\n commits = []\n if len(commits) == 0:\n # Okay, so there really are no commits between the two commits\n # provided by the user. :)\n msg = (\"The commit range {0}..{1} is invalid for {2}.\"\n \"You may need to use the --update option to fetch the \"\n \"latest updates to the git repositories stored on your \"\n \"local computer.\".format(old_commit, new_commit, repo_dir))\n raise exceptions.InvalidCommitRangeException(msg)\n else:\n return 'flip'\n\n return True", "response": "Check if commit range is valid. Flip it if needed."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef get_release_notes(osa_repo_dir, osa_old_commit, osa_new_commit):\n repo = Repo(osa_repo_dir)\n\n # Get a list of tags, sorted\n tags = repo.git.tag().split('\\n')\n tags = sorted(tags, key=LooseVersion)\n # Currently major tags are being printed after rc and\n # b tags. We need to fix the list so that major\n # tags are printed before rc and b releases\n tags = _fix_tags_list(tags)\n\n # Find the closest tag from a given SHA\n # The tag found here is the tag that was cut\n # either on or before the given SHA\n checkout(repo, osa_old_commit)\n old_tag = repo.git.describe()\n\n # If the SHA given is between two release tags, then\n # 'git describe' will return a tag in form of\n # --. For example:\n # 14.0.2-3-g6931e26\n # Since reno does not support this format, we need to\n # strip away the commit number and sha bits.\n if '-' in old_tag:\n old_tag = old_tag[0:old_tag.index('-')]\n\n # Get the nearest tag associated with the new commit\n checkout(repo, osa_new_commit)\n new_tag = repo.git.describe()\n if '-' in new_tag:\n nearest_new_tag = new_tag[0:new_tag.index('-')]\n else:\n nearest_new_tag = new_tag\n\n # Truncate the tags list to only include versions\n # between old_sha and new_sha. 
The latest release\n # is not included in this list. That version will be\n # printed separately in the following step.\n tags = tags[tags.index(old_tag):tags.index(nearest_new_tag)]\n\n release_notes = \"\"\n # Checkout the new commit, then run reno to get the latest\n # releasenotes that have been created or updated between\n # the latest release and this new commit.\n repo.git.checkout(osa_new_commit, '-f')\n reno_report_command = ['reno',\n 'report',\n '--earliest-version',\n nearest_new_tag]\n reno_report_p = subprocess.Popen(reno_report_command,\n cwd=osa_repo_dir,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n reno_output = reno_report_p.communicate()[0].decode('UTF-8')\n release_notes += reno_output\n\n # We want to start with the latest packaged release first, so\n # the tags list is reversed\n for version in reversed(tags):\n # If version is an rc or b tag, and it has a major\n # release tag, then skip it. There is no need to print\n # release notes for an rc or b release unless we are\n # comparing shas between two rc or b releases.\n repo.git.checkout(version, '-f')\n # We are outputing one version at a time here\n reno_report_command = ['reno',\n 'report',\n '--branch',\n version,\n '--earliest-version',\n version]\n reno_report_p = subprocess.Popen(reno_report_command,\n cwd=osa_repo_dir,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE)\n reno_output = reno_report_p.communicate()[0].decode('UTF-8')\n # We need to ensure the output includes the version we are concerned\n # about.\n # This is due to https://bugs.launchpad.net/reno/+bug/1670173\n if version in reno_output:\n release_notes += reno_output\n\n # Clean up \"Release Notes\" title. 
We don't need this title for\n # each tagged release.\n release_notes = release_notes.replace(\n \"=============\\nRelease Notes\\n=============\",\n \"\"\n )\n # Replace headers that contain '=' with '~' to comply with osa-differ's\n # formatting\n release_notes = re.sub('===+', _equal_to_tilde, release_notes)\n # Replace headers that contain '-' with '#' to comply with osa-differ's\n # formatting\n release_notes = re.sub('---+', _dash_to_num, release_notes)\n return release_notes", "response": "Get release notes between the two revisions."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncompare to xml.dom.minidom, lxml's treatment of text as .text and .tail attributes of elements is an oddity. It can even be a little frustrating when one is attempting to copy everything underneath some element to another element; one has to write in extra code to handle the text. This method provides the functionality of adding everything underneath the source element, in preserved order, to the destination element.", "response": "def append_all_below(destination, source, join_str=None):\n \"\"\"\n Compared to xml.dom.minidom, lxml's treatment of text as .text and .tail\n attributes of elements is an oddity. It can even be a little frustrating\n when one is attempting to copy everything underneath some element to\n another element; one has to write in extra code to handle the text. 
This\n method provides the functionality of adding everything underneath the\n source element, in preserved order, to the destination element.\n \"\"\"\n if join_str is None:\n join_str = ' '\n if source.text is not None: # If source has text\n if len(destination) == 0: # Destination has no children\n if destination.text is None: # Destination has no text\n destination.text = source.text\n else: # Destination has a text\n destination.text = join_str.join([destination.text, source.text])\n else: # Destination has children\n #Select last child\n last = destination[-1]\n if last.tail is None: # Last child has no tail\n last.tail = source.text\n else: # Last child has a tail\n last.tail = join_str.join([last.tail, source.text])\n for each_child in source:\n destination.append(deepcopy(each_child))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef all_text(element):\n if element.text is None:\n text = []\n else:\n text = [element.text]\n tails = [child.tail for child in element if child.tail is not None]\n tails = [tail.strip() for tail in tails if tail.strip()]\n return ' '.join(text + tails)", "response": "This method returns all text data underneath the given element."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef ns_format(element, namespaced_string):\n if ':' not in namespaced_string:\n print('This name contains no namespace, returning it unmodified: ' + namespaced_string)\n return namespaced_string\n namespace, name = namespaced_string.split(':')\n return '{' + element.nsmap[namespace] + '}' + name", "response": "This method returns a convenient method for adapting a tag or attribute name to a tag or attribute name."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef rename_attributes(element, attrs):\n for name in attrs.keys():\n if name not in element.attrib:\n continue\n 
else:\n element.attrib[attrs[name]] = element.attrib.pop(name)", "response": "Renames the attributes of the element."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef replace(old, new):\n parent = old.getparent()\n parent.replace(old, new)", "response": "A simple way to replace one element node with another."} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\nconverts the node received to a comment in place and returns the comment element.", "response": "def comment(node):\n \"\"\"\n Converts the node received to a comment, in place, and will also return the\n comment element.\n \"\"\"\n parent = node.parentNode\n comment = node.ownerDocument.createComment(node.toxml())\n parent.replaceChild(comment, node)\n return comment"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef uncomment(comment):\n parent = comment.parentNode\n h = html.parser.HTMLParser()\n data = h.unescape(comment.data)\n try:\n node = minidom.parseString(data).firstChild\n except xml.parsers.expat.ExpatError: # Could not parse!\n log.error('Could not uncomment node due to parsing error!')\n return None\n else:\n parent.replaceChild(node, comment)\n return node", "response": "Converts the comment node received to a non - commented element in place and returns the new node."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nserializing an element to text.", "response": "def serialize(element, strip=False):\n \"\"\"\n A handy way to serialize an element to text.\n \"\"\"\n text = etree.tostring(element, method='text', encoding='utf-8')\n if strip:\n text = text.strip()\n return str(text, encoding='utf-8')"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nparse the Command Line Arguments using argparse.", "response": "def get_arg_parse():\n \"\"\"Parses the Command Line Arguments using argparse.\"\"\"\n # 
Create parser object:\n objParser = argparse.ArgumentParser()\n\n # Add argument to namespace -config file path:\n objParser.add_argument('-config', required=True,\n metavar='/path/to/config.csv',\n help='Absolute file path of config file with \\\n parameters for pRF analysis. Ignored if in \\\n testing mode.'\n )\n\n # Add argument to namespace -prior results file path:\n objParser.add_argument('-strPthPrior', required=True,\n metavar='/path/to/my_prior_res',\n help='Absolute file path of prior pRF results. \\\n Ignored if in testing mode.'\n )\n\n # Add argument to namespace -varNumOpt1 flag:\n objParser.add_argument('-varNumOpt1', required=True, type=int,\n metavar='N1',\n help='Number of radial positions.'\n )\n\n # Add argument to namespace -varNumOpt2 flag:\n objParser.add_argument('-varNumOpt2', required=True, type=int,\n metavar='N2',\n help='Number of angular positions.'\n )\n\n # Add argument to namespace -varNumOpt3 flag:\n objParser.add_argument('-varNumOpt3', default=None, metavar='N3',\n help='Max displacement in radial direction.'\n )\n\n # Add argument to namespace -lgcRstrCentre flag:\n objParser.add_argument('-lgcRstrCentre', dest='lgcRstrCentre',\n action='store_true', default=False,\n help='Restrict fitted models to stimulated area.')\n\n objParser.add_argument('-strPathHrf', default=None, required=False,\n metavar='/path/to/custom_hrf_parameter.npy',\n help='Path to npy file with custom hrf parameters. 
\\\n Ignored if in testing mode.')\n\n objParser.add_argument('-supsur', nargs='+',\n help='List of floats that represent the ratio of \\\n size neg surround to size pos center.',\n type=float, default=None)\n\n # Namespace object containign arguments and values:\n objNspc = objParser.parse_args()\n\n return objNspc"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef _homogenize_data_filter(dfilter):\n if isinstance(dfilter, tuple) and (len(dfilter) == 1):\n dfilter = (dfilter[0], None)\n if (dfilter is None) or (dfilter == (None, None)) or (dfilter == (None,)):\n dfilter = (None, None)\n elif isinstance(dfilter, dict):\n dfilter = (dfilter, None)\n elif isinstance(dfilter, (list, str)) or (\n isinstance(dfilter, int) and (not isinstance(dfilter, bool))\n ):\n dfilter = (None, dfilter if isinstance(dfilter, list) else [dfilter])\n elif isinstance(dfilter[0], dict) or (\n (dfilter[0] is None) and (not isinstance(dfilter[1], dict))\n ):\n pass\n else:\n dfilter = (dfilter[1], dfilter[0])\n return dfilter", "response": "Make data filter definition consistent."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef _tofloat(obj):\n if \"inf\" in obj.lower().strip():\n return obj\n try:\n return int(obj)\n except ValueError:\n try:\n return float(obj)\n except ValueError:\n return obj", "response": "Convert to float string."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nvalidating column identifier ( s ).", "response": "def _in_header(self, col):\n \"\"\"Validate column identifier(s).\"\"\"\n # pylint: disable=R1704\n if not self._has_header:\n # Conditionally register exceptions so that they do not appear\n # in situations where has_header is always True. 
In this way\n # they are not auto-documented by default\n icol_ex = pexdoc.exh.addex(RuntimeError, \"Invalid column specification\")\n hnf_ex = pexdoc.exh.addex(ValueError, \"Column *[column_identifier]* not found\")\n col_list = [col] if isinstance(col, (str, int)) else col\n for col in col_list:\n edata = {\"field\": \"column_identifier\", \"value\": col}\n if not self._has_header:\n # Condition not subsumed in raise_exception_if\n # so that calls that always have has_header=True\n # do not pick up this exception\n icol_ex(not isinstance(col, int))\n hnf_ex((col < 0) or (col > len(self._header) - 1), edata)\n else:\n hnf_ex(\n (isinstance(col, int) and ((col < 0) or (col > self._data_cols)))\n or (\n isinstance(col, str) and (col.upper() not in self._header_upper)\n ),\n edata,\n )\n return col_list"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nvalidate that all columns in filter are in header.", "response": "def _validate_rfilter(self, rfilter, letter=\"d\"):\n \"\"\"Validate that all columns in filter are in header.\"\"\"\n if letter == \"d\":\n pexdoc.exh.addai(\n \"dfilter\",\n (\n (not self._has_header)\n and any([not isinstance(item, int) for item in rfilter.keys()])\n ),\n )\n else:\n pexdoc.exh.addai(\n \"rfilter\",\n (\n (not self._has_header)\n and any([not isinstance(item, int) for item in rfilter.keys()])\n ),\n )\n for key in rfilter:\n self._in_header(key)\n rfilter[key] = (\n [rfilter[key]] if isinstance(rfilter[key], str) else rfilter[key]\n )"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef data(self, filtered=False, no_empty=False):\n return self._apply_filter(filtered, no_empty)", "response": "r Returns the file data."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef header(self, filtered=False):\n return (\n self._header\n if (not filtered) or (filtered and self._cfilter is None)\n 
else self._cfilter\n )", "response": "r Returns the data header."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef write(self, fname=None, filtered=False, header=True, append=False):\n # pylint: disable=R0913\n write_ex = pexdoc.exh.addex(ValueError, \"There is no data to save to file\")\n fname = self._fname if fname is None else fname\n data = self.data(filtered=filtered)\n write_ex((len(data) == 0) or ((len(data) == 1) and (len(data[0]) == 0)))\n if header:\n header = [header] if isinstance(header, str) else header\n cfilter = self._gen_col_index(filtered=filtered)\n filtered_header = (\n [self._header[item] for item in cfilter]\n if self._has_header\n else cfilter\n )\n file_header = filtered_header if isinstance(header, bool) else header\n # Convert None's to ''\n data = [[\"''\" if item is None else item for item in row] for row in data]\n _write_int(fname, [file_header] + data if header else data, append=append)", "response": "r Write the data to a specified comma - separated values file."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef model_creation(dicCnfg, varRat=None, strPathHrf=None):\n # *************************************************************************\n # *** Load parameters from config file\n\n # Load config parameters from dictionary into namespace:\n cfg = cls_set_config(dicCnfg)\n # *************************************************************************\n\n if cfg.lgcCrteMdl:\n\n # *********************************************************************\n # *** Load spatial condition information\n\n print('------Load spatial condition information')\n\n arySptExpInf = np.load(cfg.strSptExpInf)\n\n # Here we assume scientific convention and orientation of images where\n # the origin should fall in the lower left corner, the x-axis occupies\n # the width and the y-axis occupies the height dimension of the 
screen.\n # We also assume that the first dimension that the user provides\n # indexes x and the second indexes the y-axis. Since python is column\n # major (i.e. first indexes columns, only then rows), we need to rotate\n # arySptExpInf by 90 degrees rightward. This will insure that with the\n # 0th axis we index the scientific x-axis and higher values move us to\n # the right on that x-axis. It will also ensure that the 1st\n # python axis indexes the scientific y-axis and higher values will\n # move us up.\n arySptExpInf = np.rot90(arySptExpInf, k=3)\n\n # Calculate the areas that were stimulated during the experiment\n aryStimArea = np.sum(arySptExpInf, axis=-1).astype(np.bool)\n\n # *********************************************************************\n\n # *********************************************************************\n # *** Load temporal condition information\n\n print('------Load temporal condition information')\n\n aryTmpExpInf = np.load(cfg.strTmpExpInf)\n # add fourth column to make it appropriate for pyprf_feature\n if aryTmpExpInf.shape[-1] == 3:\n print('---------Added fourth column')\n vecNewCol = np.greater(aryTmpExpInf[:, 0], 0).astype(np.float16)\n aryTmpExpInf = np.concatenate(\n (aryTmpExpInf, np.expand_dims(vecNewCol, axis=1)), axis=1)\n\n # *********************************************************************\n\n # *********************************************************************\n # *** Create model parameter combination, for now in pixel.\n aryMdlParams = crt_mdl_prms((int(cfg.varVslSpcSzeX),\n int(cfg.varVslSpcSzeY)), cfg.varNum1,\n cfg.varExtXmin, cfg.varExtXmax,\n cfg.varNum2, cfg.varExtYmin,\n cfg.varExtYmax, cfg.varNumPrfSizes,\n cfg.varPrfStdMin, cfg.varPrfStdMax,\n kwUnt='pix', kwCrd=cfg.strKwCrd)\n\n # If desired by user, also create model parameters for supp surround\n if varRat is not None:\n aryMdlParamsSur = np.copy(aryMdlParams)\n aryMdlParamsSur[:, 2] = aryMdlParamsSur[:, 2] * varRat\n\n # Exclude model parameters 
whose prf center would lie outside the\n # stimulated area\n print('------Exclude model params with prf center outside stim area')\n varNumMdlBfr = aryMdlParams.shape[0]\n # Get logical for model inclusion\n lgcMdlInc = aryStimArea[aryMdlParams[:, 0].astype(np.int32),\n aryMdlParams[:, 1].astype(np.int32)]\n # Exclude models with prf center outside stimulated area\n aryMdlParams = aryMdlParams[lgcMdlInc, :]\n # Also apply the logical to the surround parameters, if they exist\n if varRat is not None:\n aryMdlParamsSur = aryMdlParamsSur[lgcMdlInc, :]\n\n print('---------Number of models excluded: ' +\n str(varNumMdlBfr-aryMdlParams.shape[0]))\n\n # *********************************************************************\n\n # *********************************************************************\n # *** Create 2D Gauss model responses to spatial conditions.\n\n print('------Create 2D Gauss model responses to spatial conditions')\n\n aryMdlRsp = crt_mdl_rsp(arySptExpInf, (int(cfg.varVslSpcSzeX),\n int(cfg.varVslSpcSzeY)),\n aryMdlParams, cfg.varPar)\n\n # If desired by user, also create model responses for supp surround\n if varRat is not None:\n aryMdlRspSur = crt_mdl_rsp(arySptExpInf, (int(cfg.varVslSpcSzeX),\n int(cfg.varVslSpcSzeY)),\n aryMdlParamsSur, cfg.varPar)\n\n # Delete array to save memory\n del(arySptExpInf)\n\n # *********************************************************************\n\n # *********************************************************************\n # *** Create prf time course models\n\n print('------Create prf time course models')\n\n # Check whether path to npy file with hrf parameters was provided\n if strPathHrf is not None:\n print('---------Load custom hrf parameters')\n aryCstPrm = np.load(strPathHrf)\n dctPrm = {}\n dctPrm['peak_delay'] = aryCstPrm[0]\n dctPrm['under_delay'] = aryCstPrm[1]\n dctPrm['peak_disp'] = aryCstPrm[2]\n dctPrm['under_disp'] = aryCstPrm[3]\n dctPrm['p_u_ratio'] = aryCstPrm[4]\n # If not, set dctPrm to None, which 
will result in default hrf params\n else:\n print('---------Use default hrf parameters')\n dctPrm = None\n\n aryPrfTc = crt_prf_ftr_tc(aryMdlRsp, aryTmpExpInf, cfg.varNumVol,\n cfg.varTr, cfg.varTmpOvsmpl,\n cfg.switchHrfSet, (int(cfg.varVslSpcSzeX),\n int(cfg.varVslSpcSzeY)),\n cfg.varPar, dctPrm=dctPrm)\n\n # If desired by user, create prf time course models for supp surround\n if varRat is not None:\n print('---------Add suppressive surround')\n aryPrfTcSur = crt_prf_ftr_tc(aryMdlRspSur, aryTmpExpInf,\n cfg.varNumVol, cfg.varTr,\n cfg.varTmpOvsmpl, cfg.switchHrfSet,\n (int(cfg.varVslSpcSzeX),\n int(cfg.varVslSpcSzeY)),\n cfg.varPar, dctPrm=dctPrm)\n # Concatenate aryPrfTc and aryPrfTcSur\n aryPrfTc = np.concatenate((aryPrfTc, aryPrfTcSur), axis=1)\n\n # *********************************************************************\n\n # *********************************************************************\n # *** Save pRF time course models, corresponding params and responses\n\n print('------Save pRF time course models to disk')\n\n # Prepare file name extensions\n strNmeExtMdl = ''\n strNmeExtPrm = '_params'\n strNmeExtRsp = '_mdlRsp'\n strNmeExtMdlInc = '_lgcMdlInc'\n\n # Check whether extensions need to be modified with ratio name\n if varRat is not None:\n strNmeExtMdl = strNmeExtMdl + '_' + str(varRat)\n strNmeExtPrm = strNmeExtPrm + '_' + str(varRat)\n strNmeExtRsp = strNmeExtRsp + '_' + str(varRat)\n # Also include model the parameters and responses of the surround\n # For the pRF time course models, the surround is included above\n aryMdlParams = np.stack((aryMdlParams, aryMdlParamsSur),\n axis=1)\n aryMdlRsp = np.stack((aryMdlRsp, aryMdlRspSur),\n axis=1)\n # Append the npy file name for model exlusion in unstimulated area\n # with general _supsur suffic since it does not depend on specific\n # surround\n strNmeExtMdlInc = '_supsur' + strNmeExtMdlInc\n\n # Save pRF time course models\n np.save(cfg.strPathMdl + strNmeExtMdl, aryPrfTc)\n # Save the corresponding 
model parameters\n np.save(cfg.strPathMdl + strNmeExtPrm, aryMdlParams)\n # Save the corresponding model responses\n np.save(cfg.strPathMdl + strNmeExtRsp, aryMdlRsp)\n\n # Save logical for parameter exclusion in unstimulated area\n np.save(cfg.strPathMdl + strNmeExtMdlInc, lgcMdlInc)\n\n del(aryMdlParams)\n del(aryMdlRsp)\n\n # *********************************************************************\n\n else:\n\n # *********************************************************************\n # %% Load existing pRF time course models\n\n print('------Load pRF time course models from disk')\n\n # Load the file:\n aryPrfTc = np.load((cfg.strPathMdl + '.npy'))\n\n # Check whether pRF time course model matrix has the expected\n # dimensions:\n vecPrfTcShp = aryPrfTc.shape\n\n # Logical test for correct dimensions:\n strErrMsg = ('---Error: Dimensions of specified pRF time course ' +\n 'models do not agree with specified model parameters')\n assert vecPrfTcShp[0] == cfg.varNum1 * \\\n cfg.varNum2 * cfg.varNumPrfSizes, strErrMsg\n assert vecPrfTcShp[-1] == cfg.varNumVol, strErrMsg\n # Check number of feature. 
If fitting is performed with sup surround,\n number of features will be twice as many as simple fitting\n if varRat is None:\n assert vecPrfTcShp[1] == cfg.switchHrfSet, strErrMsg\n else:\n assert vecPrfTcShp[1] == cfg.switchHrfSet*2, strErrMsg\n\n # Load logical for parameter exclusion in unstimulated area\n lgcMdlInc = np.load(cfg.strPathMdl + '_lgcMdlInc.npy')\n\n # *************************************************************************\n\n return aryPrfTc, lgcMdlInc", "response": "Create or load pRF model time courses."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register(self, CorpNum, cashbill, UserID=None):\r\n if cashbill == None:\r\n raise PopbillException(-99999999, \"\ud604\uae08\uc601\uc218\uc99d \uc815\ubcf4\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n postData = self._stringtify(cashbill)\r\n\r\n return self._httppost('/Cashbill', postData, CorpNum, UserID)", "response": "Registers a cashbill with the Popbill service; raises a PopbillException if no cashbill information is given."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef update(self, CorpNum, MgtKey, cashbill, UserID=None):\r\n if MgtKey == None or MgtKey == \"\":\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if cashbill == None:\r\n raise PopbillException(-99999999, \"\ud604\uae08\uc601\uc218\uc99d \uc815\ubcf4\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n postData = self._stringtify(cashbill)\r\n\r\n return self._httppost('/Cashbill/' + MgtKey, postData, CorpNum, UserID, \"PATCH\")", "response": "A method to update a single entry in a particular resource."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that 
can\nissue a new entry in the specified resource.", "response": "def issue(self, CorpNum, MgtKey, Memo=None, UserID=None):\r\n \"\"\" \ubc1c\ud589\r\n args\r\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\r\n MgtKey : \uc6d0\ubcf8 \ud604\uae08\uc601\uc218\uc99d \ubb38\uc11c\uad00\ub9ac\ubc88\ud638\r\n Memo : \ubc1c\ud589 \uba54\ubaa8\r\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\r\n return\r\n \ucc98\ub9ac\uacb0\uacfc. consist of code and message\r\n raise\r\n PopbillException\r\n \"\"\"\r\n\r\n if MgtKey == None or MgtKey == \"\":\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n postData = \"\"\r\n req = {}\r\n\r\n if Memo != None or Memo != '':\r\n req[\"memo\"] = Memo\r\n\r\n postData = self._stringtify(req)\r\n\r\n return self._httppost('/Cashbill/' + MgtKey, postData, CorpNum, UserID, \"ISSUE\")"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef delete(self, CorpNum, MgtKey, UserID=None):\r\n\r\n if MgtKey == None or MgtKey == \"\":\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n return self._httppost('/Cashbill/' + MgtKey, '', CorpNum, UserID, \"DELETE\")", "response": "Deletes the cashbill document identified by the given management key."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getInfo(self, CorpNum, MgtKey):\r\n\r\n if MgtKey == None or MgtKey == \"\":\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n return self._httpget('/Cashbill/' + MgtKey, CorpNum)", "response": "Get information about a particular item."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 
3 function to\nget the details of a particular item.", "response": "def getDetailInfo(self, CorpNum, MgtKey):\r\n \"\"\" \uc0c1\uc138\uc815\ubcf4 \uc870\ud68c\r\n args\r\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\r\n MgtKey : \ubb38\uc11c\uad00\ub9ac\ubc88\ud638\r\n return\r\n \ubb38\uc11c \uc0c1\uc138\uc815\ubcf4 object\r\n raise\r\n PopbillException\r\n \"\"\"\r\n\r\n if MgtKey == None or MgtKey == \"\":\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n return self._httpget('/Cashbill/' + MgtKey + '?Detail', CorpNum)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getLogs(self, CorpNum, MgtKey):\r\n if MgtKey == None or MgtKey == \"\":\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n return self._httpget('/Cashbill/' + MgtKey + '/Logs', CorpNum)", "response": "\ubb38\uc11c\uc774\ub825 \uc870\ud68c\r\n args\r\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\r\n MgtKey : \ubb38\uc11c\uad00\ub9ac\ubc88\ud638\r\n return\r\n \ubb38\uc11c \uc774\ub825 \ubaa9\ub85d as List\r\n raise\r\n PopbillException"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef updateEmailConfig(self, Corpnum, EmailType, SendYN, UserID=None):\r\n if EmailType == None or EmailType == '':\r\n raise PopbillException(-99999999, \"\uba54\uc77c\uc804\uc1a1 \ud0c0\uc785\uc774 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n if SendYN == None or SendYN == '':\r\n raise PopbillException(-99999999, \"\uba54\uc77c\uc804\uc1a1 \uc5ec\ubd80 \ud56d\ubaa9\uc774 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n uri = \"/Cashbill/EmailSendConfig?EmailType=\" + EmailType + \"&SendYN=\" + str(SendYN)\r\n return self._httppost(uri, \"\", 
Corpnum, UserID)", "response": "Updates whether notification e-mails of the given type are sent for cashbills."} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following 
Python 3 code\ndef list(self, request):\n pipeline = [{'$match': request.args.pop('match', {})}]\n\n sort = request.args.pop('sort', {})\n if sort:\n pipeline.append({'$sort': sort})\n\n project = request.args.pop('project', {})\n if project:\n pipeline.append({'$project': project})\n\n return Response(serialize(self.collection.aggregate(pipeline)))", "response": "Returns the list of documents found on the collection"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncreate a new document based on the given data.", "response": "def create(self, request):\n \"\"\"\n Creates a new document based on the given data\n \"\"\"\n document = self.collection(request.json)\n document.created_at = datetime.utcnow()\n document.updated_at = document.created_at\n\n created = document.insert()\n return Response(\n response=serialize(created),\n status=(\n 201 if not all(\n key in created for key in [\n 'error_code', 'error_type', 'error_message'\n ]\n ) else 400\n )\n )"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nretrieving the document with the given _id or 404", "response": "def retrieve(self, request, _id):\n \"\"\"\n Returns the document containing the given _id or 404\n \"\"\"\n _id = deserialize(_id)\n\n retrieved = self.collection.find_one({'_id': _id})\n if retrieved:\n return Response(serialize(retrieved))\n else:\n return Response(\n response=serialize(\n DocumentNotFoundError(self.collection.__name__, _id)\n ),\n status=400\n )"} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nupdates the document with the given _id using the given data", "response": "def update(self, request, _id):\n \"\"\"\n Updates the document with the given _id using the given data\n \"\"\"\n _id = deserialize(_id)\n\n to_update = self.collection.find_one({'_id': _id})\n if to_update:\n document = self.collection(dict(to_update, **request.json))\n document.updated_at = datetime.utcnow()\n\n 
updated = document.update()\n return Response(\n response=serialize(updated),\n status=(\n 200 if not all(\n key in updated for key in [\n 'error_code', 'error_type', 'error_message'\n ]\n ) else 400\n )\n )\n else:\n return Response(\n response=serialize(\n DocumentNotFoundError(self.collection.__name__, _id)\n ),\n status=400\n )"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ndeleting the document with the given _id if it exists.", "response": "def delete(self, request, _id):\n \"\"\"\n Deletes the document with the given _id if it exists\n \"\"\"\n _id = deserialize(_id)\n\n to_delete = self.collection.get({'_id': _id})\n if to_delete:\n deleted = to_delete.delete()\n\n return Response(\n response=serialize(deleted),\n status=(\n 200 if not all(\n key in deleted for key in [\n 'error_code', 'error_type', 'error_message'\n ]\n ) else 400\n )\n )\n else:\n return Response(\n response=serialize(\n DocumentNotFoundError(self.collection.__name__, _id)\n ),\n status=404\n )"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef with_reactor(*dec_args, **dec_kwargs):\n \n # This method takes care of the decorator protocol, it\n # distinguishes between using the decorator with brackets\n # and without brackets. 
It then calls `_twisted_test_sync()`.\n\n if len(dec_args) == 1 and callable(dec_args[0]) and not dec_kwargs:\n # decorator used without brackets:\n # @twisted_test\n # def test_xxx():\n # ....\n callee = dec_args[0]\n dec_args = ()\n dec_kwargs = {}\n \n @functools.wraps(callee)\n def wrapper(*call_args, **call_kwargs):\n return _twisted_test_sync(callee, call_args, call_kwargs)\n return wrapper\n\n else:\n # decorator used with brackets:\n # @twisted_test(*dec_args, **dec_args)\n # def test_xxx():\n # ....\n def decorator(callee):\n @functools.wraps(callee)\n def wrapper(*call_args, **call_kwargs):\n return _twisted_test_sync(callee, call_args, call_kwargs, *dec_args, **dec_kwargs)\n return wrapper\n return decorator", "response": "Decorator for test functions that require a running reactor."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nstart the twisted reactor if it is not running already.", "response": "def _ensure_reactor_running():\n \"\"\"\n Starts the twisted reactor if it is not running already.\n \n The reactor is started in a new daemon-thread.\n \n Has to perform dirty hacks so that twisted can register\n signals even if it is not running in the main-thread.\n \"\"\"\n if not reactor.running:\n \n # Some of the `signal` API can only be called\n # from the main-thread. So we do a dirty workaround.\n #\n # `signal.signal()` and `signal.wakeup_fd_capture()`\n # are temporarily monkey-patched while the reactor is\n # starting.\n #\n # The patched functions record the invocations in\n # `signal_registrations`. 
\n #\n # Once the reactor is started, the main-thread\n # is used to playback the recorded invocations.\n \n signal_registrations = []\n\n # do the monkey patching\n def signal_capture(*args, **kwargs):\n signal_registrations.append((orig_signal, args, kwargs))\n def set_wakeup_fd_capture(*args, **kwargs):\n signal_registrations.append((orig_set_wakeup_fd, args, kwargs))\n orig_signal = signal.signal\n signal.signal = signal_capture\n orig_set_wakeup_fd = signal.set_wakeup_fd\n signal.set_wakeup_fd = set_wakeup_fd_capture\n \n \n # start the reactor in a daemon-thread\n reactor_thread = threading.Thread(target=reactor.run, name=\"reactor\")\n reactor_thread.daemon = True\n reactor_thread.start()\n while not reactor.running:\n time.sleep(0.01)\n \n # Give the reactor a moment to register the signals. \n # Apparently the 'running' flag is set before that.\n time.sleep(0.01)\n \n # Undo the monkey-paching\n signal.signal = orig_signal\n signal.set_wakeup_fd = orig_set_wakeup_fd\n \n # Playback the recorded calls\n for func, args, kwargs in signal_registrations:\n func(*args, **kwargs)"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef _timeoutDeferred(deferred, timeout):\n delayedCall = reactor.callLater(timeout, deferred.cancel)\n def gotResult(result):\n if delayedCall.active():\n delayedCall.cancel()\n return result\n deferred.addBoth(gotResult)", "response": "Cancels the given deferred after the given time."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef find_prf_cpu(idxPrc, aryFuncChnk, aryPrfTc, aryMdlParams, strVersion,\n lgcXval, varNumXval, queOut, lgcRstr=None, lgcPrint=True):\n \"\"\"\n Find best fitting pRF model for voxel time course, using the CPU.\n\n Parameters\n ----------\n idxPrc : int\n Process ID of the process calling this function (for CPU\n multi-threading). 
In GPU version, this parameter is 0 (just one thread\n on CPU).\n aryFunc : np.array\n 2D array with functional MRI data, with shape aryFunc[voxel, time].\n aryPrfTc : np.array\n Array with pRF model time courses, with shape\n aryPrfTc[x-pos*y-pos*SD, number of feautures, number of volumes]\n aryMdlParams : np.array\n 2D array with all pRF model parameter combinations.\n strVersion : str\n Which version to use for pRF finding; 'numpy' or 'cython'.\n lgcXval: boolean\n Logical to determine whether we cross-validate.\n varNumXval: int\n Number of folds for k-fold cross-validation.\n queOut : multiprocessing.queues.Queue\n Queue to put the results on.\n lgcRstr : boolean numpy array or None, default None\n Logical to restrict certain models to particular voxels.\n lgcPrint : boolean\n Whether print statements should be executed.\n\n Returns\n -------\n lstOut : list\n List containing the following objects:\n idxPrc : int\n Process ID of the process calling this function (for CPU\n multi-threading). In GPU version, this parameter is 0.\n vecBstXpos : np.array\n 1D array with best fitting x-position for each voxel, with shape\n vecBstXpos[voxel].\n vecBstYpos : np.array\n 1D array with best fitting y-position for each voxel, with shape\n vecBstYpos[voxel].\n vecBstSd : np.array\n 1D array with best fitting pRF size for each voxel, with shape\n vecBstSd[voxel].\n vecBstR2 : np.array\n 1D array with R2 value of 'winning' pRF model for each voxel, with\n shape vecBstR2[voxel].\n aryBstBts : np.array\n 2D array with beta parameter estimates of 'winning' pRF model for\n each voxel, with shape aryBstBts[voxel, feautures].\n\n Notes\n -----\n The list with results is not returned directly, but placed on a\n multiprocessing queue. 
This version performs the model finding on the CPU,\n using numpy or cython (depending on the value of `strVersion`).\n\n \"\"\"\n\n # Number of models in the visual space:\n varNumMdls = aryPrfTc.shape[0]\n\n # Number of features\n varNumFtr = aryPrfTc.shape[1]\n\n # Number of voxels to be fitted in this chunk:\n varNumVoxChnk = aryFuncChnk.shape[0]\n\n # Vectors for pRF finding results [number-of-voxels times one]:\n # make sure they have the same precision as aryMdlParams, since this\n # is important for later comparison\n vecBstXpos = np.zeros(varNumVoxChnk, dtype=aryMdlParams.dtype)\n vecBstYpos = np.zeros(varNumVoxChnk, dtype=aryMdlParams.dtype)\n vecBstSd = np.zeros(varNumVoxChnk, dtype=aryMdlParams.dtype)\n\n # Vector for best (i.e. lowest) residual value. For each model fit, the\n # residual value is compared to this, and updated if it is lower than the\n # best-fitting solution so far. We initialise with an arbitrarily high value\n vecBstRes = np.add(np.zeros(varNumVoxChnk), np.inf).astype(np.float32)\n\n # array for best beta values. If we update the residual value above because\n # it is lower, we also update the beta values of these voxels\n aryBstBts = np.zeros((varNumVoxChnk, varNumFtr)).astype(np.float32)\n\n # In case we cross-validate we also save and replace the best\n # residual values for every fold (not only mean across folds):\n if lgcXval:\n aryBstResFlds = np.zeros((varNumVoxChnk, varNumXval), dtype=np.float32)\n\n # We reshape the voxel time courses, so that time goes down the column,\n # i.e.
from top to bottom.\n aryFuncChnk = aryFuncChnk.T\n\n # Change type to float 32:\n aryFuncChnk = aryFuncChnk.astype(np.float32)\n aryPrfTc = aryPrfTc.astype(np.float32)\n\n # if lgc for Xval is true we already prepare indices for xvalidation\n if lgcXval:\n # obtain iterator for cross-validation\n itXval = KFold(n_splits=varNumXval)\n vecSplts = np.arange(aryPrfTc.shape[-1], dtype=np.int32)\n\n # prepare lists that will hold indices for xvalidation\n lstIdxTrn = []\n lstIdxtst = []\n # Loop over the cross-validations to put indices in array\n for idxTrn, idxTst in itXval.split(vecSplts):\n lstIdxTrn.append(idxTrn)\n lstIdxtst.append(idxTst)\n # turn lists into arrays\n aryIdxTrn = np.stack(lstIdxTrn, axis=-1).astype(np.int32)\n aryIdxTst = np.stack(lstIdxtst, axis=-1).astype(np.int32)\n\n # Prepare status indicator if this is the first of the parallel processes:\n if idxPrc == 0:\n\n # We create a status indicator for the time consuming pRF model finding\n # algorithm. Number of steps of the status indicator:\n varStsStpSze = 20\n\n # Vector with pRF values at which to give status feedback:\n vecStatPrf = np.linspace(0,\n varNumMdls,\n num=(varStsStpSze+1),\n endpoint=True)\n vecStatPrf = np.ceil(vecStatPrf)\n vecStatPrf = vecStatPrf.astype(int)\n\n # Vector with corresponding percentage values at which to give status\n # feedback:\n vecStatPrc = np.linspace(0,\n 100,\n num=(varStsStpSze+1),\n endpoint=True)\n vecStatPrc = np.ceil(vecStatPrc)\n vecStatPrc = vecStatPrc.astype(int)\n\n # Counter for status indicator:\n varCntSts01 = 0\n varCntSts02 = 0\n\n # If user does not restrict model space for particular voxels, select\n # all voxels\n if lgcRstr is None:\n lgcVxl = np.arange(varNumVoxChnk, dtype=np.int32)\n\n # There can be pRF model time courses with a variance of zero (i.e. pRF\n # models that are not actually responsive to the stimuli).
For time\n # efficiency, and in order to avoid division by zero, we ignore these\n # model time courses.\n aryPrfTcVar = np.var(aryPrfTc, axis=-1)\n\n # Zero with float32 precision for comparison:\n varZero32 = np.array(([0.0])).astype(np.float32)[0]\n\n # Loop through pRF models:\n for idxMdl in range(0, varNumMdls):\n\n # If desired by user, restrict the model fitting such that certain\n # models are restricted to particular voxels\n if lgcRstr is not None:\n # Apply flatnonzero, so we can use cascaded integer indexing later\n lgcVxl = np.flatnonzero(lgcRstr[:, idxMdl])\n\n # Status indicator (only used in the first of the parallel\n # processes):\n if idxPrc == 0:\n\n # Status indicator:\n if varCntSts02 == vecStatPrf[varCntSts01]:\n\n # Prepare status message:\n strStsMsg = ('------------Progress: ' +\n str(vecStatPrc[varCntSts01]) +\n ' % --- ' +\n str(vecStatPrf[varCntSts01]) +\n ' pRF models out of ' +\n str(varNumMdls))\n if lgcPrint:\n print(strStsMsg)\n\n # Only increment counter if the last value has not been\n # reached yet:\n if varCntSts01 < varStsStpSze:\n varCntSts01 = varCntSts01 + int(1)\n\n # Only fit pRF model if:\n # 1) all feature predictors have a variance greater than zero AND\n # 2) at least one voxel is being tested\n if np.logical_and(np.all(np.greater(aryPrfTcVar[idxMdl], varZero32),\n axis=0),\n np.greater(lgcVxl.size, 0)):\n\n # Get predictor time courses for this specific model\n vecMdl = aryPrfTc[idxMdl, :, :].T\n\n # Check whether we need to crossvalidate\n if lgcXval:\n # We do crossvalidate. 
In this case, we loop through\n # the different folds of the crossvalidation and\n # calculate the cross-validation error for the current\n # model for all voxel time courses.\n\n # Cython version:\n if strVersion == 'cython':\n\n # A cython function is used to calculate the residuals and\n # beta parameter estimates of the current model:\n if varNumFtr == 1:\n # For time course with one predictor\n aryResXval = cy_lst_sq_xval_one(np.squeeze(vecMdl),\n aryFuncChnk[:, lgcVxl],\n aryIdxTrn,\n aryIdxTst)\n\n elif varNumFtr == 2:\n # For time course with two predictors\n aryResXval = cy_lst_sq_xval_two(vecMdl,\n aryFuncChnk[:, lgcVxl],\n aryIdxTrn,\n aryIdxTst)\n\n else:\n if lgcPrint:\n print('Cython currently not implemented for ' +\n 'more than two predictors.')\n\n # Numpy version:\n elif strVersion == 'numpy':\n\n aryResXval = np_lst_sq_xval(vecMdl, aryFuncChnk[:, lgcVxl],\n aryIdxTrn, aryIdxTst)\n\n # calculate the average cross validation error across\n # all folds\n vecTmpRes = np.mean(aryResXval, axis=1)\n\n else:\n # We do not crossvalidate.
In this case, we calculate\n # the ratio of the explained variance (R squared)\n # for the current model for all voxel time courses.\n\n # Cython version:\n if strVersion == 'cython':\n\n # A cython function is used to calculate the residuals and\n # beta parameter estimates of the current model:\n if varNumFtr == 1:\n # For time course with one predictor\n aryTmpBts, vecTmpRes = cy_lst_sq_one(\n np.squeeze(vecMdl), aryFuncChnk[:, lgcVxl])\n\n elif varNumFtr == 2:\n # For time course with two predictors\n aryTmpBts, vecTmpRes = \\\n cy_lst_sq_two(vecMdl, aryFuncChnk[:, lgcVxl])\n else:\n if lgcPrint:\n print('Cython currently not implemented for ' +\n 'more than two predictors.')\n\n # Numpy version:\n elif strVersion == 'numpy':\n\n # Numpy linalg.lstsq is used to calculate the\n # beta values and residuals of the current model:\n aryTmpBts, vecTmpRes = np_lst_sq(vecMdl,\n aryFuncChnk[:, lgcVxl])\n\n # Check whether current crossvalidation error (xval=True)\n # or residuals (xval=False) are lower than previously\n # calculated ones:\n vecLgcTmpRes = np.less(vecTmpRes, vecBstRes[lgcVxl])\n # Apply np.flatnonzero for cascaded integer-indexing\n vecLgcTmpRes = np.flatnonzero(vecLgcTmpRes)\n\n # Replace best x and y position values, and SD values:\n vecBstXpos[lgcVxl[vecLgcTmpRes]] = aryMdlParams[idxMdl, 0]\n vecBstYpos[lgcVxl[vecLgcTmpRes]] = aryMdlParams[idxMdl, 1]\n vecBstSd[lgcVxl[vecLgcTmpRes]] = aryMdlParams[idxMdl, 2]\n\n # Replace best mean residual values:\n vecBstRes[lgcVxl[vecLgcTmpRes]] = vecTmpRes[vecLgcTmpRes]\n\n if not lgcXval:\n # Replace best beta values:\n aryBstBts[lgcVxl[vecLgcTmpRes], :] = \\\n aryTmpBts[:, vecLgcTmpRes].T\n\n # In case we cross-validate we also save and replace the best\n # residual values for every fold (not only mean across folds):\n if lgcXval:\n aryBstResFlds[lgcVxl[vecLgcTmpRes], :] = \\\n aryResXval[vecLgcTmpRes, :]\n\n # Status indicator (only used in the first of the parallel\n # processes):\n if idxPrc == 0:\n\n #
Increment status indicator counter:\n varCntSts02 = varCntSts02 + 1\n\n # After finding the best fitting model for each voxel, we still have to\n # calculate the average correlation coefficient between predicted and\n # measured time course (xval=True) or the coefficient of determination\n # (xval=False) for each voxel.\n\n if lgcXval:\n\n # create vector that allows to check whether every voxel is visited\n # exactly once\n vecVxlTst = np.zeros(aryFuncChnk.shape[1])\n\n # Since we did not do this during finding the best model, we still need\n # to calculate deviation from a mean model for every voxel and fold\n # arySsTotXval as well as calculate the best betas for the full model\n\n # concatenate vectors with best x, y, sigma params\n aryBstPrm = np.stack((vecBstXpos, vecBstYpos, vecBstSd), axis=1)\n\n # Find unique rows\n aryUnqRows = fnd_unq_rws(aryBstPrm, return_index=False,\n return_inverse=False)\n\n # Calculate deviation from a mean model for every voxel and fold\n arySsTotXval = np.zeros((aryBstResFlds.shape),\n dtype=aryBstResFlds.dtype)\n\n # Loop over all best-fitting model parameter combinations found\n for vecPrm in aryUnqRows:\n # Get logical for voxels for which this prm combi was the best\n lgcPrm = np.isclose(aryBstPrm, vecPrm, atol=1e-04).all(axis=1)\n # Get logical index for the model number\n # This can only be 1 index, so we directly get 1st entry of array\n lgcIndMdl = np.where(np.isclose(aryMdlParams, vecPrm,\n atol=1e-04).all(axis=1))[0][0]\n\n if np.all(np.invert(lgcPrm)):\n if lgcPrint:\n print('------------No voxel found, process ' + str(idxPrc))\n # Mark those voxels that were visited\n vecVxlTst[lgcPrm] += 1\n\n # Get voxel time course\n aryVxlTc = aryFuncChnk[:, lgcPrm]\n\n # Get model time courses\n aryMdlTc = aryPrfTc[lgcIndMdl, :, :].T\n\n # Calculate beta parameter estimates for entire model\n aryBstBts[lgcPrm, :] = np.linalg.lstsq(aryMdlTc,\n aryVxlTc,\n rcond=-1)[0].T\n\n # loop over cross-validation folds\n for idxXval in 
range(varNumXval):\n # Get functional data for tst:\n aryFuncChnkTst = aryVxlTc[\n aryIdxTst[:, idxXval], :]\n # Deviation from the mean for each datapoint:\n aryFuncDev = np.subtract(aryFuncChnkTst,\n np.mean(aryFuncChnkTst,\n axis=0)[None, :])\n # Sum of squares:\n vecSsTot = np.sum(np.power(aryFuncDev,\n 2.0),\n axis=0)\n arySsTotXval[lgcPrm, idxXval] = vecSsTot\n\n # check that every voxel was visited exactly once\n errMsg = 'At least one voxel visited more than once for SStot calc'\n assert len(vecVxlTst) == np.sum(vecVxlTst), errMsg\n\n # Calculate coefficient of determination by comparing:\n # aryBstResFlds vs. arySsTotXval\n\n # get logical to check that arySsTotXval is greater than zero in all\n # voxels and folds\n lgcExclZeros = np.all(np.greater(arySsTotXval, np.array([0.0])),\n axis=1)\n if lgcPrint:\n print('------------Nr of voxels: ' + str(len(lgcExclZeros)))\n print('------------Nr of voxels above 0: ' +\n str(np.sum(lgcExclZeros)))\n\n # Calculate R2 for every crossvalidation fold separately\n aryBstR2fld = np.subtract(\n 1.0, np.divide(aryBstResFlds,\n arySsTotXval))\n\n # Calculate mean R2 across folds here\n vecBstR2 = np.subtract(\n 1.0, np.mean(np.divide(aryBstResFlds,\n arySsTotXval),\n axis=1))\n\n # Output list:\n lstOut = [idxPrc,\n vecBstXpos,\n vecBstYpos,\n vecBstSd,\n vecBstR2,\n aryBstBts,\n aryBstR2fld]\n\n queOut.put(lstOut)\n\n else:\n # To calculate the coefficient of determination, we start with the\n # total sum of squares (i.e.
the deviation of the data from the mean).\n # The mean of each time course:\n vecFuncMean = np.mean(aryFuncChnk, axis=0)\n # Deviation from the mean for each datapoint:\n aryFuncDev = np.subtract(aryFuncChnk, vecFuncMean[None, :])\n # Sum of squares:\n vecSsTot = np.sum(np.power(aryFuncDev,\n 2.0),\n axis=0)\n # Coefficient of determination:\n vecBstR2 = np.subtract(1.0,\n np.divide(vecBstRes,\n vecSsTot))\n\n # Output list:\n lstOut = [idxPrc,\n vecBstXpos,\n vecBstYpos,\n vecBstSd,\n vecBstR2,\n aryBstBts]\n\n queOut.put(lstOut)", "response": "This function finds the best fitting pRF model for a specific voxel time course using the CPU."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\ncheck the revert status of a revision. With this method, you can determine whether an edit is a 'reverting' edit, was 'reverted' by another edit and/or was 'reverted_to' by another edit. :Parameters: session : :class:`mwapi.Session` An API session to make use of rev_id : int the ID of the revision to check page_id : int the ID of the page the revision occupies (slower if not provided) radius : int a positive integer indicating the maximum number of revisions that can be reverted before : :class:`mwtypes.Timestamp` if set, limits the search for *reverting* revisions to those which were saved before this timestamp window : int if set, limits the search for *reverting* revisions to those which were saved within `window` seconds after the reverted edit rvprop : set( str ) a set of properties to include in revisions :Returns: A triple :class:`mwreverts.Revert` | `None` * reverting -- If this edit reverted other edit(s) * reverted -- If this edit was reverted by another edit * reverted_to -- If this edit was reverted to by another edit :Example: >>> import mwdb >>> import mwreverts.api >>> >>> schema = mwdb.Schema(\"mysql+pymysql://enwiki.labsdb/enwiki_p\" + \"?read_default_file=~/replica.my.cnf\") >>> >>> def print_revert(revert): ... if revert is None: ... 
print(None) ... else: ... print(revert.reverting['rev_id'], ... [r['rev_id'] for r in revert.reverteds], ... revert.reverted_to['rev_id']) ... >>> reverting, reverted, reverted_to = \\\\ ... mwreverts.db.check(schema, 679778587) >>> print_revert(reverting) None >>> print_revert(reverted) 679778743 [679778587] 679742862 >>> print_revert(reverted_to) None", "response": "def check(schema, rev_id, page_id=None, radius=defaults.RADIUS,\n before=None, window=None):\n \"\"\"\n Checks the revert status of a revision. With this method, you can\n determine whether an edit is a 'reverting' edit, was 'reverted' by another\n edit and/or was 'reverted_to' by another edit.\n\n :Parameters:\n session : :class:`mwapi.Session`\n An API session to make use of\n rev_id : int\n the ID of the revision to check\n page_id : int\n the ID of the page the revision occupies (slower if not provided)\n radius : int\n a positive integer indicating the maximum number of revisions\n that can be reverted\n before : :class:`mwtypes.Timestamp`\n if set, limits the search for *reverting* revisions to those which\n were saved before this timestamp\n window : int\n if set, limits the search for *reverting* revisions to those which\n were saved within `window` seconds after the reverted edit\n rvprop : set( str )\n a set of properties to include in revisions\n\n :Returns:\n A triple :class:`mwreverts.Revert` | `None`\n\n * reverting -- If this edit reverted other edit(s)\n * reverted -- If this edit was reverted by another edit\n * reverted_to -- If this edit was reverted to by another edit\n\n :Example:\n\n >>> import mwdb\n >>> import mwreverts.api\n >>>\n >>> schema = mwdb.Schema(\"mysql+pymysql://enwiki.labsdb/enwiki_p\" +\n \"?read_default_file=~/replica.my.cnf\")\n >>>\n >>> def print_revert(revert):\n ... if revert is None:\n ... print(None)\n ... else:\n ... print(revert.reverting['rev_id'],\n ... [r['rev_id'] for r in revert.reverteds],\n ... 
revert.reverted_to['rev_id'])\n ...\n >>> reverting, reverted, reverted_to = \\\\\n ... mwreverts.db.check(schema, 679778587)\n >>> print_revert(reverting)\n None\n >>> print_revert(reverted)\n 679778743 [679778587] 679742862\n >>> print_revert(reverted_to)\n None\n\n \"\"\"\n\n rev_id = int(rev_id)\n radius = int(radius)\n if radius < 1:\n raise TypeError(\"invalid radius. Expected a positive integer.\")\n\n page_id = int(page_id) if page_id is not None else None\n before = Timestamp(before) if before is not None else None\n\n # If we don't have the page_id, we're going to need to look them up\n if page_id is None:\n page_id = get_page_id(schema, rev_id)\n\n # Load history and current rev\n current_and_past_revs = list(n_edits_before(\n schema, rev_id + 1, page_id, n=radius + 1))\n\n if len(current_and_past_revs) < 1:\n raise KeyError(\"Revision {0} not found in page {1}.\"\n .format(rev_id, page_id))\n\n current_rev, past_revs = (\n current_and_past_revs[-1], # Current rev is the last one returned\n current_and_past_revs[:-1] # The rest are past revs\n )\n if current_rev.rev_id != rev_id:\n raise KeyError(\"Revision {0} not found in page {1}.\"\n .format(rev_id, page_id))\n\n if window is not None and before is None:\n before = Timestamp(current_rev.rev_timestamp) + window\n\n # Load future revisions\n future_revs = list(n_edits_after(\n schema, rev_id, page_id, n=radius, before=before))\n\n return build_revert_tuple(\n rev_id, past_revs, current_rev, future_revs, radius)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef check_archive(schema, rev_id, namespace=None, title=None, timestamp=None,\n radius=defaults.RADIUS,\n before=None, window=None):\n \"\"\"\n Checks the revert status of an archived revision (from a deleted page).\n With this method, you can determine whether an edit is a 'reverting'\n edit, was 'reverted' by another edit and/or was 'reverted_to' by\n another edit.\n\n 
:Parameters:\n session : :class:`mwapi.Session`\n An API session to make use of\n rev_id : int\n the ID of the revision to check\n namespace : int\n the namespace ID of the page the revision exists in\n title : str\n the title of the page the revision exists in\n timestamp : :class:`mwtypes.Timestamp`\n the timestamp that the revision for `rev_id` was saved\n radius : int\n a positive integer indicating the maximum number of revisions\n that can be reverted\n before : :class:`mwtypes.Timestamp`\n if set, limits the search for *reverting* revisions to those which\n were saved before this timestamp\n window : int\n if set, limits the search for *reverting* revisions to those which\n were saved within `window` seconds after the reverted edit\n rvprop : set( str )\n a set of properties to include in revisions\n\n :Returns:\n A triple :class:`mwreverts.Revert`\n\n * reverting -- If this edit reverted other edit(s)\n * reverted -- If this edit was reverted by another edit\n * reverted_to -- If this edit was reverted to by another edit\n\n \"\"\"\n\n rev_id = int(rev_id)\n radius = int(radius)\n if radius < 1:\n raise TypeError(\"invalid radius. 
Expected a positive integer.\")\n\n namespace = int(namespace) if namespace is not None else None\n title = str(title) if title is not None else None\n timestamp = Timestamp(timestamp) if timestamp is not None else None\n before = Timestamp(before) if before is not None else None\n\n # If we don't have the page_id, we're going to need to look them up\n if namespace is None or title is None or timestamp is None:\n namespace, title, timestamp = \\\n get_archived_namespace_title_and_timestamp(schema, rev_id)\n\n # Load history and current rev\n current_and_past_revs = list(n_archived_edits_before(\n schema, rev_id + 1, namespace, title, timestamp + 1, n=radius + 1))\n\n if len(current_and_past_revs) < 1:\n raise KeyError(\"Revision {0} not found in page {1}(ns={2}) @ {3}.\"\n .format(rev_id, title, namespace, timestamp))\n\n current_rev, past_revs = (\n current_and_past_revs[-1], # Current rev is the last one returned\n current_and_past_revs[:-1] # The rest are past revs\n )\n if current_rev.ar_rev_id != rev_id:\n raise KeyError(\"Revision {0} not found in page {1}(ns={2}) @ {3}.\"\n .format(rev_id, title, namespace, timestamp))\n\n if window is not None and before is None:\n before = Timestamp(current_rev.ar_timestamp) + window\n\n # Load future revisions\n future_revs = list(n_archived_edits_after(\n schema, rev_id, namespace, title, timestamp, n=radius, before=before))\n\n return build_revert_tuple(\n rev_id, past_revs, current_rev, future_revs, radius)", "response": "Checks the revert status of an archived revision."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function that can\nconvert the given value to a JSON object.", "response": "def _get_json(value):\n \"\"\"Convert the given value to a JSON object.\"\"\"\n if hasattr(value, 'replace'):\n value = value.replace('\\n', ' ')\n try:\n return json.loads(value)\n except json.JSONDecodeError:\n # Escape double quotes.\n if hasattr(value, 'replace'):\n value = value.replace('\"', '\\\\\"')\n # try 
putting the value into a string\n return json.loads('\"{}\"'.format(value))"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef save_list(key, *values):\n return json.dumps({key: [_get_json(value) for value in values]})", "response": "Convert the given list of parameters to a JSON object."} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nconverts the given parameters to a special JSON object.", "response": "def save_file(key, file_path, *refs):\n \"\"\"Convert the given parameters to a special JSON object.\n\n JSON object is of the form:\n { key: {\"file\": file_path}}, or\n { key: {\"file\": file_path, \"refs\": [refs[0], refs[1], ... ]}}\n\n \"\"\"\n if not os.path.isfile(file_path):\n return error(\"Output '{}' set to a missing file: '{}'.\".format(key, file_path))\n\n result = {key: {\"file\": file_path}}\n\n if refs:\n missing_refs = [\n ref for ref in refs if not (os.path.isfile(ref) or os.path.isdir(ref))\n ]\n if len(missing_refs) > 0:\n return error(\n \"Output '{}' set to missing references: '{}'.\".format(\n key, ', '.join(missing_refs)\n )\n )\n result[key]['refs'] = refs\n\n return json.dumps(result)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\nconvert the given parameters to a special JSON object.", "response": "def save_file_list(key, *files_refs):\n \"\"\"Convert the given parameters to a special JSON object.\n\n Each parameter is a file-refs specification of the form:\n :,, ...,\n where the colon ':' and the list of references are optional.\n\n JSON object is of the form:\n { key: {\"file\": file_path}}, or\n { key: {\"file\": file_path, \"refs\": [refs[0], refs[1], ... 
]}}\n\n \"\"\"\n file_list = []\n for file_refs in files_refs:\n if ':' in file_refs:\n try:\n file_name, refs = file_refs.split(':')\n except ValueError as e:\n return error(\"Only one colon ':' allowed in file-refs specification.\")\n else:\n file_name, refs = file_refs, None\n if not os.path.isfile(file_name):\n return error(\n \"Output '{}' set to a missing file: '{}'.\".format(key, file_name)\n )\n file_obj = {'file': file_name}\n\n if refs:\n refs = [ref_path.strip() for ref_path in refs.split(',')]\n missing_refs = [\n ref for ref in refs if not (os.path.isfile(ref) or os.path.isdir(ref))\n ]\n if len(missing_refs) > 0:\n return error(\n \"Output '{}' set to missing references: '{}'.\".format(\n key, ', '.join(missing_refs)\n )\n )\n file_obj['refs'] = refs\n\n file_list.append(file_obj)\n\n return json.dumps({key: file_list})"} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef save_dir(key, dir_path, *refs):\n if not os.path.isdir(dir_path):\n return error(\n \"Output '{}' set to a missing directory: '{}'.\".format(key, dir_path)\n )\n\n result = {key: {\"dir\": dir_path}}\n\n if refs:\n missing_refs = [\n ref for ref in refs if not (os.path.isfile(ref) or os.path.isdir(ref))\n ]\n if len(missing_refs) > 0:\n return error(\n \"Output '{}' set to missing references: '{}'.\".format(\n key, ', '.join(missing_refs)\n )\n )\n result[key][\"refs\"] = refs\n\n return json.dumps(result)", "response": "Convert the given parameters to a special JSON object."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef save_dir_list(key, *dirs_refs):\n dir_list = []\n for dir_refs in dirs_refs:\n if ':' in dir_refs:\n try:\n dir_path, refs = dir_refs.split(':')\n except ValueError as e:\n return error(\"Only one colon ':' allowed in dir-refs specification.\")\n else:\n dir_path, refs = dir_refs, None\n if not os.path.isdir(dir_path):\n return error(\n \"Output 
'{}' set to a missing directory: '{}'.\".format(key, dir_path)\n )\n dir_obj = {'dir': dir_path}\n\n if refs:\n refs = [ref_path.strip() for ref_path in refs.split(',')]\n missing_refs = [\n ref for ref in refs if not (os.path.isfile(ref) or os.path.isdir(ref))\n ]\n if len(missing_refs) > 0:\n return error(\n \"Output '{}' set to missing references: '{}'.\".format(\n key, ', '.join(missing_refs)\n )\n )\n dir_obj['refs'] = refs\n\n dir_list.append(dir_obj)\n\n return json.dumps({key: dir_list})", "response": "Convert the given parameters to a special JSON object."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef progress(progress):\n if isinstance(progress, int) or isinstance(progress, float):\n progress = float(progress)\n else:\n try:\n progress = float(json.loads(progress))\n except (TypeError, ValueError):\n return warning(\"Progress must be a float.\")\n\n if not 0 <= progress <= 1:\n return warning(\"Progress must be a float between 0 and 1.\")\n\n return json.dumps({'proc.progress': progress})", "response": "Convert given progress to a JSON object."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef checkrc(rc, *args):\n try:\n rc = int(rc)\n except (TypeError, ValueError):\n return error(\"Invalid return code: '{}'.\".format(rc))\n\n acceptable_rcs = []\n error_msg = \"\"\n\n if len(args):\n for code in args[:-1]:\n try:\n acceptable_rcs.append(int(code))\n except (TypeError, ValueError):\n return error(\"Invalid return code: '{}'.\".format(code))\n\n try:\n acceptable_rcs.append(int(args[-1]))\n except (TypeError, ValueError):\n error_msg = args[-1]\n\n if rc in acceptable_rcs:\n rc = 0\n\n ret = {'proc.rc': rc}\n if rc and error_msg:\n ret['proc.error'] = error_msg\n\n return json.dumps(ret)", "response": "Check if return code is in list that contains acceptable return codes."} {"SOURCE": "codesearchnet", "instruction": "Given 
the following Python 3 function, write the documentation\ndef import_file(\n src,\n file_name,\n imported_format=ImportedFormat.BOTH,\n progress_from=0.0,\n progress_to=None,\n):\n \"\"\"Import file to working directory.\n\n :param src: Source file path or URL\n :param file_name: Source file name\n :param imported_format: Import file format (extracted, compressed or both)\n :param progress_from: Initial progress value\n :param progress_to: Final progress value\n :return: Destination file path (if extracted and compressed, extracted path given)\n \"\"\"\n\n if progress_to is not None:\n if not isinstance(progress_from, float) or not isinstance(progress_to, float):\n raise ValueError(\"Progress_from and progress_to must be float\")\n\n if progress_from < 0 or progress_from > 1:\n raise ValueError(\"Progress_from must be between 0 and 1\")\n\n if progress_to < 0 or progress_to > 1:\n raise ValueError(\"Progress_to must be between 0 and 1\")\n\n if progress_from >= progress_to:\n raise ValueError(\"Progress_to must be higher than progress_from\")\n\n print(\"Importing and compressing {}...\".format(file_name))\n\n def importGz():\n \"\"\"Import gzipped file.\n\n The file_name must have .gz extension.\n \"\"\"\n if imported_format != ImportedFormat.COMPRESSED: # Extracted file required\n with open(file_name[:-3], 'wb') as f_out, gzip.open(src, 'rb') as f_in:\n try:\n shutil.copyfileobj(f_in, f_out, CHUNK_SIZE)\n except zlib.error:\n raise ValueError(\"Invalid gzip file format: {}\".format(file_name))\n\n else: # Extracted file not-required\n # Verify the compressed file.\n with gzip.open(src, 'rb') as f:\n try:\n while f.read(CHUNK_SIZE) != b'':\n pass\n except zlib.error:\n raise ValueError(\"Invalid gzip file format: {}\".format(file_name))\n\n if imported_format != ImportedFormat.EXTRACTED: # Compressed file required\n try:\n shutil.copyfile(src, file_name)\n except shutil.SameFileError:\n pass # Skip copy of downloaded files\n\n if imported_format == 
ImportedFormat.COMPRESSED:\n return file_name\n else:\n return file_name[:-3]\n\n def import7z():\n \"\"\"Import compressed file in various formats.\n\n Supported extensions: .bz2, .zip, .rar, .7z, .tar.gz, and .tar.bz2.\n \"\"\"\n extracted_name, _ = os.path.splitext(file_name)\n destination_name = extracted_name\n temp_dir = 'temp_{}'.format(extracted_name)\n\n cmd = '7z x -y -o{} {}'.format(shlex.quote(temp_dir), shlex.quote(src))\n try:\n subprocess.check_call(cmd, shell=True)\n except subprocess.CalledProcessError as err:\n if err.returncode == 2:\n raise ValueError(\"Failed to extract file: {}\".format(file_name))\n else:\n raise\n\n paths = os.listdir(temp_dir)\n if len(paths) == 1 and os.path.isfile(os.path.join(temp_dir, paths[0])):\n # Single file in archive.\n temp_file = os.path.join(temp_dir, paths[0])\n\n if imported_format != ImportedFormat.EXTRACTED: # Compressed file required\n with open(temp_file, 'rb') as f_in, gzip.open(\n extracted_name + '.gz', 'wb'\n ) as f_out:\n shutil.copyfileobj(f_in, f_out, CHUNK_SIZE)\n\n if imported_format != ImportedFormat.COMPRESSED: # Extracted file required\n shutil.move(temp_file, './{}'.format(extracted_name))\n\n if extracted_name.endswith('.tar'):\n with tarfile.open(extracted_name) as tar:\n tar.extractall()\n\n os.remove(extracted_name)\n destination_name, _ = os.path.splitext(extracted_name)\n else:\n destination_name = extracted_name + '.gz'\n else:\n # Directory or several files in archive.\n if imported_format != ImportedFormat.EXTRACTED: # Compressed file required\n with tarfile.open(extracted_name + '.tar.gz', 'w:gz') as tar:\n for fname in glob.glob(os.path.join(temp_dir, '*')):\n tar.add(fname, os.path.basename(fname))\n\n if imported_format != ImportedFormat.COMPRESSED: # Extracted file required\n for path in os.listdir(temp_dir):\n shutil.move(os.path.join(temp_dir, path), './{}'.format(path))\n else:\n destination_name = extracted_name + '.tar.gz'\n\n shutil.rmtree(temp_dir)\n return 
destination_name\n\n def importUncompressed():\n \"\"\"Import uncompressed file.\"\"\"\n if imported_format != ImportedFormat.EXTRACTED: # Compressed file required\n with open(src, 'rb') as f_in, gzip.open(file_name + '.gz', 'wb') as f_out:\n shutil.copyfileobj(f_in, f_out, CHUNK_SIZE)\n\n if imported_format != ImportedFormat.COMPRESSED: # Extracted file required\n try:\n shutil.copyfile(src, file_name)\n except shutil.SameFileError:\n pass # Skip copy of downloaded files\n\n return (\n file_name + '.gz'\n if imported_format == ImportedFormat.COMPRESSED\n else file_name\n )\n\n # Large file download from Google Drive requires cookie and token.\n try:\n response = None\n if re.match(\n r'^https://drive.google.com/[-A-Za-z0-9\\+&@#/%?=~_|!:,.;]*[-A-Za-z0-9\\+&@#/%=~_|]$',\n src,\n ):\n session = requests.Session()\n response = session.get(src, stream=True)\n\n token = None\n for key, value in response.cookies.items():\n if key.startswith('download_warning'):\n token = value\n break\n\n if token is not None:\n params = {'confirm': token}\n response = session.get(src, params=params, stream=True)\n\n elif re.match(\n r'^(https?|ftp)://[-A-Za-z0-9\\+&@#/%?=~_|!:,.;]*[-A-Za-z0-9\\+&@#/%=~_|]$',\n src,\n ):\n response = requests.get(src, stream=True)\n except requests.exceptions.ConnectionError:\n raise requests.exceptions.ConnectionError(\"Could not connect to {}\".format(src))\n\n if response:\n with open(file_name, 'wb') as f:\n total = response.headers.get('content-length')\n total = float(total) if total else None\n downloaded = 0\n current_progress = 0\n for content in response.iter_content(chunk_size=CHUNK_SIZE):\n f.write(content)\n\n if total is not None and progress_to is not None:\n downloaded += len(content)\n progress_span = progress_to - progress_from\n next_progress = progress_from + progress_span * downloaded / total\n next_progress = round(next_progress, 2)\n\n if next_progress > current_progress:\n print(progress(next_progress))\n current_progress = 
next_progress\n\n # Check if a temporary file exists.\n if not os.path.isfile(file_name):\n raise ValueError(\"Downloaded file not found {}\".format(file_name))\n\n src = file_name\n else:\n if not os.path.isfile(src):\n raise ValueError(\"Source file not found {}\".format(src))\n\n # Decide which import should be used.\n if re.search(r'\\.(bz2|zip|rar|7z|tgz|tar\\.gz|tar\\.bz2)$', file_name):\n destination_file_name = import7z()\n elif file_name.endswith('.gz'):\n destination_file_name = importGz()\n else:\n destination_file_name = importUncompressed()\n\n if progress_to is not None:\n print(progress(progress_to))\n\n return destination_file_name", "response": "Imports a file from a source file to a target file."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nload nii data from nii files", "response": "def loadNiiData(lstNiiFls,\n strPathNiiMask=None,\n strPathNiiFunc=None):\n \"\"\"load nii data.\n\n Parameters\n ----------\n lstNiiFls : list, list of str with nii file names\n strPathNiiMask : str, path to nii file with mask (optional)\n strPathNiiFunc : str, parent path to nii files (optional)\n Returns\n -------\n aryFunc : np.array\n Nii data \n \"\"\"\n print('---------Loading nii data')\n # check whether a mask is available\n if strPathNiiMask is not None:\n aryMask = nb.load(strPathNiiMask).get_data().astype('bool')\n # check a parent path is available that needs to be preprended to nii files\n if strPathNiiFunc is not None:\n lstNiiFls = [os.path.join(strPathNiiFunc, i) for i in lstNiiFls]\n\n aryFunc = []\n for idx, path in enumerate(lstNiiFls):\n print('------------Loading run: ' + str(idx+1))\n # Load 4D nii data:\n niiFunc = nb.load(path).get_data()\n # append to list\n if strPathNiiMask is not None:\n aryFunc.append(niiFunc[aryMask, :])\n else:\n aryFunc.append(niiFunc)\n # concatenate arrys in list along time dimension\n aryFunc = np.concatenate(aryFunc, axis=-1)\n # set to type float32\n aryFunc = 
aryFunc.astype('float32')\n\n return aryFunc"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ncalculating coefficient of determination. Assumes that axis = 0 is time", "response": "def calcR2(predTst, yTest, axis=0):\n \"\"\"calculate coefficient of determination. Assumes that axis=0 is time\n\n Parameters\n ----------\n predTst : np.array, predicted response for yTest\n yTest : np.array, actually observed response for yTest\n Returns\n -------\n aryFunc : np.array\n R2\n \"\"\"\n rss = np.sum((yTest - predTst) ** 2, axis=axis)\n tss = np.sum((yTest - yTest.mean()) ** 2, axis=axis)\n\n return 1 - rss/tss"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef calcFstats(predTst, yTest, p, axis=0):\n rss = np.sum((yTest - predTst) ** 2, axis=axis)\n tss = np.sum((yTest - yTest.mean()) ** 2, axis=axis)\n # derive number of measurements\n n = yTest.shape[0]\n # calculate F values\n vecFvals = ((tss - rss)/p)/(rss/(n-p-1))\n # calculate corresponding p values; degrees of freedom must match the F statistic\n df1 = p\n df2 = n - p - 1\n vecPvals = stats.f.sf(vecFvals, df1, df2)\n\n return vecFvals, vecPvals", "response": "calculate F values and their corresponding p values. Assumes that axis = 0 is time\n "} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef calcMse(predTst, yTest, axis=0):\n return np.mean((yTest - predTst) ** 2, axis=axis)", "response": "calculate mean squared error. Assumes that axis = 0 is time"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\ndetecting reverts that occur in a sequence of revisions.", "response": "def detect(checksum_revisions, radius=defaults.RADIUS):\n \"\"\"\n Detects reverts that occur in a sequence of revisions. 
Note that\n `revision` metadata will simply be returned in the case of a revert.\n\n This function serves as a convenience wrapper around calls to\n :class:`mwreverts.Detector`'s :func:`~mwreverts.Detector.process`\n method.\n\n :Parameters:\n checksum_revisions : `iterable` ( (checksum, revision) )\n an iterable over tuples of checksum and revision meta data\n radius : int\n a positive integer indicating the maximum revision distance that a\n revert can span.\n\n :Return:\n an iterator over :class:`mwreverts.Revert`\n\n :Example:\n >>> import mwreverts\n >>>\n >>> checksum_revisions = [\n ... (\"aaa\", {'rev_id': 1}),\n ... (\"bbb\", {'rev_id': 2}),\n ... (\"aaa\", {'rev_id': 3}),\n ... (\"ccc\", {'rev_id': 4})\n ... ]\n >>>\n >>> list(mwreverts.detect(checksum_revisions))\n [Revert(reverting={'rev_id': 3},\n reverteds=[{'rev_id': 2}],\n reverted_to={'rev_id': 1})]\n\n \"\"\"\n\n revert_detector = Detector(radius)\n\n for checksum, revision in checksum_revisions:\n revert = revert_detector.process(checksum, revision)\n if revert is not None:\n yield revert"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef getChargeInfo(self, CorpNum, ItemCode, UserID=None):\r\n if ItemCode == None or ItemCode == '':\r\n raise PopbillException(-99999999, \"\uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n return self._httpget('/Statement/ChargeInfo/' + ItemCode, CorpNum, UserID)", "response": "Returns the charge information for the given statement item code."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef getUnitCost(self, CorpNum, ItemCode):\r\n if ItemCode == None or ItemCode == \"\":\r\n raise PopbillException(-99999999, \"\uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n result = self._httpget('/Statement/' + 
str(ItemCode) + '?cfg=UNITCOST', CorpNum)\r\n return float(result.unitCost)", "response": "Returns the issue unit cost for the given statement item code as a float."} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef registIssue(self, CorpNum, statement, Memo=None, UserID=None):\r\n if statement == None:\r\n raise PopbillException(-99999999, \"\ub4f1\ub85d\ud560 \uc804\uc790\uba85\uc138\uc11c \uc815\ubcf4\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n if Memo != None and Memo != '':\r\n statement.memo = Memo\r\n\r\n postData = self._stringtify(statement)\r\n\r\n return self._httppost('/Statement', postData, CorpNum, UserID, \"ISSUE\")", "response": "\uc989\uc2dc\ubc1c\ud589\r\n args\r\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\r\n statement : \ub4f1\ub85d\ud560 \uc804\uc790\uba85\uc138\uc11c object. made with Statement(...)\r\n Memo : \uc989\uc2dc\ubc1c\ud589\uba54\ubaa8\r\n\r\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\r\n return\r\n \ucc98\ub9ac\uacb0\uacfc. consist of code and message\r\n raise\r\n PopbillException"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef register(self, CorpNum, statement, UserID=None):\r\n if statement == None:\r\n raise PopbillException(-99999999, \"\ub4f1\ub85d\ud560 \uc804\uc790\uba85\uc138\uc11c \uc815\ubcf4\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n postData = self._stringtify(statement)\r\n\r\n return self._httppost('/Statement', postData, CorpNum, UserID)", "response": "\uc784\uc2dc\uc800\uc7a5\r\n args\r\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\r\n statement : \ub4f1\ub85d\ud560 \uc804\uc790\uba85\uc138\uc11c object. made with Statement(...)\r\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\r\n return\r\n \ucc98\ub9ac\uacb0\uacfc. 
consist of code and message\r\n raise\r\n PopbillException"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef update(self, CorpNum, ItemCode, MgtKey, Statement, UserID=None):\r\n\r\n if Statement == None:\r\n raise PopbillException(-99999999, \"\ub4f1\ub85d\ud560 \uc804\uc790\uba85\uc138\uc11c \uc815\ubcf4\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if MgtKey == None or MgtKey == \"\":\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if ItemCode == None or ItemCode == \"\":\r\n raise PopbillException(-99999999, \"\uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n postData = self._stringtify(Statement)\r\n\r\n return self._httppost('/Statement/' + str(ItemCode) + '/' + MgtKey, postData, CorpNum, UserID, 'PATCH')", "response": "\uc784\uc2dc\uc800\uc7a5\r\n args\r\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\r\n ItemCode : \uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\r\n [121 - \uac70\ub798\uba85\uc138\uc11c], [122 - \uccad\uad6c\uc11c], [123 - \uacac\uc801\uc11c],\r\n [124 - \ubc1c\uc8fc\uc11c], [125 - \uc785\uae08\ud45c], [126 - \uc601\uc218\uc99d]\r\n MgtKey : \ud30c\ud2b8\ub108 \ubb38\uc11c\uad00\ub9ac\ubc88\ud638\r\n Statement : \ub4f1\ub85d\ud560 \uc804\uc790\uba85\uc138\uc11c object. made with Statement(...)\r\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\r\n return\r\n \ucc98\ub9ac\uacb0\uacfc. 
consist of code and message\r\n raise\r\n PopbillException"} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function for\nissuing a new item from the given CorpNum.", "response": "def issue(self, CorpNum, ItemCode, MgtKey, Memo=None, EmailSubject=None, UserID=None):\r\n \"\"\" \ubc1c\ud589\r\n args\r\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\r\n ItemCode : \uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\r\n [121 - \uac70\ub798\uba85\uc138\uc11c], [122 - \uccad\uad6c\uc11c], [123 - \uacac\uc801\uc11c],\r\n [124 - \ubc1c\uc8fc\uc11c], [125 - \uc785\uae08\ud45c], [126 - \uc601\uc218\uc99d]\r\n MgtKey : \ud30c\ud2b8\ub108 \ubb38\uc11c\uad00\ub9ac\ubc88\ud638\r\n Memo : \ucc98\ub9ac\uba54\ubaa8\r\n EmailSubject : \ubc1c\ud589\uba54\uc77c \uc81c\ubaa9(\ubbf8\uae30\uc7ac\uc2dc \uae30\ubcf8\uc591\uc2dd\uc73c\ub85c \uc804\uc1a1)\r\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\r\n return\r\n \ucc98\ub9ac\uacb0\uacfc. consist of code and message\r\n raise\r\n PopbillException\r\n \"\"\"\r\n if MgtKey == None or MgtKey == \"\":\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if ItemCode == None or ItemCode == \"\":\r\n raise PopbillException(-99999999, \"\uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n req = {}\r\n postData = \"\"\r\n\r\n if Memo != None and Memo != '':\r\n req[\"memo\"] = Memo\r\n if EmailSubject != None and EmailSubject != '':\r\n req[\"emailSubject\"] = EmailSubject\r\n\r\n postData = self._stringtify(req)\r\n\r\n return self._httppost('/Statement/' + str(ItemCode) + '/' + MgtKey, postData, CorpNum, UserID, \"ISSUE\")"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef cancel(self, CorpNum, ItemCode, MgtKey, Memo=None, UserID=None):\r\n\r\n if MgtKey == None or MgtKey == \"\":\r\n raise 
PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if ItemCode == None or ItemCode == \"\":\r\n raise PopbillException(-99999999, \"\uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n postData = ''\r\n\r\n if Memo != None and Memo != '':\r\n postData = self._stringtify({\"memo\": Memo})\r\n\r\n return self._httppost('/Statement/' + str(ItemCode) + '/' + MgtKey, postData, CorpNum, UserID, \"CANCEL\")", "response": "Cancel a single item from a given CorpNum."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\ngetting Info from Alembic.", "response": "def getInfo(self, CorpNum, ItemCode, MgtKey):\r\n \"\"\" \uc0c1\ud0dc/\uc694\uc57d \uc815\ubcf4 \ud655\uc778\r\n args\r\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\r\n ItemCode : \uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\r\n [121 - \uac70\ub798\uba85\uc138\uc11c], [122 - \uccad\uad6c\uc11c], [123 - \uacac\uc801\uc11c],\r\n [124 - \ubc1c\uc8fc\uc11c], [125 - \uc785\uae08\ud45c], [126 - \uc601\uc218\uc99d]\r\n MgtKey : \ud30c\ud2b8\ub108 \ubb38\uc11c\uad00\ub9ac\ubc88\ud638\r\n return\r\n \ubb38\uc11c \uc0c1\ud0dc/\uc694\uc57d\uc815\ubcf4 object\r\n raise\r\n PopbillException\r\n \"\"\"\r\n if MgtKey == None or MgtKey == \"\":\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if ItemCode == None or ItemCode == \"\":\r\n raise PopbillException(-99999999, \"\uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n return self._httpget('/Statement/' + str(ItemCode) + '/' + MgtKey, CorpNum)"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getDetailInfo(self, CorpNum, ItemCode, MgtKey):\r\n if MgtKey == None or MgtKey == \"\":\r\n 
raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if ItemCode == None or ItemCode == \"\":\r\n raise PopbillException(-99999999, \"\uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n return self._httpget('/Statement/' + str(ItemCode) + '/' + MgtKey + '?Detail', CorpNum)", "response": "Get the details of a particular item in a particular CorpNum."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nattaches File to CorpNum", "response": "def attachFile(self, CorpNum, ItemCode, MgtKey, FilePath, UserID=None):\r\n \"\"\" \ud30c\uc77c \ucca8\ubd80\r\n args\r\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\r\n ItemCode : \uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\r\n [121 - \uac70\ub798\uba85\uc138\uc11c], [122 - \uccad\uad6c\uc11c], [123 - \uacac\uc801\uc11c],\r\n [124 - \ubc1c\uc8fc\uc11c], [125 - \uc785\uae08\ud45c], [126 - \uc601\uc218\uc99d]\r\n MgtKey : \ud30c\ud2b8\ub108 \ubb38\uc11c\uad00\ub9ac\ubc88\ud638\r\n FilePath : \ucca8\ubd80\ud30c\uc77c\uc758 \uacbd\ub85c\r\n UserID : \ud31d\ube4c \ud68c\uc6d0\uc544\uc774\ub514\r\n return\r\n \ucc98\ub9ac\uacb0\uacfc. 
consist of code and message\r\n raise\r\n PopbillException\r\n \"\"\"\r\n if MgtKey == None or MgtKey == \"\":\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if FilePath == None or FilePath == \"\":\r\n raise PopbillException(-99999999, \"\ud30c\uc77c\uacbd\ub85c\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if ItemCode == None or ItemCode == \"\":\r\n raise PopbillException(-99999999, \"\uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n files = []\r\n\r\n try:\r\n with open(FilePath, \"rb\") as F:\r\n files = [File(fieldName='Filedata',\r\n fileName=F.name,\r\n fileData=F.read())]\r\n except IOError:\r\n raise PopbillException(-99999999, \"\ud574\ub2f9\uacbd\ub85c\uc5d0 \ud30c\uc77c\uc774 \uc5c6\uac70\ub098 \uc77d\uc744 \uc218 \uc5c6\uc2b5\ub2c8\ub2e4.\")\r\n\r\n return self._httppost_files('/Statement/' + str(ItemCode) + '/' + MgtKey + '/Files', None, files, CorpNum,\r\n UserID)"} {"SOURCE": "codesearchnet", "instruction": "Implement a function in Python 3 to\ndelete a file from the given CorpNum and MgtKey.", "response": "def deleteFile(self, CorpNum, ItemCode, MgtKey, FileID, UserID=None):\r\n \"\"\" \ucca8\ubd80\ud30c\uc77c \uc0ad\uc81c\r\n args\r\n CorpNum : \ud31d\ube4c\ud68c\uc6d0 \uc0ac\uc5c5\uc790\ubc88\ud638\r\n ItemCode : \uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\r\n [121 - \uac70\ub798\uba85\uc138\uc11c], [122 - \uccad\uad6c\uc11c], [123 - \uacac\uc801\uc11c],\r\n [124 - \ubc1c\uc8fc\uc11c], [125 - \uc785\uae08\ud45c], [126 - \uc601\uc218\uc99d]\r\n MgtKey : \ud30c\ud2b8\ub108 \ubb38\uc11c\uad00\ub9ac\ubc88\ud638\r\n FileID : \ud30c\uc77c\uc544\uc774\ub514, \ucca8\ubd80\ud30c\uc77c \ubaa9\ub85d\ud655\uc778(getFiles) API \uc751\ub2f5\uc804\ubb38\uc758 AttachedFile \ubcc0\uc218\uac12\r\n UserID : \ud31d\ube4c\ud68c\uc6d0 \uc544\uc774\ub514\r\n return\r\n \ucca8\ubd80\ud30c\uc77c 
\uc815\ubcf4 \ubaa9\ub85d as List\r\n raise\r\n PopbillException\r\n \"\"\"\r\n if MgtKey == None or MgtKey == \"\":\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if ItemCode == None or ItemCode == \"\":\r\n raise PopbillException(-99999999, \"\uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if FileID == None or FileID == \"\":\r\n raise PopbillException(-99999999, \"\ud30c\uc77c\uc544\uc774\ub514\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n postData = ''\r\n\r\n return self._httppost('/Statement/' + str(ItemCode) + '/' + MgtKey + '/Files/' + FileID, postData, CorpNum,\r\n UserID, 'DELETE')"} {"SOURCE": "codesearchnet", "instruction": "Given the following Python 3 function, write the documentation\ndef getMassPrintURL(self, CorpNum, ItemCode, MgtKeyList, UserID=None):\r\n if MgtKeyList == None:\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638 \ubc30\uc5f4\uc774 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if ItemCode == None or ItemCode == \"\":\r\n raise PopbillException(-99999999, \"\uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n postData = self._stringtify(MgtKeyList)\r\n\r\n result = self._httppost('/Statement/' + str(ItemCode) + '?Print', postData, CorpNum, UserID)\r\n\r\n return result.url", "response": "\uc778\uc1c4 URL \ud655\uc778 \ud655\uc778"} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef attachStatement(self, CorpNum, ItemCode, MgtKey, SubItemCode, SubMgtKey, UserID=None):\r\n if MgtKey == None or MgtKey == \"\":\r\n raise PopbillException(-99999999, \"\uad00\ub9ac\ubc88\ud638\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n if ItemCode == None or ItemCode == \"\":\r\n raise 
PopbillException(-99999999, \"\uba85\uc138\uc11c \uc885\ub958 \ucf54\ub4dc\uac00 \uc785\ub825\ub418\uc9c0 \uc54a\uc558\uc2b5\ub2c8\ub2e4.\")\r\n\r\n uri = '/Statement/' + ItemCode + '/' + MgtKey + '/AttachStmt'\r\n\r\n postData = self._stringtify({\"ItemCode\": SubItemCode, \"MgtKey\": SubMgtKey})\r\n\r\n return self._httppost(uri, postData, CorpNum, UserID)", "response": "Attaches another statement, identified by SubItemCode and SubMgtKey, to the given statement."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef process(self, article):\n if self.article is not None and not self.collection:\n log.warning('Could not process additional article. Package only \\\nhandles one article unless collection mode is set.')\n return False\n\n if article.publisher is None:\n log.error('''Package cannot be generated for an Article \\\nwithout a publisher!''')\n return\n self.article = article\n self.article_doi = self.article.doi.split('/')[1]\n self.all_dois.append(self.article.doi)\n\n #Analyze the article to add entries to the spine\n dash_doi = self.article_doi.replace('.', '-')\n\n #Entry for the main content document\n main_idref = 'main-{0}-xhtml'.format(dash_doi)\n self.spine_list.append(spine_item(main_idref, True))\n\n #Entry for the biblio content document\n biblio_idref = 'biblio-{0}-xhtml'.format(dash_doi)\n if self.article.root.xpath('./back/ref-list/ref'):\n self.spine_list.append(spine_item(biblio_idref, True))\n\n #Entry for the tables content document\n tables_idref = 'tables-{0}-xhtml'.format(dash_doi)\n if self.article.publisher.has_out_of_flow_tables():\n self.spine_list.append(spine_item(tables_idref, False))\n\n self.acquire_metadata()", "response": "Processes an article and adds the appropriate spine items to the internal spine list."} {"SOURCE": "codesearchnet", "instruction": "How would you explain what the following Python 3 function does\ndef acquire_metadata(self):\n #For space economy\n publisher = self.article.publisher\n\n if 
self.collection: # collection mode metadata gathering\n pass\n else: # single mode metadata gathering\n self.pub_id = publisher.package_identifier()\n self.title = publisher.package_title()\n for date in publisher.package_date():\n self.dates.add(date)\n\n #Common metadata gathering\n for lang in publisher.package_language():\n self.languages.add(lang) # languages\n for contributor in publisher.package_contributors(): # contributors\n self.contributors.add(contributor)\n self.publishers.add(publisher.package_publisher()) # publisher names\n desc = publisher.package_description()\n if desc is not None:\n self.descriptions.add(desc)\n for subj in publisher.package_subject():\n self.subjects.add(subj) # subjects\n #Rights\n art_rights = publisher.package_rights()\n self.rights.add(art_rights)\n if art_rights not in self.rights_associations:\n self.rights_associations[art_rights] = [self.article.doi]\n else:\n self.rights_associations[art_rights].append(self.article.doi)", "response": "Handles the acquisition of metadata for both collection mode and single mode."} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef file_manifest(self, location):\n #Maps file extensions to mimetypes\n mimetypes = {'.jpg': 'image/jpeg',\n '.jpeg': 'image/jpeg',\n '.xml': 'application/xhtml+xml',\n '.png': 'image/png',\n '.css': 'text/css',\n '.ncx': 'application/x-dtbncx+xml',\n '.gif': 'image/gif',\n '.tif': 'image/tif',\n '.pdf': 'application/pdf',\n '.xhtml': 'application/xhtml+xml',\n '.ttf': 'application/vnd.ms-opentype',\n '.otf': 'application/vnd.ms-opentype'}\n\n current_dir = os.getcwd()\n os.chdir(location)\n for dirpath, _dirnames, filenames in os.walk('.'):\n dirpath = dirpath[2:] # A means to avoid dirpath prefix of './'\n for fn in filenames:\n fn_ext = os.path.splitext(fn)[-1]\n item = etree.Element('item')\n #Here we set three attributes: href, media-type, and id\n if not dirpath:\n item.attrib['href'] = fn\n else:\n 
item.attrib['href'] = '/'.join([dirpath, fn])\n item.attrib['media-type'] = mimetypes[fn_ext]\n #Special handling for common image types\n if fn_ext in ['.jpg', '.png', '.tif', '.jpeg']:\n #the following lines assume we are using the convention\n #where the article doi is prefixed by 'images-'\n item.attrib['id'] = '-'.join([dirpath[7:],\n fn.replace('.', '-')])\n else:\n item.attrib['id'] = fn.replace('.', '-')\n yield item\n os.chdir(current_dir)", "response": "Returns an iterator over the files in a location which yields item elements that can be inserted into the package manifest."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script for\nreturning an appropriate Name and File - As - Name for a contrib element.", "response": "def get_contrib_names(self, contrib):\n \"\"\"\n Returns an appropriate Name and File-As-Name for a contrib element.\n\n This code was refactored out of nav_contributors and\n package_contributors to provide a single definition point for a common\n job. 
This is a useful utility that may be well-employed for other\n publishers as well.\n \"\"\"\n collab = contrib.find('collab')\n anon = contrib.find('anonymous')\n if collab is not None:\n proper_name = serialize(collab, strip=True)\n file_as_name = proper_name\n elif anon is not None:\n proper_name = 'Anonymous'\n file_as_name = proper_name\n else:\n name = contrib.find('name')\n surname = name.find('surname').text\n given = name.find('given-names')\n if given is not None:\n if given.text: # Sometimes these tags are empty\n proper_name = ' '.join([surname, given.text])\n #File-as name is , \n file_as_name = ', '.join([surname, given.text[0]])\n else:\n proper_name = surname\n file_as_name = proper_name\n else:\n proper_name = surname\n file_as_name = proper_name\n return proper_name, file_as_name"} {"SOURCE": "codesearchnet", "instruction": "Can you implement a function in Python 3 that\nreturns the article description for the current article.", "response": "def package_description(self):\n \"\"\"\n Given an Article class instance, this is responsible for returning an\n article description. For this method I have taken the approach of\n serializing the article's first abstract, if it has one. 
This results\n in 0 or 1 descriptions per article.\n \"\"\"\n abstract = self.article.root.xpath('./front/article-meta/abstract')\n return serialize(abstract[0], strip=True) if abstract else None"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef heading_title(self):\n art_title = self.article.root.xpath('./front/article-meta/title-group/article-title')[0]\n article_title = deepcopy(art_title)\n article_title.tag = 'h1'\n article_title.attrib['id'] = 'title'\n article_title.attrib['class'] = 'article-title'\n return article_title", "response": "Makes the Article Title for the Heading."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef make_heading_authors(self, authors):\n author_element = etree.Element('h3', {'class': 'authors'})\n #Construct content for the author element\n first = True\n for author in authors:\n if first:\n first = False\n else:\n append_new_text(author_element, ',', join_str='')\n collab = author.find('collab')\n anon = author.find('anon')\n if collab is not None:\n append_all_below(author_element, collab)\n elif anon is not None: # If anonymous, just add \"Anonymous\"\n append_new_text(author_element, 'Anonymous')\n else: # Author is neither Anonymous or a Collaboration\n author_name, _ = self.get_contrib_names(author)\n append_new_text(author_element, author_name)\n #TODO: Handle author footnote references, also put footnotes in the ArticleInfo\n #Example: journal.pbio.0040370.xml\n first = True\n for xref in author.xpath(\"./xref[@ref-type='corresp' or @ref-type='aff']\"):\n _sup = xref.find('sup')\n sup_text = all_text(_sup) if _sup is not None else ''\n auth_sup = etree.SubElement(author_element, 'sup')\n sup_link = etree.SubElement(auth_sup,\n 'a',\n {'href': self.main_fragment.format(xref.attrib['rid'])})\n sup_link.text = sup_text\n if first:\n first = False\n else:\n append_new_text(auth_sup, ', ', 
join_str='')\n #for xref in author.findall('xref'):\n #if xref.attrs['ref-type'] in ['corresp', 'aff']:\n\n #try:\n #sup_element = xref.sup[0].node\n #except IndexError:\n #sup_text = ''\n #else:\n #sup_text = all_text(sup_element)\n #new_sup = etree.SubElement(author_element, 'sup')\n #sup_link = etree.SubElement(new_sup, 'a')\n #sup_link.attrib['href'] = self.main_fragment.format(xref.attrs['rid'])\n #sup_link.text = sup_text\n #if first:\n #first = False\n #else:\n #new_sup.text = ','\n return author_element", "response": "Constructs the Authors element for the Heading element. This should display the Heading element directly after the Article Title. This should display the Heading element directly after the Article Title."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_heading_affiliations(self, heading_div):\n #Get all of the aff element tuples from the metadata\n affs = self.article.root.xpath('./front/article-meta/aff')\n #Create a list of all those pertaining to the authors\n author_affs = [i for i in affs if 'aff' in i.attrib['id']]\n #Count them, used for formatting\n if len(author_affs) == 0:\n return None\n else:\n affs_list = etree.SubElement(heading_div,\n 'ul',\n {'id': 'affiliations',\n 'class': 'simple'})\n\n for aff in author_affs:\n #Create a span element to accept extracted content\n aff_item = etree.SubElement(affs_list, 'li')\n aff_item.attrib['id'] = aff.attrib['id']\n #Get the first label node and the first addr-line node\n label = aff.find('label')\n addr_line = aff.find('addr-line')\n if label is not None:\n bold = etree.SubElement(aff_item, 'b')\n bold.text = all_text(label) + ' '\n if addr_line is not None:\n append_new_text(aff_item, all_text(addr_line))\n else:\n append_new_text(aff_item, all_text(aff))", "response": "Makes the content for the Author Affiliations element displays after the Heading."} {"SOURCE": "codesearchnet", "instruction": "Can you tell 
what is the following Python 3 function doing\ndef make_heading_abstracts(self, heading_div):\n for abstract in self.article.root.xpath('./front/article-meta/abstract'):\n #Make a copy of the abstract\n abstract_copy = deepcopy(abstract)\n abstract_copy.tag = 'div'\n #Abstracts are a rather diverse bunch, keep an eye on them!\n title_text = abstract_copy.xpath('./title[1]/text()')\n for title in abstract_copy.findall('.//title'):\n remove(title)\n #Create a header for the abstract\n abstract_header = etree.Element('h2')\n remove_all_attributes(abstract_copy)\n #Set the header text and abstract id according to abstract type\n abstract_type = abstract.attrib.get('abstract-type')\n log.debug('Handling Abstrace of with abstract-type=\"{0}\"'.format(abstract_type))\n if abstract_type == 'summary':\n abstract_header.text = 'Author Summary'\n abstract_copy.attrib['id'] = 'author-summary'\n elif abstract_type == 'editors-summary':\n abstract_header.text = 'Editors\\' Summary'\n abstract_copy.attrib['id'] = 'editor-summary'\n elif abstract_type == 'synopsis':\n abstract_header.text = 'Synopsis'\n abstract_copy.attrib['id'] = 'synopsis'\n elif abstract_type == 'alternate':\n #Right now, these will only be included if there is a title to\n #give it\n if title_text:\n abstract_header.text= title_text[0]\n abstract_copy.attrib['id'] = 'alternate'\n else:\n continue\n elif abstract_type is None:\n abstract_header.text = 'Abstract'\n abstract_copy.attrib['id'] = 'abstract'\n elif abstract_type == 'toc': # We don't include these\n continue\n else: # Warn about these, then skip\n log.warning('No handling for abstract-type=\"{0}\"'.format(abstract_type))\n continue\n #abstract_header.text = abstract_type\n #abstract_copy.attrib['id'] = abstract_type\n heading_div.append(abstract_header)\n heading_div.append(abstract_copy)", "response": "This method creates the headings for all abstracts in the article."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function 
to\ncreate the Article Info element for the current article.", "response": "def make_article_info(self):\n \"\"\"\n The Article Info contains the (self) Citation, Editors, Dates,\n Copyright, Funding Statement, Competing Interests Statement,\n Correspondence, and Footnotes. Maybe more...\n\n This content follows the Heading and precedes the Main segment in the\n output.\n\n This function accepts the receiving_node argument, which will receive\n all generated output as new childNodes.\n \"\"\"\n body = self.main.getroot().find('body')\n #Create a div for ArticleInfo, exposing it to linking and formatting\n article_info_div = etree.Element('div', {'id': 'ArticleInfo'})\n body.insert(1, article_info_div)\n #Creation of the self Citation\n article_info_div.append(self.make_article_info_citation())\n #Creation of the Editors\n editors = self.article.root.xpath(\"./front/article-meta/contrib-group/contrib[@contrib-type='editor']\")\n self.make_article_info_editors(editors, article_info_div)\n #Creation of the important Dates segment\n article_info_div.append(self.make_article_info_dates())\n #Creation of the Copyright statement\n self.make_article_info_copyright(article_info_div)\n #Creation of the Funding statement\n self.make_article_info_funding(article_info_div)\n #Creation of the Competing Interests statement\n self.make_article_info_competing_interests(article_info_div)\n #Creation of the Correspondences (contact information) for the article\n self.make_article_info_correspondences(article_info_div)\n #Creation of the Footnotes (other) for the ArticleInfo\n self.make_article_info_footnotes_other(article_info_div)"} {"SOURCE": "codesearchnet", "instruction": "Can you create a Python 3 function that\ncreates a self citation node for the ArticleInfo of the article.", "response": "def make_article_info_citation(self):\n \"\"\"\n Creates a self citation node for the ArticleInfo of the article.\n\n This method uses code from this page as a reference implementation:\n 
https://github.com/PLOS/ambra/blob/master/base/src/main/resources/articleTransform-v3.xsl\n \"\"\"\n citation_div = etree.Element('div', {'id': 'article-citation'})\n b = etree.SubElement(citation_div, 'b')\n b.text = 'Citation: '\n\n #Add author stuff to the citation\n authors = self.article.root.xpath(\"./front/article-meta/contrib-group/contrib[@contrib-type='author']\")\n for author in authors:\n author_index = authors.index(author)\n #At the 6th author, simply append an et al., then stop iterating\n if author_index == 5:\n append_new_text(citation_div, 'et al.', join_str='')\n break\n else:\n #Check if the author contrib has a collab\n collab = author.find('collab')\n if collab is not None:\n collab_copy = deepcopy(collab)\n for contrib_group in collab_copy.findall('contrib-group'):\n remove(contrib_group)\n append_all_below(citation_div, collab_copy, join_str='')\n else: # Author element is not a collab\n name = author.find('name')\n #Note that this does not support eastern names\n #Grab the surname information\n surname = name.find('surname')\n given_names = name.find('given-names')\n suffix = name.find('suffix')\n append_new_text(citation_div, surname.text, join_str='')\n #Make initials from the given-name information\n if given_names is not None:\n #Add a space\n append_new_text(citation_div, ' ', join_str='')\n #Split by whitespace and take first character\n given_initials = [i[0] for i in given_names.text.split() if i]\n for initial in given_initials:\n append_new_text(citation_div, initial, join_str='')\n #If there is a suffix, add its text, but don't include the\n #trailing period if there is one\n if suffix is not None:\n #Add a space\n append_new_text(citation_div, ' ', join_str='')\n suffix_text = suffix.text\n #Check for the trailing period\n if suffix_text[-1] == '.':\n suffix_text = suffix_text[:-1]\n append_new_text(citation_div, suffix_text, join_str='')\n #If this is not the last author to be added, add a \", \"\n #This is satisfied by being less 
than the 6th author, or less\n #than the length of the authors list - 1\n if author_index < 5 or author_index < len(authors) - 1:\n append_new_text(citation_div, ', ', join_str='')\n #Add Publication Year to the citation\n #Find pub-date elements, use pub-type=collection, or else pub-type=ppub\n d = './front/article-meta/pub-date'\n coll = self.article.root.xpath(d + \"[@pub-type='collection']\")\n ppub = self.article.root.xpath(d + \"[@pub-type='ppub']\")\n if coll:\n pub_year = coll[0].find('year').text\n elif ppub:\n pub_year = ppub[0].find('year').text\n append_new_text(citation_div, ' ({0}) '.format(pub_year), join_str='')\n #Add the Article Title to the Citation\n #As best as I can tell from the reference implementation, they\n #serialize the article title to text-only, and expunge redundant spaces\n #This might need later review\n article_title = self.article.root.xpath('./front/article-meta/title-group/article-title')[0]\n article_title_text = serialize(article_title)\n normalized = ' '.join(article_title_text.split()) # Remove redundant whitespace\n #Add a period unless there is some other valid punctuation\n if normalized[-1] not in '.?!':\n normalized += '.'\n append_new_text(citation_div, normalized + ' ', join_str='')\n #Add the article's journal name using the journal-id of type \"nlm-ta\"\n journal = self.article.root.xpath(\"./front/journal-meta/journal-id[@journal-id-type='nlm-ta']\")\n append_new_text(citation_div, journal[0].text + ' ', join_str='')\n #Add the article's volume, issue, and elocation_id values\n volume = self.article.root.xpath('./front/article-meta/volume')[0].text\n issue = self.article.root.xpath('./front/article-meta/issue')[0].text\n elocation_id = self.article.root.xpath('./front/article-meta/elocation-id')[0].text\n form = '{0}({1}): {2}. 
'.format(volume, issue, elocation_id)\n append_new_text(citation_div, form, join_str='')\n append_new_text(citation_div, 'doi:{0}'.format(self.article.doi), join_str='')\n\n return citation_div"} {"SOURCE": "codesearchnet", "instruction": "Explain what the following Python 3 code does\ndef make_article_info_dates(self):\n dates_div = etree.Element('div', {'id': 'article-dates'})\n\n d = './front/article-meta/history/date'\n received = self.article.root.xpath(d + \"[@date-type='received']\")\n accepted = self.article.root.xpath(d + \"[@date-type='accepted']\")\n if received:\n b = etree.SubElement(dates_div, 'b')\n b.text = 'Received: '\n dt = self.date_tuple_from_date(received[0], 'Received')\n formatted_date_string = self.format_date_string(dt)\n append_new_text(dates_div, formatted_date_string + '; ')\n if accepted:\n b = etree.SubElement(dates_div, 'b')\n b.text = 'Accepted: '\n dt = self.date_tuple_from_date(accepted[0], 'Accepted')\n formatted_date_string = self.format_date_string(dt)\n append_new_text(dates_div, formatted_date_string + '; ')\n #Published date is required\n pub_date = self.article.root.xpath(\"./front/article-meta/pub-date[@pub-type='epub']\")[0]\n b = etree.SubElement(dates_div, 'b')\n b.text = 'Published: '\n dt = self.date_tuple_from_date(pub_date, 'Published')\n formatted_date_string = self.format_date_string(dt)\n append_new_text(dates_div, formatted_date_string)\n\n return dates_div", "response": "Makes the section containing important dates for the article."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_article_info_funding(self, article_info_div):\n funding_group = self.article.root.xpath('./front/article-meta/funding-group')\n if funding_group:\n funding_div = etree.SubElement(article_info_div,\n 'div',\n {'id': 'funding'})\n funding_b = etree.SubElement(funding_div, 'b')\n funding_b.text = 'Funding: '\n #As far as I can tell, PLoS only uses one 
funding-statement\n funding_statement = funding_group[0].find('funding-statement')\n append_all_below(funding_div, funding_statement)", "response": "Creates the element for declaring Funding in the article info."} {"SOURCE": "codesearchnet", "instruction": "Here you have a function in Python 3, explain what it does\ndef make_article_info_competing_interests(self, article_info_div):\n #Check for author-notes\n con_expr = \"./front/article-meta/author-notes/fn[@fn-type='conflict']\"\n conflict = self.article.root.xpath(con_expr)\n if not conflict:\n return\n conflict_div = etree.SubElement(article_info_div,\n 'div',\n {'id': 'conflict'})\n b = etree.SubElement(conflict_div, 'b')\n b.text = 'Competing Interests: '\n fn_p = conflict[0].find('p')\n if fn_p is not None:\n append_all_below(conflict_div, fn_p)", "response": "Creates the element for declaring competing interests in the article."} {"SOURCE": "codesearchnet", "instruction": "How would you implement a function in Python 3 that\nadds the correspondence information to the article_info_div.", "response": "def make_article_info_correspondences(self, article_info_div):\n \"\"\"\n Articles generally provide a first contact, typically an email address\n for one of the authors. 
This will supply that content.\n \"\"\"\n corresps = self.article.root.xpath('./front/article-meta/author-notes/corresp')\n if corresps:\n corresp_div = etree.SubElement(article_info_div,\n 'div',\n {'id': 'correspondence'})\n for corresp in corresps:\n sub_div = etree.SubElement(corresp_div,\n 'div',\n {'id': corresp.attrib['id']})\n append_all_below(sub_div, corresp)"} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef make_article_info_footnotes_other(self, article_info_div):\n other_fn_expr = \"./back/fn-group/fn[@fn-type='other']\"\n other_fns = self.article.root.xpath(other_fn_expr)\n if other_fns:\n other_fn_div = etree.SubElement(article_info_div,\n 'div',\n {'class': 'back-fn-other'})\n for other_fn in other_fns:\n append_all_below(other_fn_div, other_fn)", "response": "This will create the other footnotes in the article info div."} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef move_back_boxed_texts(self):\n body = self.main.getroot().find('body')\n back = self.article.root.find('back')\n if back is None:\n return\n boxed_texts = back.xpath('.//boxed-text')\n for boxed_text in boxed_texts:\n body.append(deepcopy(boxed_text))", "response": "Move any boxed-text elements over to the receiving element."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 function for\nmaking the back acknowledgments element.", "response": "def make_back_acknowledgments(self):\n \"\"\"\n This is an important piece of back matter information, and will be\n included immediately after the main text.\n\n This element should only occur once, optionally, for PLoS; if a need\n becomes known, then multiple instances may be supported.\n \"\"\"\n acks = self.article.root.xpath('./back/ack')\n if not acks:\n return\n ack = deepcopy(acks[0])\n #Modify the tag to div\n ack.tag = 'div'\n #Give it an id\n ack.attrib['id'] = 'acknowledgments'\n #Give it a title 
element--this is not an EPUB element but doing so will\n #allow it to later be depth-formatted by self.convert_div_titles()\n ack_title = etree.Element('title')\n ack_title.text = 'Acknowledgments'\n ack.insert(0, ack_title) # Make it the first element\n return ack"} {"SOURCE": "codesearchnet", "instruction": "Make a summary of the following Python 3 code\ndef make_back_glossary(self, body):\n for glossary in self.article.root.xpath('./back/glossary'):\n gloss_copy = deepcopy(glossary)\n gloss_copy.tag = 'div'\n gloss_copy.attrib['class'] = 'back-glossary'\n body.append(gloss_copy)", "response": "Glossaries are a fairly common item in papers for PLoS, but it also\n seems that they are rarely incorporated into the PLoS web-site or PDF\n formats. They are included in the ePub output, however, because they are\n helpful and because we can."} {"SOURCE": "codesearchnet", "instruction": "Can you generate the documentation for the following Python 3 function\ndef make_back_notes(self, body):\n for notes in self.article.root.xpath('./back/notes'):\n notes_sec = deepcopy(notes.find('sec'))\n notes_sec.tag = 'div'\n notes_sec.attrib['class'] = 'back-notes'\n body.append(notes_sec)", "response": "Make back-notes elements."} {"SOURCE": "codesearchnet", "instruction": "Create a Python 3 function to\nconvert the disp-formula elements to conforming elements", "response": "def convert_disp_formula_elements(self):\n \"\"\"\n disp-formula elements must be converted to conforming elements\n \"\"\"\n for disp in self.main.getroot().findall('.//disp-formula'):\n #find label element\n label_el = disp.find('label')\n graphic_el = disp.find('graphic')\n if graphic_el is None: # No graphic, assume math as text instead\n text_span = etree.Element('span', {'class': 'disp-formula'})\n if 'id' in disp.attrib:\n text_span.attrib['id'] = disp.attrib['id']\n append_all_below(text_span, disp)\n #Insert the text span before the disp-formula\n insert_before(disp, text_span)\n #If a label exists, modify and 
insert before text_span\n if label_el is not None:\n label_el.tag = 'b'\n insert_before(text_span, label_el)\n #Remove the disp-formula\n remove(disp)\n #Skip the rest, which deals with the graphic element\n continue\n #The graphic element is present\n #Create a file reference for the image\n xlink_href = ns_format(graphic_el, 'xlink:href')\n graphic_xlink_href = graphic_el.attrib[xlink_href]\n file_name = graphic_xlink_href.split('.')[-1] + '.png'\n img_dir = 'images-' + self.doi_suffix()\n img_path = '/'.join([img_dir, file_name])\n\n #Create the img element\n img_element = etree.Element('img', {'alt': 'A Display Formula',\n 'class': 'disp-formula',\n 'src': img_path})\n #Transfer the id attribute\n if 'id' in disp.attrib:\n img_element.attrib['id'] = disp.attrib['id']\n #Insert the img element\n insert_before(disp, img_element)\n #Create content for the label\n if label_el is not None:\n label_el.tag = 'b'\n insert_before(img_element, label_el)\n #Remove the old disp-formula element\n remove(disp)"} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert inline-formula elements to the corresponding image elements.", "response": "def convert_inline_formula_elements(self):\n \"\"\"\n inline-formula elements must be converted to be conforming\n\n These elements may contain inline-graphic elements, textual content,\n or both.\n \"\"\"\n for inline in self.main.getroot().findall('.//inline-formula'):\n #inline-formula elements will be modified in situ\n remove_all_attributes(inline)\n inline.tag = 'span'\n inline.attrib['class'] = 'inline-formula'\n inline_graphic = inline.find('inline-graphic')\n if inline_graphic is None:\n # Do nothing more if there is no graphic\n continue\n #Need to convert the inline-graphic element to an img element\n inline_graphic.tag = 'img'\n #Get a copy of the attributes, then remove them\n inline_graphic_attributes = copy(inline_graphic.attrib)\n remove_all_attributes(inline_graphic)\n #Create a file reference for the image\n xlink_href = 
ns_format(inline_graphic, 'xlink:href')\n graphic_xlink_href = inline_graphic_attributes[xlink_href]\n file_name = graphic_xlink_href.split('.')[-1] + '.png'\n img_dir = 'images-' + self.doi_suffix()\n img_path = '/'.join([img_dir, file_name])\n #Set the source to the image path\n inline_graphic.attrib['src'] = img_path\n inline_graphic.attrib['class'] = 'inline-formula'\n inline_graphic.attrib['alt'] = 'An Inline Formula'"} {"SOURCE": "codesearchnet", "instruction": "Can you tell what is the following Python 3 function doing\ndef convert_disp_quote_elements(self):\n for disp_quote in self.main.getroot().findall('.//disp-quote'):\n if disp_quote.getparent().tag == 'p':\n elevate_element(disp_quote)\n disp_quote.tag = 'div'\n disp_quote.attrib['class'] = 'disp-quote'", "response": "Convert disp-quote elements to divs."} {"SOURCE": "codesearchnet", "instruction": "Can you generate a brief explanation for the following Python 3 code\ndef convert_boxed_text_elements(self):\n for boxed_text in self.main.getroot().findall('.//boxed-text'):\n sec_el = boxed_text.find('sec')\n if sec_el is not None:\n sec_el.tag = 'div'\n title = sec_el.find('title')\n if title is not None:\n title.tag = 'b'\n sec_el.attrib['class'] = 'boxed-text'\n if 'id' in boxed_text.attrib:\n sec_el.attrib['id'] = boxed_text.attrib['id']\n replace(boxed_text, sec_el)\n else:\n div_el = etree.Element('div', {'class': 'boxed-text'})\n if 'id' in boxed_text.attrib:\n div_el.attrib['id'] = boxed_text.attrib['id']\n append_all_below(div_el, boxed_text)\n replace(boxed_text, div_el)", "response": "Convert each boxed-text element in the main document into a div element."} {"SOURCE": "codesearchnet", "instruction": "Write a Python 3 script to\nconvert supplementary material elements into a div element.", "response": "def convert_supplementary_material_elements(self):\n \"\"\"\n Supplementary materials are not, nor are they generally expected to be,\n packaged into the epub file. 
Though this is a technical possibility,\n and certain epub reading systems (such as those run on a PC) might be\n reasonably capable of the external handling of diverse file formats,\n I presume that supplementary material will remain separate from the\n document. So special cases aside, external links to supplementary\n material will be employed; this will require an internet connection for\n access.\n\n As for content in supplementary-material elements, they appear to strictly\n contain 1